--- abstract: 'We show that the existing fuzzy $S^2$ and $S^4$ models are natural candidates for the quantum geometry on the corresponding spheres in the AdS/CFT correspondence. These models fit nicely the data from the dipole mechanism for the stringy exclusion principle. In the $AdS_2\times S^2$ case, we show that a wrapped fractional membrane can be used to account for the large ground state degeneracy. We also propose a fuzzy $AdS_2$ model whose fundamental commutation relation may underlie the UV/IR connection.' --- hep-th/0004072

**Fuzzy Spheres in AdS/CFT Correspondence and Holography from Noncommutativity**

[Pei-Ming Ho$^1$, Miao Li$^{2,1}$]{} [*$^1$ Department of Physics, National Taiwan University, Taipei 106, Taiwan*]{}

Introduction ============ The nature of spacetime in string/M theory is becoming increasingly discernible. The most universal, yet rather qualitative, property appears to be the spacetime uncertainty principle [@UNC]. This principle is in the same spirit as the UV/IR connection in the AdS/CFT correspondence [@UVIR], which underlies most of the physics in what is by far the only concrete realization of the holographic principle. Spacetime noncommutativity manifests itself in many different situations in string/M theory. Its physical origin is the fact that all physical objects in the theory are inherently extended, such as strings, D-branes, and D0-brane partons in matrix theory [@MT]. It is therefore not surprising that when one increases one’s time resolution, any available probe increases its physical size, and space becomes more and more uncertain. This fundamental property of string/M theory runs counter to the usual Wilsonian renormalization group intuition. It may be the ultimate key to resolving some long-standing puzzles, such as quantum black hole physics and the cosmological constant problem. There exist concrete mathematical realizations of space noncommutativity.
When D-branes are embedded in a constant B field background, the world-volume theory becomes noncommutative; this subject has recently attracted much attention. Yet this noncommutativity is best viewed as only effective, although its origin is also stringy. In our view, the recently proposed mechanism explaining the “stringy exclusion principle” [@SEP] hints at an even more interesting possibility: that we can explore spacetime noncommutativity in the full string/M theory. There have been proposals that the remarkable phenomenon of “stringy exclusion” is due to noncommutativity of the quantum sphere in question [@JR; @HRT]. The mechanism of [@MST] provides a physical means to directly study the nature of the sphere in the AdS/CFT correspondence. It was already pointed out in [@MIAO] that this mechanism agrees very well with the spacetime uncertainty relations, and that a better model for noncommutativity on the sphere is the fuzzy sphere, which naturally respects rotational invariance. We will find that the data coming from quantizing a dipole à la Myers [@RM] mesh perfectly with fuzzy spheres whose construction is available. One might say that noncommutative Yang-Mills theory is a geometric manifestation of the B field on the D-brane worldvolume, while the fuzzy sphere is a geometric manifestation of an R-R anti-symmetric tensor field in spacetime. In the second section we discuss the $AdS_2\times S^2$ case. The dipole formed of a wrapped membrane is studied; we find that its location on $S^2$ is quantized. This is the origin of the fuzzy $S^2$. It is quite novel that for angular momentum $M\le N$, with $N$ the cut-off, the membrane is static on $S^2$, so the angular momentum is entirely induced by the Chern-Simons coupling. We also argue for the existence of a fractional membrane whose tension is only $1/N$ of the original tension.
This fact together with the cut-off on the angular momentum enables us to count the ground state entropy of $AdS_2\times S^2$. It is of the order $N^2$. One may view our explanation as a bulk microscopic one for the entropy. We also propose a fuzzy $AdS_2$ model. Interestingly, the noncommutativity between time and the radial coordinate may be regarded as a fundamental explanation of the UV/IR relation. In sect.3, we show that the fuzzy $S^4$ constructed in [@CLT] fits nicely the data obtained from the dipole mechanism. Finally, in sect.4, we motivate these fuzzy sphere models by considering matrix theory on a sphere. The construction of fuzzy spheres of 2 and 4 dimensions in this paper can be generalized to any even dimension. However, it does not work for odd dimensions, so our proposal does not contradict the proposal of q-deforming $AdS_d\times S^d$ spaces for $d=3,5$ [@JR; @HRT]. Surely many questions remain, such as how to use fuzzy spheres to do physics directly. We hope to return to these problems later. $AdS_2\times S^2$ and Fuzzy Sphere ================================== The Dipole Mechanism {#dipole} -------------------- Among all examples of the AdS/CFT correspondence, the boundary theory of $AdS_2\times S^2$ is the most poorly understood [@ANDY]. However, since the “stringy exclusion principle" works in all other cases, we believe it still holds in this case. What we shall say in this section is somewhat speculative, but the fact that we can accurately account for the ground state degeneracy lends strong support to this picture. $AdS_2\times S^2$ can be obtained by taking the near horizon limit of the 4 dimensional extremal Reissner–Nordström solution. The metric reads \[adsm\] $$\begin{aligned} ds^2&=l_p^2\left(-\frac{r^2}{N^2}dt^2+\frac{N^2}{r^2}dr^2 +N^2d\Omega_2^2\right),\\ F&=-N\, d\Omega_2,\end{aligned}$$ where $l_p$ is the 4 dimensional Planck length and $N$ is the magnetic charge, which is integrally quantized.
This metric can also be obtained by taking the near horizon limit of the 4 dimensional charged black hole in string theory [@MS]. Unavoidably, there must be four different charges $Q_i$, each associated with a kind of brane. For instance, by wrapping two sets of membranes and two sets of M5-branes on $T^7$, one obtains a 4D charged, extremal black hole [@KT]. The brane configuration is as follows. Denote the coordinates of $T^7$ by $x_i$, $i=1, \dots, 7$. One set of membranes is wrapped on $(x_1, x_2)$, another set on $(x_3, x_4)$. One set of M5-branes is wrapped on $(x_1, x_3, x_5, x_6, x_7)$, the second set on $(x_2, x_4, x_5, x_6, x_7)$. Setting all charges equal, $Q_i=N$, the above metric results, and the magnetic field is just the linear combination of all the anti-symmetric tensor fields involved. Note that for simplicity we consider the most symmetric case, in which all the charges appearing in the harmonic functions $1+Q_il_p/r$ are equal to $N$, which in turn is the number of corresponding branes used to generate this potential. As a consequence, the tension of the branes compensates the volume of the complementary torus. This means that the size of each circle of $T^7$ is at the scale of the M theory Planck length. The size of the torus $T^7$ is unchanged by going to the near horizon limit. This fact makes the implementation of the dipole mechanism of [@MST] a subtle problem. To understand this point, imagine that a graviton moving on $S^2$ is the manifestation of a membrane wrapped on $T^2$ of $T^2\times T^5$. This membrane is charged with respect to the field generated by the M5-branes wrapped on the $T^5$ factor. Now the corresponding $F$ is the reduction of $F^{(4)}$ with two indices along $T^2$. The coupling of the membrane to this field is therefore $\int C^{(3)}$. Upon integrating over $T^2$, we obtain $$N\int \cos\theta\,\dot{\phi}\,dt,$$ where we used $d\Omega_2 =\sin\theta d\theta\wedge d\phi$.
Assume the membrane moves only along the $\phi$ direction, so that the kinetic term is $$-\frac{1}{l_p}\int \sqrt{1-R^2\sin^2\theta\,\dot{\phi}^2}\,dt.$$ This term is too large and would make the total energy of the candidate graviton of the Planck scale, as we shall see in a moment. Fortunately, this problem is resolved by a mechanism to obtain a fractional membrane. The membrane transverse to one set of M5-branes necessarily lies on a $T^2$ parallel to the other set of M5-branes. If this membrane is tightly bound to these M5-branes, its tension is lowered by a factor $1/N$, while its charge is unaltered. To see that its tension indeed becomes much smaller, imagine that one of the circles in $T^2$ is the M circle; then the membrane is interpreted as a fundamental string, and the M5-branes to which it is bound are interpreted as D4-branes. It is well known that a fundamental string melted into $N$ D-branes has a tension $(g_s/N)$ times its original tension [@GKP]. Since the M circle in question is at the Planck scale, $g_s\sim 1$. Thus the tension of the membrane is down by a factor $1/N$. Now, the total action of the membrane is given by $$S=-\frac{1}{R}\int \sqrt{1-R^2\sin^2\theta\,\dot{\phi}^2}\,dt +N\int \cos\theta\,\dot{\phi}\,dt,$$ where we used $R=Nl_p$. The angular momentum conjugate to $\phi$ is $$M=R\sin^2\theta\,\dot{\phi}\left(1-R^2\sin^2\theta\,\dot{\phi}^2\right)^{-1/2} +N\cos\theta.$$ Upon quantization, $M$ is an integer. Solving for $\dot{\phi}$ in terms of $M$ and substituting it into the energy, we find $$E(\theta)=\frac{1}{R\sin\theta}\left(\sin^2\theta+(M-N\cos\theta)^2\right)^{1/2}.$$ For a stable orbit, $dE/d\theta =0$. This condition leads to $$(M-N\cos\theta)\left(N\sin^2\theta-(M-N\cos\theta)\cos\theta\right) =0.$$ When $|M|\le N$, the solution is \[qcon\] $$\cos\theta=\frac{M}{N}.$$ And in this case the energy is $E=1/R$, independent of $M$. This we take as a surprising result. Its origin is the fact that the membrane is completely at rest at the angle $\cos\theta =M/N$; its angular momentum is induced entirely by the C-S coupling. When $|M|>N$, the other solution is \[larg\] $$\cos\theta=\frac{N}{M},$$ and the energy is $$E=\frac{1}{R}\sqrt{M^2-N^2+1}.$$ In this case $E$ does depend on the angular momentum $M$.
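The stationary points just described can be checked numerically. The Python sketch below (our code, not part of the original derivation; the closed form $E(\theta)=\frac{1}{R\sin\theta}\sqrt{\sin^2\theta+(M-N\cos\theta)^2}$ is taken as the assumption) scans $\theta\in(0,\pi)$ on a grid and locates the energy minimum:

```python
import math

def energy(theta, M, N, R=1.0):
    """Dipole energy, assumed form:
    E = (1/(R sin t)) * sqrt(sin^2 t + (M - N cos t)^2)."""
    s, c = math.sin(theta), math.cos(theta)
    return math.sqrt(s * s + (M - N * c) ** 2) / (R * s)

def minimize(M, N, R=1.0, steps=500_000):
    # brute-force scan over the open interval (0, pi)
    best = min(range(1, steps), key=lambda k: energy(math.pi * k / steps, M, N, R))
    th = math.pi * best / steps
    return math.cos(th), energy(th, M, N, R)

# |M| <= N: static membrane at cos(theta) = M/N, energy 1/R independent of M
cos1, E1 = minimize(M=6, N=10)
# |M| > N: minimum at cos(theta) = N/M, energy growing with M
cos2, E2 = minimize(M=15, N=10)
```

For $M=6$, $N=10$ the minimum sits at $\cos\theta=M/N$ with $E=1/R$; for $M=15$, $N=10$ it sits at $\cos\theta=N/M$ and the energy does depend on $M$.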
Unlike in its higher dimensional analogue [@MST], the dipole mechanism alone does not forbid higher angular momenta. (It can be checked that at the values in (\[qcon\]) and (\[larg\]), the energy is always a local minimum.) It is interesting to note that for the standard membrane whose tension is not fractional (a membrane not bound to a set of M5-branes), the $|M|\le N$ modes saturate the M theory spacetime uncertainty relation $\Delta t\Delta x^2 >l_p^3$. A calculation similar to the above leads to an energy $l_p^{-1}$. The uncertainty in time is then $\Delta t\sim l_p$. The uncertainty in space is $\Delta x\sim l_p$, since it is just the size of a circle. Of course for a fractional membrane the energy is smaller, thus the uncertainty in time is larger, and the spacetime uncertainty relation is satisfied. It is also interesting to note that for the standard membrane with $|M|>N$, the uncertainty relation is violated, and on this basis we rule out the larger angular momenta. Fuzzy $S^2$ ----------- The above discussion naturally leads us to a fuzzy sphere model for the factor $S^2$ in $AdS_2\times S^2$. The fuzzy $S^2$ [@MADORE] is specified by a representation of the $su(2)$ Lie algebra \[S2\] $$[X^i,X^j]=i\epsilon^{ijk}X^k.$$ This algebra respects the $SO(3)$ invariance. If the representation is irreducible and $2N+1$ dimensional, then the Casimir is $$\sum_i (X^i)^2= N(N+1).$$ The eigenvalues of any of the $X^i$ are $-N, \dots, N$. It is now rather clear that if we identify $X^il_p$ with the Cartesian coordinates of the $\R^3$ in which $S^2$ is embedded, we get a nice match between the fuzzy sphere and the physics we have learned. The radius can be defined either by the largest eigenvalue of, say, $X^3$, or by the Casimir. The difference between these two definitions becomes vanishingly small in the large $N$ limit. Now the eigenvalues of $X^1 l_p/R$ are $M/N$, with $|M|\le N$. In polar coordinates, this is just the quantization of $\cos\theta$ in (\[qcon\]).
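The stated properties of the $2N+1$ dimensional irreducible representation can be verified directly. The following numpy sketch (our code; the helper name is hypothetical) builds the spin-$N$ generators and checks the algebra (\[S2\]), the Casimir, and the spectrum:

```python
import numpy as np

def fuzzy_s2(N):
    """Spin-N (2N+1 dimensional) irreducible su(2) generators,
    i.e. the coordinates of the fuzzy two-sphere in units of l_p."""
    m = np.arange(N, -N - 1, -1)            # X^3 eigenvalues N, N-1, ..., -N
    Xp = np.zeros((2 * N + 1, 2 * N + 1))   # raising operator X^+ = X^1 + i X^2
    for k in range(2 * N):
        Xp[k, k + 1] = np.sqrt(N * (N + 1) - m[k + 1] * (m[k + 1] + 1))
    X1 = (Xp + Xp.T) / 2
    X2 = (Xp - Xp.T) / 2j
    X3 = np.diag(m).astype(complex)
    return X1, X2, X3

X1, X2, X3 = fuzzy_s2(5)
commutator = X1 @ X2 - X2 @ X1          # expect i X^3
casimir = X1 @ X1 + X2 @ X2 + X3 @ X3   # expect N(N+1) times the identity
```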
The algebra generated by the $X^i$ is $(2N+1)^2$ dimensional. This happens to be equal to the number of all modes with angular momenta satisfying $M\le 2N$: \[mult\] $$\sum_{M=0}^{2N} (2M+1)=(2N+1)^2.$$ This result is reminiscent of the stringy exclusion principle in $AdS_3\times S^3$ [@SEP]. The CFT dual to $AdS_3\times S^3$ has a number of interesting properties [@SEP; @JR]. First, the chiral primary operators of charges higher than $(N, N)$ for the quantum numbers $(2J_L, 2J_R)$ with respect to the $SU(2)_L\times SU(2)_R$ symmetry group can be written as products of chiral primary operators of lower charges. Since products of chiral primary operators in the dual CFT are interpreted as corresponding to multi-particle states in $AdS$, this means that there is a cutoff on the angular momenta of single-particle states at $N$. The stringy exclusion principle is that multi-particle states also have a cutoff on angular momenta, at $2N$. It is natural to adopt the same interpretation here for $S^2$. The algebra generated by $X$ contains multi-particle states and has the cutoff of angular momentum at $2N$, as shown in (\[mult\]). The single-particle states are cut off at angular momentum $N$, as we have shown earlier in sect.\[dipole\]; this is also the angular momentum of the $2N+1$ dimensional representation of $X$. To compare with the construction of the fuzzy $S^4$ to be discussed later, we point out that there is another representation of the $X^i$ based on the Pauli matrices $\sigma_i$. Let $X^i$ be the following matrices acting on the tensor space $V^{\otimes n}$, where $V$ is a two dimensional vector space: \[tenp\] $$X^i=\frac{1}{2}\left(\sigma_i\otimes 1_2\otimes\cdots\otimes 1_2 +\cdots+1_2\otimes\cdots\otimes 1_2\otimes\sigma_i\right)_{\rm sym},$$ where the subscript ‘sym’ indicates restriction to the totally symmetrized tensor product of the $V$’s. If the rank $n$ of this tensor product is $2N$, the totally symmetrized space is obviously $2N+1$ dimensional, and the matrices $X^i$ are $(2N+1)\times (2N+1)$ matrices. These matrices satisfy the $su(2)$ Lie algebra.
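The construction (\[tenp\]) can be tested numerically as well. The sketch below (our code; the explicit choice of symmetric-subspace basis is an implementation detail) restricts $\frac12\sum\sigma_i$ to the totally symmetric subspace of $(\mathbb{C}^2)^{\otimes n}$ for $n=2N=4$ and checks that one obtains the spin-$N$ representation with the correct Casimir and counting:

```python
import numpy as np
from itertools import combinations
from math import comb

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def total_spin(i, n):
    """(sigma_i x 1 x ... x 1 + ... + 1 x ... x 1 x sigma_i)/2 on (C^2)^{x n}."""
    tot = np.zeros((2 ** n, 2 ** n), complex)
    for slot in range(n):
        op = np.eye(1, dtype=complex)
        for k in range(n):
            op = np.kron(op, sigma[i] if k == slot else np.eye(2))
        tot += op
    return tot / 2

def sym_basis(n):
    """Orthonormal basis of the totally symmetric subspace (dimension n+1)."""
    B = np.zeros((2 ** n, n + 1))
    for k in range(n + 1):                       # k = number of flipped spins
        for ones in combinations(range(n), k):
            B[sum(1 << b for b in ones), k] = 1 / np.sqrt(comb(n, k))
    return B

n = 4                                            # n = 2N copies, N = 2
B = sym_basis(n)
X = [B.T @ total_spin(i, n) @ B for i in range(3)]   # restricted (2N+1)x(2N+1) generators
casimir = sum(x @ x for x in X)                  # expect N(N+1) = 6 times the identity
```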
We shall see that the fuzzy $S^4$ we need is a simple generalization of the construction (\[tenp\]). Moreover, the rank of the tensor product is also $2N$. Finally, we note that on the fuzzy $S^2$, $\cos\theta$ is conjugate to $\phi$ with the “Planck constant” given by $1/N$, namely $\cos\theta =-(i/N)\partial_\phi$, because the angular momentum $M$ is conjugate to $\phi$. Using this and the identification $$X_1=N\sin\theta\cos\phi,\qquad X_2=N\sin\theta\sin\phi,\qquad X_3=N\cos\theta,$$ one can in fact derive the fuzzy sphere algebra (\[S2\]). The Entropy of $AdS_2\times S^2$ -------------------------------- An interesting property that makes $AdS_2\times S^2$ quite different from other AdS spaces is its huge ground state degeneracy. The entropy is simply $\pi N^2$, a quarter of the area of $S^2$ measured in Planck units. Its origin is the Bekenstein-Hawking entropy of the 4D charged black hole. Of course this entropy was explained microscopically using the brane construction [@MS], although, as with any such explanation, the argument goes through only when the brane theory is weakly coupled, so that the horizon is not macroscopic. As the dual theory of $AdS_2\times S^2$ is not yet known, the best one can do is to give a bulk explanation of the entropy of the ground states. That one can do this is one advantage of working in low dimensions. We have argued that the angular momentum has a cut-off $N$, so the gravity theory in the full four dimensional spacetime reduces to one in two dimensions only. One does expect the resulting 2D theory to be renormalizable. In our argument for the entropy we will ignore interactions, so we are dealing with a free theory on $AdS_2$. Consider a finite temperature situation. The relevant Euclidean metric we will use is $$ds^2=\frac{R^2}{z^2}(dt^2+dz^2),$$ namely the metric on the upper half plane. The conformal factor is irrelevant for a massless field. As we have seen, all the 2D fields in question are massive; their mass is independent of the angular momentum and is just $m=1/R$.
Thus for a scalar field, the Euclidean action is $$S_E=\frac{1}{2}\int dtdz\left( (\partial_t\phi)^2+(\partial_z\phi)^2 +\frac{1}{z^2}\phi^2\right).$$ The partition function is given by $$Z=\int {\cal D}\phi\, e^{-S_E}.$$ Due to a nice property of $AdS_2$, the partition function is independent of the temperature. This is because we can rescale away the period $\beta$ of the Euclidean time by performing $t\rightarrow \beta t$, $z\rightarrow \beta z$, and the action remains intact. As a consequence, the partition function is a pure number. As a bonus, the average energy $$\langle E\rangle =-\partial_\beta \ln Z=0,$$ thus we always stay in the ground state. The contribution to the entropy from a scalar is then simply $S=\ln Z$. If there are $N^2$ such scalars, the total entropy is $$S=N^2\ln Z.$$ Of course, based on supersymmetry, we also expect a contribution from fermions. The number of fermions is also of the order $N^2$. We conclude that the ground state entropy is indeed of the order $N^2$. Note that the number of fields in $AdS_2$ induced from the fuzzy sphere by Kaluza-Klein reduction is precisely proportional to $N^2$ at large $N$. So we have accounted for the entropy of $AdS_2\times S^2$ up to an overall constant. It remains to compute $Z$ exactly. The following simple argument shows that it is a finite number. For a massive particle, $AdS_2$ acts as a finite box. Both the spatial size and the temporal size are the same, due to the scaling invariance. The free energy $F$ is just the Casimir energy and scales inversely with the size of the box; consequently $\beta F$ is independent of this size and is a finite number. Finally, there can be contributions to the entropy from other massive modes, for instance from wrapped ordinary membranes. These membranes have energy of order $l_p^{-1}$, much heavier, and we expect their contribution to be suppressed by a factor $1/N^2$.
Alternatively $r=1/z$ scales inversely with time, and thus acts as the energy scale. We propose a simple explanation of this connection based on a fuzzy $AdS_2$. For a massless field living on $AdS_2\times S^2$, its decomposition into harmonic functions on $S^2$ must match its decomposition on $AdS_2$. This requirement was used in [@JR] to argue that if the $AdS_3$ part of $AdS_3\times S^3$ is q-deformed, then the $S^3$ part should also be q-deformed. For the same reason we expect that $AdS_2$ should be fuzzy because $S^2$ is fuzzy. Since the fuzzy $S^2$ is defined by demanding that the Cartesian coordinates satisfy the Lie algebra of $SU(2)$, the fuzzy $AdS_2$ should be defined by demanding that the Cartesian coordinates satisfy the Lie algebra of $SU(1,1)$. Let $X_{-1}, X_0, X_1$ be the Cartesian coordinates of $AdS_2$. Then $$\begin{aligned} &[X_{-1}, X_0]=-il_p X_1,\\ &[X_0, X_1]=il_p X_{-1},\\ &[X_1, X_{-1}]=il_p X_0,\end{aligned}$$ which is obtained from $S^2$ by a “Wick rotation” of the time directions. The radial coordinate $r$ and the boundary time coordinate $t$ are defined in terms of the $X$’s as $$r=X_{-1}+X_1, \qquad t=\frac{R}{2}\left(r^{-1}X_0+X_0 r^{-1}\right),$$ where we symmetrized the products of $r^{-1}$ and $X_0$ so that $t$ is a Hermitian operator. The metric in terms of these coordinates assumes the form (\[adsm\]). It follows that the commutation relation for $r$ and $t$ is \[rt\] $$[r,t]=-iRl_p.$$ The relation (\[rt\]) suggests the identification of $r/(Rl_p)$ with the variable conjugate to $t$, which is just the Hamiltonian of the boundary theory. This noncommutativity contains the UV/IR relation. Furthermore, it also suggests the uncertainty relation $$\Delta r\,\Delta t\ge Rl_p,$$ which implies that if we demand a classical description of $t$, then we lose all physical distinction among the values of $r$. This is just what holography is: one can describe the theory in $AdS$ by a field theory on the boundary space, which is viewed as a classical space.
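The commutator of $r$ and $t$ follows from the $su(1,1)$ relations in two lines; here we assume the normalization $t=\frac{R}{2}(r^{-1}X_0+X_0 r^{-1})$:

```latex
% Derivation sketch of [r,t] = -iR l_p, assuming
% t = (R/2)(r^{-1} X_0 + X_0 r^{-1}) and r = X_{-1} + X_1.
\begin{align*}
[r, X_0] &= [X_{-1}, X_0] + [X_1, X_0]
          = -i l_p X_1 - i l_p X_{-1} = -i l_p\, r, \\
[r, r^{-1}X_0] &= r^{-1}[r, X_0] = -i l_p, \qquad
[r, X_0\, r^{-1}] = [r, X_0]\, r^{-1} = -i l_p, \\
[r, t] &= \frac{R}{2}\left([r, r^{-1}X_0] + [r, X_0\, r^{-1}]\right) = -i R\, l_p .
\end{align*}
```

The key point is that $[r,X_0]$ is proportional to $r$ itself, so the $r^{-1}$ factors cancel and the commutator with $t$ is a c-number.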
It would be interesting to see if we can extend this nice incorporation of holography into noncommutativity to other examples of $AdS/CFT$ dualities. The identification $E=r/(Rl_p)$ would seem a drastic reduction in the energy scale. This is not so. Indeed, if one follows the analysis in the second reference of [@UVIR], one finds that for a massless graviton or a fractional membrane, the energy scale associated with $r$ is $E=r/R^2$, even smaller than $E=r/(Rl_p)$. For a massive graviton whose mass is at the Planck scale, one obtains our relation. This massive graviton can be obtained by wrapping a membrane on a $T^2$ of $T^7$. However, as we have suggested, the modes responsible for the ground state entropy appear to come from fractional membranes. Fuzzy $S^4$ in $AdS_7\times S^4$ ================================ Fuzzy $S^4$ ----------- The first unambiguous implementation of the dipole mechanism is in $AdS_7\times S^4$. A graviton moving on $S^4$ is polarized by the $F^{(4)}$ field strength to become a membrane, by a mechanism similar to the one proposed by Myers [@RM]. The size of the membrane is quantized [@MST; @MIAO], and it is natural to conjecture that this quantization leads to a fuzzy $S^4$. A fuzzy $S^4$ was proposed in [@CLT] to describe a longitudinal M5-brane in matrix theory. Define the Cartesian coordinates of the fuzzy $S^4$ by totally symmetrized tensor products of $n$ copies of gamma matrices as \[X\] $$X_i=\lam\left(\Gamma_i\otimes 1\otimes\cdots\otimes 1+ \cdots+1\otimes\cdots\otimes 1\otimes\Gamma_i\right)_{\rm sym},$$ where $\lam$ is a normalization constant to be fixed below. Thus the $X_i$ are $4^n\times 4^n$ matrices. Explicitly, we have ${(X_i)_{(a_1 a_2\cdots a_n)}}^{(b_1 b_2\cdots b_n)}$, where $a_i, b_i=1,2,3,4$ are indices of a Dirac spinor in 5 dimensional Euclidean space. The subscript ‘sym’ in (\[X\]) means that all the $a_i$’s and $b_j$’s are separately totally symmetrized among themselves. As an operator, $X_i$ acts on elements of $V^{\otimes n}$, where $V$ stands for the space of 5D Dirac spinors.
Instead of symmetrizing the indices of $X_i$, it is equivalent to restrict it to act on a smaller space $\H_n$, which consists of the elements of $V^{\otimes n}$ with totally symmetrized indices. A basis of $\H_n$ is $\{v^{(k_1,k_2,k_3,k_4)}\}$, where the component $v^{(k_1,k_2,k_3,k_4)}_{a_1\cdots a_n}$ equals one if there are $k_A$ indices $a_i$ equal to $A$, and is zero otherwise. (Obviously $\sum_{A=1}^{4}k_A=n$.) The dimension of $\H_n$ is $N_0=\frac{1}{3!}(n+3)(n+2)(n+1)$, hence $X_i$ can also be realized as $N_0\times N_0$ matrices. The algebra generated by $X_i$ is invariant under $SO(5)$. Let $\sigma_i$ ($i=1,2,3$) denote the Pauli matrices. We take the convention that $$\begin{aligned} \Gamma_i&=&\sigma_i\otimes\sigma_3, \quad i=1,2,3, \label{Gamma}\\ \Gamma_4&=&\sigma_0\otimes\sigma_1,\\ \Gamma_5&=&\sigma_0\otimes\sigma_2, \label{G3}\end{aligned}$$ where $\s_0$ stands for the $2\times 2$ unit matrix. Therefore the index $a$ of a Dirac spinor can be written as $a=(\a,\a')$ ($\a,\a'=1,2$) corresponding to the decomposition of $\Gamma_i$ in (\[Gamma\]). One can check that $$\sum_{i=1}^{5}\Gamma_i\otimes\Gamma_i=1$$ when acting on $\H_2$. Using this and $\{\Gamma_i,\Gamma_j\}=2\delta_{ij}$, we can calculate $$\begin{aligned} \sum_{i=1}^{5} X_i^2 &=& \lam^2\sum_{i=1}^{5} \left(\Gamma_i^2\otimes 1\otimes\cdots\otimes 1+\cdots\right)\\ && +2\lam^2\sum_{i=1}^{5}\sum_{A=1}^{n-1}\sum_{B=A+1}^{n} \left(\Gamma_i\otimes\Gamma_i\right)_{AB}\\ &=&\lam^2\left(5n+n(n-1)\right),\end{aligned}$$ where in the second line $\Gamma_i$ appears only in the $A$-th and the $B$-th places in the tensor product. It follows that $$R^2=\lam^2 n(n+4).$$ For large $n$, $\lam\sim R/n$. For four-form field flux $2\pi N$ on $S^4$, the radius is $R=l_p(\pi N)^{1/3}$. The identity $$\epsilon_{i_1 i_2\cdots i_5}X_{i_2}X_{i_3}X_{i_4}X_{i_5}\propto\lam^3 X_{i_1}$$ is used in [@CLT] to argue that this configuration locally carries the charge of a longitudinal 5-brane. What will be of interest is the spectrum of the operator $\sum_{i=1}^{3}X_i^2$. It is easier to calculate $\sum_{i=4,5}X_i^2$. We have the identity $$\sum_{i=1}^{3}\sigma_i\otimes\sigma_i=2P-1,$$ where $P$ is the permutation operator: $P(A\otimes B)=B\otimes A$.
So $$\sum_{i=4,5}\Gamma_i\otimes\Gamma_i= 2P_2-1-(\s_0\otimes\s_3)\otimes(\s_0\otimes\s_3),$$ where $P_2$ is defined by $$P_2\left((A_1\otimes B_1)\otimes(A_2\otimes B_2)\right) =(A_1\otimes B_2)\otimes(A_2\otimes B_1).$$ Consider the following three vectors $v^{(2,0,0,0)}, v^{(1,1,0,0)}, (v^{(1,0,0,1)}-v^{(0,1,1,0)})$ in $\H_2$. Recall that an index $a$ has two parts $a=(\a,\a')$ in accordance with (\[Gamma\]-\[G3\]). The first two vectors have their second parts of indices $\a'$ symmetrized, while the last vector has them antisymmetrized. The spectrum of $\sum_{i=1,2}(\s_0\otimes\s_i)\otimes(\s_0\otimes\s_i)$ is $\{0, 2, -2\}$ on these three vectors, respectively. Now consider the vector $v_m\equiv v^{(m,(n-m),0,0)}$. In this case the first parts of the indices are all equal to $1$, and the second parts are totally symmetrized. It is straightforward to find that $(\sum_{i=4,5}X_i^2)v_m=\lam^2(2n+4m(n-m))v_m$. It follows that \[r2\] $$r^2\equiv\sum_{i=1}^{3}X_i^2=\lam^2\left( (2m-n)^2+2n \right)$$ when acting on $v_m$. In the limit of large $n$, \[sinpsi\] $$\frac{r}{R}\simeq\frac{|2m-n|}{n}.$$ Another relation we will need below is \[X4X5\] $$[X_4, X_5]=2i\lam^2(2m-n),$$ which approaches $2iRr/n$ in the large $n$ limit. We do not claim that (\[r2\]), (\[sinpsi\]) and (\[X4X5\]) give the complete spectra of these operators. We have only considered a special class of eigenvectors $v_m$. As an example, consider a vector which is a linear combination of vectors of the form $v^{(k_1,(n/2-k_1),k_3,(n/2-k_3))}$, with half of its indices having first part $1$ and the other half $2$. We require that this vector has its second parts of indices antisymmetrized between any two pairs of indices which differ in their first parts. (For a pair of indices whose first parts are the same, the second parts are necessarily symmetrized.) One finds that $\sum_{i=4,5}X_i^2\simeq 0$ on this vector. This means that the maximal value of $r^2$ is $R^2$, which is not included in (\[r2\]).
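The statements about $\H_2$ can be checked numerically for $n=2$ (setting $\lam=1$). The sketch below (our code; matrix conventions follow (\[Gamma\]-\[G3\])) verifies the dimension of $\H_2$, the Casimir $n(n+4)=12$, the identity $\sum_i\Gamma_i\otimes\Gamma_i=1$ on $\H_2$, and the eigenvalues of $\sum_{i=4,5}X_i^2$ and $r^2$ on $v_1=v^{(1,1,0,0)}$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

# Gamma_i = sigma_i (x) sigma_3 for i = 1,2,3; Gamma_4 = 1 (x) sigma_1; Gamma_5 = 1 (x) sigma_2
G = [np.kron(s[0], s[2]), np.kron(s[1], s[2]), np.kron(s[2], s[2]),
     np.kron(s0, s[0]), np.kron(s0, s[1])]

n, I4 = 2, np.eye(4, dtype=complex)
X = [np.kron(g, I4) + np.kron(I4, g) for g in G]   # fuzzy S^4 coordinates, lam = 1, n = 2

# Projector onto the symmetric subspace H_2 of C^4 (x) C^4 (dimension 10)
SWAP = np.zeros((16, 16))
for a in range(4):
    for b in range(4):
        SWAP[4 * a + b, 4 * b + a] = 1
P = (np.eye(16) + SWAP) / 2

casimir = sum(x @ x for x in X)        # expect n(n+4) = 12 on H_2
GG = sum(np.kron(g, g) for g in G)     # expect the identity on H_2
X45sq = X[3] @ X[3] + X[4] @ X[4]      # eigenvalue 2n + 4m(n-m) on v_m

# v_1 = v^{(1,1,0,0)}: one index equal to 1 and one equal to 2, symmetrized
v1 = np.zeros(16)
v1[1] = v1[4] = 1
```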
Noncommutativity of Fuzzy Graviton ---------------------------------- Following [@MST; @MIAO], consider a graviton moving on $S^4$. According to Myers, the graviton is a fuzzy sphere due to the $C$ field background, so we have a 2-sphere in the directions $X_{1,2,3}$ moving with a constant angular momentum in the hemisphere parametrized by $X_{4,5}$. Let the radius of the 2-sphere be denoted by $r$, and $$X_4=\sqrt{R^2-r^2}\cos\phi,\qquad X_5=\sqrt{R^2-r^2}\sin\phi,$$ where $R$ is the radius of $S^4$. The angular momentum conjugate to $\phi$ is found to be $L=Nr/R$ for the giant graviton [@MST; @MIAO]. Upon quantization, we have the canonical commutation relation $[L,\phi]=-i$, which implies that $[r,\phi]=-i\frac{R}{N}$. As a result, in the lowest order approximation (Poisson limit) in $1/N$, $$[X_4,X_5]\sim i\frac{R}{N}r.$$ Comparing this with (\[X4X5\]), we find that we need $n=2N$. As a further check, we recall that the stable value of $\sin\psi=r/R$ is found to be quantized as $\sin\psi=M/N$ for integer $M$ ranging from $0$ to $N$. Comparing this with (\[sinpsi\]), we find a match if $m=M$ and $n=2N$. This match works to leading order in the large $N$ limit. This is not a worry, since we expect corrections to the value of $\sin\psi$ calculated in [@MST; @MIAO], just as we expect corrections to the energy formula $M/R$. For a scalar graviton viewed in $AdS_7$, the correct energy formula is $\sqrt{M(M+3)}/R$. The fact that these membrane states do not give the complete spectrum of $\sin\psi$ should not bother us, because they are not all the possible physical states. It is also interesting to note that the commutation relations among the $X_{1,2,3}$ following from (\[X\]) are not exactly those of a fuzzy 2-sphere. The configuration appears to be composed of two fuzzy hemispheres with opposite orientations. This again is not a contradiction, because we cannot exactly identify the worldvolume noncommutativity of the membrane with the target space noncommutativity of spacetime. However, there should be some relation between the two kinds of noncommutativity.
For instance, one would intuitively expect that the target space uncertainty should not be smaller than the worldvolume uncertainty, since spacetime is itself defined by the probes. There may be a closer relation between the two, but it remains elusive. In [@BV] the matrix model for the six dimensional $(2,0)$ superconformal field theory was analyzed, and it was found that the matrix variables for the transverse coordinates of the M5-branes indeed satisfy an algebra which is essentially the fuzzy $S^4$ algebra (\[X\]). Noncommutativity from Matrix Theory =================================== In this section we construct the matrix model compactified on a $d$-sphere, and show that the fuzzy $S^2$ and $S^4$ are configurations of minimal energy. The matrix model action in flat spacetime has the potential term $\mbox{Tr}(\frac{1}{4}[X_i, X_j]^2)$, where the $X_i$’s are $N\times N$ matrices. This term is invariant under the Poincaré group. It is natural to guess that the matrix model compactified on a sphere has a term like \[potential\] $$\mbox{Tr}\left(\frac{1}{4}[X_i, X_j]^2 +\Lam\left(\sum_i X_i^2-R^2\right)\right)$$ in the action, where the index $i$ runs from $1$ to $d$ for a $d$-dimensional sphere, and $\Lam$ is the Lagrange multiplier by which the radius condition \[radius\] $$\sum_{i=1}^{d}X_i^2=R^2$$ is imposed. New terms should be included in the matrix model action in the presence of background fields. For the case of $S^2$ and $S^4$, a three-form field background with only spatial indices exists. It is coupled to a matrix current [@TAYLOR] which vanishes when $\dot{X}_i=0$. Since we will consider only static states of the matrix model, such interaction terms can be omitted. The 6-form field background is also relevant for the case of $S^2$, and the same thing happens [@TAYLOR]. In [@DOUG] a matrix model for the 2-sphere was proposed in which the complex coordinates $z,\bar{z}$ are promoted to matrices. However, it was found that there is a singularity on the moduli space.
This corresponds to the loss of the global $SU(2)$ symmetry, due to the fact that this symmetry is nonlinearly realized in terms of the complex coordinates and thus cannot be preserved when $z,\bar{z}$ are noncommutative. By adopting Cartesian coordinates, we are able to preserve the global symmetry, but the whole supersymmetric action remains to be worked out. For our purpose, (\[potential\]) is the only part of the action that we need. The equation of motion derived from the action (\[potential\]) is \[XXX\] $$\sum_j[X_j, [X_j, X_i]]-\Lam X_i=0,$$ in addition to the kinetic term, which vanishes if we set $\dot{X}_i=0$ to minimize the total energy. Thus the moduli space of this model is given by the solutions of (\[radius\]) and (\[XXX\]). The fuzzy 2-sphere and 4-sphere, and their generalizations to other even-dimensional spheres following [@CLT], are solutions of both relations. Incidentally, an expansion of $X_i$ around the fuzzy sphere solution can be viewed as covariant derivatives on a dual fuzzy sphere. There should be a T-duality which relates D0-branes on a classical sphere to D2-branes on a fuzzy sphere. What we have shown in this section is that the static configurations of partons in matrix theory happen to coincide with the quantum geometry which we argued to exist in some $AdS\times S$ spaces. It would be interesting to further explore whether this is a pure coincidence or a general rule. Such a rule should tell us how to identify the rank $N$ in the matrix theory with the flux $N$ of the background fields. Acknowledgment {#acknowledgment .unnumbered} ============== We would like to thank D. Minic, S. Ramgoolam and A. Strominger for correspondence, and Y.S. Wu for discussions. This work is supported in part by the National Science Council, Taiwan, and the Center for Theoretical Physics at National Taiwan University. The work of M.L. is also supported by a “Hundred People Project” grant. [99]{} T. Yoneya, p. 419 in “Wandering in the Fields", eds.
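That the fuzzy two-sphere solves (\[XXX\]) together with (\[radius\]) can be checked directly. In the sketch below (our code; units with $l_p=1$, so that $R^2=N(N+1)$ and the Lagrange multiplier comes out as $\Lam=2$), the spin-$N$ matrices satisfy the static equation of motion:

```python
import numpy as np

def su2_irrep(N):
    """(2N+1)-dimensional spin-N su(2) generators: the fuzzy two-sphere."""
    m = np.arange(N, -N - 1, -1)
    Xp = np.zeros((2 * N + 1, 2 * N + 1))
    for k in range(2 * N):
        Xp[k, k + 1] = np.sqrt(N * (N + 1) - m[k + 1] * (m[k + 1] + 1))
    return [(Xp + Xp.T) / 2, (Xp - Xp.T) / 2j, np.diag(m).astype(complex)]

N = 4
X = su2_irrep(N)
comm = lambda a, b: a @ b - b @ a

# equation of motion: sum_j [X_j, [X_j, X_i]] = Lambda * X_i, with Lambda = 2
eom = [sum(comm(Xj, comm(Xj, Xi)) for Xj in X) for Xi in X]
# radius constraint: sum_i X_i^2 = N(N+1) * identity
radius_sq = sum(x @ x for x in X)
```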
K. Kawarabayashi and A. Ukawa (World Scientific, 1987); see also p. 23 in “Quantum String Theory", eds. N. Kawamoto and T. Kugo (Springer, 1988);\ M. Li and T. Yoneya, hep-th/9611072, Phys. Rev. Lett. 78 (1997) 1219;\ M. Li and T. Yoneya, “Short-distance Space-time Structure and Black Holes in String Theory: A Short Review of the Present Status”, hep-th/9806240, Jour. Chaos, Solitons and Fractals 10 (1999) 423. L. Susskind and E. Witten, “The Holographic Bound in Anti-de Sitter Space”, hep-th/9805114;\ A. Peet and J. Polchinski, “UV/IR Relations in AdS Dynamics”, hep-th/9809022. T. Banks, W. Fischler, S. Shenker and L. Susskind, “M Theory As A Matrix Model: A Conjecture”, hep-th/9610043, Phys. Rev. D55 (1997) 5112. J. Maldacena and A. Strominger, “AdS3 Black Holes and a Stringy Exclusion Principle”, hep-th/9804085, JHEP 9812 (1998) 005. A. Jevicki and S. Ramgoolam, “Noncommutative gravity from AdS/CFT correspondence”, hep-th/9902059, JHEP 9904 (1999) 032. P.-M. Ho, S. Ramgoolam and R. Tatar, “Quantum Spacetimes and Finite N Effects in 4D Yang-Mills Theories”, hep-th/9907145. J. McGreevy, L. Susskind and N. Toumbas, “Invasion of the Giant Gravitons from Anti-de Sitter Space”, hep-th/0003075. M. Li, “Fuzzy Gravitons From Uncertain Spacetime", hep-th/0003173. R. Myers, “Dielectric-Branes”, hep-th/9910053, JHEP 9912 (1999) 022. J. Castelino, S. Lee and W. Taylor, “Longitudinal 5-branes as 4-spheres in Matrix Theory”, hep-th/9712105, Nucl. Phys. B526 (1998) 334. A. Strominger, “AdS2 Quantum Gravity and String Theory", hep-th/9809027, JHEP 9901 (1999) 007;\ J. Maldacena, J. Michelson and A. Strominger, “Anti-de Sitter Fragmentation", hep-th/9812073, JHEP 9902 (1999) 011. J. Maldacena and A. Strominger, “Statistical Entropy of Four-Dimensional Black Holes", hep-th/9603069, Phys. Rev. Lett. 77 (1996) 428. I. R. Klebanov and A. A. Tseytlin, “Intersecting M-branes as Four-Dimensional Black Holes", hep-th/9604166. S. Gukov, I. R. Klebanov, A. M.
Polyakov, “Dynamics of $(n,1)$ Strings”, hep-th/9711112. J. Madore, An Introduction to Noncommutative Differential Geometry and Its Physical Applications, Cambridge U. Press, 2nd ed. (1999). M. Berkooz, H. Verlinde, “Matrix Theory, AdS/CFT and Higgs-Coulomb Equivalence”, hep-th/9907100. D. Kabat, W. Taylor, “Linearized Supergravity from Matrix Theory”, hep-th/9712185, Phys. Lett. B426 (1998) 297;\ W. Taylor, M. Van Raamsdonk, “Angular Momentum and Long-range Gravitational Interactions for Matrix Theory Configurations with Fermion Background”, hep-th/9812239. M. R. Douglas, “D-branes in Curved Space”, hep-th/9703056.
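The fuzzy-sphere solution of the equation of motion $[X_j,[X_j,X_i]]-X_i=0$ discussed in the text can be verified numerically: rescaled $su(2)$ generators $X_i=\lambda J_i$ satisfy it, since $\sum_j[J_j,[J_j,J_i]]=2J_i$ in any irreducible representation. A minimal sketch for the spin-1/2 case; the value $\lambda=1/\sqrt{2}$ is fixed by this particular normalization of the equation and is our illustrative choice, not taken from the text:

```python
import numpy as np

# Spin-1/2 su(2) generators J_i = sigma_i / 2 (Pauli matrices over 2).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [s / 2 for s in sigma]

def comm(a, b):
    return a @ b - b @ a

# For X_i = lam * J_i the double commutator gives 2 * lam^2 * X_i,
# so lam = 1/sqrt(2) solves [X_j, [X_j, X_i]] - X_i = 0.
lam = 1 / np.sqrt(2)
X = [lam * j for j in J]

for i in range(3):
    lhs = sum(comm(X[j], comm(X[j], X[i])) for j in range(3))
    assert np.allclose(lhs, X[i])
print("fuzzy-sphere equation of motion satisfied for spin-1/2")
```

The same check goes through for the spin-$j$ generators of any dimension, reflecting that the fuzzy 2-sphere exists at every matrix size $N=2j+1$.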
--- abstract: | Recently, many techniques have been introduced that allow the (automated) classification of the runtime complexity of term rewrite systems (TRSs for short). In earlier work, the authors have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise the above result to full rewriting. Following our previous work, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is truthfully represented through the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems. author: - | Martin Avanzini\ Institute of Computer Science\ University of Innsbruck, Austria\ [martin.avanzini@uibk.ac.at](martin.avanzini@uibk.ac.at) - | Georg Moser\ Institute of Computer Science\ University of Innsbruck, Austria\ [georg.moser@uibk.ac.at](georg.moser@uibk.ac.at) title: 'Technical Report: Complexity Analysis by Graph Rewriting Revisited[^1]\' --- [^1]: This research is supported by FWF (Austrian Science Fund) project P20133.
--- abstract: 'Recent advances in modern Natural Language Processing (NLP) research have been dominated by the combination of Transfer Learning methods with large-scale language models, in particular based on the Transformer architecture. With them came a paradigm shift in NLP with the starting point for training a model on a downstream task moving from a blank specific model to a general-purpose pretrained architecture. Still, creating these general-purpose models remains an expensive and time-consuming process restricting the use of these methods to a small subset of the wider NLP community. In this paper, we present HuggingFace’s `Transformers` library, a library for state-of-the-art NLP, making these developments available to the community by gathering state-of-the-art general-purpose pretrained models under a unified API together with an ecosystem of libraries, examples, tutorials and scripts targeting many downstream NLP tasks. HuggingFace’s `Transformers` library features carefully crafted model implementations and high-performance pretrained weights for two main deep learning frameworks, PyTorch and TensorFlow, while supporting all the necessary tools to analyze, evaluate and use these models in downstream tasks such as text/token classification, question answering and language generation among others. The library has gained significant organic traction and adoption among both the researcher and practitioner communities. We are committed at HuggingFace to pursuing the effort to develop this toolkit with the ambition of creating the standard library for building NLP systems.'
author: - | Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz^^, Jamie Brew\ \ HuggingFace Inc., Brooklyn, USA\ ^^ NAVER LABS Europe, Grenoble, France\ \ `{first-name}@huggingface.co` bibliography: - 'mybib.bib' title: 'Transformers: State-of-the-art Natural Language Processing' --- Introduction ============ In the past 18 months, advances on many Natural Language Processing (NLP) tasks have been dominated by deep learning models and, more specifically, the use of Transfer Learning methods [@ruder2019transfer] in which a deep neural network language model is pretrained on a web-scale unlabelled text dataset with a general-purpose training objective before being fine-tuned on various downstream tasks. Following noticeable improvements using Long Short-Term Memory (LSTM) architectures [@Howard2018UniversalLM; @peters2018deep], a series of works combining Transfer Learning methods with large-scale Transformer architectures [@Vaswani2017AttentionIA] has repeatedly advanced the state-of-the-art on NLP tasks ranging from text classification [@Yang2019XLNetGA], language understanding [@Liu2019RoBERTaAR; @Wang2018GLUEAM; @Wang2019SuperGLUEAS], machine translation [@Lample2019CrosslingualLM] and zero-shot language generation [@Radford2019LanguageMA], up to co-reference resolution [@joshi2019spanbert] and commonsense inference [@Bosselut2019COMETCT]. While this approach has shown impressive improvements on benchmarks and evaluation metrics, the exponential increase in the size of the pretraining datasets as well as the model sizes [@Liu2019RoBERTaAR; @shoeybi2019megatron] has made it both difficult and costly for researchers and practitioners with limited computational resources to benefit from these models. For instance, RoBERTa [@Liu2019RoBERTaAR] was trained on 160 GB of text using 1024 32GB V100 GPUs.
On Amazon-Web-Services cloud computing (AWS), such a pretraining would cost approximately 100K USD. Contrary to this trend, the booming research in Machine Learning in general and Natural Language Processing in particular is arguably explained significantly by a strong focus on knowledge sharing and large-scale community efforts resulting in the development of standard libraries, an increased availability of published research code and strong incentives to share state-of-the-art pretrained models. The combination of these factors has led researchers to reproduce previous results more easily, investigate current approaches and test hypotheses without having to redevelop them first, and focus their efforts on formulating and testing new hypotheses. To bring Transfer Learning methods and large-scale pretrained Transformers back into the realm of these best practices, the authors (and the community of contributors) have developed `Transformers`, a library for state-of-the-art Natural Language Processing with Transfer Learning models. `Transformers` addresses several key challenges: #### Sharing is caring `Transformers` gathers, in a single place, state-of-the-art architectures for both Natural Language Understanding (NLU) and Natural Language Generation (NLG) with model code and a diversity of pretrained weights. This allows a form of training-computation-cost-sharing so that low-resource users can reuse pretrained models without having to train them from scratch. These models are accessed through a simple and unified API that follows a classic NLP pipeline: setting up configuration, processing data with a tokenizer and encoder, and using a model either for training (adaptation in particular) or inference. The model implementations provided in the library follow the original computation graphs and are tested to ensure they match the original author implementations’ performances on various benchmarks.
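The pipeline just described (a configuration, a tokenizer and a model, each loadable by a shared name) can be illustrated with a toy mock-up in plain Python. All class names, the registry and the `"toy-base-cased"` checkpoint below are invented stand-ins for illustration, not the library's actual internals:

```python
# Hypothetical registry standing in for a hub of pretrained checkpoints.
PRETRAINED = {
    "toy-base-cased": {
        "config": {"vocab_size": 8, "hidden_size": 4},
        "vocab": {"[UNK]": 0, "hello": 1, "world": 2},
    }
}

class ToyConfig:
    """Stores model/tokenizer parameters, framework-agnostic."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

    @classmethod
    def from_pretrained(cls, name):
        return cls(**PRETRAINED[name]["config"])

class ToyTokenizer:
    """Maps text to token ids using the checkpoint's vocabulary."""
    def __init__(self, vocab):
        self.vocab = vocab

    @classmethod
    def from_pretrained(cls, name):
        return cls(PRETRAINED[name]["vocab"])

    def encode(self, text):
        return [self.vocab.get(tok, self.vocab["[UNK]"]) for tok in text.split()]

class ToyModel:
    """Stand-in for a model built from its configuration."""
    def __init__(self, config):
        self.config = config

    @classmethod
    def from_pretrained(cls, name):
        return cls(ToyConfig.from_pretrained(name))

    def __call__(self, ids):
        # Fake "hidden states": one vector of size hidden_size per token id.
        return [[float(i)] * self.config.hidden_size for i in ids]

# The unified pipeline: same checkpoint name drives every component.
tokenizer = ToyTokenizer.from_pretrained("toy-base-cased")
model = ToyModel.from_pretrained("toy-base-cased")
hidden = model(tokenizer.encode("hello world"))
```

The point of the pattern is that swapping architectures only changes which classes are instantiated, never the shape of the pipeline itself.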
#### Easy-access and high-performance `Transformers` was designed with two main goals in mind: (i) be as easy and fast to use as possible and (ii) provide state-of-the-art models with performances as close as possible to the originally reported results. To ensure a low entry barrier, the number of user-facing abstractions to learn was strongly limited and reduced to just three standard classes: configuration, models and tokenizers, which all can be initialized in a simple and unified way by using a common ‘from\_pretrained()‘ instantiation method. #### Interpretability and diversity There is a growing field of study, sometimes referred to as *BERTology* from BERT [@Devlin2018BERTPO], concerned with investigating the inner workings of large-scale pretrained models and trying to build a science on top of these empirical results. Some examples include [@Tenney2019BERTRT], [@Michel2019AreSH], [@clark2019what]. `Transformers` aims at facilitating and increasing the scope of these studies by (i) giving easy access to the inner representations of these models, notably the hidden states, the attention weights or heads importance as defined in [@Michel2019AreSH] and (ii) providing different models in a unified API to prevent overfitting to a specific architecture (and set of pretrained weights). Moreover, the unified front-end of the library makes it easy to compare the performances of several architectures on a common language understanding benchmark. `Transformers` notably includes pre-processors and fine-tuning scripts for *GLUE* [@Wang2018GLUEAM], *SuperGLUE* ([@Wang2019SuperGLUEAS]) and *SQuAD1.1* [@Rajpurkar2016SQuAD10]. #### Pushing best practices forward `Transformers` seeks a balance between sticking to the original authors’ code-base for reliability and providing clear and readable implementations featuring best practices in training deep neural networks so that researchers can seamlessly use the code-base to explore new hypotheses derived from these models.
To accommodate a large community of practitioners and researchers, the library is deeply compatible with (and actually makes compatible) two major deep learning frameworks: PyTorch [@paszke2017automatic] and TensorFlow (from release 2.0) [@tensorflow2015-whitepaper]. #### From research to production Another essential question is how to make these advances in research available to a wider audience, especially in the industry. `Transformers` also takes steps towards a smoother transition from research to production. The provided models support *TorchScript*, a way to create serializable and optimizable models from PyTorch code, and the library features production code and integration with the *TensorFlow Extended* framework. Community ========= The development of `Transformers` originally stemmed from open-sourcing internal tools used at HuggingFace but has seen a huge growth in scope over its ten months of existence, as reflected by the successive changes of name of the library: from `pytorch-pretrained-bert` to `pytorch-transformers` to, finally, `Transformers`. A fast-growing and active community of researchers and practitioners has gathered around `Transformers`. The library has quickly become used both in research and in the industry: at the moment, more than 200 research papers report using the library[^1]. `Transformers` is also included either as a dependency or with a wrapper in several popular NLP frameworks such as `Spacy` [@spacy2], `AllenNLP` [@Gardner2017AllenNLP] or `Flair` [@akbik2018coling]. `Transformers` is an ongoing effort maintained by the team of engineers and research scientists at HuggingFace[^2], with support from a vibrant community of more than 120 external contributors. We are committed to the twin efforts of developing the library and fostering positive interaction among its community members, with the ambition of creating the standard library for modern deep learning NLP.
`Transformers` is released under the Apache 2.0 license and is available through *pip* or from source on GitHub[^3]. Detailed documentation along with on-boarding tutorials are available on HuggingFace’s website[^4]. Library design ============== `Transformers` has been designed around a unified frontend for all the models: parameters and configurations, tokenization, and model inference. These steps reflect the recurring questions that arise when building an NLP pipeline: defining the model architecture, processing the text data and finally, training the model and performing inference in production. In the following section, we’ll give an overview of the three base components of the library: configuration, model and tokenization classes. All of the components are compatible with PyTorch and TensorFlow (starting 2.0). For complete details, we refer the reader to the documentation available on <https://huggingface.co/transformers/>. Core components --------------- All the models follow the same philosophy of abstraction enabling a unified API in the library. **Configuration** - A configuration class instance (usually inheriting from a base class ‘PretrainedConfig‘) stores the model and tokenizer parameters (such as the vocabulary size, the hidden dimensions, dropout rate, etc.). This configuration object can be saved and loaded for reproducibility or simply modified for architecture search. The configuration defines the architecture of the model but also architecture optimizations like the heads to prune. Configurations are agnostic to the deep learning framework used. **Tokenizers** - A Tokenizer class (inheriting from a base class ‘PreTrainedTokenizer‘) is available for each model. This class stores the vocabulary token-to-index map for the corresponding model and handles the encoding and decoding of input sequences according to the model’s tokenization-specific process (ex. Byte-Pair-Encoding, SentencePiece, etc.). 
Tokenizers are easily modifiable to add user-selected tokens, special tokens (like classification or separation tokens) or resize the vocabulary. Furthermore, Tokenizers implement additional useful features for the users, by offering values to be used with a model; these range from token type indices in the case of sequence classification to maximum-length sequence truncation that takes into account the added model-specific special tokens (most pretrained Transformers models have a maximum sequence length they can handle, defined during their pretraining step). Tokenizers can be instantiated from existing configurations available through `Transformers`, originating from the pretrained models, or created more generally by the user from user specifications. **Model** - All models follow the same hierarchy of abstraction: a base class implements the model’s computation graph from encoding (projection on the embedding matrix) through the series of self-attention layers and up to the last layer hidden states. The base class is specific to each model and closely follows the original implementation, allowing users to dissect the inner workings of each individual architecture. Additional wrapper classes are built on top of the base class, adding a specific head on top of the base model hidden states. Examples of these heads are language modeling or sequence classification heads. These classes follow a similar naming pattern: `XXXForSequenceClassification` or `XXXForMaskedLM` where `XXX` is the name of the model and can be used for adaptation (fine-tuning) or pre-training. All models are available both in PyTorch and TensorFlow (starting 2.0) and offer deep inter-operability between both frameworks.
For instance, a model trained in one of the frameworks can be saved to disk using that framework’s standard serialization practice and then be seamlessly reloaded from the saved files in the other framework, making it particularly easy to switch from one framework to the other over the model’s lifetime (training, serving, etc.). **Auto classes** - In many cases, the architecture to use can be automatically guessed from the shortcut name of the pretrained weights (e.g. ‘bert-base-cased‘). A set of `Auto` classes provides a unified API that enables very fast switching between different models/configs/tokenizers. There are a total of four high-level abstractions referenced as `Auto` classes: `AutoConfig`, `AutoTokenizer`, `AutoModel` (for PyTorch) and `TFAutoModel` (for TensorFlow). These classes automatically instantiate the right configuration, tokenizer or model class instance from the name of the pretrained checkpoints. Training -------- **Optimizer** - The library provides a few optimization utilities as subclasses of PyTorch ‘torch.optim.Optimizer‘ which can be used when training the models. The additional optimizer currently available is the Adam optimizer [@Kingma2014AdamAM] with an additional weight decay fix, also known as ‘AdamW‘ [@loshchilov2017fixing]. **Scheduler** - Additional learning rate schedulers are also provided as subclasses of PyTorch ‘torch.optim.lr\_scheduler.LambdaLR‘, offering various schedules used for transfer learning and transformers models with customizable options, including warmup schedules which are relevant when training with Adam. Experimenting with `Transformers` ================================= In this section, we present some of the major tools and examples provided in the library to experiment on a range of downstream Natural Language Understanding and Natural Language Generation tasks.
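The warmup schedules mentioned above have a simple shape: the learning-rate multiplier ramps linearly from 0 to 1 over the warmup steps and, in the common linear variant, decays back to 0 by the end of training. A minimal sketch of such a multiplier, in the spirit of the library's LambdaLR-based schedules (the function name is ours; the library's exact API may differ):

```python
def linear_warmup_multiplier(step, warmup_steps, total_steps):
    """Learning-rate multiplier: linear warmup, then linear decay to zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# The peak multiplier of 1.0 is reached exactly at the end of warmup.
schedule = [linear_warmup_multiplier(s, warmup_steps=10, total_steps=100)
            for s in range(100)]
```

Multiplying a base learning rate by this schedule avoids the large, poorly conditioned Adam updates that can occur in the first steps of fine-tuning.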
Language understanding benchmarks --------------------------------- The language models provided in `Transformers` are pretrained with a general purpose training objective, usually a variant of language modeling like *standard (sometimes called causal) language modeling* as used for instance in [@Radford2019LanguageMA] or *masked language modeling* as introduced in [@Devlin2018BERTPO]. A pretrained model is often evaluated using wide-range language understanding benchmarks. `Transformers` includes several tools and scripts to evaluate models on *GLUE* ([@Wang2018GLUEAM]) and *SuperGLUE* ([@Wang2019SuperGLUEAS]). These two benchmarks gather a variety of datasets to evaluate natural language understanding systems. Details of the datasets can be found in the Appendix on page \[appendix\]. Regarding the machine comprehension tasks, the library features evaluations on *SQuAD1.1* ([@Rajpurkar2016SQuAD10]) and *SQuAD2.0* ([@Rajpurkar2018KnowWY]). Other currently supported benchmarks include *SWAG* ([@Zellers2018SWAGAL]), *RACE* ([@Lai2017RACELR]) and *ARC* ([@Clark2018ThinkYH]). Language model fine-tuning -------------------------- Fine-tuning a language model on a downstream text corpus usually leads to significant gains for tasks on this corpus, in particular when the domain is different (domain adaptation). It also significantly reduces the amount of training data required for fine-tuning on a target task in the target domain. `Transformers` provides simple scripts to fine-tune models on custom text datasets with the option to add or remove tokens from the vocabulary and several other adaptability features. Ecosystem --------- **Write with Transformer** Because Natural Language Processing does not have to be serious and boring, the generative capacities of auto-regressive language models available in `Transformers` are showcased in an intuitive and playful manner.
Built by the authors on top of `Transformers`, *Write with Transformer*[^5] is an interactive interface that leverages the generative capabilities of pretrained architectures like GPT, GPT2 and XLNet to suggest text like an auto-completion plugin. Generating samples is also often used to qualitatively (and subjectively) evaluate the generation quality of language models [@Radford2019LanguageMA]. Given the impact of the decoding algorithm (top-K sampling, beam-search, etc.) on generation quality [@Holtzman2019TheCC], *Write with Transformer* offers various options to dynamically tweak the decoding algorithm and investigate the resulting samples from the model. ![Write With Transformer[]{data-label="fig:wwt"}](wwt-screenshot.png){width="\textwidth"} **Conversational AI** HuggingFace has been using Transfer Learning with Transformer-based models for end-to-end Natural Language Understanding and text generation in its conversational agent, *Talking Dog*. The company also demonstrated in fall 2018 that this approach can be used to reach state-of-the-art performances on academic benchmarks, topping by a significant margin the automatic metrics leaderboard of the *Conversational Intelligence Challenge 2* held at the Thirty-second Annual Conference on Neural Information Processing Systems (NIPS 2018). The approach used to reach these performances is described in [@Wolf2019TransferTransfoAT; @golovanov2019large] and the code and pretrained models, based on the `Transformers` library, are available online[^6]. **Using in production** To facilitate the transition from research to production, all the models in the library are compatible with *TorchScript*, an intermediate representation of a PyTorch model that can then be run either in Python in a more efficient way, or in a high-performance environment such as C++[^7]. Fine-tuned models can thus be exported to a production-friendly environment.
Optimizing large machine learning models for production is an ongoing effort in the community and there are many current engineering efforts towards that goal. The distillation of large models (e.g. *DistilBERT* [@sanh2019distilbert]) is one of the most promising directions. It lets users of `Transformers` run more efficient versions of the models, even with strong latency constraints and on inexpensive CPU servers. We also convert Transformers models to `Core ML` weights that are suitable to be embedded inside a mobile application, to enable on-the-edge machine learning. Code is also made available[^8]. **Community** Many libraries in NLP and Machine Learning have been created on top of `Transformers` or have integrated `Transformers` as a package dependency or through wrappers. At the time of writing, the authors have been mostly aware of `FastBert`[^9], `FARM`[^10], `flair` [@akbik2018coling; @akbik2019naacl], `AllenNLP` [@Gardner2017AllenNLP] and `PyText`[^11] but there are likely more interesting developments to be found, from research and internal projects to production packages. Architectures ============= Here is a list of architectures for which reference implementations and pretrained weights are currently provided in `Transformers`. These models fall into two main categories: generative models (GPT, GPT-2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM). - BERT ([@Devlin2018BERTPO]) is a bi-directional Transformer-based encoder pretrained with a linear combination of *masked language modeling* and *next sentence prediction* objectives. - RoBERTa ([@Liu2019RoBERTaAR]) is a replication study of BERT which showed that carefully tuning hyper-parameters and training data size leads to significantly improved results on language understanding. - DistilBERT ([@sanh2019distilbert]) is a smaller, faster, cheaper and lighter version of BERT pretrained with knowledge distillation.
- GPT ([@Radford2018GPT]) and GPT2 ([@Radford2019LanguageMA]) are two large auto-regressive language models pretrained with *language modeling*. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension. - Transformer-XL ([@Dai2019TransformerXLAL]) introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence via segment-level recurrence and relative positional encoding schemes. - XLNet ([@Yang2019XLNetGA]) builds upon Transformer-XL and proposes an auto-regressive pretraining scheme combining BERT’s bi-directional context flow with auto-regressive language modeling by maximizing the expected likelihood over permutations of the word sequence. - XLM ([@Lample2019CrosslingualLM]) shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding. We systematically release the model with the corresponding pretraining heads (language modeling, *next sentence prediction* for BERT) for adaptation using the pretraining objectives. Some models fine-tuned on downstream tasks such as *SQuAD1.1* are also available. Overall, more than 30 pretrained weights are provided through the library including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multi-lingual models (with two of them being trained on more than 100 languages) [^12]. Related work ============ The design of `Transformers` was inspired by earlier libraries on transformers and Natural Language Processing. 
More precisely, organizing the modules around three main components (configuration, tokenizers and models) was inspired by the design of the `tensor2tensor` library [@tensor2tensor] and the original code repository of Bert [@Devlin2018BERTPO] from Google Research, while the concept of providing easy caching for pretrained models stemmed from features of the `AllenNLP` library [@Gardner2017AllenNLP] open-sourced by the Allen Institute for Artificial Intelligence (AI2). Works related to the `Transformers` library can be generally organized along three directions, at the intersection of which stands the present library. The first direction includes Natural Language Processing libraries such as `AllenNLP`[^13] [@Gardner2017AllenNLP], `SpaCy`[^14] [@spacy2], `flair`[^15] [@akbik2018coling; @akbik2019naacl] or `PyText`[^16]. These libraries precede `Transformers` and target somewhat different use-cases, for instance those with a particular focus on research for `AllenNLP` or a strong attention to production constraints (in particular with a carefully tuned balance between speed and performance) for `SpaCy`. The previously mentioned libraries have now been provided with integrations for `Transformers`, through a direct package dependency for `AllenNLP`, `flair` or `PyText` or through a wrapper called `spacy-transformers`[^17] for `SpaCy`. The second direction concerns lower-level deep-learning frameworks like PyTorch [@paszke2017automatic] and TensorFlow [@tensorflow2015-whitepaper] which have both been extended with model sharing capabilities or hubs, respectively called `TensorFlow Hub`[^18] and `PyTorch Hub`[^19]. These hubs are more general and while they offer ways to share models they differ from the present library in several ways. In particular, they provide neither a unified API across models nor standardized ways to access the internals of the models.
Targeting a more general machine-learning community, these Hubs lack the NLP-specific user-facing features provided by `Transformers` like tokenizers, dedicated processing scripts for common downstream tasks and sensible default hyper-parameters for high performance on a range of language understanding and generation tasks. The last direction is related to machine learning research frameworks that are specifically used to test, develop and train architectures like Transformers. Typical examples are the `tensor2tensor`[^20] library [@tensor2tensor], `fairseq`[^21] [@ott2019fairseq] and `Megatron-LM`[^22]. These libraries are usually not provided with the user-facing features that allow easy download, caching and fine-tuning of the models, as well as seamless transition to production. Conclusion ========== We have presented the design and the main components of `Transformers`, a library for state-of-the-art NLP. Its capabilities, performances and unified API make it easy for both practitioners and researchers to access various large-scale language models, build and experiment on top of them and use them in downstream tasks with state-of-the-art performance. The library has gained significant organic traction since its original release and has become widely adopted among researchers and practitioners, fostering an active community of contributors and an ecosystem of libraries building on top of the provided tools. We are committed to supporting this community and making recent developments in transfer learning for NLP both accessible and easy to use while maintaining high standards of software engineering and machine learning engineering.
\[appendix\] GLUE and SuperGLUE ================== The datasets in GLUE are: CoLA ([@warstadt2018neural]), Stanford Sentiment Treebank (SST) ([@socher2013recursive]), Microsoft Research Paragraph Corpus (MRPC) [@dolan2005automatically], Semantic Textual Similarity Benchmark (STS) [@agirre2007semantic], Quora Question Pairs (QQP) [@WinNT], Multi-Genre NLI (MNLI) [@williams2018broad], Question NLI (QNLI) [@Rajpurkar2016SQuAD10], Recognizing Textual Entailment (RTE) [@dagan2006pascal; @bar2006second; @giampiccolo2007third; @bentivogli2009fifth] and Winograd NLI (WNLI) [@levesque2011winograd]. The datasets in SuperGLUE are: Boolean Questions (BoolQ) [@clark2019boolq], CommitmentBank (CB) [@demarneffe:cb], Choice of Plausible Alternatives (COPA) [@roemmele2011choice], Multi-Sentence Reading Comprehension (MultiRC) [@khashabi2018looking], Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) [@zhang2018record], Word-in-Context (WiC) [@pilehvar2018wic], Winograd Schema Challenge (WSC) [@rudinger2018winogender], Diverse Natural Language Inference Collection (DNC) [@poliak2018dnc], Recognizing Textual Entailment (RTE) [@dagan2006pascal; @bar2006second; @giampiccolo2007third; @bentivogli2009fifth] and Winograd NLI (WNLI) [@levesque2011winograd] [^1]: <http://search.arxiv.org:8081/?query=huggingface&qid=1565055415921multi_nCnN_-1835167213&byDate=1> [^2]: <https://huggingface.co> [^3]: <https://github.com/huggingface/transformers> [^4]: <https://huggingface.co/transformers/> [^5]: <https://transformer.huggingface.co> [^6]: <https://github.com/huggingface/transfer-learning-conv-ai> [^7]: <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html> [^8]: <https://github.com/huggingface/swift-coreml-transformers> [^9]: <https://github.com/kaushaltrivedi/fast-bert> [^10]: <https://github.com/deepset-ai/FARM> [^11]: <https://github.com/facebookresearch/pytext> [^12]: <https://huggingface.co/transformers/multilingual.html> [^13]: <https://allennlp.org/> 
[^14]: <https://spacy.io/> [^15]: <https://github.com/zalandoresearch/flair> [^16]: <https://github.com/facebookresearch/pytext> [^17]: <https://github.com/explosion/spacy-transformers> [^18]: <https://github.com/tensorflow/hub> [^19]: <https://pytorch.org/hub> [^20]: <https://github.com/tensorflow/tensor2tensor> [^21]: <https://github.com/pytorch/fairseq> [^22]: <https://github.com/NVIDIA/Megatron-LM>
--- abstract: 'Despite being $>10$ Myr old, there are $\sim$10 debris discs with as much CO gas as in protoplanetary discs. Such discs have been assumed to be “hybrid”, i.e., with secondary dust but primordial gas. Here we show that both the dust and gas in such systems could instead be secondary, with the high CO content caused by accumulation of neutral carbon (C$^0$) that shields CO from photodissociating; i.e., these could be “shielded secondary discs”. New ALMA observations are presented of HD131835 that detect $\sim 3 \times 10^{-3}$ M$_\oplus$ of C$^0$, the majority 40-200 au from the star, in sufficient quantity to shield the previously detected CO. A simple semi-analytic model for the evolution of CO, C and O originating in a volatile-rich planetesimal belt shows how CO shielding becomes important when the viscous evolution is slow (low $\alpha$ parameter) and/or the CO production rate is high. Shielding by C$^0$ may also cause the CO content to reach levels at which CO self-shields, and the gas disc may become massive enough to affect the dust evolution. Application to the HD 131835 observations shows these can be explained if $\alpha \sim 10^{-3}$; an inner cavity in C$^0$ and CO may also mean the system has yet to reach steady state. Application to other debris discs with high CO content finds general agreement for $\alpha=10^{-3}$ to $0.1$. The shielded secondary nature of these gas discs can be tested by searching for C$^0$, as well as CN, N$_2$ and CH$^{+}$, which are also expected to be shielded by C$^0$.' author: - | Quentin Kral,$^{1,2}$[^1] Sebastian Marino,$^{1}$ Mark C. Wyatt,$^{1}$ Mihkel Kama,$^{1}$ and\ $^{1}$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^{2}$LESIA, Observatoire de Paris, Universit[é]{} PSL, CNRS, Sorbonne Universit[é]{}, Univ.
Paris Diderot,\ Sorbonne Paris Cit[é]{}, 5 place Jules Janssen, 92195 Meudon, France\ $^{3}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\ date: 'Accepted 1928 December 15. Received 1928 December 14; in original form 1928 October 11' title: 'Imaging \[CI\] around HD 131835: reinterpreting young debris discs with protoplanetary disc levels of CO gas as shielded secondary discs' --- \[firstpage\] accretion, accretion discs – star:HD 131835, HD 21997, HD 138813, 49 Ceti, HD 32297, HD 156623, HD 121191, HD 121617, HD 131488 – circumstellar matter – Planetary Systems. Introduction ============ Gas is now detected routinely around main sequence stars that possess planetesimal belts similar to the Kuiper belt. On the order of 20 systems show the presence of gas [e.g., @2013ApJ...776...77K; @2016ApJ...828...25L; @2017ApJ...839...86H; @2016MNRAS.461.3910G; @2017ApJ...849..123M] and this number will soon increase, mostly thanks to upcoming surveys with ALMA. Most of these systems have CO detected in emission (often colocated with the planetesimal belt) and for a handful of them ionised carbon and oxygen were detected with Herschel . Neutral carbon (C$^0$) has been predicted to be a good tracer of gas in these systems [@2017MNRAS.469..521K] and has now been detected around 49 Ceti and $\beta$ Pic [@2017ApJ...839L..14H; @Catal]. The origin of the gas around these main sequence stars is still debated for some systems (because of their youth that can be close to 10 Myr or older), i.e., whether the gas is a remnant of the protoplanetary disc (called primordial hereafter) or released at a later stage (i.e., secondary). 
However, for at least three systems (HD 181327, $\beta$ Pic and Fomalhaut) the CO mass detected is so low that it cannot be primordial as neither CO nor H$_2$ (even assuming an extreme CO-to-H$_2$ ratio of $10^{-6}$) could shield CO from photodissociating owing to the UV radiation from the interstellar radiation field [@2016MNRAS.460.2933M; @2017MNRAS.464.1415M; @2017ApJ...842....9M]. We also note that the presence of gas around the 440 Myr old Fomalhaut star [@2017ApJ...842....9M] and possibly around the 1 Gyr old $\eta$ Corvi [@2017MNRAS.465.2595M] leaves no doubt concerning the secondary origin of the gas there. @2017MNRAS.469..521K show that a second generation model where the gas is released from volatile-rich planetesimals colliding in the belts [e.g. @2012ApJ...758...77Z] can explain most of the detections and non-detections so far (for $>$10 Myr systems), assuming planetesimals with a composition similar to Solar System comets, reinforcing the idea that most of the observed gas is secondary. However, the high-mass gas discs, called hybrid discs by @2013ApJ...776...77K because of their high, assumed primordial, CO content but low debris-disc like secondary dust mass, could not be explained with this model and were deemed to be of primordial origin. We consider in this paper whether these high gas mass systems can actually also be explained as being of secondary origin due to an ingredient that was missing in the previous model: [*the C$^0$ shielding effect*]{}. [^2] This effect is now accounted for in a new model presented in this paper and applied to a new \[CI\] ALMA detection towards HD 131835 also presented in this paper. HD 131835 is a 15-16 Myr old [@2002AJ....124.1670M; @2012ApJ...746..154P] A-type star located at a distance[^3] of 145$\pm$9 pc that is probably a member of the Upper Centaurus Lupus moving group [a subgroup of the Sco-Cen association, @2011MNRAS.416.3108R]. 
An infrared (IR) excess around this star was first reported by @2006ApJ...644..525M, establishing the presence of a bright debris disc. The disc was first resolved in the mid-infrared using T-ReCS, from which it could be deduced that multiple disc components were needed to fit the data [@2015ApJ...815L..14H]. More recently, SPHERE observations in the near-IR led to the detection of a series of narrow concentric rings. Such concentric rings may be explained by the presence of large quantities of gas [e.g. @2001ApJ...557..990T; @2013Natur.499..184L; @2018ApJ...856...41R]. CO gas has recently been discovered around HD 131835 using APEX [@2015ApJ...814...42M] and has been followed up with ALMA, from which the total CO mass derived (using an optically thin line) is $\sim$ 0.04 M$_\oplus$ [after correction for the new GAIA distance, @2017ApJ...849..123M]. This CO mass is of the same order of magnitude as for the famous TW Hya protoplanetary disc [@2013ApJ...776L..38F], leading to the possibility that the disc is primordial. From the new \[CI\] observations we present in this paper, we show that the observed gas in the so-called “hybrid” disc around HD 131835 (or HIP 73145) is compatible with being a shielded debris disc, i.e., a massive gas disc of secondary origin. Furthermore, we conclude that some or all of the so-called hybrid discs might not be hybrid after all but entirely second generation, both in their gas and dust content. For this reason, from now on we will call these discs [*shielded discs*]{} rather than hybrid. To make the distinction easier, we call the low CO mass systems of secondary origin “unshielded” and the high CO mass systems “shielded”. This has important consequences for our understanding of the fate of protoplanetary discs, which, if this interpretation is correct, would not survive as long as previously thought; it would also indicate that massive secondary gaseous debris discs can start early, while planet formation may still be on-going. In Sec.
\[shield\], we start by introducing the new concept of shielded discs, which can explain massive CO gas discs as being of secondary origin rather than hybrid. In Sec. \[obs\], we present new continuum observations of HD 131835 with ALMA and a fit of the data with a dust model. We then present the first neutral carbon detection around HD 131835 with ALMA and a fit of the data set with several gas models in Sec. \[gas\]. In Sec. \[discu\], we discuss the implications of this new \[CI\] detection and how it backs up the shielded disc scenario. We first describe a self-consistent physical model that can explain the observed high CO mass together with the new \[CI\] detection assuming a secondary origin of the gas. We then constrain the viscosity of the observed gas disc (using an $\alpha$ parametrization), derive the expected composition of the (exo)planetesimals in this system and discuss the dust/gas interactions that may happen. Finally, before concluding, we discuss the possibility that all hybrid discs discovered so far (possibly up to 9 systems) are in fact shielded discs, i.e., not hybrid, and we derive some constraints on their gas disc viscosity ($\alpha$) and predict their C$^0$ content, which is within reach of ALMA. Secondary gas discs and introduction to the new category of shielded discs {#shield} ========================================================================== Standard evolution of low-mass unshielded secondary gas discs ------------------------------------------------------------- ![\[figevol\] Evolution of the C$^0$ number density as a function of radius and for several timesteps (from $10^3$ yr to 15 Myr) for carbon input at a steady rate at 90 au. The carbon gas is assumed to spread viscously with time with a timescale that depends on the viscosity parametrized by $\alpha$. We show a fast evolution with $\alpha=0.5$ (black solid lines) and a slower evolution with $\alpha=10^{-3}$ (green dashed lines).
The high-$\alpha$ disc reaches a steady state after a few Myr whilst the low-$\alpha$ disc is still evolving after 15 Myr. The red dotted line shows the critical C$^0$ density above which C$^0$ starts shielding CO from photodissociation. For the low-$\alpha$ case, we note that the C$^0$ density is high enough to extend the CO lifetime and CO will accumulate owing to C$^0$ shielding, which will lead to a shielded disc as defined in Sec. \[shield\]. To compute the C$^0$ input rate, we assumed $\dot{M}_{\rm CO}=10^{-2}$ M$_\oplus$/Myr and an ionisation fraction of 0.1. The orange lines show the steady state number density for $\alpha=0.5$ (solid) and $10^{-3}$ (dashed) based on Eq. \[Sig\].](sigma.pdf){width="8cm"} In a secondary gas production scenario, gas is released in the planetesimal belt as volatile-rich bodies collide with each other. CO and water are expected to be the major constituents of comets (assuming Solar System-like compositions), and will possibly dominate the release [note that this depends on the exact mechanism that releases this gas, see @2018ApJ...853..147M]. In the standard secondary approach presented in @2016MNRAS.461..845K [@2017MNRAS.469..521K], CO and water photodissociate very quickly (typically in about 100 yr) owing to the UV radiation from the interstellar radiation field, because there is not enough CO to self-shield nor enough H$_2$ to increase the CO or water lifetime. Carbon, oxygen and hydrogen are thus created very quickly through photodissociation, which builds an atomic gas disc that viscously spreads.
In our model, the viscosity is parametrized with an $\alpha$ parameter [as in @2013ApJ...762..114X], which has been inferred to be high in the $\beta$ Pic gas disc, i.e., $\alpha>0.1$, leading to a viscous timescale $t_\nu < 0.5$ Myr [@2016MNRAS.461..845K] as the viscous evolution timescale at $R_0$, $t_\nu(R_0)$ is given by $R_0^2 \Omega/(\alpha c_s^2)$, where the orbital frequency $\Omega$ and the sound speed $c_s$ are both estimated at $R_0$ (taken to be 85 au for $\beta$ Pic). However, in systems in which the carbon ionisation fraction is smaller, and therefore the magnetorotational instability less effective [@2016MNRAS.461.1614K], the resulting $\alpha$ could be much smaller (e.g., $10^{-3}$), resulting in a much longer viscous timescale. To illustrate the implication of a change in $\alpha$ for the evolution of the disc, Fig. \[figevol\] shows the temporal evolution of a gas disc with a steady input of gas at a rate of $\dot{M}_{\rm CO}=10^{-2}$ M$_\oplus$/Myr at $R_0=90$ au, which is converted to a C$^0$ input rate by assuming an ionisation fraction $f$ of 0.1. The evolution is worked out semi-analytically using solutions found by @2011MNRAS.410.1007T, where we assumed the temperature to be $\propto R^{-\beta}$, with $\beta=1/2$. For a $\beta$ Pic-like system with $\alpha=0.5$ (solid black lines), the steady state is reached after only a few Myr (i.e., longer timescale lines all coincide on the steady state uppermost solid black line). For a model in which $\alpha=10^{-3}$ (green dashed lines), the steady state is not yet reached after 15 Myr (last epoch shown in the figure). In this case, the evolution is 500 times slower and much more carbon can accumulate in the system, reaching number densities $n_{{\rm C}^0}$ of $\sim 10^4$ cm$^{-3}$ at $R_0$ after 15 Myr. 
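As a rough consistency check, the viscous timescale $t_\nu(R_0)=R_0^2 \Omega/(\alpha c_s^2)$ quoted above can be evaluated directly. A minimal Python sketch follows; the gas temperature of $\sim 50$ K, the mean molecular weight $\mu=14$ and the $\beta$ Pic-like stellar mass of 1.75 M$_\odot$ are illustrative assumptions of ours, not values fixed by this paragraph:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.381e-23      # Boltzmann constant [J/K]
m_H = 1.673e-27      # hydrogen mass [kg]
M_sun = 1.989e30     # solar mass [kg]
au = 1.496e11        # astronomical unit [m]
yr = 3.156e7         # year [s]

def t_visc_Myr(R0_au, alpha, T=50.0, mu=14.0, M_star=1.75):
    """Viscous timescale t_nu = R0^2 * Omega / (alpha * c_s^2) in Myr.
    T, mu and M_star are assumed illustrative values (not from the text)."""
    R0 = R0_au * au
    Omega = math.sqrt(G * M_star * M_sun / R0**3)   # orbital frequency at R0
    cs2 = k_B * T / (mu * m_H)                      # isothermal sound speed squared
    return R0**2 * Omega / (alpha * cs2) / yr / 1e6

# alpha = 0.1 at R0 = 85 au -> t_nu of order 0.5 Myr (beta Pic-like)
# alpha = 1e-3 -> exactly 100x slower, since t_nu scales as 1/alpha
```

With these assumed values, $\alpha=0.1$ at 85 au gives $t_\nu$ of order 0.5 Myr, in line with the $t_\nu<0.5$ Myr quoted for $\alpha>0.1$, while $\alpha=10^{-3}$ lengthens the timescale by the corresponding factor of 100.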
We also plot the predicted analytical steady state values that we find to be [updating the solutions for the case $\beta=1/2$ by @1974MNRAS.168..603L to our case of a steady input] $$\label{Sig} \Sigma(R) = \frac{\dot{M}}{3 \pi \nu_0} \begin{cases} \left(\frac{R}{R_0}\right)^{\beta-3/2} & \text{for $R<R_0$}\\ \left(\frac{R}{R_0}\right)^{\beta-2} & \text{for $R>R_0$}\\ \end{cases} ,$$ where $\nu_0=\alpha c_s^2(R_0)/\Omega(R_0)$ is the viscosity at $R_0$, the temperature $T \propto R^{-\beta}$, and $\dot{M}$ is the input rate (for C$^0$, $\dot{M}=12/28 \dot{M}_{\rm CO} (1-f)$). @2012MNRAS.423..505M also derive this formula (in their appendices) and show that it is valid not only for $\beta=1/2$ but for all values of $\beta$. Therefore, at steady state the number density scales as $R^{3\beta/2 -3}$ for $R<R_0$ and as $R^{3\beta/2 -7/2}$ for $R>R_0$ as shown by the orange analytical steady state profiles plotted in Fig. \[figevol\]. Shielded discs {#shieldeddd} -------------- ![\[figphd\] C$^0$ photoionisation [solid, @1988ASSL..146...49V] and CO photodissociation cross sections between 900 and 1100 Å. The same energetic photons that can photodissociate CO can photoionise neutral carbon.](COphdCIphioncrosssection.pdf){width="8cm"} The shielded disc case happens when the C$^0$ density becomes high enough that C$^0$ gets optically thick to UV radiation. The C$^0$ photoionisation cross section is roughly constant (see Fig. \[figphd\]) and equal to $\sigma_i=1.6\times10^{-17}$ cm$^2$ [@1988ASSL..146...49V] from 900$\,$Å$\,$ (i.e., close to the Lyman break at 13.6eV) to $\sim 1100\,$Å$\,$ (i.e., 11.26eV, which is the ionisation potential of carbon). Fig. \[figphd\] shows that the photodissociation cross sections of CO are affected by the same photons (i.e., from 910 to 1075$\,$Å) that can ionise carbon. Calculating the C$^0$ column density $N_{{\rm C}^0}$ from $n_{{\rm C}^0}$ in Fig. 
\[figevol\] for the $\alpha=10^{-3}$ model at 15 Myr, we find that for a UV photon from the interstellar radiation field [IRF, which dominates the UV flux at the typical distances of planetesimal belts $R_0$, @2018ApJ...859...72M] travelling in the vertical direction, C$^0$ is indeed optically thick to UV radiation (we note that it is much more optically thick in the radial direction and the star UV field is thus not expected to contribute to CO photodissociation in these shielded discs). This means that if C$^0$ is blocking the UV photons, CO will photodissociate on longer timescales. Indeed, @2012MNRAS.427.2328R compute that the CO photodissociation timescale (in yr) in the presence of C$^0$ is $$\label{tco} t_{\rm CO}=120 \, {\rm yr} \times \exp(\sigma_i N_{{\rm C}^0})$$ where 120 yr is the photodissociation timescale without shielding, i.e., when only the IRF is taken into account. This shows that shielding of CO by C$^0$ starts happening for $N_{{\rm C}^0} \gtrsim 10^{16}$ cm$^{-2}$. We note that we considered shielding in the vertical direction; shielding in the radial direction is expected to be much higher because of the greater column density towards the star. Fig. \[figCIneed\] shows how much C$^0$ mass is needed for the CO photodissociation timescale to depart from 120 yr. Here, we assume $R_0=90$ au and a width of 70 au so that $M_{{\rm C}^0}=N_{{\rm C}^0} 2 \pi R \Delta R \mu_c m_H$, where $\mu_c$ is the mean molecular weight of carbon. For a C$^0$ mass of $\sim 10^{-3}$ M$_\oplus$, $t_{\rm CO}$ is 200 yr; as the timescale follows a very steep exponential profile, for $M_{{\rm C}^0}=1.2 \times 10^{-2}$ M$_\oplus$ it reaches $t_{\rm CO}=10^5$ yr.
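The numbers in this paragraph can be reproduced from Eq. \[tco\] and the ring geometry given above ($R_0=90$ au, $\Delta R=70$ au). A short sketch, with function names of our choosing:

```python
import math

sigma_i = 1.6e-17      # C0 photoionisation cross section [cm^2]
M_earth = 5.972e27     # Earth mass [g]
m_H = 1.673e-24        # hydrogen mass [g]
au_cm = 1.496e13       # astronomical unit [cm]

def N_C0(M_C0_earth, R0_au=90.0, dR_au=70.0, mu_c=12.0):
    """Vertical C0 column density [cm^-2] for a mass M_C0 (in Earth masses)
    spread over a ring of radius R0 and width dR."""
    area = 2.0 * math.pi * (R0_au * au_cm) * (dR_au * au_cm)
    return M_C0_earth * M_earth / (area * mu_c * m_H)

def t_CO_yr(M_C0_earth):
    """CO photodissociation timescale with C0 shielding (Eq. tco)."""
    return 120.0 * math.exp(sigma_i * N_C0(M_C0_earth))

# M_C0 ~ 1e-3   M_earth -> t_CO ~ 200 yr (shielding just becoming important)
# M_C0 ~ 1.2e-2 M_earth -> t_CO ~ 1e5 yr (CO accumulates ~1000x longer)
```

The exponential dependence is what makes the effect so abrupt: an order-of-magnitude increase in C$^0$ mass turns a factor-of-two delay into a factor-of-$10^3$ delay.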
For the latter case, this means that the CO mass expected from second generation models such as described in @2017MNRAS.469..521K can be more than $t_{\rm CO}/120 \sim 10^3$ times larger than previously thought and one can accumulate CO masses of the order of the mass content found in a protoplanetary disc such as TW Hydra [@2013ApJ...776L..38F]. ![\[figCIneed\] CO photodissociation timescale (in yr) as a function of the C$^0$ mass (M$_{{\rm C}^0}$ in M$_\oplus$) in the gas disc. This timescale depends exponentially on C$^0$ mass and a small amount of C$^0$ can create a large accumulation of CO that is then very hard to photodissociate because the UV photons that could potentially do that are mostly absorbed by C$^0$ beforehand (see also Fig. \[figphd\]). We assumed a disc located at 90 au of width 70 au. The dashed line is for a C$^0$ mass of $10^{-2}$ M$_\oplus$ for which the photodissociation timescale is already more than 100 times longer than when just assuming the interstellar radiation field ($\sim$120 yr).](tphVSMC1.pdf){width="8cm"} Therefore, the high CO mass content ($\sim0.04$ M$_\oplus$) found by @2017ApJ...849..123M for HD 131835 could be compatible with a gas production of secondary origin for high enough C$^0$ masses in the disc as summarised by the sketch presented in Fig. \[figcartoon\]. This will be checked in more detail in Sec. \[discu\] presenting some new modelling of this C$^0$ shielding effect but first let us check the spatial distribution of the dust (Sec. \[obs\]) and gas from new ALMA observations and whether there is enough C$^0$ observed to produce this shielding (Sec. \[gas\]). 
![\[figcartoon\] Sketch of the secondary gas model, where 1) gas is released from the solid bodies in a Kuiper-belt like disc, 2) C$^0$ is produced by photodissociation of CO and viscously spreads towards the inner region, 3) the C$^0$ gas disc can become massive enough that it absorbs most photons from the interstellar radiation field that could have photodissociated CO, so that CO accumulates and can reach protoplanetary CO mass levels. Note that carbon is also present in the green region and that it is more extended outwards than CO.](cartoonbisb.pdf){width="8cm"} ALMA continuum observation of HD 131835 {#obs} ======================================= HD 131835 was observed by ALMA in band 8 on 22$^{\rm nd}$ March 2017 as part of the cycle 4 project 2016.1.0.01253. The observations were carried out using 42 antennas with baselines ranging from 15 to 160m. The total observing time on source (excluding overheads) was 14.1min and the mean PWV was 0.57mm. Out of the four spectral windows provided by the ALMA correlator, three focused on observing the dust continuum with 128 channels centred at 480.201, 482.159 and 494.201 GHz (bandwidth of 2 GHz). The fourth spectral window targeted the CI ($^{3}$P$_0$-$^{3}$P$_1$) transition at 492.160651 GHz (i.e., a rest wavelength of 609.135 $\mu$m) with a higher spectral resolution of 488.281 kHz (0.297 km/s at the rest frequency of the line) over 3840 channels (i.e., a bandwidth of 1.875 GHz) centred at 492.201 GHz. Titan was used as a flux calibrator, while J1427-4206 and J1454-3747 were used as bandpass and phase calibrator, respectively. We used the pipeline provided by ALMA to apply calibrations. Dust continuum observations --------------------------- Fig. \[figcont\] shows the ALMA dust continuum map of HD 131835 at $\sim$487 GHz (i.e., 616 $\mu$m). All the analysis that follows uses the visibilities but we show the image to introduce the data to the reader and provide a first qualitative idea of the observation.
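As a sanity check on the quoted numbers, both the velocity resolution of the \[CI\] spectral window and the physical beam size follow from simple conversions (all input values are taken from the text):

```python
c = 2.998e5  # speed of light [km/s]

# Channel width in velocity: dv = c * dnu / nu at the CI rest frequency
dnu = 488.281e3        # channel width [Hz]
nu_CI = 492.160651e9   # CI 3P0-3P1 rest frequency [Hz]
dv = c * dnu / nu_CI   # -> ~0.297 km/s

# Angular to physical size: 1 arcsec at a distance of d pc subtends d au
d_pc = 145.0
beam_au = (0.89 * d_pc, 0.68 * d_pc)  # -> ~(129, 99) au, quoted as ~129 x 97 au
```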
We deconvolved the image using the CLEAN algorithm and Briggs weighting with a robust parameter of 0.5. This yields a synthesised beam of 0.89 $\times$ 0.68 arcsec (i.e., $\sim 129 \times 97$ au, assuming a distance of 145 pc from Earth). The peak signal-to-noise ratio (S/N) in the image is $\sim$32. The image rms around the disc is $\sigma=250\,\mu$Jy/beam. The total flux in the Briggs-weighted image is 17.9$\pm$2.1 mJy, where the error comes from the image noise and flux calibration uncertainties ($\sim$ 10%) added in quadrature. ![\[figcont\] ALMA dust continuum map of HD 131835 at 487 GHz (band 8). The image is Briggs-weighted (with robust=0.5). The ellipse shows the beam of 0.89 $\times$ 0.68 arcsec and the North and East directions are indicated by the two arrows. The x- and y-axes indicate the offset from the stellar position in RA and Dec (in arcsec).](Image_hd131835_continuum_br05b.pdf){width="8cm"} The bulk of the emission is within 1 arcsec (i.e., $\sim$145 au) and appears consistent with a highly inclined axisymmetric disc as found from previous resolved observations of the disc at different wavelengths . The disc parameters such as inclination, PA, and disc mass are derived in the next subsection \[dustfit\] using an MCMC approach to fit a dust disc model to the observed visibilities. Fit of the continuum data {#dustfit} ------------------------- ![image](cornerreduce800.pdf){width="15cm"} We now compare the ALMA visibilities ($V_{\rm obs}$) for the dust continuum to an axisymmetric dust model. The disc model is parametrized as a ring of radius $R_0$ with a Gaussian radial profile of full width at half maximum (FWHM) $W_{\rm ring}$. 
We also assume a Gaussian vertical profile of aspect ratio $h=H/R$ ($H$ being the scaleheight) such that the dust density distribution is $$\label{dustmodel} \rho_d(R,Z)=\rho_0 \exp\left(-\frac{(R-R_0)^2}{2\sigma_d^2}\right) \exp\left(-\frac{Z^2}{2H^2} \right),$$ where $\rho_0$ is the density at $R_0$ in the midplane and $\sigma_d=W_{\rm ring}/(2 \sqrt{2 \log 2})$. We then use the radiative transfer code RADMC-3D [@2012ascl.soft02015D] to compute images for a given dust model [as in @2016MNRAS.460.2933M] and GALARIO [@2018MNRAS.tmp..402T] to convert them into model visibilities ($V_{\rm mod}$) that can be compared to the data in an MCMC fashion. For the dust optical properties, we assume an astrosilicate composition with a density of 2.7 g$\,$cm$^{-3}$ [@2003ApJ...598.1017D] and use a mass-weighted mean opacity of $\kappa=2.5$ cm$^2$g$^{-1}$ at 610 $\mu$m that is computed using the Mie theory code from @1983asls.book.....B. The size distribution is assumed to be between a minimum size of 2.5 $\mu$m [corresponding to the blow-out size derived from @1979Icar...40....1B] and a maximum size of 1 cm (larger grains do not contribute to the flux observed in band 8) with a power law index of -3.5, which is similar to what is predicted analytically for debris discs [@1969JGR....74.2531D] or from numerical simulations. ![\[figuv\] Real part (top) and Imaginary part (bottom) of the continuum data visibilities (black dots with error bars) along with the best-fit model (overplotted red line) described in Sec. \[dustfit\] and whose parameters are given in Table \[tab1\].
The uv-distances are given in units of observing wavelength and visibilities in Jansky.](uvplot.png){width="8cm"}

  Parameters                    Best-fit values
  ----------------------------- -------------------------
  $R_0$ (au)                    $90.5^{+3.8}_{-3.7}$
  $M_{\rm dust}$ (M$_\oplus$)   $0.71^{+0.03}_{-0.05}$
  $W_{\rm ring}$ (au)           $75.5^{+9.9}_{-15.2}$
  $h$                           $0.16^{+0.06}_{-0.09}$
  $i$ (deg)                     $79.4^{+6.8}_{-5.2}$
  PA (deg)                      $59^{+2}_{-1.9}$
  RA offset (”)                 $-0.09^{+0.02}_{-0.02}$
  Dec offset (”)                $-0.07^{+0.02}_{-0.02}$

  : Table describing the best-fit parameters of the dust modelling using an MCMC method (see Sec. \[dustfit\]). We list the median $\pm$ uncertainties, which are based on the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles of the marginalised distributions.[]{data-label="tab1"}

The remaining free parameters, which are fitted in our model, are: $R_0, W_{\rm ring}, h, {\rm inclination}\,(i)$, PA and $M_{\rm dust}$, where the latter is the total dust mass up to 1 cm bodies. We also allow an offset in RA (offset x) and Dec (offset y) to account for astrometric uncertainties in the ALMA data. We use a Bayesian MCMC approach to constrain the 8 free parameters of the model. We sample the parameter space by using the emcee module [see @2010CAMCS...5...65G; @2013PASP..125..306F for the details of the method]. We assume that the priors are uniform; the posterior distribution is then given by the product between the prior distribution function and the likelihood function, which is assumed to be $\propto \exp(-\chi^2/2)$, with $\chi^2= \sum (V_{\rm obs}-V_{\rm mod})^2/\sigma_V^2$ ($\sigma_V$ being the uncertainty on the visibilities). We ran the MCMC with 100 walkers for 1200 steps after the burn-in period. The posterior distributions obtained are shown in Fig. \[figcornercont\] for the four most important parameters and we summarise all the best-fit parameters in Table \[tab1\].
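The Gaussian ring of Eq. \[dustmodel\] is simple to evaluate. A minimal sketch using the best-fit values of Table \[tab1\] (illustrative only; this is not the fitting code, the normalisation $\rho_0$ is arbitrary, and the function name is ours):

```python
import math

def rho_dust(R, Z, R0=90.5, W_ring=75.5, h=0.16, rho0=1.0):
    """Gaussian ring dust density (Eq. dustmodel); R and Z in au.
    The scaleheight is H = h * R with aspect ratio h = H/R."""
    sigma_d = W_ring / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    H = h * R
    return (rho0 * math.exp(-(R - R0)**2 / (2.0 * sigma_d**2))
                 * math.exp(-Z**2 / (2.0 * H**2)))

# In the midplane (Z = 0): rho = rho0 at R0, and rho0/2 at R = R0 +/- W_ring/2,
# since sigma_d is defined so that W_ring is the FWHM
```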
The inclination ($\sim$ 79 deg) and PA ($\sim$ 59 deg) found by our MCMC calculations are consistent with other studies [@2015ApJ...815L..14H; @2015ApJ...814...42M]. The offsets in x and y are also consistent with the ALMA astrometric uncertainties ($\sim$0.1”, see ALMA technical handbook[^4]). The disc radius $R_0$ is constrained to be around 90 au. The width (FWHM) of the ring that we derive from our model is $\sim 76$ au (0.52”), which is however smaller than the beam size and thus must be understood as having large uncertainties. A strict upper limit of 95 au can, however, be derived at the 99.7% level. From these results, the bulk of the parent belt would be between $\sim$50 and 140 au. @2015ApJ...802..138H observed HD 131835 with T-ReCS in the mid-IR and showed that the best fit is for a continuous disc extending from $\sim$40 au to 350 au plus two rings at $\sim$124 and $\sim$260 au (after correcting for the new GAIA distance). The two rings may be made of very small micron-sized grains (see discussion) and are thus not expected to be seen in the ALMA image. The continuous disc observed with T-ReCS extends farther out than the emission detected with ALMA, because mid-IR observations are sensitive to smaller grains that have very eccentric orbits (owing to radiation pressure) and create an extended halo to which ALMA is not sensitive. The disc was also observed with SPHERE, leading to the detection of 3 concentric rings at $\sim 116, 78,$ and $46$ au. The same study also published an ALMA band 6 continuum image that looks similar to Fig. \[figcont\] at a lower resolution, and also found that the ALMA emission is more compact than at shorter wavelengths. The dust mass derived from our model is $\sim$0.7 M$_\oplus$. This is consistent with the fit by @2015ApJ...814...42M who found $0.65\pm0.25$ M$_\oplus$ (after correcting for the new GAIA distance). These estimates depend on the assumptions made for the composition and size distribution of the bodies whereas the structure does not.
The estimate of the aspect ratio $h$ is limited by the relatively low resolution of the observation, but we find a best fit of 0.16 and that it is smaller than 0.29 at the 99.7% confidence level. It may, however, be much smaller than 0.16, as observed with SPHERE in the near-IR. Finally, we show the best-fit model along with the data in the visibility plane (azimuthally averaged assuming $i=79.4$ degrees and PA = 59 degrees) in Fig. \[figuv\] [in a similar way as in @2011ApJ...740...38H]. We also subtracted our best fit model from the data in the image plane and checked that the remaining residuals were all within the expected noise level. ALMA \[CI\] observation of HD 131835 {#gas} ==================================== C$^0$ gas detection ------------------- ![image](mom0_hd131835briggs05_CI.pdf){width="12cm"} To obtain the CI $^{3}$P$_0$-$^{3}$P$_1$ (at a rest wavelength of 609.135 $\mu$m) emission map, we first subtract the continuum emission directly from the visibilities (with the task uvcontsub in CASA) using only channels where no \[CI\] emission is expected. Fig. \[figc1\] shows the moment-0 (i.e., spectrally integrated) \[CI\] detection around HD 131835. Once again, we show the image to familiarise the reader with the qualitative features of the observation but we only use the observed visibilities in the rest of the paper when it comes to fitting models (see Sec. \[gasfit\]). We used the CLEAN algorithm to deconvolve the image using Briggs weighting (with robust=0.5). The synthesised beam (shown as an ellipse in Fig. \[figc1\]) has a size of 0.89 $\times$ 0.63 arcsec. We find that the 1$\sigma$ noise level near the disc in the moment-0 image is 66 mJy$\,$km/s/beam so that the peak S/N is 18. The total integrated flux is 2.8$\pm$0.4 Jy$\,$km/s, which was measured by integrating over a region large enough that it contains all disc emission but small enough that it is not affected by noise contamination (an ellipse of size 3 $\times$ 2.5 arcsec).
The error takes account of the noise in the image and flux calibration uncertainties that were added in quadrature. ![\[figc1radpro\] Radial profile of \[CI\] (blue solid line) and continuum (green dashed line) obtained by vertically averaging the moment-0 image shown in Fig. \[figc1\] and the continuum image from Fig. \[figcont\], respectively. We also look for asymmetries in the gas profile by overplotting the SW side of the gas disc on top of the NE side (red dotted line). The errors shown as transparent shaded area are 2$\sigma$. The profiles are normalised by their respective maximum.](radprofs_hd131835briggs05_CIb.pdf){width="8cm"} In Fig. \[figc1radpro\], we show the radial profile of the \[CI\] emission along the midplane. The flux is integrated vertically within $\pm$100 au from the rotated moment-0 image. The 2$\sigma$ integrated noise is shown as a blue shaded area around the emission profile. To look for any significant asymmetries, we overplotted the SW side of the disc on its NE side (red dotted line). This shows that there are no obvious signs of asymmetry within the allowed error bars. This plot also confirms that most of the emission is within 200 au. We also compare the gas disc emission to the continuum emission (green dashed line) and find that they are similar. ![\[figpv\] PV diagram of the \[CI\] gas disc around HD 131835. The white lines correspond to the locations and radial velocities at which gas is expected to emit if on a circular orbit at 40 and 200 au, respectively.](PV_hd131835briggs05_CIb.pdf){width="8cm"} Finally, we have access to the gas kinematics by looking at the emission in the different channels that show the emission at a given radial velocity ($\pm \Delta v$, the channel width). In order to exploit the extra information in velocity it is convenient to use a position-velocity (PV) diagram that is shown in Fig. \[figpv\]. 
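A PV diagram of this kind is read against the projected Keplerian velocity expected for gas on circular orbits. A sketch of that computation follows, assuming the 1.77 M$_\odot$ stellar mass and the $\sim$79.4 deg inclination used elsewhere in this paper (the function name is ours):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30   # solar mass [kg]
au = 1.496e11      # astronomical unit [m]

def v_los_max(x_au, R_au, M_star=1.77, i_deg=79.4):
    """Maximum line-of-sight velocity [km/s] at projected offset x along the
    disc major axis, for gas on a circular orbit of radius R (|x| <= R)."""
    v_kep = math.sqrt(G * M_star * M_sun / (R_au * au))  # circular speed [m/s]
    return v_kep * math.sin(math.radians(i_deg)) * (x_au / R_au) / 1e3

# The curve peaks at the ansa (x = R): ~6.2 km/s at 40 au, ~2.8 km/s at 200 au
```

Emission confined between the 40 au and 200 au curves in the PV plane therefore corresponds to gas orbiting between those radii.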
This is produced by rotating each slice of the \[CI\] data cube by the PA of the disc and averaging vertically over $\pm$100 au. Each channel then corresponds to a horizontal line on the plot. The white lines shown in Fig. \[figpv\] correspond to the locations and radial velocities at which gas is expected to emit if on a circular orbit at 40 and 200 au [assuming a 1.77 M$_\odot$ star, @2015ApJ...814...42M]. The PV-diagram thus tells us that the bulk of the gas disc is between 40 and 200 au and that a cavity or a depleted zone may be present interior to 40 au, which we will test further by comparing to models in the next subsection \[gasfit\]. Integrating the PV diagram along its x-axis, we obtain the emission spectrum shown in Fig. \[figspec\] at the channel width resolution ($\Delta v=0.297$ km/s). The thin vertical line corresponds to a velocity of 3.57 km/s (which is the best-fit value found from fitting the CI cube, see Sec. \[gasfit\]) and is close to the centre of the line. ![\[figspec\] Spectrum of the \[CI\] line detected around HD 131835 at a resolution of 0.297 km/s. The thin vertical line corresponds to a velocity of 3.57 km/s.](spectrum.pdf){width="8cm"} Fit of the \[CI\] data {#gasfit} ---------------------- To model the gas data, we use the same approach as described in the previous Sec. \[dustfit\] for the dust continuum. The only difference is that we now have to produce an image cube over the different frequency channels (tracing different radial velocities of the gas). We also use RADMC-3D to produce the \[CI\] gas image and convert it to visibilities using GALARIO [@2018MNRAS.tmp..402T], which can then be compared to the ALMA visibility data cube in an MCMC fashion. We note that the predicted \[CI\] emission could be in the non-local thermal equilibrium (non-LTE) regime, in which case we would have to add the collider density as an additional free parameter and take account of the radiation from the dust impinging on the gas.
We first used the radiative transfer code LIME that can handle non-LTE situations to check whether LTE is a good assumption for our case. We find that a good fit of the \[CI\] line only happens for C$^0$ masses that are high enough that LTE is a good assumption. Indeed, taking the worst case scenario which is that electrons are only formed through carbon photoionisation and not from other metals, with a low ionisation fraction, down to $10^{-2}$, and are the only colliders, we find that LTE is still a good approximation. In Sec. \[dali\], we give estimates of the ionisation fraction in HD 131835 from a photodissociation region code and show that the ionisation fraction depends on the exact content in carbon and it can vary from $10^{-5}$ to $10^{-1}$. However, we find that including hydrogen collisions coming from H$_2$O photodissociation (if water is also released together with CO) would mean that LTE is reached even in the absence of electrons (as then hydrogen atoms can play the role of colliders). Indeed, the collision rates for hydrogen and C$^0$ are very similar to those of electrons with C$^0$. For a temperature between 10 and 200K, the collision rates between H and C$^0$ are $\sim 1.7 \times 10^{-10}$ cm$^3$/s, which translates to a critical H density of $\sim 300$ cm$^{-3}$ to be in LTE. This hydrogen density is small compared to what is expected in shielded discs, even for the case where only a small fraction of hydrogen is released together with CO (see Sec. \[comcomp\]). We thus assume that LTE is a good approximation even for very low carbon ionisation fractions. ![\[figtau\] \[CI\] optical depth of the best-fit model. The blue contour shows the $\tau=1$ level.](tauCImom0.png){width="8cm"} After investigation with LIME (and RADMC-3D), we also find that the \[CI\] line is optically thick for all the cases that match the observed data with an optical thickness $>1$ within 1”, where most of the emission is observed (see Fig. \[figtau\]). 
The line being optically thick means that we will be able to obtain a good constraint on the gas temperature from the line fitting. As one LIME model takes about 10 minutes to compute, exploring a large parameter space in non-LTE would in any case be impractical; we instead use RADMC-3D in LTE, which can compute image cubes in a few seconds and is thus suitable for an MCMC exploration. We start by fitting for the inclination $i$, PA and offsets to check that they are similar to the dust disc parameters (see Fig. \[figcorngasi\]). We also fit for the star radial velocity, which has been shown to be between 2 and 4 km/s (heliocentric frame) by previous studies [@2006ApJ...644..525M; @2015ApJ...814...42M]. By comparing the posterior distributions of the gas and dust disc, we find that the inclination and PA of both discs are very similar. The astrometric offsets are also consistent between the two data sets. The best fit is for a stellar radial velocity around 3.57$\pm$0.1 km/s (heliocentric frame), which is consistent with what has been found by previous studies [@2006ApJ...644..525M; @2015ApJ...814...42M] and shows that the gas is comoving with the star, as expected.

### Gaussian fit {#gaussfit}

![image](cornergas10000gauss.pdf){width="14cm"}

As the data does not show any hint of asymmetry, we assume an axisymmetric gas model. We first fit the data with a ring having a Gaussian radial profile, as for the dust. The surface density profile of the gas thus follows $$\Sigma(R) = \Sigma_0 \exp\left(-\frac{(R-R_0)^2}{2\sigma_g^2}\right) ,$$ where $\Sigma_0$ is the surface density at $R_0$ and $\sigma_g=\Delta R/(2 \sqrt{2 \log 2})$, $\Delta R$ being the width (FWHM) of the gas disc. The scale height is fixed by the temperature such that $H=c_s/\Omega$, with the sound speed $c_s=\sqrt{kT/(\mu m_H)}$ and $\Omega$ the orbital frequency. The mean molecular mass $\mu$ could be much higher than in protoplanetary discs as the gas mass would be dominated by carbon and oxygen in H$_2$-depleted secondary gas discs.
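As a concrete sketch of this parametrisation (the $R_0$, FWHM and $T_0$ defaults are the best-fit values reported below, and $\mu=14$ is the value adopted in the next paragraph):

```python
import math

K_B, M_H = 1.381e-23, 1.673e-27          # Boltzmann constant, hydrogen mass [SI]
G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11

def sigma_gauss(r_au, sigma0, r0=97.0, fwhm=86.0):
    """Gaussian surface density Sigma(R), in the same units as sigma0."""
    sg = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> standard deviation
    return sigma0 * math.exp(-((r_au - r0) ** 2) / (2.0 * sg ** 2))

def scale_height_au(r_au, t_kelvin, mstar_msun=1.77, mu=14.0):
    """H = c_s / Omega for an H2-depleted gas of mean molecular weight mu."""
    cs = math.sqrt(K_B * t_kelvin / (mu * M_H))                    # sound speed [m/s]
    omega = math.sqrt(G * mstar_msun * M_SUN / (r_au * AU) ** 3)   # orbital frequency [1/s]
    return cs / omega / AU

h0 = scale_height_au(97.0, 28.0)   # scale height at R0 for the best-fit temperature
```

With the best-fit $T_0$, $H\approx3$ au at $R_0=97$ au, i.e. an aspect ratio $H/R$ of only $\sim0.03$, a direct consequence of the high mean molecular weight.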
We have thus chosen $\mu=14$ as in @2017MNRAS.469..521K. The temperature profile is assumed to be a double power law [motivated from @2016MNRAS.461..845K] defined by $$\label{Teq} T(R) = \begin{cases} T_0 \left(\frac{R}{R_0}\right)^{-\beta_1^t} & \text{for $R<R_0$}\\ T_0 \left(\frac{R}{R_0}\right)^{-\beta_2^t} & \text{for $R>R_0$}\\ \end{cases}$$ The free parameters of this gas model are thus $\Sigma_0$, $\Delta R$, $T_0$, $\beta^t_1$, $\beta^t_2$. The results are shown in Fig. \[figcorngasg\] and the best-fit parameters are listed in Table \[tab2\]. For this Gaussian radial profile, we find that the location of the gas disc $R_0 \sim 97^{+17}_{-8}$ au is consistent with that of the dust disc ($\sim 90$ au). The FWHM of the gas disc is best fit by $86^{+18}_{-10}$ au, which is slightly larger than that derived for the dust disc ($\sim$ $75^{+10}_{-15}$ au) but is still consistent with the same radial extension within error bars. The temperature at $R_0$ is best fit by $T_0 \sim 28$ K and a $-1$ radial power law. The best-fit surface density of C$^0$ at $R_0$ is $1.7 \times 10^{-5}$ kg/m$^2$, corresponding to a column density of $\sim 10^{17}$ cm$^{-2}$, which is enough to start shielding CO (as seen in Sec. \[shield\]). This can also be translated into a number density $\Sigma_0/(2 H \mu m_H) \sim 10^{3}$ cm$^{-3}$, assuming the best-fit $T_0$ of $\sim 28$ K to derive $H$.

### Double power law fit {#lawfit}

In the traditional unshielded secondary gas scenario, CO is expected to be colocated with the dust, because CO is released from planetesimals that are traced by the dust seen in the ALMA continuum observations and photodissociates quickly; this is not necessarily the case for shielded discs (as then CO can viscously spread, see Sec. \[refined\]).
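The conversions from $\Sigma_0$ to a C$^0$ column and midplane number density quoted in these fits can be checked numerically; $\mu_c=12$ is the molecular weight of atomic carbon, and the other values are the Gaussian best-fit parameters:

```python
import math

K_B, M_H = 1.381e-23, 1.673e-27
G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11

sigma0 = 1.7e-5                          # best-fit C0 surface density at R0 [kg/m^2]
mu_c, mu, t0, r0_au = 12.0, 14.0, 28.0, 97.0

# Vertical column density of C0 atoms: N = Sigma0 / (mu_c m_H)
n_col = sigma0 / (mu_c * M_H) / 1e4      # [cm^-2]

# Midplane number density: n = Sigma0 / (2 H mu m_H), with H = c_s / Omega
cs = math.sqrt(K_B * t0 / (mu * M_H))
omega = math.sqrt(G * 1.77 * M_SUN / (r0_au * AU) ** 3)
h = cs / omega                            # scale height [m]
n_mid = sigma0 / (2.0 * h * mu * M_H) / 1e6   # [cm^-3]
```

This recovers $N_{{\rm C}^0}\sim10^{17}$ cm$^{-2}$ and $n\sim10^{3}$ cm$^{-3}$, the values quoted above; the same arithmetic with $\Sigma_0=1.2\times10^{-5}$ kg/m$^2$ gives the $\sim6\times10^{16}$ cm$^{-2}$ quoted for the double power-law fit.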
Colocation with the dust is also not necessarily the case for carbon (neutral or ionised), which is not destroyed by photo-chemical processes and can therefore evolve for longer, and may even spread viscously towards the central star as suggested by @2013ApJ...762..114X [@2016MNRAS.461..845K]. To investigate the plausible larger extent of C$^0$, we now assume a double power law radial profile for the gas density splitting at $R_0$. This model is motivated by the expected viscous evolution of C$^0$ (see Fig. \[figevol\]). To be consistent, we also use the double power law temperature profile splitting at $R_0$ described by Eq. \[Teq\]. We define the surface density at $R_0$ to be $\Sigma_0$. The gas surface density is thus given by $$\Sigma(R) = \begin{cases} \Sigma_0 \left(\frac{R}{R_0}\right)^{-\beta_1^s} & \text{for $R<R_0$}\\ \Sigma_0 \left(\frac{R}{R_0}\right)^{-\beta_2^s} & \text{for $R>R_0$}\\ \end{cases}$$ and the temperature profile follows Eq. \[Teq\].

![image](cornergas10000.pdf){width="18cm"}

This leaves us with 10 free parameters: $R_0$, $\Sigma_0$, $T_0$, $\beta_1^s$, $\beta_2^s$, $\beta_1^t$, $\beta_2^t$, $i$, PA and $v_\star$. We explore the parameter space with the same MCMC procedure as presented for the continuum and for the previous Gaussian gas profile, and the results are shown in Fig. \[figcorngas\]. The best-fit parameters are also summarised in Table \[tab2\].
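The broken power-law parametrisation used for both $\Sigma(R)$ and $T(R)$ can be written compactly; a sketch with the best-fit medians from Table \[tab2\] as defaults (the function name is ours):

```python
def broken_power_law(r_au, y0, r0, beta_in, beta_out):
    """Double power law splitting at r0: y0 (R/r0)^(-beta_in) inside, ^(-beta_out) outside."""
    beta = beta_in if r_au < r0 else beta_out
    return y0 * (r_au / r0) ** (-beta)

# Surface density and temperature with the double power-law best-fit medians
def sigma_c0(r_au):
    return broken_power_law(r_au, 1.2e-5, 118.0, 3.6, 5.3)   # [kg/m^2]

def temp(r_au):
    return broken_power_law(r_au, 30.0, 118.0, -0.2, 1.9)    # [K]
```

Note that the best-fit $\beta_1^t=-0.2$ makes the inner temperature profile nearly flat (slightly *decreasing* inwards), which is why $T\sim30$ K characterises most of the disc.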
------------------------------- --------------------- ----------------------
Parameters                      Gaussian              Double power law
$R_0$ (au)                      $97^{+17}_{-8}$       $118^{+19}_{-18}$
$\Sigma_0/10^{-5}$ (kg/m$^2$)   $1.7^{+2.1}_{-0.4}$   $1.2^{+2.2}_{-0.7}$
$T_0$ (K)                       $28^{+4}_{-4}$        $30^{+5}_{-4}$
$\Delta R$ (au)                 $86^{+18}_{-10}$      -
$\beta_1^s$                     -                     $3.6^{+3}_{-3}$
$\beta_2^s$                     -                     $5.3^{+3}_{-4}$
$\beta_1^t$                     $0.9^{+0.4}_{-0.6}$   $-0.2^{+0.4}_{-0.3}$
$\beta_2^t$                     $1.2^{+0.9}_{-0.6}$   $1.9^{+0.7}_{-1.6}$
$i$ (deg)                       -                     $77^{+3.1}_{-2.4}$
PA (deg)                        -                     $59^{+1}_{-1}$
$v_\star$ (km/s)                -                     $3.57^{+0.1}_{-0.1}$
------------------------------- --------------------- ----------------------

: Table describing the best-fit parameters of the Gaussian and double power law \[CI\] gas modelling using an MCMC method (see Sec. \[gasfit\]). We list the median $\pm$ uncertainties, which are based on the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles of the marginalised distributions.[]{data-label="tab2"}

We find that the gas disc likely peaks in surface density at $R_0 \sim 118$ au, but the error bars are large ($\pm 19$ au) because of the low spatial resolution and a slight degeneracy with $\Sigma_0$, the surface density at $R_0$. Indeed, if an optically thick gas disc is moved to a greater distance from the star but with a smaller surface density, the integrated flux over the beam can still be the same as for the original disc. The best-fit surface density at $R_0$ is $1.2^{+2.2}_{-0.7} \times 10^{-5}$ kg/m$^2$, corresponding to a column density of $\sim 6^{+11}_{-3} \times 10^{16}$ cm$^{-2}$, which is enough to start shielding CO (as seen in Sec. \[shield\]). This can also be translated into a number density $\Sigma_0/(2 H \mu m_H) \sim 0.6^{+1.1}_{-0.3} \times 10^{3}$ cm$^{-3}$, assuming the best-fit $T_0$ of $\sim 30$ K and $R_0\sim 110$ au to derive $H$. The slope of the surface density for $R>R_0$, i.e. $\beta_2^s$, is not well constrained but is best fit by a steep value close to 5.
This means that there may not be much mass hiding beyond $R_0$. Despite the uncertainties, the probable steepness of the slope is helpful for constraining the models, as it suggests that the gas disc has not reached steady state. Indeed, the outer surface density slope might be too steep compared to a steady state decretion profile. At steady state, we expect (see Eq. \[Sig\]) the accretion profile ($R<R_0$) surface density to scale as $R^{-n}$ and the decretion profile ($R>R_0$) as $R^{-n-1/2}$, where $n$ is the radial scaling in viscosity, i.e., $\nu \propto R^n$ [@2012MNRAS.423..505M]. For a temperature profile in $R^{-\beta}$, as $n=1.5-\beta$, it means that for $\beta=1/2$, $\Sigma \propto R^{-1}$ for $R<R_0$ and $\Sigma \propto R^{-3/2}$ for $R>R_0$. If we assume that $\beta_2^t=3/2$, the steady state surface density decretion profile should scale as $R^{-1/2}$; therefore, if a profile as steep as $\Sigma(R>R_0) \propto R^{-5}$ is confirmed, this would imply that steady state is not reached, which is most likely the case here (see also section \[shield\]). But we note that within the error bars, we cannot entirely rule out a steady state profile. Only higher resolution observations would settle this matter. For $R<R_0$, $\beta_1^s$ is also not well constrained, except to be $>-2.5$ (at the 99.7% level). This is because the line becomes optically thick quite rapidly and thus adding more mass in the inner region does not increase the emergent intensity. It is therefore hard to tell from the inner accretion profile whether the gas disc is at steady state, for which higher spatial resolution observations would again be necessary. We have also run another MCMC simulation where we added another free parameter $R_{\rm min}$ (on top of the double power law model) below which the surface density becomes zero. We find that an upper limit of $R_{\rm min}=35$ au can be set, meaning that the gas must extend inwards to at least 35 au to explain the data.
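The steady-state scalings invoked above can be encoded in a few lines (following @2012MNRAS.423..505M, with $n=3/2-\beta$ the viscosity index for $\nu\propto R^n$ and $T\propto R^{-\beta}$):

```python
def steady_state_slopes(beta_t):
    """Return (accretion, decretion) surface-density power-law indices for a
    temperature profile T ~ R^-beta_t, i.e. viscosity nu ~ R^n with n = 1.5 - beta_t."""
    n = 1.5 - beta_t
    return -n, -(n + 0.5)   # Sigma ~ R^-n inside R0, R^-(n+1/2) outside

# beta_t = 1/2 gives the Sigma ~ R^-1 / R^-3/2 pair quoted in the text;
# beta_t = 3/2 gives a decretion slope of only -1/2, far shallower than the
# best-fit outer slope of ~ -5, hence the suggestion that steady state is not reached
acc, dec = steady_state_slopes(0.5)
```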
However, values of $R_{\rm min}<35$ au are not ruled out by the fitting process and it may well be that the gas disc extends further inwards (an observation at higher resolution would also be needed to constrain that better). The gas disc thus spans at least the range 35-100 au; it is slightly more extended than the dust disc in the inner regions (taking into account the error bars of the dust model) and may be so in the outer regions as well. This favours a scenario in which the gas has had time to spread viscously since it was produced. The temperature $T$ seems to be well constrained to around 30 K ($\pm4$ K) in the whole disc (at least for $R<R_0$ where $\beta_1^t$ is close to zero). A good constraint arises because the line is optically thick (see Fig. \[figtau\]) and the observed emission depends only on the temperature and the surface area of emission, which is fixed by the geometry of the disc. In the outer regions, the temperature drops with $\beta_2^t>0$. We compare the best-fit model to the observations in visibility space by plotting the moment-0 visibilities in Fig. \[figuvgas\] (i.e., the visibilities summed over all channels), even though the best fit was obtained by fitting each channel separately. The model reproduces the overall shape of the visibilities well, but leaves a few significant residuals probably linked to the detailed structure of the emission.

![\[figuvgas\] Real part (top) and Imaginary part (bottom) of the \[CI\] gas moment-0 (integrated across the different channels) visibilities (black dots with errors bars) along with the best-fit model (overplotted red line) described in Sec. \[gasfit\] and whose parameters are given in Table \[tab2\]. The uv-distances are given in units of observing wavelength and moment-0 visibilities in Jansky km/s.](uvplotgasmom0.png){width="8cm"}

This physical modelling of the \[CI\] data allows us to place some constraints on the neutral carbon mass around HD 131835.
Indeed, we find that the best fit is for a mass $M_{{\rm C}^0}=3.1 \times 10^{-3}$ M$_\oplus$. From the MCMC, we find that $2.7 \times 10^{-3}<M_{{\rm C}^0}<1.2 \times 10^{-2}$ M$_\oplus$ (16$^{\rm th}$ and 84$^{\rm th}$ percentiles), assuming that $\beta_1^s<3$. Higher values of $\beta_1^s$ would be unphysical because the steady-state surface density for the range of temperature slopes $\beta=[-1,0]$ derived from the MCMC is expected to scale as $R^{\beta-3/2}$ (see Eq. \[Sig\]). The C$^0$ mass is not well constrained because the line is optically thick in the whole disc, but we will show in Sec. \[app\] that matching the CO mass derived from an optically thin ALMA line observation [@2017ApJ...849..123M] with this C$^0$ mass imposes a narrower range of C$^0$ masses. We note that the C$^0$ mass derived here is in the right range to start shielding CO against photodissociation, as can be seen from Fig. \[figCIneed\] presented in the introduction to shielded discs (see Sec. \[shieldeddd\]).

Discussion {#discu}
==========

Temperature, ionisation fraction, and chemical reactions expected for a shielded disc of secondary origin {#dali}
---------------------------------------------------------------------------------------------------------

From our modelling of the \[CI\] line, we find that $2.7 \times 10^{-3}<M_{{\rm C}^0}<1.2 \times 10^{-2}$ M$_\oplus$. The C$^{+}$ upper limit from Herschel (at 30 K, assuming LTE) is $2\times 10^{-3}$ M$_\oplus$ [@2015ApJ...814...42M]. This means that the upper limit on the ionisation fraction is $\sim 0.4$, which is already lower than that found for $\beta$ Pic. We now use the DALI 2D code[^5] to compute the carbon ionisation fraction in the disc taking into account the UV flux from the central star and the IRF. For the DALI simulations, we start with a cometary abundance of hydrogen, carbon and oxygen (i.e., for a carbon abundance set to 1, then $N_{\rm H}=20$ and $N_{\rm O}=11$).
We set a star spectrum similar to HD 131835, i.e. a 15 Myr A-type star of about 10 L$_\odot$. We impose that the total carbon mass is around $10^{-2}$ M$_\oplus$ and has a constant surface density between 1 and 100 au (with an aspect ratio of 0.1), which is consistent (within error bars) with values derived from the \[CI\] observations. Each simulation takes a day to compute and we clearly cannot iterate to a best-fit model but we use the simulations to have a feel for the typical ionisation fraction and temperature to expect in these shielded discs. ![image](ionfracdali.pdf){width="8cm"} ![image](Tgasdali.pdf){width="8cm"} Fig. \[figionftdali\] (left) shows the result of the simulation for the ionisation fraction. We expect that in the midplane of the disc and beyond 10 au, the ionisation fraction can become very small ($\sim 10^{-5}$) owing to C$^0$ shielding that catches most of the UV photons from the IRF in the upper layer so that the midplane cannot be ionised. Within 10 au, the ionisation fraction goes back up because of the UV photons coming from the star. We note that our grid started at 1 au and we expect that this is an effect of the grid because the radial optical depth would be much higher for a grid starting at the star location and therefore UV photons from the star would be caught at a much smaller radius, thus decreasing the ionisation fraction below 10 au. Our assumption that the ionisation fraction is low in these discs, lower than 0.1, seems to be justified from this test simulation. We also note that we ran some simulations with 10 times less carbon and the ionisation fraction was becoming higher, closer to 0.1. It is therefore dependent on the exact carbon mass at different radii. Higher resolution images would enable us to pinpoint the carbon density as a function of radius and compute a better estimate of the ionisation fraction in the disc. Fig. 
\[figionftdali\] (right) shows the simulation results for the temperature in a typical shielded disc (i.e., with enough carbon to shield CO over long timescales). We find that, indeed, the temperature can reach values of order 30-40 K in the midplane close to 100 au. This is in agreement with the temperature obtained from fitting the optically thick \[CI\] line in Sec. \[gasfit\]. We note that the gas temperature can indeed be lower than the dust temperature (which would be around 50 K at 100 au around a star similar to HD 131835 assuming black body emission). By looking at what sets the temperature, we find that the dominant heating is from C$^0$ photoionisation and the dominant cooling is from the OI and CII lines. CO is not the only molecule that can be shielded owing to C$^0$. In the DALI simulations, we find that CN, N$_2$ as well as CH$^{+}$ are also shielded by C$^0$ and thus can accumulate over time. This effect can be quantified using the results from the study by @2012MNRAS.427.2328R. Similar to the photodissociation timescale of CO used in Eq. \[tco\], @2012MNRAS.427.2328R find that C$^0$ shielding increases the photodissociation timescales of CN, N$_2$ and CH$^{+}$ as $1/(S+(1-S)\exp(-\sigma_i N_{{\rm C}^0}))$. Whilst $S=0$ in Eq. \[tco\] for CO, it is equal to 0.018, 0, and 0.036 for CN, N$_2$ and CH$^{+}$, respectively. This means that for these three molecular species, C$^0$ shielding can be almost as strong as for CO. We thus predict that the photodissociation timescales of these species, respectively 60, 190 and 100 yr [assuming photons coming from the IRF, @1988ASSL..146...49V], could be orders of magnitude longer than expected in unshielded discs [see @2018ApJ...853..147M], which may lead to these species being detectable in shielded discs together with C$^0$.
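The shielding factor from @2012MNRAS.427.2328R used above can be sketched as a single function; the C$^0$ ionisation cross-section $\sigma_i\approx1.6\times10^{-17}$ cm$^2$ is an assumed value consistent with the shielding columns quoted elsewhere in the text:

```python
import math

SIGMA_I = 1.6e-17   # assumed C0 photoionisation cross-section [cm^2]

def shielding_boost(s, n_c0):
    """Factor by which C0 shielding lengthens a photodissociation timescale:
    1 / (S + (1 - S) exp(-sigma_i N_C0)), with N_C0 the C0 column in cm^-2.
    S = 0 for CO (and N2), S = 0.018 for CN, S = 0.036 for CH+."""
    return 1.0 / (s + (1.0 - s) * math.exp(-SIGMA_I * n_c0))

# For S = 0 the boost grows exponentially with column; for S > 0 it saturates at 1/S
boost_co = shielding_boost(0.0, 1e17)        # ~ e^1.6, i.e. a factor ~5 at N_C0 = 1e17 cm^-2
boost_chp_max = shielding_boost(0.036, 1e22)  # saturated CH+ boost, ~ 1/0.036 ~ 28
```

The non-zero $S$ values thus cap the maximum boost for CN and CH$^{+}$ at a few tens, whereas for CO and N$_2$ ($S=0$) the boost is unbounded.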
CH$^{+}$ would be particularly interesting to detect as it would give some constraints on the amount of hydrogen in the system (as it would mainly come from C+H) and potentially on the amount of water on the planetesimals.

New models to refine the total C$^0$ mass and viscosity ($\alpha$) in the gas disc {#CIm}
----------------------------------------------------------------------------------

In this section, we first present a new analytical model of shielded discs that takes into account the shielding of CO by C$^0$ and shows what range of $\alpha$ or $\dot{M}_{\rm CO}$ values can lead to a shielded configuration (Sec. \[an\]). This analytical model is useful for understanding the different scalings but lacks accuracy as it neglects CO self-shielding. We therefore develop a more sophisticated semi-analytical box model in Sec. \[refined\] that takes into account CO self-shielding and shows the temporal evolution of the build-up of gas at the radius $R_0$ of gas injection. We use this more accurate model in Sec. \[app\] to fit the observed CO mass in HD 131835 and find what typical $\alpha$ value is needed to explain the data. In turn, it provides the C$^0$ mass that can accumulate for a given $\alpha$ and CO production rate, which we can compare to our new \[CI\] observations, finding that our model is indeed consistent with both the CO and \[CI\] data. We note that this refined model is 0D but could be improved by turning it into a 1D model in the future, which would be useful when higher resolution observations are at hand. For the data in hand, such a model would not add much and is therefore deemed beyond the scope of this paper.

### An analytic model {#an}

As previously explained in Sec.
\[gasfit\], the C$^0$ mass is not well constrained from the \[CI\] observation we presented; however, assuming that the gas is secondary, we can improve the constraints on the C$^0$ mass by using our knowledge of the much more accurate CO mass (measured from an optically thin line) and computing the C$^0$ mass that would be required to explain this amount of CO. With the knowledge of this more accurate (model-based) C$^0$ mass, we can then put some constraints on the viscosity (parametrized with $\alpha$) that would allow such a C$^0$ mass to accumulate over the age of the system. Assuming that CO is produced at a rate $\dot{M}_{\rm CO}$ and that the CO mass required to fit the observations is $M_{\rm CO}$, a photodissociation timescale $t_{\rm CO}=M_{\rm CO}/\dot{M}_{\rm CO}$ is required. Using Eq. \[tco\], this translates to the C$^0$ mass needed to shield CO $$\label{mc1} M_{{\rm C}^0{\rm n}}=\frac{2 \pi R \Delta R \mu_c m_H}{\sigma_i} \ln \left(\frac{M_{\rm CO}}{120{\rm yr} \times \dot{M}_{\rm CO}}\right),$$ where $\dot{M}_{\rm CO}$ should be expressed in units of mass per yr and $\mu_c$ is the mean molecular weight of carbon. The C$^0$ mass is set through the CO input rate such that $\dot{M}_{{\rm C}^0}=(12/28) (1-f) \dot{M}_{\rm CO}$, where $f$ is the ionisation fraction. C$^0$ at $R_0$ is assumed to evolve to steady state over a local viscous timescale, so that reaching the required C$^0$ mass $M_{{\rm C}^0{\rm n}}$ demands a local viscous timescale $t_{\nu}(R_0)=M_{{\rm C}^0{\rm n}}/\dot{M}_{{\rm C}^0}=R_0^2/(12 \nu(R_0))$, where $\nu=\alpha c_s^2/\Omega$ is the viscosity parametrized by an $\alpha$ parameter. This translates into a condition on $\alpha$ to get the C$^0$ mass needed and therefore the CO mass required $$\label{alphane} \alpha=\frac{\mu \sigma_i \sqrt{G M_\star} \dot{M}_{\rm CO} (1-f)}{ 56 \pi \mu_c \Delta R k T_0 \sqrt{R} \ln \left(\frac{M_{\rm CO}}{120{\rm yr} \times \dot{M}_{\rm CO}}\right)}.$$ Fig.
\[figalpha\] shows the value of $\alpha$ that is required to produce a C$^0$ mass that can produce the observed 0.04 M$_\oplus$ of CO for different values of $\dot{M}_{\rm CO}$ for a disc located at $R_0=90$ au with, again, a width $\Delta R=70$ au. To the right of the black dash dotted line, the CO mass produced would be below 0.04 M$_\oplus$ (because $\alpha$ is so high that there is insufficient C$^0$ to shield CO); similarly more CO would be expected to the left of the line. There are 4 different regimes shown on the plot with different grey shaded areas. At the extreme right of the plot, $\alpha>1$ and is therefore too large to be physical. In the middle, this is the regime where the gas disc has had time to reach steady state. At the extreme left of the plot, $\alpha$ is small and the gas disc has not yet had time to reach steady state. The viscous timescale must be $>15$ Myr (age of the system) to not be at steady state, which corresponds to $\alpha<6\times10^{-3}$. The horizontal dashed line shows the minimum value of $\dot{M}_{\rm CO}$ to have been able to produce a total CO mass of 0.04 M$_\oplus$ over 15 Myr[^6]. We also showed in Sec. \[gasfit\] that the gas disc is most likely not at steady state as the outer surface density slope may be too steep and thus the disc could have spread only for a maximum of 15 Myr (likely much less as secondary production of gas presumably starts when the protoplanetary disc is dispersed, i.e. after a few Myr). To respect all the constraints so far, we are thus left with the part of parameter space shown by the solid line, i.e., $2.5 \times 10^{-4}<\alpha<6 \times 10^{-3}$ and $2 \times 10^{-3}<\dot{M}_{\rm CO}<4 \times 10^{-2}$ M$_\oplus$/Myr. For instance, we calculate the corresponding CO photodissociation timescale and C$^0$ mass for $\dot{M}_{\rm CO}=10^{-2}$ M$_\oplus$/Myr (the leftmost red point) that are 4 Myr and $1.9\times10^{-2}$ M$_\oplus$, respectively (which is for $\alpha=1.5\times10^{-3}$). 
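Eq. \[alphane\] can be evaluated directly for the worked point above; $\sigma_i\approx1.6\times10^{-17}$ cm$^2$ and $f=0.1$ are assumptions consistent with the text, and the remaining defaults are the disc parameters used in Fig. \[figalpha\]:

```python
import math

G, M_SUN, M_EARTH, AU = 6.674e-11, 1.989e30, 5.972e24, 1.496e11
K_B, YR = 1.381e-23, 3.156e7

def alpha_needed(mdot_co, m_co=0.04, mstar=1.77, r=90.0, dr=70.0, t0=30.0,
                 mu=14.0, mu_c=12.0, f=0.1, sigma_i=1.6e-21):
    """Eq. [alphane]: alpha required for the accumulated C0 mass to shield enough CO
    to leave m_co [M_earth], given a production rate mdot_co [M_earth/Myr].
    sigma_i is the assumed C0 ionisation cross-section in m^2; r, dr in au."""
    mdot_si = mdot_co * M_EARTH / (1e6 * YR)               # [kg/s]
    log_term = math.log(m_co / (120.0 * 1e-6 * mdot_co))   # M_CO / (120 yr x Mdot_CO)
    num = mu * sigma_i * math.sqrt(G * mstar * M_SUN) * mdot_si * (1.0 - f)
    den = 56.0 * math.pi * mu_c * (dr * AU) * K_B * t0 * math.sqrt(r * AU) * log_term
    return num / den

alpha = alpha_needed(1e-2)   # ~ 1.7e-3 with these assumptions, close to the
                             # alpha = 1.5e-3 quoted above for the leftmost red point
```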
The C$^0$ mass obtained is thus compatible with the C$^0$ mass derived from observations in Sec. \[gasfit\]. We note that in the process of converting $\dot{M}_{\rm CO}$ to a C$^0$ input rate (to then work out the $\alpha$ value needed to explain the observed CO mass), we assumed an ionisation fraction $f$ of 0.1. However, this does not affect the results because the dependence is in $1-f$ and $f$ is assumed to be $<0.1$ as already discussed in Sec. \[dali\]. The range of $\dot{M}_{\rm CO}$ expected for HD 131835 from a secondary gas production model can be derived from the dust luminosity using Eq. 2 in @2017MNRAS.469..521K and does not depend on the photodissociation timescale. This calculation finds $10^{-2}<\dot{M}_{\rm CO}<0.5$ M$_\oplus$/Myr by assuming a CO-to-dust mass ratio on the comets between 1 and 30% but we caution that the uncertainties can be up to a factor 10 on this range [@2017MNRAS.469..521K]. This predicted range and the plausible $\dot{M}_{\rm CO}$ range needed overlap and are thus compatible, which gives confidence that a secondary model can indeed explain the dust, neutral (and ionised) carbon and CO observations all together. The derived value of $\alpha$ is much smaller than that derived in the $\beta$ Pic gas disc [for which we found $\alpha>0.1$, @2016MNRAS.461..845K]. However, this might be expected for the gas disc around HD 131835 owing to the optical thickness to UV photons that can ionise carbon resulting in a much lower ionisation fraction and thus a lower $\alpha$ value if the viscosity is triggered by the magnetorotational instability [@1998RvMP...70....1B; @2016MNRAS.461.1614K and references therein]. This low-$\alpha$ value could also come from non-ideal MRI effects as a lower ionisation fraction may result in ambipolar diffusion becoming important, thus slowing down the viscous evolution [see @2016MNRAS.461.1614K]. 
In any case, this lower value of $\alpha$ is compatible with the wide range of values found in protoplanetary discs [e.g. between $10^{-4}$ and 0.04 as estimated in @2017ApJ...837..163R]. ![\[figalpha\] The dash-dotted black line shows the $\alpha$ value needed to produce enough C$^0$ that can shield CO and produce a total CO mass of 0.04 M$_\oplus$ as a function of the production rate of CO in the belt $\dot{M}_{\rm CO}$. The two vertical lines show $\alpha=1$ (dashed), and $\alpha=6\times10^{-3}$ (dotted) corresponding to a viscous timescale of 15 Myr (i.e., the age of HD 131835). The horizontal dashed line shows the minimum value of $\dot{M}_{\rm CO}$ to have been able to produce 0.04 M$_\oplus$ of CO within 15 Myr. The 3 red points show the corresponding photodissociation timescales and C$^0$ masses for three different $\alpha$ values corresponding to $\dot{M}_{\rm CO}=[10^{-2},1,10^{2}]$M$_\oplus$/Myr, respectively. The solid section of the dash-dotted line shows the $\alpha$ range that can explain the CO and C$^0$ observations within the frame of our analytical model described in Sec. \[an\].](mdotVSalpha.png){width="8cm"} ### A more refined semi-analytic model {#refined} ![image](Figure_CO_CI_evolutionv4.pdf){width="17cm"} The analytic model described in Sec. \[an\] is now improved to take into account the temporal evolution of the gas as well as CO self-shielding that can become important for the high CO masses involved. Now, we also include a negative feedback on the total C$^0$ mass, which is that for an increasing C$^0$ mass, CO has time to viscously spread (because of the longer photodissociation timescale) and thus less C$^0$ is produced from CO at a given radius (which means CO destruction and C$^0$ production cannot be assumed to be constant in time). The previous section should therefore just be used to get an idea of the different scalings but not to compute predictions. 
For the new refined semi-analytic model, we start with no gas at $t=0$ and inject CO at a constant rate $\dot{M}_{\rm CO}$. The CO mass $M_{\rm CO}$ is then evolved over time at the radius $R_0$ as $$\label{dMco} \frac{{\rm d}M_{\rm CO}}{{\rm d}t}=\dot{M}_{\rm CO}^+-\dot{M}_{\rm CO}^-,$$ where $\dot{M}_{\rm CO}^+=\dot{M}_{\rm CO}$ is the production rate of CO and $\dot{M}_{\rm CO}^-=M_{\rm CO}/t_{\rm CO removal}$ is the destruction rate of CO, with $t_{\rm CO removal}=(1/(t_{\rm COph})+1/t_\nu)^{-1}$, where $t_{\rm COph}$ includes the photodissociation timescale due to C$^0$ shielding (Eq. \[tco\]) and self-shielding. In a similar way, the carbon mass $M_{\rm C}$ follows $$\label{dMc1} \frac{{\rm d}M_{\rm C}}{{\rm d}t}=\dot{M}_{\rm C}^+-\dot{M}_{\rm C}^-,$$ where $\dot{M}_{\rm C}^+=(12/28) M_{\rm CO}/t_{\rm COph}$ and $\dot{M}_{\rm C}^-=M_{\rm C}/t_\nu$. We evolve this set of equations for 30 Myr for different $\alpha$ values ($10^{-1}$ and $10^{-3}$) and different $\dot{M}_{\rm CO}$ ($10^{-1}$ and $10^{-3}$ M$_\oplus$/Myr), and the resulting CO and C$^0$ masses are plotted in Fig. \[figcoc1evol\] in blue and orange, respectively. Motivated by our previous results, for these plots we assume that $R_0=90$ au, the ionisation fraction is close to 0, the temperature is $\sim 30$ K and the planetesimal belt spreads over $\Delta R=70$ au. The classical unshielded case, where shielding of CO by C$^0$ is unimportant, is shown in the bottom left panel of Fig. \[figcoc1evol\]. This happens for high $\alpha$ values or low $\dot{M}_{\rm CO}$. In this case, we see that the steady state CO mass (which is $120\,{\rm yr} \times \dot{M}_{\rm CO}=1.2 \times 10^{-7}$ M$_\oplus$ and shown as a blue horizontal dotted line) is reached in less than 500 yr. The C$^0$ mass accumulates until reaching a few viscous timescales (which is $\sim$0.1 Myr for $\alpha=0.1$) when the viscous spreading balances the carbon input rate.
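A minimal explicit integration of Eqs. \[dMco\]–\[dMc1\] reproduces the unshielded-case numbers quoted above. This sketch neglects CO self-shielding (kept in the full model) and assumes $\sigma_i\approx1.6\times10^{-17}$ cm$^2$ for the C$^0$ shielding term:

```python
import math

M_EARTH, M_H, AU = 5.972e24, 1.673e-27, 1.496e11
SIGMA_I = 1.6e-21                    # assumed C0 ionisation cross-section [m^2]
R0, DR, MU_C = 90.0 * AU, 70.0 * AU, 12.0

def evolve(mdot_co, t_nu, t_end=1.0, dt=2e-5):
    """Euler integration of the CO/C0 box model (masses in M_earth, times in Myr).
    t_nu is the local viscous timescale; CO self-shielding is neglected."""
    m_co, m_c = 0.0, 0.0
    t_ph0 = 120.0 * 1e-6             # unshielded CO photodissociation time [Myr]
    for _ in range(int(t_end / dt)):
        # C0 vertical column from its mass spread over the belt annulus
        n_c0 = m_c * M_EARTH / (2.0 * math.pi * R0 * DR * MU_C * M_H)   # [m^-2]
        t_ph = t_ph0 * math.exp(SIGMA_I * n_c0)      # Eq. [tco], C0 shielding only
        t_rem = 1.0 / (1.0 / t_ph + 1.0 / t_nu)      # combined CO removal time
        m_co += (mdot_co - m_co / t_rem) * dt
        m_c += ((12.0 / 28.0) * m_co / t_ph - m_c / t_nu) * dt
    return m_co, m_c

# Unshielded regime: alpha = 0.1 (t_nu ~ 0.084 Myr) and Mdot_CO = 1e-3 M_earth/Myr
m_co, m_c = evolve(1e-3, 0.084)
```

This run recovers $M_{\rm CO}\approx1.2\times10^{-7}$ M$_\oplus$ and a final C$^0$ mass of $\approx3.6\times10^{-5}$ M$_\oplus$, the dotted-line levels in Fig. \[figcoc1evol\].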
The final C$^0$ mass reached is $\dot{M}_{{\rm C}^0} t_\nu = 3.6 \times 10^{-5}$ M$_\oplus$ as shown by the horizontal orange dotted line. The other case for which $\dot{M}_{\rm CO}=10^{-3}$ M$_\oplus$/Myr (bottom right plot) is very similar but eventually the C$^0$ mass becomes large enough that C$^0$ starts shielding CO from photodissociating (the C$^0$ shielding mass above which this happens is shown as a horizontal orange dashed line, which is taken to be when $\sigma_i N_{{\rm C}^0}=1$ in Eq. \[tco\] and is close to $2 \times 10^{-3}$ M$_\oplus$ as shown in Fig. \[figCIneed\]). This is why after $\sim 1$ Myr, the CO mass goes back up again before plateauing to its new steady-state value. This starts happening because the maximum C$^0$ mass that can be reached is higher than the mass to start shielding CO (i.e., $\dot{M}_{{\rm C}^0} t_\nu>2 \pi R \Delta R \mu_c m_H/\sigma_i$), which was not the case previously for a higher $\alpha$. ![\[figcoc1alpha\] CO (top) and C$^0$ (bottom) gas masses after 1, 3 and 10 Myr (thin lines) and 30 Myr (thick) as a function of $\alpha$ for 3 different $\dot{M}_{\rm CO}$ of $10^{-1}$ (solid), $10^{-2}$ (dashed), and $10^{-3}$ M$_\oplus$/Myr (dotted). This shows the final stages of gas evolution depicted in Fig. \[figcoc1evol\] for a wider variety of $\alpha$ and $\dot{M}_{\rm CO}$. We keep the blue and orange colours for CO and C$^0$, respectively, to match the convention used in Fig. \[figcoc1evol\]. The horizontal dashed line in the bottom plot shows when the C$^0$ mass becomes high enough to start shielding CO significantly.](Figure_CO_CI_alpha_v4.pdf){width="8cm"} The top left plot is for a higher CO input rate (and high $\alpha$) and it evolves similarly to the case that has just been presented, except for the intermediate plateau that is now affected by CO self-shielding. 
The blue horizontal dashed line at $8 \times 10^{-5}$ M$_\oplus$ marks where the CO photodissociation timescale is increased by self-shielding by a factor $e \sim 2.72$; self-shielding starts affecting the CO mass before CO reaches that level, further increasing the CO lifetime. The top right subplot in Fig. \[figcoc1evol\] is the most characteristic of what could explain shielded discs (note the more extended y-scale for this subplot). In this case, the C$^0$ mass that can be reached is two orders of magnitude above where C$^0$ starts shielding CO, meaning that the exponential term in Eq. \[tco\] can reach very high values and therefore CO can survive several orders of magnitude longer than without shielding. The CO mass that is reached for this case is $\sim 0.8$ M$_\oplus$, i.e. more than 4 orders of magnitude higher than the expected mass of $\sim 10^{-5}$ M$_\oplus$ without shielding. For this case, shielding from C$^0$ dominates over self-shielding. The shielding timescale becomes so long that the maximum CO mass that can be reached becomes dominated by viscous spreading, which is why the CO mass plateaus at $\dot{M}_{\rm CO} t_\nu \sim 0.8$ M$_\oplus$ (blue dash-dotted line). The C$^0$ mass does not reach its maximum theoretical value of $\dot{M}_{{\rm C}^0} t_\nu$ (orange dotted line) because CO spreads more rapidly than it is photodissociated and thus the maximum C$^0$ mass that can be reached is a factor $\sim t_\nu/t_{\rm COph}$ smaller. This also means that as CO spreads viscously, it is not necessarily colocated with the main dust belt (as was originally thought for gas discs of secondary origin), but it should in any case be colocated with the carbon gas disc (at least in the region where C$^0$ shielding is efficient, i.e., this colocation may not apply at large radii, see Fig. \[figevol\]). Fig.
\[figcoc1alpha\] shows the CO (top) and C$^0$ (bottom) gas masses after 1, 3, 10 Myr (thin lines) and 30 Myr (thick) as a function of $\alpha$ for 3 different $\dot{M}_{\rm CO}$ of $10^{-1}$ (solid), $10^{-2}$ (dashed), and $10^{-3}$ M$_\oplus$/Myr (dotted). For high $\alpha$ values (or low $\dot{M}_{\rm CO}$), the final CO mass reached is $120$ yr $\times \, \dot{M}_{\rm CO}$, as expected without shielding, and the C$^0$ mass reaches its maximum value of $(12/28) \dot{M}_{\rm CO} t_\nu \propto \dot{M}_{\rm CO}/\alpha$, which explains the C$^0$ evolution with $\alpha$ and $\dot{M}_{\rm CO}$. When $\alpha$ becomes smaller or $\dot{M}_{\rm CO}$ becomes higher, a new regime is attained where the CO mass reaches orders of magnitude higher values, which can explain the typical CO masses observed with ALMA for shielded discs. The final CO mass in this regime plateaus because it is then dominated by the viscous evolution, since the photodissociation timescale becomes greater than the viscous timescale. In this shielded regime, the C$^0$ mass plateaus because the CO mass input rate at $R_0$ becomes lower owing to CO spreading, and thus less C$^0$ can be produced per unit of time. Therefore, this figure shows that there are clearly two regimes of secondary gas production, with a sharp transition between gas discs of low CO and C$^0$ masses (e.g. $\beta$ Pic) and high CO and C$^0$ masses (e.g. HD 131835) depending on $\alpha$ and $\dot{M}_{\rm CO}$. However, we notice that if the gas production only started recently (i.e., $\sim$1 Myr, lowermost thin lines on Fig. \[figcoc1alpha\]) and for low values of $\dot{M}_{\rm CO}$, a disc that is on its way to becoming shielded could be mistaken for an unshielded low-mass secondary gas disc. This is because before becoming a high-mass gas disc, there is a build-up phase whose duration is set by the time needed to reach a C$^0$ mass high enough to start shielding CO, as shown in Fig. \[figcoc1evol\].
Also, we note that the gas input rate will decrease over time because the dust mass loss rate will become smaller owing to collisions that deplete the belt mass [e.g. @2007ApJ...663..365W; @2008ApJ...673.1123L]. Thus, a shielded disc can become unshielded again after a certain period of time.

### Application of the refined semi-analytic model to HD 131835 {#app}

![image](Figure_Chi2_predicted_CI_mass_v4.pdf){width="18cm"}

We now apply the model presented in the previous section (Sec. \[refined\]) to the specific case of HD 131835 and try to explain the high observed CO mass of 0.04 M$_\oplus$ in this system. We ran a grid of 1000 models for different $\dot{M}_{\rm CO}$ and $\alpha$ values and, in Fig. \[figco13\] (top), we show the predicted CO mass in colour scale and plot contours that show where the predicted CO mass is within 1, 3, and 5$\sigma_{\rm CO}$ of the mass inferred from the observations, where the uncertainty $\sigma_{\rm CO}$ is assumed to be 0.5 dex from the observed mass [@2017ApJ...849..123M]. As the models are not always at steady state, we ran simulations for different ages. HD 131835 is $\sim$15 Myr old but the secondary production started after the protoplanetary disc dispersed, i.e., likely after a few Myr. We plot the best-fit models for 3, 10 and 30 Myr. 10 Myr is likely to be the most realistic unless the gas production is linked to a recent event (a few Myr old), and we also show 30 Myr, when most of the parameter space is at steady state. The horizontal part of the best-fit region (i.e., where the contours are horizontal for small $\alpha$) is for models that have not reached steady state yet. The yellow region shows where the C$^0$ mass produced through the secondary evolution is not high enough to shield CO from photodissociating (i.e., it is not the shielded disc regime). In the rest of the parameter space, CO is shielded owing to C$^0$ and the contours show where the CO mass is closest to 0.04 M$_\oplus$.
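The grid-scan logic just described can be sketched as follows. The CO-mass predictor here is a toy stand-in for the semi-analytic model (the real calculation derives shielding from the C$^0$ column density and its radial profile); the shielding threshold `M_C0_SHIELD` is an assumed placeholder, not a value from the paper.

```python
import numpy as np

T_PH = 120e-6        # unshielded CO photodissociation time [Myr]
M_C0_SHIELD = 1e-3   # assumed C0 mass [M_earth] above which CO is shielded (placeholder)
M_CO_OBS = 0.04      # observed CO mass for HD 131835 [M_earth]
SIGMA_DEX = 0.5      # assumed observational uncertainty [dex]

def predicted_co_mass(mdot_co, t_nu):
    """Toy predictor: unshielded CO mass unless enough C0 accumulates,
    in which case CO piles up to the viscous plateau mdot_co * t_nu."""
    m_c0 = (12.0 / 28.0) * mdot_co * t_nu
    return mdot_co * (t_nu if m_c0 > M_C0_SHIELD else T_PH)

# Scan a (mdot_co, t_nu) grid; larger t_nu corresponds to smaller alpha.
mdots = np.logspace(-3, 0, 50)   # M_earth/Myr
t_nus = np.logspace(-1, 2, 50)   # Myr
grid = np.array([[predicted_co_mass(m, t) for t in t_nus] for m in mdots])
within_1sigma = np.abs(np.log10(grid / M_CO_OBS)) < SIGMA_DEX
```

The boolean mask `within_1sigma` plays the role of the 1$\sigma$ contour in the real grid.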
The green dashed line shows the CO production rate predicted from the stellar and dust disc parameters [see @2017ApJ...842....9M, noting that uncertainties are up to a factor 10 on that value]. From the 10 Myr plot, we conclude that $10^{-3}<\alpha<10^{-2}$ can explain the observed CO mass, which is similar to what was found with the simpler analytical model presented in Sec. \[an\], as can be seen from the blue dashed line (in the top plots of Fig. \[figco13\]) representing that analytical model. We then compute the C$^0$ mass that is needed to explain that CO mass in the bottom row of Fig. \[figco13\] (the red and purple lines are for $2.7 \times 10^{-3}$ and $1.2 \times 10^{-2}$ M$_\oplus$, respectively, corresponding to the range of plausible masses derived from observations). By overplotting the best-fit CO-mass contours from the top plot on the bottom one, we find that for 10 Myr the C$^0$ mass needed to explain the observed CO mass is between $5 \times 10^{-3}$ and $10^{-2}$ M$_\oplus$, which is similar to (but slightly narrower than) what we deduced from the \[CI\] observations in Sec. \[gasfit\]. We find that the best fit is not necessarily at steady state. If $\alpha \sim 10^{-3}$, the disc surface density is still evolving with time, as can be inferred from the fact that the contours are horizontal in this range. We note that even for larger $\alpha$, the inner and outer regions are not necessarily at steady state (even though it is not on the horizontal part anymore) because the global viscous timescale can be $\sim 10$ times longer than the local viscous timescale for which steady state is reached at $R_0$ only, i.e., the gas disc can be at steady state in the parent belt but not yet in the inner/outer regions. As noted in Sec.
\[gasfit\], as the outer radial profile of the gas density is very steep, it could be that the gas disc has not reached steady state and thus the global viscous timescale should be $<15$ Myr (the age of the system), which translates to $\alpha<6 \times 10^{-3}$ as already described in Sec. \[an\]. Using Fig. \[figco13\] and fixing $\alpha=6 \times 10^{-3}$, we therefore find a maximum $\dot{M}_{\rm CO}$ of 0.3 M$_\oplus$/Myr to still produce a best fit (at the 99.7% uncertainty level) in the case when steady state is not yet reached. We note that when higher resolution images become available, the present model should be extended to a 1D code that viscously evolves CO and carbon together, since that would allow a comparison of the predicted radial profiles to those observed.

Cometary composition {#comcomp}
--------------------

Using Eq. 1 in @2017MNRAS.469..521K, we find that the dust production rate in this belt is $\sim 1.7$ M$_\oplus$/Myr, where we assumed a ${\rm d}r/r$ of $\sim$0.85 as derived in Sec. \[dustfit\], and the fiducial values listed in @2017MNRAS.469..521K. This in turn leads to a prediction of the CO-to-solid mass ratio of $<20\%$ using the maximum $\dot{M}_{\rm CO}$ of 0.3 M$_\oplus$/Myr that we have just derived in the previous section [see Eq. 2 in @2017MNRAS.469..521K]. The upper limit we derive here is thus consistent with the largest values known for Solar System comets, where the CO-to-dust mass ratio is 3-27%. As the CO-to-dust mass ratio is found to be consistent with Solar System comets, we can predict the amount of water that should be released together with CO. Using that $2 \times 10^{-3}<\dot{M}_{\rm CO}<0.3$ M$_\oplus$/Myr and that typical H$_2$O-to-CO mass ratios span a large range of 3-160, this translates to $6 \times 10^{-3}<\dot{M}_{\rm H_2O}< 2$ M$_\oplus$/Myr (we limit the upper bound to 2 M$_\oplus$/Myr because this rate cannot be greater than the dust mass loss rate estimated above).
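The mass-rate bookkeeping in this subsection is simple enough to lay out explicitly. The numbers below are the ones quoted in the text; taking the CO-to-solid estimate as the plain quotient of the two rates is our simplification (the published Eq. 2 of @2017MNRAS.469..521K may differ in detail).

```python
# Mass-rate bookkeeping for the cometary-composition estimates (values from the text).
mdot_dust = 1.7        # dust production rate [M_earth/Myr]
mdot_co_max = 0.3      # maximum CO input rate [M_earth/Myr]
co_to_solid = mdot_co_max / mdot_dust   # ~0.18, i.e. <20%

# Water released together with CO, for cometary H2O-to-CO mass ratios of 3-160
mdot_co_range = (2e-3, 0.3)   # M_earth/Myr
h2o_to_co = (3.0, 160.0)
mdot_h2o_lo = mdot_co_range[0] * h2o_to_co[0]            # 6e-3 M_earth/Myr
mdot_h2o_hi = min(mdot_co_range[1] * h2o_to_co[1], 2.0)  # capped at 2 M_earth/Myr
```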
The water photodissociation timescale is very short and will not be much affected by the lack of UV photons with $\lambda<110$ nm (see Fig. \[figphd\]) because lower energy photons with $110<\lambda<190$ nm can still photodissociate H$_2$O (contrary to CO). Thus, the amount of water is expected to be small, but we can derive the amount of neutral hydrogen H that should survive together with carbon and oxygen for more than the age of the system (a few viscous timescales). Assuming that the release of CO (or the onset of the collisional cascade) started 10 Myr ago (i.e., after the protoplanetary disc dispersed), we find that a mass of $\sim 7 \times 10^{-3}-2$ M$_\oplus$ of H could be in the system. Following a similar procedure, we also find that the total oxygen mass in the system coming from CO and H$_2$O photodissociation should be in the range 0.06-20 M$_\oplus$, and oxygen may thus be the dominant species in the system[^7]. Even if water is not released together with CO, the oxygen mass will still be very similar to the carbon mass (times 16/12) and so would be of order $10^{-2}$ M$_\oplus$. @2015ApJ...814...42M used Herschel to look for \[OI\] in HD 131835 and, assuming LTE and a temperature of $\sim30$ K, they find a mass upper limit of $10^{-2}$ M$_\oplus$, which is similar to our estimate when oxygen only comes from CO but slightly lower than our lower mass prediction if water is released as well. However, LTE may not be a good approximation for OI (and then the oxygen upper limit could be much higher) because the critical collider density (of electrons or H) to reach LTE for OI is $\sim 3 \times 10^5$ cm$^{-3}$, which may be higher than predicted by our models (especially away from $R_0$). Indeed, in Sec. \[lawfit\], we found that the C$^0$ number density could be a few $10^3$ cm$^{-3}$ but, owing to the low ionisation fraction, the electron density (coming from carbon photoionisation) may be rather small.
If water is released, the hydrogen number density could be 10-500 times higher, i.e. the exact value of the hydrogen number density is needed to determine whether LTE is reached. If LTE is not reached, the mass upper limit from Herschel could be orders of magnitude higher, as already shown in @2016MNRAS.461..845K. From the observations at hand, it is thus hard to put any constraints on the release of water from the planetesimals; we can only say that the OI upper limit from Herschel is consistent with a secondary origin.

Dust/gas interactions {#gasdr}
---------------------

Owing to the high CO, carbon and presumably oxygen content of the gas disc around HD 131835, the gas density may be high enough to drag the dust. To check this, we work out the dimensionless stopping time [@2001ApJ...557..990T] $$\begin{gathered} \label{gasdrag} T_s=8 \left( \frac{M_\star}{1.7 M_\odot}\right)^{1/2} \left( \frac{T}{30K}\right)^{-1/2} \left( \frac{R}{100{\rm au}}\right)^{-3/2} \\ \left( \frac{\rho_s}{1500 {\rm kg/m^3}}\right) \left( \frac{s}{10 \mu{\rm m}}\right) \left( \frac{n_g}{10^5 {\rm cm^{-3}}}\right)^{-1}, \end{gathered}$$ where $\rho_s$ is the density of the solids, $s$ the grain size, $T$ the gas temperature and $n_g$ the gas density. This equation shows that in less than 10 orbital timescales at 100 au, a dust grain of about 10 microns would be entrained by the gas and migrate radially if the gas density is around $10^5$ cm$^{-3}$. Such high densities are not inconceivable close to $R_0$ in the disc around HD 131835. Indeed, assuming that the CO gas disc is 0.04 M$_\oplus$ [@2017ApJ...849..123M] and extends over $\Delta R=70$ au, we find that the CO number density is already $\sim 5 \times 10^3$ cm$^{-3}$. In Sec.
\[lawfit\], we also found that the C$^0$ number density could go up to a few $10^3$ cm$^{-3}$, on top of which some oxygen and hydrogen (coming from the release of water together with CO) would increase the total number density to $>10^4$ cm$^{-3}$. Hence, the gas is expected to be able to exchange angular momentum with the smallest dust grains in the system, therefore potentially leading to a radial displacement of these grains from their point of origin. The exact interplay between the effects of gas drag and collisions is complex. Collisions will prevent grains from migrating too far (due to gas drag) as they may be destroyed before reaching their new steady-state position, which needs to be modelled accurately. Here, we simply suggest that this gas drag may be able to explain the presence of narrow rings as observed with SPHERE in scattered light, because small micron-sized grains will accumulate at the outer edge of the gas disc where its density plummets [@2001ApJ...557..990T], and a second ring, also composed of larger grains, will be left at the parent belt position. The increased lifetime of this small dust could also result in shielded secondary discs having on average a smaller grain size that is closer to the blow-out limit [i.e., around $2.5\mu$m in this system, @1979Icar...40....1B] than their unshielded counterparts, evidence for which is seen in @2016ApJ...828...25L.

Are all hybrid discs actually shielded discs of secondary origin? {#allhyb}
-----------------------------------------------------------------

\[tabco\]

| System | $R$ (au) | $\Delta R$ (au) | $M_{\rm CO}$ (M$_\oplus$) | $T^a$ (K) | $L_{\rm IR}/L_\star$ | age (Myr) | $L_\star$ ($L_\odot$) |
|--------|----------|-----------------|---------------------------|-----------|----------------------|-----------|-----------------------|
| HD 131835 (1,3) | 90 | 80 | 0.04 | 30 | $2 \times 10^{-3}$ | 16 | 10 |
| HD 21997 (2,3) | 70$^b$ | 30$^b$ | 0.06 | 9 | $6 \times 10^{-4}$ | 42 | 14 |
| HD 121191 (4,5) | 100$^c$ | 40$^c$ | $3 \times 10^{-3}$ | 6 | $2 \times 10^{-3}$ | 15 | 7 |
| HD 121617 (4) | 83 | 60 | 0.02 | 18 | $5 \times 10^{-3}$ | 16 | 14 |
| HD 131488 (4) | 85 | 40 | 0.09 | 10 | $2 \times 10^{-3}$ | 16 | 12 |

\(a) Gas temperatures are lower limits because of possible beam dilution. (b) For HD 21997, we take the planetesimal belt radius at which 70% of the surface density is located as the reference radius, and the disc half-width as extending from the reference radius to the inner radius derived in (3). (c) The disc around HD 121191 was not resolved (or only marginally) in (4) and thus we use an SED fit (from (5), where we added the ALMA flux) to obtain the disc radius [and we correct from blackbody using @2015MNRAS.454.3207P]. For the disc width, we arbitrarily assume the same width as for HD 131488. References: (1) this paper; (2) @2013ApJ...776...77K; (3) @2013ApJ...777L..25M; (4) @2017ApJ...849..123M; (5) @2017MNRAS.469..521K

![image](Figure_Chi2_CO_predicted_CI_mass_hybrid_discs_v4.pdf){width="13cm"}

The new scenario suggested in this paper opens the possibility that all potential hybrid discs are actually shielded discs of secondary origin. So far, 9 systems (including HD 131835) have been suggested to be hybrid, namely HD 21997, HD 138813, 49 Ceti, HD 32297, HD 156623, HD 121191, HD 121617, and HD 131488 [@2013ApJ...776...77K; @2016ApJ...828...25L; @2017ApJ...839...86H; @2016MNRAS.461.3910G; @2017ApJ...849..123M].
Out of these 9 potential hybrid members, 5 have been detected with an optically thin CO isotope line (namely HD 131835, HD 21997, HD 121191, HD 131488 and HD 121617) and thus have accurate CO measurements [@2017ApJ...849..123M]. For these, we apply the same method as described in Sec. \[app\] to compute the range of $\alpha$ and C$^0$ mass values that can fit the observed CO masses for each system (listed in Table \[tabco\]). The results are shown in Fig. \[figalpha2\] in a similar way to Fig. \[figco13\]. We find that the range of $\dot{M}_{\rm CO}$ implied by the best-fitting values (contours) always overlaps with the predicted range [green dashed lines from @2017ApJ...842....9M] derived for a given system. This reinforces the point that these systems could be shielded discs rather than hybrid discs. In Fig. \[figalpha2\] (right), we also show the C$^0$ mass that is needed to explain the observed CO. The predicted C$^0$ masses are all above $10^{-3}$ M$_\oplus$ and would be easily detectable with ALMA, which therefore has the potential to assess the secondary origin of these gas discs. We find that HD 131835 and HD 21997 need $\alpha$ values lower than $10^{-2}$ to explain the observations, while HD 121191, HD 121617 and HD 131488 can be explained with $10^{-2}<\alpha<1$ owing to their high CO input rates. The latter 3 systems are thus expected to be at steady state while the former 2 may still be evolving. We conclude that all hybrid discs with accurate CO masses observed so far may be consistent with being shielded discs of secondary origin and that observations of \[CI\] with ALMA have the potential to confirm this possibility. The observed C$^0$ mass would also give insights into the most probable $\alpha$ value for each system and thus provide information as to what is driving the viscosity in these debris disc systems [e.g. whether this is the MRI, @2016MNRAS.461.1614K].
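As a rough consistency check on the Table \[tabco\] systems: if a disc sits on the shielded viscous plateau, $M_{\rm CO} \approx \dot{M}_{\rm CO} t_\nu$, so the implied CO input rate is simply $M_{\rm CO}/t_\nu$. The $t_\nu = 10$ Myr used below is an assumed illustrative value, not a fitted one.

```python
# Implied CO input rates assuming the shielded plateau M_CO ~ mdot_co * t_nu.
# Observed CO masses [M_earth] from Table [tabco]; t_nu is an assumed value.
observed_m_co = {
    "HD 131835": 0.04,
    "HD 21997":  0.06,
    "HD 121191": 3e-3,
    "HD 121617": 0.02,
    "HD 131488": 0.09,
}
t_nu = 10.0  # Myr (assumed)
implied_mdot_co = {name: m / t_nu for name, m in observed_m_co.items()}  # M_earth/Myr
```

For HD 131835 this gives $4\times10^{-3}$ M$_\oplus$/Myr, within the $2\times10^{-3}$-$0.3$ M$_\oplus$/Myr range used earlier.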
We note, however, that this shielded/unshielded dichotomy seems to emerge because of differences in both $\alpha$ and $\dot{M}_{\rm CO}$. Interestingly, the $\dot{M}_{\rm CO}$ values for $\beta$ Pic and most hybrid discs are similar, but the observed CO contents differ by orders of magnitude. A difference in $\alpha$ could explain these two different regimes, but our model does not address the origin of the $\alpha$ values. It could be that $\beta$ Pic is actually still evolving towards that shielded stage (see the thin lines in Fig. \[figcoc1alpha\], which are for an evolution that has started only recently). However, it could also be that $\alpha$ is very different from system to system, depending on the ionisation fraction or some other parameters that are yet to be understood.

Confirming observationally whether hybrid/shielded discs are of primordial/secondary origin {#prim}
-------------------------------------------------------------------------------------------

In this paper, we showed that massive CO gas discs may be well explained by secondary production of gas released from planetesimals in debris belts. However, can we rule out that the observed gas is primordial? The main difference between a primordial and a secondary gas disc is that we expect the former to be dominated by H$_2$ with a typical H$_2$-to-CO ratio of $10^4$, while the latter is depleted in H$_2$ and dominated by CO, carbon, oxygen (and hydrogen, if water is released together with CO). This has several consequences which we comment on further below. We find that it is too early to rule out a primordial origin for sure; at the moment, the primordial origin is suggested based solely on the high gas mass content of these discs, with no observational or theoretical proof to back it up. Importantly, it also remains to be shown how such discs could survive against photo-evaporation for $>$10 Myr [e.g. @2012MNRAS.422.1880O].
Here, we provide some ideas for new observations that could be used to find the real origin of the gas. [*Temperature*]{}: Gas temperatures that are much lower than dust temperatures seem to be the norm for shielded/hybrid discs [e.g. @2017ApJ...849..123M]. In a protoplanetary disc such low temperatures are not expected, but it remains an open question whether in a hybrid disc scenario with low dust content such temperatures could be reached. To the authors’ knowledge this has not been simulated so far. Another possibility is that the measured low excitation temperature could be due to non-LTE effects, which would mean the actual kinetic temperature is higher [e.g. @2015MNRAS.447.3936M]. However, as shown in Fig. \[figionftdali\] (right), for gas of secondary origin it is possible to reach very low kinetic temperatures. We also find (using DALI) that adding sulfur in these discs can lower the temperatures even further. The exact composition of the gas disc is thus very important in setting the temperature, but it goes beyond the scope of this paper to study this in detail. These low temperatures could be another hint that a secondary origin is more likely. [*Spatial distribution of CO vs. dust and vs. C$^0$*]{}: One prediction of the model is that CO in these discs can extend further than the main planetesimal belt as it has time to spread viscously, which may explain, for instance, why the CO gas disc around HD 21997 is more extended inwards than its dust belt [@2013ApJ...776...77K]. Moreover, we also predict that C$^0$ should be colocated with CO in the inner region but not in the outer region. Indeed, CO will evolve together with C$^0$ only in the regions where the C$^0$ density is high enough to shield CO, beyond which the CO density falls drastically, meaning that CO and C$^0$ would not be colocated in the outer region.
In a protoplanetary disc, we expect an onion structure, with CO in the midplane and C$^0$ above it, and thus CO and C$^0$ are expected to be colocated radially (though not vertically). The main observational difference between the primordial and secondary profiles is the steep drop of CO predicted for the secondary origin (while C$^0$ can extend further out), and thus we expect this non-colocation of CO and C$^0$ only in the secondary scenario. We note that this will happen in regions where the carbon density is smaller, and deep observations may be needed to actually see the non-colocation of CO and CI. Resolving both CO and C$^0$ with ALMA would thus enable a test for consistency with a model in which the gas is secondary. [*Scaleheight*]{}: We also note that from a high-resolution ALMA image of the system, one could measure the scaleheight of the gas disc. This scaleheight depends only on the mean molecular weight of the gas (once the gas disc temperature is known), and we could thus directly show that the gas is secondary/primordial from such an observation. This has already been tried for 49 Ceti, where @2017ApJ...839...86H find some evidence that the scaleheight is low in this gas-rich system (hence favouring a high molecular weight and thus a secondary origin). Note that for this to work, the temperature should be known rather accurately at the specific radius where the scaleheight is measured. We could use the same method as described in Sec. \[gasfit\] to get the temperature using data with a higher resolution or multiple transitions of optically thin species such as C$^{18}$O [e.g. @2017MNRAS.464.1415M]. [*Spatial distribution of mm-dust*]{}: From such a high-resolution ALMA image, we could also study in more detail the spatial distribution of the millimetre dust. In Sec. \[gasdr\], we showed that for a secondary origin, grains of tens of microns or smaller could migrate due to the gas. However, our Eq.
\[gasdrag\] can also be used for a primordial origin, for which the gas density would go up by orders of magnitude. Assuming an H$_2$-to-CO ratio of $>10^3$, we would have a gas density that can reach $\gtrsim 10^7$ cm$^{-3}$ (as we had found that the CO gas density from observations is $\sim 5 \times 10^3$ cm$^{-3}$), at which point millimetre grains would start migrating as well, which could form an enhancement of dust at a specific location in the disc [@2001ApJ...557..990T]. All of this together would help to confirm the origin of the gas in these systems.

Summary/Conclusion
==================

The main outcome of this paper is that we present a new way to explain the so-called hybrid discs ($>$10 Myr old systems which contain protoplanetary disc levels of CO gas) with a second generation model. In this secondary approach, molecular gas is released from planetesimals and then photodissociates to create an atomic gas disc that spreads viscously. If the molecular gas input rate is high enough and/or the produced atomic gas can accumulate over long viscous timescales (i.e., small $\alpha$ values), we find that the neutral carbon mass that accumulates is high enough to start shielding CO (see Sect. \[shield\]). The massive gas discs produced by this new secondary scenario are thus not hybrid discs (for which gas is supposed to be primordial) but rather [*shielded discs*]{} of secondary origin, as we call them throughout the paper. By fitting the new \[CI\] ALMA observation of HD 131835 (see Sec. \[gasfit\]), we find that the C$^0$ is mostly located between 40-200 au with a mass $2.7 \times 10^{-3}<M_{{\rm C}^0}<1.2 \times 10^{-2}$ M$_\oplus$, which is high enough to shield CO. In Sec. \[an\], we present a simple analytical model that incorporates shielding by C$^0$ and we compute the $\alpha$ value needed to get the CO mass of 0.04 M$_\oplus$ observed with ALMA towards HD 131835.
We find that in order to shield CO at the right level, $\alpha$ should be in the range $[2.5-60]\times10^{-4}$, which would create $\sim 10^{-2}$ M$_\oplus$ of C$^0$, in agreement with the observations. We then refine the analytical model (see Sec. \[refined\]) to include self-shielding from CO as well as a time dependence, as C$^0$ takes time to accumulate, and we also take into account that the CO photodissociation timescale can become longer than the viscous timescale for shielded discs, meaning that CO will spread viscously before it has time to photodissociate. This refined model confirms our previous results and that $\alpha$ should be of the order of $10^{-3}$ to get a good fit to CO, C$^0$ and the dust (as the CO input rate is set by the amount of dust in the system, since more dust means more CO released). We also find that the C$^0$ mass required is compatible with that observed. Finally, in Sec. \[allhyb\], we show that this new shielded disc scenario could explain all massive (previously called hybrid) gas discs as being shielded discs of secondary origin. We find that the C$^0$ mass in these systems should be high and detectable with ALMA, which therefore has the potential to test our model further (and distinguish it from a primordial origin model, see Sec. \[prim\]) and also to constrain the viscosity in these systems, thus opening the possibility of studying the main driver of angular momentum transport in these discs. From a more detailed perspective, we find that second generation shielded discs can have CO that is not necessarily colocated with the parent belt (thus explaining resolved systems such as HD 21997). We also find that the CO density in shielded secondary discs should drop abruptly when the C$^0$ density drops below a certain threshold, thus giving the opportunity to distinguish them from primordial gas discs using high-resolution images of CO and CI.
Another smoking gun for distinguishing between a primordial and a secondary origin would be to measure the scaleheight of these discs (see Sec. \[prim\]). By using a PDR-like model, we also find that the typically low temperature of these massive CO discs can be explained in a shielded disc scenario by the interplay between heating by carbon photoionisation and cooling through the OI and CII lines. For these massive gas discs, we also find that CN, N$_2$, and CH$^{+}$ will be partially shielded by C$^0$ and are thus expected to be the most abundant molecular species, after CO, in these shielded discs. We note that a detection of CH$^{+}$ would lead to an estimate of the amount of hydrogen in these systems, as well as potentially the amount of water that is released from planetesimals together with CO. We find that the total gas mass in HD 131835 may be able to drag the smallest dust grains, which may potentially explain the creation of rings similar to what is observed in scattered light with SPHERE in this system. Moreover, because of the increased lifetime of small dust grains due to gas drag, we expect that shielded secondary discs should have on average a smaller grain size, closer to the blow-out limit, than their unshielded counterparts, evidence for which is seen in @2016ApJ...828...25L.

Acknowledgments {#acknowledgments .unnumbered}
===============

This paper is dedicated to Manon. We thank the referee for a fair and insightful report. QK thanks Rik van Lieshout for insightful discussions about theoretical aspects of the gas model. QK thanks Simon Bruderer for his help on the DALI code. QK and MCW acknowledge funding from STFC via the Institute of Astronomy, Cambridge Consolidated Grant. LM acknowledges support from the Smithsonian Institution as a Submillimeter Array (SMA) Fellow. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.0.01253.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement. Balbus, S. A., & Hawley, J. F. 1998, Reviews of Modern Physics, 70, 1 Bohren, C. F., & Huffman, D. R. 1983, New York: Wiley Brandeker, A., Cataldi, G., Olofsson, G., et al. 2016, [A & A]{}, 591, A27 Brinch, C., & Hogerheijde, M. R. 2010, [A & A]{}, 523, A25 Bruderer, S., van Dishoeck, E. F., Doty, S. D., & Herczeg, G. J. 2012, [A & A]{}, 541, A91 Bruderer, S. 2013, [A & A]{}, 559, A46 Burns, J. A., Lamy, P. L., & Soter, S. 1979, [Icarus]{}, 40, 1 Cataldi, G., Brandeker, A., Olofsson, G., et al. 2014, [A & A]{}, 563, A66 Cataldi, G., Brandeker, A., Wu, Y., et al. 2018, [ApJ]{}, 861, 72 Dohnanyi, J. S. 1969, , 74, 2531 Draine, B. T. 2003, [ApJ]{}, 598, 1017 Dullemond, C. P., Juhasz, A., Pohl, A., et al. 2012, Astrophysics Source Code Library, ascl:1202.015 Favre, C., Cleeves, L. I., Bergin, E. A., Qi, C., & Blake, G. A. 2013, [ApJ]{}, 776, L38 Feldt, M., Olofsson, J., Boccaletti, A., et al. 2017, [A & A]{}, 601, A7 Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, [[*PASP *]{}]{}, 125, 306 Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016a, [A & A]{}, 595, A1 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016b, [A & A]{}, 595, A2 Goodman, J., & Weare, J. 2010, Communications in Applied Mathematics and Computational Science, Vol. 5, No.
1, p. 65-80, 2010, 5, 65 Greaves, J. S., Holland, W. S., Matthews, B. C., et al. 2016, [MNRAS]{}, 461, 3910 Higuchi, A. E., Sato, A., Tsukagoshi, T., et al. 2017, [ApJ]{}, 839, L14 Hughes, A. M., Wilner, D. J., Andrews, S. M., et al. 2011, [ApJ]{}, 740, 38 Hughes, A. M., Lieman-Sifry, J., Flaherty, K. M., et al. 2017, [ApJ]{}, 839, 86 Hung, L.-W., Fitzgerald, M. P., Chen, C. H., et al. 2015a, [ApJ]{}, 802, 138 Hung, L.-W., Duch[ê]{}ne, G., Arriaga, P., et al. 2015b, [ApJ]{}, 815, L14 Kamp, I., Woitke, P., Pinte, C., et al. 2011, [A & A]{}, 532, A85 K[ó]{}sp[á]{}l, [Á]{}., Mo[ó]{}r, A., Juh[á]{}sz, A., et al. 2013, [ApJ]{}, 776, 77 Kral, Q., Th[é]{}bault, P., & Charnoz, S. 2013, [A & A]{}, 558, A121 Kral, Q., Wyatt, M., Carswell, R. F., et al. 2016, [MNRAS]{}, 461, 845 Kral, Q., & Latter, H. 2016, [MNRAS]{}, 461, 1614 Kral, Q., Krivov, A. V., Defr[è]{}re, D., et al. 2017a, The Astronomical Review, 13, 69 Kral, Q., Clarke, C., & Wyatt, M. 2017b, Handbook of Exoplanets, Edited by Hans J. Deeg and Juan Antonio Belmonte. Springer Living Reference Work, ISBN: 978-3-319-30648-3, 2017, id.165, 165 Kral, Q., Matr[à]{}, L., Wyatt, M. C., & Kennedy, G. M. 2017c, [MNRAS]{}, 469, 521 Krivov, A. V., L[ö]{}hne, T., & Srem[č]{}evi[ć]{}, M. 2006, [A & A]{}, 455, 509 Lieman-Sifry, J., Hughes, A. M., Carpenter, J. M., et al. 2016, [ApJ]{}, 828, 25 L[ö]{}hne, T., Krivov, A. V., & Rodmann, J. 2008, [ApJ]{}, 673, 1123 Lynden-Bell, D., & Pringle, J. E. 1974, [MNRAS]{}, 168, 603 Lyra, W., & Kuchner, M. 2013, [Nature]{}, 499, 184 Mamajek, E. E., Meyer, M. R., & Liebert, J. 2002, [AJ]{}, 124, 1670 Marino, S., Matr[à]{}, L., Stark, C., et al. 2016, [MNRAS]{}, 460, 2933 Marino, S., Wyatt, M. C., Pani[ć]{}, O., et al. 2017, [MNRAS]{}, 465, 2595 Matr[à]{}, L., Wilner, D. J., [Ö]{}berg, K. I., et al. 2018a, [ApJ]{}, 853, 147 Matr[à]{}, L., Marino, S., Kennedy, G. M., et al. 2018b, [ApJ]{}, 859, 72 Matr[à]{}, L., Dent, W. R. F., Wyatt, M. C., et al. 
2017a, [MNRAS]{}, 464, 1415 Matr[à]{}, L., MacGregor, M. A., Kalas, P., et al. 2017b, [ApJ]{}, 842, 9 Matr[à]{}, L., Pani[ć]{}, O., Wyatt, M. C., & Dent, W. R. F. 2015, [MNRAS]{}, 447, 3936 Metzger, B. D., Rafikov, R. R., & Bochkarev, K. V. 2012, [MNRAS]{}, 423, 505 Mo[ó]{}r, A., [Á]{}brah[á]{}m, P., Derekas, A., et al. 2006, [ApJ]{}, 644, 525 Mo[ó]{}r, A., Juh[á]{}sz, A., K[ó]{}sp[á]{}l, [Á]{}., et al. 2013, [ApJ]{}, 777, L25 Mo[ó]{}r, A., Henning, T., Juh[á]{}sz, A., et al. 2015, [ApJ]{}, 814, 42 Mo[ó]{}r, A., Cur[é]{}, M., K[ó]{}sp[á]{}l, [Á]{}., et al. 2017, [ApJ]{}, 849, 123 Mumma, M. J., & Charnley, S. B. 2011, [ARA&A]{}, 49, 471 Owen, J. E., Clarke, C. J., & Ercolano, B. 2012, [MNRAS]{}, 422, 1880 Pawellek, N., & Krivov, A. V. 2015, [MNRAS]{}, 454, 3207 Pecaut, M. J., Mamajek, E. E., & Bubar, E. J. 2012, [ApJ]{}, 746, 154 Pringle, J. E. 1981, [ARA&A]{}, 19, 137 Rafikov, R. R. 2017, [ApJ]{}, 837, 163 Richert, A. J. W., Lyra, W., & Kuchner, M. J. 2018, [ApJ]{}, 856, 41 Riviere-Marichalar, P., Barrado, D., Augereau, J.-C., et al. 2012, [A & A]{}, 546, L8 Riviere-Marichalar, P., Barrado, D., Montesinos, B., et al. 2014, [A & A]{}, 565, A68 Rizzuto, A. C., Ireland, M. J., & Robertson, J. G. 2011, [MNRAS]{}, 416, 3108 Rollins, R. P., & Rawlings, J. M. C. 2012, [MNRAS]{}, 427, 2328 Takeuchi, T., & Artymowicz, P. 2001, [ApJ]{}, 557, 990 Tanaka, T. 2011, [MNRAS]{}, 410, 1007 Tazzari, M., Beaujean, F., & Testi, L. 2018, [MNRAS]{}, Th[é]{}bault, P., & Augereau, J.-C. 2007, [A & A]{}, 472, 169 van Dishoeck, E. F. 1988, Rate Coefficients in Astrochemistry, 146, 49 van Leeuwen, F. 2007, [A & A]{}, 474, 653 Visser, R., van Dishoeck, E. F., & Black, J. H. 2009, [A & A]{}, 503, 323 Wyatt, M. C., Smith, R., Su, K. Y. L., et al. 2007, [ApJ]{}, 663, 365 Xie, J.-W., Brandeker, A., & Wu, Y. 2013, [ApJ]{}, 762, 114 Zuckerman, B., & Song, I. 
2012, ApJ, 758, 77

Additional corner plot
======================

![image](cornergas8000incl.png){width="14cm"}

\[lastpage\]

[^1]: E-mail: qkral@ast.cam.ac.uk

[^2]: We note that C$^0$ shielding was taken into account in the numerical model by @2016MNRAS.461..845K studying $\beta$ Pic, but only for its impact on the ionisation fraction and temperature, not for the CO content, as the C$^0$ mass for $\beta$ Pic is not large enough to shield CO. @2017MNRAS.464.1415M later showed that in $\beta$ Pic the shielded CO photodissociation rate due to C$^0$ is indeed 0.96 of the unshielded one, i.e. not important.

[^3]: We note that the Hipparcos distance assumed in previous studies was $122.75^{+16.2}_{-12.8}$ pc, which is consistent within error bars with the GAIA DR1 value, but the median ratio is 1.18, which can affect some earlier derived model values such as dust or gas masses, stellar luminosities, etc. In particular, distances from previous studies should be multiplied by 1.18 before comparing to our study.

[^4]: <https://almascience.eso.org/documents-and-tools/cycle6/alma-technical-handbook>

[^5]: The low gas density and very low abundance of hydrogen in the system we study are strictly outside the parameter range for which DALI – a protoplanetary disc code – is validated to give correct results. We caution the reader to interpret our results as a guide to future work rather than a final result for the HD 131835 disc.

[^6]: We note that there is also a maximum production rate if we assume that the whole belt mass, which cannot exceed the solid mass available in protoplanetary discs (i.e. $\sim 0.1 \times M_\odot/100 \sim 300$ M$_\oplus$, assuming a gas-to-dust ratio of 100), is released as CO over 15 Myr, which would give $\sim$ 20 M$_\oplus$/Myr.

[^7]: However, we note that the upper value of 20 M$_\oplus$ is large and in the upper range compared to the total amount of oxygen expected to be available at this stage
--- abstract: 'We complement studies of the neutral pion transition form factor $\pi^0 \rightarrow \gamma^{(*)} \gamma^{(*)}$ with calculations for the electromagnetic decay widths of the processes $\pi^0 \rightarrow e^+ e^-$, $\pi^0 \rightarrow e^+ e^- \gamma$ and $\pi^0 \rightarrow e^+ e^- e^+ e^-$. Their common feature is that the singly- or doubly-virtual transition form factor serves as a vital input that is tested in the non-perturbative low-momentum region of QCD. We determine this form factor from a well-established and symmetry-preserving truncation of the Dyson-Schwinger equations. Our results for the three- and four-body decays match results of previous theoretical calculations and experimental measurements. For the rare decay we employ a numerical method to calculate the process directly by deforming integration contours, which in principle can be generalized to arbitrary integrals as long as the analytic structure of the integrand is known. Our result for the rare decay is in agreement with dispersive calculations but still leaves a $2\sigma$ discrepancy between theory and experiment.' author: - Esther Weil - Gernot Eichmann - 'Christian S. Fischer' - Richard Williams bibliography: - 'baryonspionff.bib' title: Electromagnetic decays of the neutral pion ---

Introduction
============

In recent years, low-energy electromagnetic processes in the meson sector have seen continuous interest in both the theoretical and experimental physics communities. Electromagnetic decays of the pion are particularly interesting because they combine the non-perturbative physics of dynamical mass generation and the associated generation of (pseudo-) Goldstone bosons with the Abelian anomaly and its perturbative elements, thus creating an interesting laboratory for theoretical approaches to non-perturbative QCD.
Moreover, the rare decay $\pi^0 \to e^+ e^-$ poses a puzzle, since theoretical estimates show a discrepancy with the experimental result from the KTeV E799-II experiment at Fermilab [@Abouzaid:2006kk; @Dorokhov:2007bd] of a magnitude similar to that of the muon g-2. In this work we focus on electromagnetic decays of pseudoscalar mesons such as the rare decay $\pi^0 \to e^+ e^-$, the Dalitz decay $\pi^0 \to e^+ e^- \gamma$ and the double Dalitz decay $\pi^0 \to e^+ e^- e^+ e^-$ using a well-explored combination of Dyson-Schwinger and Bethe-Salpeter equations. All calculations involve the $\pi^0\to\gamma^{(\ast)}\gamma^{(\ast)}$ transition form factor (TFF) with one or two off-shell photons, thus testing it in the region of (very) low momenta. The present work complements a recent evaluation of the TFF at large space-like momenta, see Ref. [@Eichmann:2017wil]. The paper is organised as follows. In the next section we give a short introduction to the details of our calculations and discuss features of the resulting TFF. In section \[sec3\] we then give results for the leptonic three- and four-body decays of the neutral pion and compare with the experimental values. In section \[sec4\] we discuss corresponding results for the rare decay of the pion into an electron-positron pair. We conclude in section \[sec5\].

Transition form factor $\pi^0\to\gamma^{(\ast)}\gamma^{(\ast)}$ {#sec2}
=======================================================================

Kinematics and definitions {#sec:kinematics}
--------------------------

We begin by defining the transition current and the kinematic regions of interest. The $\pi^0\to\gamma\gamma$ transition matrix element is given by $$\begin{aligned} \label{pigg-current} \Lambda^{\mu\nu}(Q,Q') = e^2\,\frac{F(Q^2,{Q'}^2)}{4\pi^2 f_\pi}\,\varepsilon^{\mu\nu\alpha\beta} {Q'}^\alpha Q^\beta \,,\end{aligned}$$ where $Q$ and $Q'$ are the photon momenta, $f_\pi\approx 92$ MeV is the pion’s electroweak decay constant, and $e^2=4\pi \alpha_\text{em}$ is the squared electromagnetic charge.
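The antisymmetry of $\varepsilon^{\mu\nu\alpha\beta}$ makes the current transverse, $Q_\mu \Lambda^{\mu\nu} = \Lambda^{\mu\nu} Q'_\nu = 0$, so electromagnetic gauge invariance is manifest. This is quick to verify numerically; in the pure-Python sketch below the momenta are arbitrary illustrative numbers, the overall prefactor is set to one, and indices are summed directly (as in a Euclidean-metric convention):

```python
from itertools import permutations  # not strictly needed; sign computed by sorting

# 4d Levi-Civita symbol: sign of the permutation (mu, nu, al, be), 0 if repeated
def eps(*idx):
    if len(set(idx)) < 4:
        return 0
    sign, idx = 1, list(idx)
    for i in range(4):                      # bubble sort, counting swaps
        for j in range(3 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign

def current(Q, Qp, prefac=1.0):
    """Lambda^{mu nu} ~ prefac * eps^{mu nu al be} Q'^al Q^be."""
    return [[prefac * sum(eps(mu, nu, al, be) * Qp[al] * Q[be]
                          for al in range(4) for be in range(4))
             for nu in range(4)] for mu in range(4)]

Q, Qp = [0.3, -0.1, 0.7, 0.2], [0.5, 0.4, -0.2, 0.1]   # illustrative momenta
L = current(Q, Qp)

# gauge invariance: Q_mu Lambda^{mu nu} = 0 and Lambda^{mu nu} Q'_nu = 0
for nu in range(4):
    assert abs(sum(Q[mu] * L[mu][nu] for mu in range(4))) < 1e-12
for mu in range(4):
    assert abs(sum(L[mu][nu] * Qp[nu] for nu in range(4))) < 1e-12
```

Both contractions vanish because each pairs the totally antisymmetric symbol with a symmetric product of identical momenta.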
The pseudoscalar transition is described by a single scalar function, the transition form factor $F(Q^2,{Q'}^2)$, and the convention of prefactors is such that $F(0,0)=1$ in the chiral limit due to the Abelian anomaly. In the following it is useful to work with the average photon momentum $\Sigma$ and the pion momentum $\Delta$, $$\begin{aligned} \label{kinematics-0} \Sigma = \frac{Q+Q'}{2}\,, \qquad \Delta = Q-Q'\,,\end{aligned}$$ with $\Delta^2 = -m_\pi^2$ for an on-shell pion. The process then depends on two Lorentz invariants, $$\begin{aligned} \label{li-2} \renewcommand{\arraystretch}{1.2} \eta_+ &= \displaystyle \frac{Q^2+{Q'}^2}{2} = \Sigma^2+\frac{\Delta^2}{4}\,, \\ \omega &= \displaystyle \frac{Q^2-{Q'}^2}{2} = \Sigma\cdot \Delta\,,\end{aligned}$$ or vice versa: $\{ Q^2, \, {Q'}^2 \} = \eta_+ \pm \omega$, with the third invariant fixed when the pion is on-shell: $$\begin{aligned} \eta_- = Q\cdot Q' = \Sigma^2-\frac{\Delta^2}{4} = \eta_+ + \frac{m_\pi^2}{2} \,.\end{aligned}$$ Note that the TFF is symmetric in $Q^2$ and ${Q'}^2$ so it can only depend on $\omega$ quadratically. For practical calculations it is convenient to introduce the alternative variables $$\begin{aligned} \label{alt-kinematics} \sigma = \Sigma^2\,, \quad t = \frac{\Delta^2}{4}\,, \quad Z = \hat{\Sigma}\cdot\hat{\Delta} = \frac{\Sigma\cdot\Delta}{2\sqrt{\sigma t}}\end{aligned}$$ where $t=-m_\pi^2/4$ when the pion is on-shell and a hat denotes a normalized four-vector. We will refer to those in section \[sec:rare-decay\] and Appendix \[app:sing\]. ![Kinematic domains in $Q^2$ and $Q'^2$ including the symmetric (red line) and asymmetric limits ($Q^2$, $Q'^2$ axes). The area with $|\omega|< \eta_+$ corresponds to space-like momentum transfer. 
In the timelike region we show the relevant domains for the Dalitz and double Dalitz decays and the dotted lines indicate the vector-meson pole locations (not to scale).\[fig:phasespace-1\]](figures/phasespace-1){width="0.87\columnwidth"} In the physical processes we study in this work the TFF is tested in both space-like and time-like regions, as shown in Fig. \[fig:phasespace-1\]. The time-like region, where either $Q^2<0$ or ${Q'}^2<0$, contains the physical singularities such as the vector-meson poles in the complex plane of $Q^2$ and ${Q'}^2$. For dilepton decays this region is kinematically restricted such that $Q^2,Q'^2\ge -m_\pi^2$. The double Dalitz decay $\pi\to 2e^+ 2e^-$ is constrained to the light blue shaded area with $\eta_+<0$, whilst the Dalitz decay $\pi\to e^+ e^- \gamma$ probes the asymmetric time-like form factor, given by the dark blue lines along the $Q^2,Q'^2$ axes. These decays are discussed in section \[sec3\]. The space-like region with both photon virtualities positive, $Q^2>0$ and $Q'^2>0$, is free of any physical singularities. The region that is relevant for the rare decay $\pi^0 \to e^+ e^-$, discussed in section \[sec4\], is the doubly-virtual or symmetric limit (red line in Fig. \[fig:phasespace-1\]) where $Q^2=Q'^2$, viz. $\omega=0$. Direct experimental measurements of the spacelike TFF are available in the singly-virtual or asymmetric limit with one of $\left\{Q^2,Q'^2\right\}$ vanishing [@Behrend:1990sr; @Gronberg:1997fj; @Aubert:2009mc; @Uehara:2012ag]. In addition, kinematic regions very different from these can be tested where the pion is ‘off-shell’, corresponding to space-like momentum transfer $\Delta^2>0$. This and various applications are discussed in [@Eichmann:2017wil].

Triangle diagram
----------------

![The transition form factor given by Eq. .
The non-perturbative input is the Bethe-Salpeter amplitude $\Gamma_\pi$ of the pion (grey circle), the dressed quark propagators (straight lines) and the dressed quark-photon vertices $\Gamma_\nu$ (blue circles). \[fig.pigg:diagram\] ](./figures/pigg){width="0.7\columnwidth"} In the impulse approximation, the transition form factor $\pi^0 \rightarrow \gamma^{(*)} \gamma^{(*)}$ is displayed in Fig. \[fig.pigg:diagram\] and given by $$\begin{aligned} \label{eqn:PseudoScalarFormFactor} \Lambda^{\mu\nu} &= 2e^2\, \text{Tr} \int \!\! \frac{d^4k}{(2\pi)^4} \, S(k_+)\,\Gamma_\pi(k,\Delta)\,S(k_-) \nonumber \\ &\times \Gamma^\mu(r_-,-Q)\,S(k+\Sigma)\,\Gamma^\nu(r_+,Q')\,.\end{aligned}$$ The photon momenta were defined in the previous subsection. In addition, $k$ is the loop momentum and $$\begin{aligned} \label{kin-1} k_\pm = k \pm \frac{\Delta}{2}\,, \qquad r_\pm = k + \frac{\Sigma}{2} \pm \frac{\Delta}{4}\;,\end{aligned}$$ are the internal quark momenta and the relative momenta appearing in the quark-photon vertices, respectively. The trace in Eq.  is over Dirac indices only[^1] and the factor 2 in front of the integral stems from the exchange of the photons. All ingredients in Eq.  are determined from numerical solutions of their Dyson-Schwinger and Bethe-Salpeter equations. The renormalized quark propagator is given by $$\begin{aligned} \label{quarkprop} S(p) = Z_f(p^2)\,\frac{-i \slashed{p} + M(p^2)}{p^2 + M^2(p^2)}\end{aligned}$$ with non-perturbative dressing functions $Z_f(p^2)$ and $M(p^2)$. The renormalization-group invariant quark mass function $M(p^2)$ encodes effects of dynamical mass generation due to the dynamical breaking of chiral symmetry. 
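In practice Eq. (quarkprop) is handled through two scalar dressing functions, $\sigma_v = Z_f/(p^2+M^2)$ and $\sigma_s = Z_f M/(p^2+M^2)$. The sketch below is purely illustrative: the monopole-shaped mass function and its scales are toy assumptions standing in for the actual solution of the quark DSE, chosen only to mimic dynamical mass generation (a large $M$ in the infrared that is power-suppressed in the ultraviolet):

```python
M0, LAM = 0.4, 0.9    # toy values in GeV: infrared mass and falloff scale (assumptions)

def mass_function(p2):
    """Toy quark mass function M(p^2): M(0) = M0, power-suppressed in the UV."""
    return M0 / (1.0 + p2 / LAM ** 2)

def propagator_dressings(p2, zf=1.0):
    """S(p) = Z_f (-i pslash + M)/(p^2 + M^2) = -i pslash sigma_v + sigma_s."""
    m = mass_function(p2)
    den = p2 + m * m
    return zf / den, zf * m / den          # (sigma_v, sigma_s)

# dynamical mass generation in caricature: large in the IR, negligible in the UV
assert abs(mass_function(0.0) - M0) < 1e-15
assert mass_function(100.0) < 0.01 * M0
sv, ss = propagator_dressings(1.0)
assert abs(ss / sv - mass_function(1.0)) < 1e-15   # sigma_s/sigma_v = M(p^2)
```

The ratio $\sigma_s/\sigma_v = M(p^2)$ is renormalization-point independent, which is why the mass function is the natural carrier of the chiral-symmetry-breaking information.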
The Dirac structure of the pseudoscalar Bethe-Salpeter amplitude $\Gamma_\pi$ is given by $$\begin{aligned} \label{pion-amplitude} \Gamma_\pi(k,\Delta) = \left(f_1 + f_2\,i\slashed{\Delta} + f_3\, k\cdot \Delta \,i\slashed{k} +f_4 \left[\slashed{k},\slashed{\Delta}\right]\right) \gamma_5,\end{aligned}$$ where the $f_i$ are functions of $k^2$ and $k\cdot \Delta$ with $\Delta^2=-m_\pi^2$ fixed. The non-perturbative quark-photon vertex $\Gamma^\mu$ describes the coupling of a dressed quark to a photon and is dominated by QCD corrections. It can be decomposed into twelve tensor structures, $$\begin{aligned} \Gamma^\mu(p,Q) = \sum_{i=1}^{12} \lambda_i(p^2,p\cdot Q, Q^2) \,\tau_i^\mu(p,Q) \label{vertex}\end{aligned}$$ with basis components $\tau_i^\mu(p,Q)$ and Lorentz-invariant dressing functions $\lambda_i$; see App. B of Ref. [@Eichmann:2016yit] for details. The argument $p$ denotes the average momentum of the two quark legs and $Q$ is the incoming photon momentum. Due to electromagnetic gauge invariance the vertex can be split into a transverse and non-transverse part, where the latter is the Ball-Chiu vertex [@Ball:1980ay] and determined by the vector Ward-Takahashi identity. We obtain a numerical solution of the quark-photon vertex from its inhomogeneous Bethe-Salpeter equation; see [@Maris:1999bh; @Maris:1999ta; @Maris:2002mz; @Bhagwat:2006pu; @Goecke:2010if; @Eichmann:2011vu]. As discussed in detail in [@Maris:1999bh; @Goecke:2012qm; @Eichmann:2016yit], the transverse part of the vertex contains poles in the time-like momentum region corresponding to vector-meson states. Thus, the underlying physics of vector-meson dominance is automatically contained in the numerical representation of the vertex without the need for further adjustments. In this work we restrict ourselves to the rainbow-ladder approach as reviewed in [@Eichmann:2016yit]. We use the Maris-Tandy model, Eq. (10) of Ref. 
[@Maris:1999nt] with parameters $\Lambda=0.74$ GeV and $\eta = 1.85 \pm 0.2$ (the parameters $\omega$ and $D$ therein are related to the above via $\omega D = \Lambda^3$ and $\omega=\Lambda/\eta$). The variation of $\eta$ changes the shape of the quark-gluon interaction in the infrared, cf. Fig. 3.13 in Ref. [@Eichmann:2016yit], and we use it in the following to estimate our theoretical error. This construction, with the same respective kernel in the Bethe-Salpeter equation for the pseudoscalar mesons and the one for the quark-photon vertex, preserves chiral symmetry and has the merit of producing reliable results in the pseudoscalar and vector-meson sector as well as for nucleon and $\Delta$ baryons. Our input current-quark mass is $m_q=3.57$ MeV at a renormalization point $\mu=19$ GeV; the resulting pion mass and pion decay constant are $m_{\pi^0} = 135.0(2)$ MeV and $f_{\pi^0} = 92.4(2)$ MeV. ![On-shell transition form factor in the symmetric limit ($\eta_+=Q^2=Q'^2$) and asymmetric limit ($\eta_+=Q^2/2$, ${Q'}^2=0$) for moderate space-like values $\eta_+>0$.\[fig:ff-lowQ2\]](figures/fpgg-2d-m){width="0.7\columnwidth"} Result for the transition form factor {#sec:tff-result} ------------------------------------- With the ingredients described above we are able to determine the $\pi^0\to\gamma\gamma$ transition form factor in the spacelike domain $Q^2>0$ and ${Q'}^2>0$ as well as for small timelike momenta. In practice it turns out that a straightforward calculation is only possible in restricted kinematic regions. As explained in Appendix \[app:sing\], this is due to the singularities of the quark propagator in the integrand, whose nearest singularities correspond to a scale $m_p \sim 0.5$ GeV. The symmetric limit is accessible for all $\eta_+>0$, whereas in the asymmetric limit one is limited to $Q^2_\text{max} \approx 4$ GeV$^2$, which is also the domain covered in the calculation of Ref. [@Maris:2002mz]. 
In addition, small timelike momenta of the order $Q^2$, ${Q'}^2 \gtrsim -m_p^2$ are accessible directly; cf. Fig. \[fig:sing\] in the appendix. To determine the TFF in the full spacelike domain, we employ the strategy introduced in Ref. [@Eichmann:2017wil]: we calculate the form factor for an off-shell pion with $\Delta^2>0$ and extrapolate to the on-shell point using the lowest-lying vector-meson pole as a constraint. This allows us to determine the TFF for all space-like momenta, and in the regions that are accessible to both direct calculation and extrapolation we have checked that the two methods agree perfectly. ![image](figures/all_decays){width="95.00000%"} In Fig. \[fig:ff-lowQ2\] we show the on-shell transition form factor $F(Q^2,{Q'}^2)$ as a function of the variable $\eta_+$, Eq. . The plot reveals that the TFF is essentially a function of $\eta_+$ alone, larger in the asymmetric limit and smaller in the symmetric limit, and this behavior persists up to asymptotically large $\eta_+$ [@Eichmann:2017wil]. Moreover, in the chiral limit the Abelian anomaly entails $F(0,0)=1$ and our numerical result at the physical pion mass is $F(0,0) = 0.996$, which provides an important consistency check: replacing both dressed quark-photon vertices by bare ones would only give $F(0,0) \approx 0.29$; and even a Ball-Chiu vertex, which guarantees charge conservation in the pion’s electromagnetic form factor, produces only $F(0,0) \approx 0.86$. The transverse structure of the vertex is therefore crucial for a quantitative description of the $\pi^0\to\gamma\gamma$ transition. We finally provide a fit function that accurately represents our results in the spacelike region.
Abbreviating $w=\eta_+/m_v^2$ and $z=\omega/\eta_+$, the TFF is described by $$F(Q^2,{Q'}^2) = \frac{\mathcal{A}(w) + w(1-z^2)\,\mathcal{B}_1(w)\,(1+\mathcal{B}_2(w) z^2)} {(1+w)^2-w^2 z^2}\,.$$ The denominator implements the lowest-lying vector-meson pole at $wz = \pm(1+w)$, which corresponds to $Q^2=-m_v^2$ and ${Q'}^2 = -m_v^2$ with $m_v=0.77$ GeV. The functions in the numerator ensure that the TFF asymptotically approaches a monopole behavior both in the symmetric ($z=0$) and asymmetric limit ($z=\pm 1$); they are given by $$\label{fit} \begin{split} \mathcal{A}(w) &= \frac{a_0+\xi\,(a_1\, b_1\, w+a_2 \,b_2 \,w^2+a_3 \,b_3\,w^3)}{1+b_1 \,w+b_2 \,w^2+b_3 \,w^3}\,, \\ \mathcal{B}_i(w) &= \frac{c_i \,e_i \,w^2}{1+d_i \,w + e_i \,w^2} \end{split}$$ with fit parameters $a_0=0.996$ and $$\label{fit-parameters} \renewcommand{\arraystretch}{1.1} \begin{array}{rl} a_1 &= 0.735\,,\\ a_2 &= 1.214\,,\\ a_3 &= 1.547\,, \\[1mm] c_1 &= 0.384\,,\\ d_1 &= 2.010\,,\\ e_1 &= 1.540\,, \end{array}\qquad \begin{array}{rl} b_1 &= 0.089\,,\\ b_2 &= 0.133\,,\\ b_3 &= 0.0002\,, \\[1mm] c_2 &= 0.430\,,\\ d_2 &= 0.024\,,\\ e_2 &= 0.00005\,. \end{array}$$ This fit provides the input for our calculations of the various $\pi^0$ decays. The value $\xi = 1.0 \pm 0.1$ reflects our combined theoretical uncertainty from varying the parameter $\eta = 1.85 \pm 0.2$ in the effective interaction as well as the uncertainty in the determination of the TFF away from the symmetric limit. These are also the error estimates that we quote in the following results. Let us finally briefly comment on alternative fit functions available in the literature (see, e.g., Appendix B of Ref. [@Nyffeler:2016gnb] for a discussion). The simplest vector-meson dominance (VMD) parametrization $F_\text{VMD}(Q^2,{Q'}^2) = 1/[(1+w)^2-w^2 z^2]$ does not reproduce the monopole behavior in the symmetric limit $z=0$ but instead approaches a dipole at large $Q^2$. 
Its refined version based on lowest-meson dominance, the LMD+V model [@Knecht:2001xc], reproduces both the symmetric and the asymmetric limits and implements two vector-meson poles; an analogous form was recently employed to fit lattice results for the TFF [@Gerardin:2016cqj]. Our fit is practically indistinguishable from the LMD+V parametrization at low $Q^2$, i.e., in the momentum range shown in Fig. \[fig:ff-lowQ2\]. Also at large $Q^2$ the fits in the symmetric limit are almost identical, and in the asymmetric limit they are at least qualitatively similar. However, the behavior in between ($0<|z|<1$) differs substantially: at large $\eta_+$ the LMD+V form factor develops a sharp peak very close to $|z|=1$, with a turnover to reach the asymmetric point $z=1$ followed by the vector-meson poles at $|z|>1$. By comparison, our fit varies monotonically from $z=0$ to $z=1$ and is therefore better suited for applications where the TFF is tested in the whole spacelike domain.

Three- and four-body decays {#sec:decays}
===========================

\[sec3\] In this section we discuss our results for the three- and four-body decays of pseudoscalar mesons shown in Fig. \[fig:alldecays\](a-b). The Dalitz decay of the neutral pion into a photon and an electron-positron pair has the largest branching ratio, $B(\pi^0\to e^+ e^-\gamma)= (1.174 \pm 0.035) \%$ [@Patrignani:2016], after that of the two-photon decay. The neutral pion also decays into two dilepton pairs with a branching ratio of $B(\pi^0\to e^+ e^-e^+e^-)= (3.34 \pm 0.16)\times 10^{-5}$ [@Patrignani:2016]. Both decays depend on the transition form factor discussed above as the only non-trivial input.

Dalitz decay: $\pi^0 \to e^+ e^- \gamma$
----------------------------------------

The leading-order Feynman diagram for the three-body decay of the neutral pion is shown in Fig. \[fig:alldecays\](a).
The decay rate is easily calculated and given by $$\begin{aligned} \label{dalitz-eq} & \Gamma_{\pi^0\rightarrow e^+e^- \gamma} = \frac{e^6 m_\pi^3}{6(4 \pi)^3}\int_{4 m^2}^{m_\pi^2} \frac{dx}{x} \left| \frac{F(Q^2,0)}{4\pi^2f_\pi} \right|^2 \nonumber \\ & \qquad \times \sqrt{1-\frac{4 m^2}{x}} \left(1 + \frac{2 m^2}{x}\right) \left(1 - \frac{x}{m_\pi^2}\right)^3,\end{aligned}$$ where $m$ is the electron mass, $m_\pi$ the pion mass, and $x=-Q^2$ is the squared momentum of the virtual photon, so the form factor is evaluated in the timelike region. ![Transition form factor shown in the region relevant for the three- and four-body decays. The singly-virtual form factor for the three-body decay, for which either $Q^2=0$ or $Q'^2=0$, is indicated by the heavy blue lines.\[fig.gepem:phasespace\]](./figures/dalitzdecay_area){width="0.95\linewidth"} Due to the kinematics of the three-body decay, the TFF is probed in the asymmetric region with one photon on-shell and the other off-shell and time-like. The leading contribution comes from the lower end of the integration region in Eq. . There the calculated TFF can in general be described by a simple linear fit with respect to $\eta_+= (Q^2+Q'^2)/2$, $$\begin{aligned} \label{linear-fit} F(Q^2, Q'^2) = 0.996 -3.55(10)\,\eta_+ \;,\end{aligned}$$ which is also used in the four-body decay below. The TFF is shown in Fig. \[fig.gepem:phasespace\], where the asymmetric limit required for the three-body decay, $F(Q^2,0)$, is represented by either one of the dark blue lines. Employing the PDG values for $m$ and $\alpha_\text{em}$ in Eq. , our calculated decay width is $\Gamma_{\pi\to e^+ e^-\gamma}=9.11(4)\times10^{-11}$ GeV. Since the theoretical uncertainty in Eq.  affects this number only at the sub-per-mille level, the error bar in the decay rate comes from the model dependence of $m_\pi$ and $f_\pi$ which enter in Eq. .
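As a numerical cross-check of Eq. (dalitz-eq), the sketch below implements the spacelike fit of Eqs. (fit)-(fit-parameters) at the central value $\xi=1$, verifies $F(0,0)=0.996$, and integrates the Dalitz width with a Simpson rule in $\ln x$; the electron mass, $\alpha_\text{em}$ and the quoted $m_\pi$, $f_\pi$ are the assumed inputs:

```python
import math

ALPHA, ME = 1.0 / 137.036, 0.511e-3          # fine-structure constant; m_e in GeV
MPI, FPI, MV = 0.1350, 0.0924, 0.77          # m_pi, f_pi, vector-meson scale m_v (GeV)
A0, XI = 0.996, 1.0
A, B = (0.735, 1.214, 1.547), (0.089, 0.133, 0.0002)
C, D, E = (0.384, 0.430), (2.010, 0.024), (1.540, 0.00005)

def tff(Q2, Qp2):
    """Fit of the transition form factor F(Q^2, Q'^2), Eqs. (fit)-(fit-parameters)."""
    eta_p, omega = 0.5 * (Q2 + Qp2), 0.5 * (Q2 - Qp2)
    w = eta_p / MV ** 2
    z = omega / eta_p if eta_p != 0.0 else 0.0
    calA = (A0 + XI * sum(a * b * w ** (i + 1) for i, (a, b) in enumerate(zip(A, B)))) \
        / (1.0 + sum(b * w ** (i + 1) for i, b in enumerate(B)))
    B1, B2 = (c * e * w ** 2 / (1.0 + d * w + e * w ** 2)
              for c, d, e in zip(C, D, E))
    return (calA + w * (1.0 - z ** 2) * B1 * (1.0 + B2 * z ** 2)) \
        / ((1.0 + w) ** 2 - (w * z) ** 2)

def dalitz_width(n=4001):
    """Gamma(pi0 -> e+ e- gamma), Eq. (dalitz-eq), Simpson rule in u = ln x."""
    pref = (4.0 * math.pi * ALPHA) ** 3 * MPI ** 3 / (6.0 * (4.0 * math.pi) ** 3)
    u0, u1 = math.log(4.0 * ME ** 2), math.log(MPI ** 2)
    h = (u1 - u0) / (n - 1)
    total = 0.0
    for i in range(n):
        x = math.exp(u0 + i * h)                       # x = -Q^2 of the virtual photon
        f = tff(-x, 0.0) / (4.0 * math.pi ** 2 * FPI)  # timelike argument Q^2 = -x
        g = f * f * math.sqrt(max(0.0, 1.0 - 4.0 * ME ** 2 / x)) \
            * (1.0 + 2.0 * ME ** 2 / x) * (1.0 - x / MPI ** 2) ** 3
        total += g * (1.0 if i in (0, n - 1) else 4.0 if i % 2 else 2.0)
    return pref * total * h / 3.0

assert abs(tff(0.0, 0.0) - 0.996) < 1e-12          # anomaly value of the fit
assert abs(tff(1.0, 0.2) - tff(0.2, 1.0)) < 1e-12  # symmetric in Q^2 <-> Q'^2
gamma = dalitz_width()
assert 8.7e-11 < gamma < 9.5e-11                   # text: 9.11(4) x 10^-11 GeV
```

Over the small timelike window $0 < x \le m_\pi^2$ the full fit is numerically very close to the linear fit of Eq. (linear-fit), which is why either input reproduces the quoted width.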
In Table \[tab:epemgs\] we compare our result with the PDG value and with the results of other theoretical calculations [@Terschlusen:2013iqa; @Hoferichter:2014vra]. Within the quoted errors all results are in good agreement. This is to be expected because the TFF – as the only non-perturbative input – is probed in the kinematic region governed by the anomaly. Since any reasonable construction obeys this constraint, the discriminative potential of the Dalitz decay with regard to different non-perturbative input is very limited.

  Collaboration                               $\Gamma_{\pi^0\to e^+ e^-\gamma}\, [10^{-11}$ GeV\]
  ------------------------------------------- -----------------------------------------------------
  PDG [@Patrignani:2016]                      $9.06(18)$
  Terschlüsen et al. [@Terschlusen:2013iqa]   $9.26$
  Hoferichter et al. [@Hoferichter:2014vra]   $9.065$
  Our result                                  $9.11(4)$

  : Result for the Dalitz decay.\[tab:epemgs\]

Four-body decay: $\pi^0 \to e^+ e^- e^+ e^-$
--------------------------------------------

We now proceed with the four-body decay of the neutral pion into two electron-positron pairs. The decay rate is given by $$\begin{aligned} \label{four-body-decay-rate} \Gamma_{\pi^0 \rightarrow 2e^+2e^-} &= \frac{1}{(2 !)^2 } \,\frac{1}{2m_\pi} \int d \Phi_4\, |\mathcal{M}|^2 \, ,\end{aligned}$$ where $\left|\mathcal{M}\right|^2$ is the squared and spin-summed matrix element and the symmetry factor in front accounts for the two pairs of identical final-state particles. $d\Phi_4$ is the four-body phase-space measure whose detailed derivation is given in Appendix \[app:4bphase\]. Because the amplitude $\mathcal{M}$ with all initial and final particles on-shell depends on five independent variables, $d\Phi_4$ involves five non-trivial integrations. Fig. \[fig:alldecays\](b) shows the possible diagrams, $\mathcal{D}_1$ and $\mathcal{D}_2$, where the exchange of two leptons (anti-leptons) introduces a relative minus sign between the two contributions.
Squaring these we obtain $$\begin{aligned} \label{eq.2ep2em:squarM} \left|\mathcal{M}\right|^2 = \left|\mathcal{D}_{1}\right|^2+ \left|\mathcal{D}_{2}\right|^2 +2 \mathrm{Re}[\mathcal{D}_{1} \mathcal{D}_{2}^*] \,.\end{aligned}$$ As discussed in Refs. [@Terschlusen:2013iqa; @Escribano:2015vjz], the first two terms are equal and can thus be combined. It follows that the decay rate can be decomposed into $$\begin{aligned} \Gamma_{\pi^0 \to 2e^+ 2e^-} = \Gamma_{\pi^0 \to 2e^+ 2e^-}^{(\text{direct})} + \Gamma_{\pi^0 \to 2e^+ 2e^-}^{(\text{indirect})} \,,\end{aligned}$$ where the first (direct) contribution comes from the two squared magnitudes and the second (indirect or interference) term comes from the cross-terms.

  Collaboration                               $\Gamma_{\pi^0\rightarrow 2 e^+ 2 e^-}\, [10^{-13}$GeV\]
  ------------------------------------------- -----------------------------------------------------------
  PDG [@Patrignani:2016]                      $2.58(12)$
  Terschlüsen et al. [@Terschlusen:2013iqa]   $2.68$
  Escribano et al. [@Escribano:2015vjz]       $2.62$
  Our result                                  $2.63(1)$

  : Result for the double Dalitz decay.\[tab:2ep2em\]

Abbreviating $G^{\mu\nu}_{ij} = (i\slashed{p}_i +m) \,\gamma^\mu \,(i\slashed{p}_j -m) \,\gamma^{\nu}$, the integrand for the direct contribution is given by $$|\mathcal{D}_{1}|^2+ |\mathcal{D}_{2}|^2= 2 e^4\frac{\Lambda^{\mu\nu}(Q,Q')\,\Lambda^{\alpha\beta}(Q,Q') }{Q^4\,{Q'}^4}\,\text{Tr}\,G_{34}^{\mu\alpha}\,\text{Tr}\,G_{12}^{\nu\beta}\,,$$ where $\Lambda^{\mu\nu}(Q,Q')$ is the $\pi^0\to\gamma\gamma$ transition current defined in Eq. .
Because in this case the integrand only depends on the pairwise sums $Q'=-(p_1+p_2)$ and $Q=p_3+p_4$, the five-dimensional phase-space integral $d\Phi_4$ can be reduced to just two integrations, namely $$\begin{aligned} \label{eq.pi2l:direct} &\Gamma_{\pi^0 \to 2e^+ 2e^-}^{(\text{direct})} = \frac{ e^8}{36\,(2\pi)^5 \,m_\pi^3} \int\limits_{4 m^2}^{(m_\pi-2m)^2} \hspace{-7pt} dx \int\limits_{4m^2}^{(m_\pi-\sqrt{x})^2} \hspace{-7pt} dy \nonumber \\ & \times \sqrt{x-4m^2} \sqrt{y-4m^2} \left[\frac{(x+y-m_\pi^2)^2}{4xy} - 1\right]^{3/2} \nonumber \\ & \times \left(1+\frac{2m^2}{x}\right) \left(1+\frac{2m^2}{y}\right) \left|\frac{F(Q^2,{Q'}^2)}{4\pi^2f_\pi}\right|^2 .\end{aligned}$$ Here we used the shorthands $x=-Q^2$ and $y=-{Q'}^2$ for the two photon virtualities, where $x,y>0$ are timelike and restricted by the thresholds for the two two-body decays. The direct contribution constitutes the largest fraction of the decay rate. In contrast, the interference term depends on all possible four-vector combinations of electron and positron pairs. Consequently, the phase-space integral cannot be further reduced and its integrand is given by $$\begin{aligned} \mathrm{Re}[\mathcal{D}_{1} \mathcal{D}_{2}^*] = e^4\, \frac{\Lambda^{\mu\nu}(Q,Q')\,\Lambda^{\alpha\beta}(K,K') }{Q^2\,{Q'}^2\,K^2\,{K'}^2} \, \text{Tr} \, G_{32}^{\mu\beta}\,G_{14}^{\nu\alpha}\,,\end{aligned}$$ where $Q'=-(p_1+p_2)$, $Q=p_3+p_4$ and $K'=-(p_2+p_3)$, $K=p_1+p_4$ are the possible momenta of the virtual photons. To perform the five-dimensional integral we used various methods, ranging from tensor-product quadrature with a combination of Gauss-Legendre and double exponential rules, to 5-dimensional adaptive cubature as well as standard Monte-Carlo methods [@Hahn:2004fe]; all agreed perfectly. 
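The reduced two-dimensional integral for the direct term, Eq. (eq.pi2l:direct), is simple enough to evaluate with a nested Simpson rule on logarithmic grids. The sketch below does so with the linear TFF fit of Eq. (linear-fit); the electron mass, $\alpha_\text{em}$ and the quoted $m_\pi$, $f_\pi$ are the assumed inputs:

```python
import math

ALPHA = 1.0 / 137.036
ME, MPI, FPI = 0.511e-3, 0.1350, 0.0924    # m_e, m_pi, f_pi in GeV

def tff_linear(eta_p):
    """Linear TFF fit F = 0.996 - 3.55 eta_+ of Eq. (linear-fit)."""
    return 0.996 - 3.55 * eta_p

def simpson_weight(i, n):
    return 1.0 if i in (0, n - 1) else 4.0 if i % 2 else 2.0

def direct_width(nx=401, ny=401):
    """Gamma^(direct)(pi0 -> 2e+ 2e-), Eq. (eq.pi2l:direct), in u = ln x, v = ln y."""
    pref = (4.0 * math.pi * ALPHA) ** 4 / (36.0 * (2.0 * math.pi) ** 5 * MPI ** 3)
    m2 = ME * ME
    ux0, ux1 = math.log(4.0 * m2), math.log((MPI - 2.0 * ME) ** 2)
    hx = (ux1 - ux0) / (nx - 1)
    total = 0.0
    for i in range(nx):
        x = math.exp(ux0 + i * hx)          # x = -Q^2
        ymax = (MPI - math.sqrt(x)) ** 2    # phase-space boundary for y
        if ymax <= 4.0 * m2:
            continue
        hy = (math.log(ymax) - math.log(4.0 * m2)) / (ny - 1)
        row = 0.0
        for j in range(ny):
            y = math.exp(math.log(4.0 * m2) + j * hy)   # y = -Q'^2
            lam = (x + y - MPI ** 2) ** 2 / (4.0 * x * y) - 1.0
            f = tff_linear(-0.5 * (x + y)) / (4.0 * math.pi ** 2 * FPI)
            row += simpson_weight(j, ny) * y * (
                math.sqrt(max(0.0, x - 4.0 * m2)) * math.sqrt(max(0.0, y - 4.0 * m2))
                * max(0.0, lam) ** 1.5
                * (1.0 + 2.0 * m2 / x) * (1.0 + 2.0 * m2 / y) * f * f)
        total += simpson_weight(i, nx) * x * row * hy / 3.0   # dx dy = x y du dv
    return pref * total * hx / 3.0

gamma_direct = direct_width()
assert 2.2e-13 < gamma_direct < 3.1e-13    # text: 2.66(1) x 10^-13 GeV
```

The logarithmic grids resolve the soft-photon region near the $4m^2$ thresholds, which dominates the integral through the $dx\,dy/(xy)$ behavior of the integrand.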
For the direct and indirect contributions to the decay rate we obtain $$\begin{aligned} \Gamma_{\pi^0 \rightarrow 2e^+ 2e^-}^{(\text{direct})} &= \phantom{-}2.66(1) \times 10^{-13} \,\text{GeV},\\ \Gamma_{\pi^0 \rightarrow 2e^+ 2e^-}^{(\text{indirect})} &= -0.03 \times 10^{-13} \,\text{GeV}.\end{aligned}$$ The sum of these values gives our final result, shown in Table \[tab:2ep2em\] together with the value from experiment and other theoretical calculations. In Ref. [@Terschlusen:2013iqa] the authors use an extension of chiral perturbation theory to calculate the form factor, whereas Ref. [@Escribano:2015vjz] employs a data-driven approach based on rational approximants applied to the experimental data of the $\pi^0, \eta$ and $\eta^\prime$ transition form factors in the space-like region. All results are compatible with experiment within the quoted errors. As with the Dalitz decay, the phase-space restriction of the virtual photons to time-like momenta between $4m^2$ and $m_\pi^2$ entails that the sensitivity to details of the TFF beyond those dictated by the anomaly is rather small: in this region the TFF deviates from its nominal value of $1$ by no more than $3$%.

Rare decay: $\pi^0 \to e^+ e^-$ {#sec:rare-decay}
===================

\[sec4\] Finally we consider the two-body decay of the neutral pion into one electron-positron pair. For the $\pi^0$ this is certainly the most interesting decay due to a discrepancy between the KTeV experimental result and theoretical calculations [@Abouzaid:2006kk; @Dorokhov:2007bd; @Dorokhov:2009jd; @Vasko:2011pi; @Husek:2014tna; @Masjuan:2015lca] of the order of $2\sigma$.
Using the elaborate reanalysis of radiative corrections [@Vasko:2011pi; @Husek:2014tna] to the experimental result of the KTeV collaboration [@Abouzaid:2006kk] (close to the value given in the PDG [@Patrignani:2016]), one arrives at an extracted experimental value for the branching ratio of $B(\pi^0 \rightarrow e^+ e^-) = (6.87 \pm 0.36) \times 10^{-8}$, which is considerably smaller than those of the decays considered above. To lowest order in QED the process is described by the one-loop graph in Fig. \[fig:alldecays\](c), which again includes the transition form factor $F(Q^2,{Q'}^2)$ as the only non-perturbative input. Defining $t = \Delta^2/4$ as in Eq. , the corresponding normalized branching ratio is given by $$\begin{aligned} R = \frac{B(\pi^0\rightarrow e^+ e^- )}{B(\pi^0\rightarrow \gamma\gamma )} = 2 \left(\frac{m\,\alpha_\text{em}}{\pi m_\pi}\right)^2 \beta(t_0) \ \vert \mathcal{A}(t_0)\vert^2\,,\end{aligned}$$ where $\beta(t)= \sqrt{1+ m^2 /t}$ stems from the two-body phase-space integration and $B(\pi^0\to \gamma\gamma )=0.988$. The scalar amplitude $\mathcal{A}(t)$ can be viewed as the pseudoscalar form factor of the electron due to the two-photon coupling, which must be evaluated at the on-shell pion point $t_0 = -m_\pi^2/4$.

$\mathcal{A}(t)$ with dispersive input
--------------------------------------

For arbitrary $t$ the amplitude $\mathcal{A}(t)$ can be defined from the matrix element for the $\pi^0\to e^+ e^-$ decay: $$\begin{aligned} & \int \!\! \frac{d^4\Sigma}{(2\pi)^4}\, \Lambda(p_+)\,\gamma^\mu\,S(p+\Sigma)\,\gamma^\nu\,\Lambda(p_-)\,\frac{\Lambda^{\mu\nu}(Q,Q')}{Q^2 \,{Q'}^2} \\ & \quad = \frac{\mathcal{A}(t)}{(4\pi)^2}\,\frac{2im\,\alpha_\text{em}}{\pi f_\pi}\, \Lambda(p_+)\,\gamma_5\,\Lambda(p_-)\,,\end{aligned}$$ where $\Lambda^{\mu\nu}(Q,Q')$ is the $\pi^0\to\gamma\gamma$ transition current from Eq.  and $\Lambda(p_\pm)=\frac{1}{2}\,(\mathds{1}+\slashed{p}_\pm/(im))$ is the positive-energy projector of the lepton. The kinematics are as discussed in Sec.
\[sec:kinematics\]; in particular, the averaged photon momentum $\Sigma$ becomes the loop momentum and therefore the variables in Eq.  take the values $\sigma>0$ and $Z\in[-1,1]$. As a consequence, the photon virtualities $Q^2$ and ${Q'}^2$ are tested at complex values close to the symmetric limit as shown in Fig. \[fig:phasespace-7\]. In principle the integral depends on the pion momentum $\Delta$ and the average lepton momentum $p$, but since the electron and the positron are on-shell with momenta $p_\pm^2=(p\pm \Delta/2)^2=-m^2$ only $t$ remains as an independent variable. ![Relevant kinematic domain of the transition form factor in the $\pi^0\to e^+ e^-$ decay. The parabola starting at $\eta_+ = -m_\pi^2/4$ is the region that is sampled in the integral.[]{data-label="fig:phasespace-7"}](figures/phasespace-7-m){width="0.65\columnwidth"} Taking traces yields the following expression for $\mathcal{A}(t)$: $$\begin{aligned} \label{A(t)-1} \mathcal{A}(t) &= \frac{1}{2\pi^2 t}\int \! d^4\Sigma\,\frac{(\Sigma\cdot\Delta)^2-\Sigma^2 \Delta^2}{(p+\Sigma)^2+m^2}\,\frac{F(Q^2,{Q'}^2)}{Q^2 \,{Q'}^2}\,.\end{aligned}$$ This integral has poles in the integration domain (which we discuss in more detail in Sec. \[sec:A(t)-direct\]) and thus cannot be naively integrated except for the unphysical point $t=\Delta^2/4=0$. A standard way to circumvent the problem uses dispersive methods (see e.g.  [@BERGSTROM1983117; @Donoghue:1996kw] ). In that case the imaginary part of the amplitude along its cut at $t<0$ is given by $$\begin{aligned} \label{eq.rare:imag} & \text{Im} \ \mathcal{A}^{\text{LO}} (t) = \frac{\pi\,\ln \gamma(t)}{2 \beta(t)}\, F(0,0)\,,\end{aligned}$$ with $\gamma(t) = (1-\beta(t))/(1+\beta(t))$, which follows from cutting the two photon lines. 
The imaginary part gives the well-known unitary bound for the branching ratio through the inequality $\vert \mathcal{A}(t_0) \vert ^2 \geq \vert \text{Im} \mathcal{A}(t_0) \vert^2$: $$\begin{aligned} R \geq \left( \frac{m \alpha_\text{em} }{m_\pi}\right)^2 \frac{\ln^2 \gamma(t_0)}{2\beta(t_0)} = 4.75 \times 10^{-8}. \nonumber\end{aligned}$$ Using a once-subtracted dispersion relation one then obtains the real part of the amplitude via $$\begin{aligned} \label{eq.rare:re} \text{Re} \ \mathcal{A}(t) = \mathcal{A}(0) + \frac{\ln^2 \gamma(t) + \tfrac{1}{3} \pi^2 + 4\,\text{Li}_2(-\gamma(t))}{4\beta(t)}\,,\end{aligned}$$ where $\text{Li}_2(z)$ is the dilogarithm or Spence function. In particular, this implies $\text{Re} \ \mathcal{A}(t_0) = \mathcal{A}(0) +31.92(2)$ so that the only unknown left is the constant $\mathcal{A}(0)$. In fact, $t=0$ is the only point where Eq.  can be integrated directly to yield $$\begin{aligned} \mathcal{A}(0) = \frac{4}{3} \int\limits_0^\infty dx \left[ (x-2)\sqrt{1+\frac{1}{x}} - x + \frac{3}{2} \right] F(Q^2,Q^2)\,,\end{aligned}$$ where we temporarily abbreviated $x=Q^2/(4m^2)$. A similar formula can be derived using a Mellin-Barnes representation [@Ghaderi668048; @Dorokhov:2008cd; @Dorokhov:2009xs], $$\begin{aligned} \label{eq.rare:A0_1} \mathcal{A}(0) \approx -\frac{5}{4} + \frac{3}{2} \int\limits_0^{\infty} dx \ln (4x)\, \frac{d}{d x} F(Q^2,Q^2),\end{aligned}$$ which is however only valid to leading order in an expansion in the electron mass. At the point $t=0$ the transition form factor in both cases is evaluated in the symmetric limit of equal photon momenta, and due to $Q^2 = 4m^2 x$ it is mainly probed at very low $Q^2$ of the order of the electron mass. Implementing our result for $F(Q^2,{Q'}^2)$, we extract the same value $\mathcal{A}(0)= -21.85(2)$ from both formulas above, where the error comes from varying the $\xi$ parameter in Eq. . With Eqs. 
(\[eq.rare:imag\]–\[eq.rare:re\]) one then arrives at the on-shell value $\mathcal{A}(t_0) = 10.07(4) - 17.45(1)\,i$, which corresponds to a branching ratio of $$\begin{aligned} \label{rare-decay-br-1} B(\pi^0\rightarrow e^+ e^- )=6.21(3)\times 10^{-8}\,.\end{aligned}$$

  Collaboration                                                  $B(\pi^0\rightarrow e^+ e^-)\,[10^{-8}]$
  -------------------------------------------------------------- ------------------------------------------
  Experiment [@Abouzaid:2006kk; @Vasko:2011pi; @Husek:2014tna]    $6.87(36)$
  Dorokhov et al. [@Dorokhov:2007bd; @Dorokhov:2009jd]            $6.23(9)$
  Husek et al. [@Husek:2015wta] (THS)                             $6.14(8)$
  Masjuan et al. [@Masjuan:2015lca]                               $6.23(5)$
  Our result (DR)                                                 $6.21(3)$
  Our result (direct)                                             $6.22(3)$

  : Our result for the rare decay, obtained either with the dispersion relation (DR) or directly from the contour deformation, compared to other theoretical calculations and experiment (after removing the final state radiative corrections). \[tab:rare\]

Our result is compared to other approaches in Table \[tab:rare\]. Whereas our calculation represents a top-down approach using a well-tested model for the underlying quark-gluon interaction, Refs. [@Dorokhov:2007bd; @Dorokhov:2009jd] use a phenomenological parametrization of the transition form factor that is adapted to reproduce experimental data from CLEO together with additional high-energy QCD constraints. A generalization of LMD+V is the two-hadron saturation model (THS) of Ref. [@Husek:2015wta]. The more recent Ref. [@Masjuan:2015lca] employs a data-driven approach via Padé Theory and Canterbury approximants. All four theoretical results are in agreement with each other, thus showing consistency between different approaches. Again, it appears that the decay rate is not overly sensitive to different representations of the form factor as long as the QCD constraints are satisfied (as guaranteed in all three approaches). 
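For orientation, the branching ratio follows from the quoted on-shell amplitude by elementary arithmetic; the sketch below (PDG-style inputs, $B(\pi^0\to\gamma\gamma)=0.988$) also inverts the measured rate to show the corresponding experimental $|\mathcal{A}(t_0)|$:

```python
import math

m_e, m_pi, alpha = 0.000510999, 0.1349768, 1 / 137.035999

t0 = -m_pi**2 / 4
beta0 = math.sqrt(1 + m_e**2 / t0)
prefactor = 2 * (m_e * alpha / (math.pi * m_pi))**2 * beta0

A_t0 = 10.07 - 17.45j                     # dispersive on-shell amplitude
B_ee = prefactor * abs(A_t0)**2 * 0.988   # ~6.21e-8

# invert the measured B(ee) = 6.87(36)e-8 for comparison
A_exp = math.sqrt(6.87e-8 / 0.988 / prefactor)   # ~21.2, vs |A(t0)| ~ 20.2
```

The central experimental value thus corresponds to $|\mathcal{A}(t_0)| \approx 21.2$, about 5% above the theoretical modulus.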
However, we would like to point out that all three calculations rely on dispersion relations and the Mellin-Barnes representation. Thus the only number that influences the final result is the constant $\mathcal{A}(0)$. Although *a priori* one would deem the dispersive approach reliable for this process, it still remains to be checked via a direct calculation.

Direct calculation of A(t) {#sec:A(t)-direct}
--------------------------

The integrand in Eq.  has poles for vanishing denominators, i.e., if either of the photons or the intermediate lepton goes on-shell. Depending on the value of $t$, this may prohibit a straightforward Euclidean integration. Specifically, for $t \in \mathds{C}$ one can draw a kinematically safe region in the complex $t$ plane where such an integration is possible, and a forbidden region where the poles enter the integration domain and the integration would produce a wrong result. The latter case would usually be interpreted as a failure of the Wick rotation; however, as we demonstrate below, the Euclidean expression Eq.  is still valid if the poles are treated correctly. Problems of this kind are frequent in Euclidean bound-state calculations and pose limitations, e.g., in computing excited hadrons or form factors for time-like or large space-like arguments [@Eichmann:2016yit] and thus it is desirable to find a general method to deal with them. In the case of Eq.  it is the unfortunate combination of all three external momenta being on-shell that complicates the situation. The analysis in Appendix \[app:sing\] shows that the lepton poles lead to a narrow parabola $$\begin{aligned} \label{rare-decay-domain} (\text{Im}\,t)^2 < 4m^2 \text{Re}\,(-t) \, ,\end{aligned}$$ around the negative (time-like) $t$ axis which is kinematically safe, whereas the photon poles admit a straightforward integration only for real and positive $t$. 
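The parabola above translates into a simple membership test for a given complex $t$ (a sketch; it checks only the lepton poles, while the photon poles separately restrict a straight path to real $t\ge 0$):

```python
m_e, m_pi = 0.000510999, 0.1349768        # GeV

def lepton_poles_safe(t, m=m_e):
    """(Im t)^2 < 4 m^2 Re(-t): a straight Euclidean path avoids the
    lepton poles. The photon poles additionally require real t >= 0."""
    return t.imag**2 < 4 * m**2 * (-t.real)

t0 = complex(-m_pi**2 / 4, 0.0)           # on-shell point: inside the parabola
t_off = (-1 + 1j) * m_pi**2 / 4           # complex point: needs contour deformation
```

The on-shell point $t_0$ passes the test, while a complex point such as $t=(-1+i)\,m_\pi^2/4$ does not, because the electron mass makes the safe parabola extremely narrow.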
Taken in combination, the integration is only possible for $t=0$, which leads to the result for $\mathcal{A}(0)$ quoted above. To analyze the situation for general $t$, we write the integral in hyperspherical variables defined by $$\begin{aligned} \label{li-2a} \sigma = \Sigma^2\,, \quad Z = \frac{\Sigma\cdot\Delta}{2\sqrt{\sigma t}}\,, \quad Y = \frac{p\cdot \Sigma}{i\sqrt{\sigma}\sqrt{t+m^2}\sqrt{1-Z^2}}\,,\end{aligned}$$ cf. Eq. , where the process becomes particularly simple in the frame $\Delta=2\sqrt{t}\,[0,0,0,1]$, $p=i\sqrt{t+m^2}\,[0,0,1,0]$ and $$\begin{aligned} \Sigma = \sqrt{\sigma}\left[\begin{array}{l} \sqrt{1-Z^2}\,\sqrt{1-Y^2}\,\sin\psi\\ \sqrt{1-Z^2}\,\sqrt{1-Y^2} \,\cos\psi \\ \sqrt{1-Z^2} \,Y \\ Z \end{array}\right].\end{aligned}$$ The innermost $\psi$ integration is trivial and thus the integral $\mathcal{A}(t)$ takes the form $$\begin{aligned} \label{A(t)-2} &\mathcal{A}(t) = -\frac{2}{\pi}\int_0^\infty d\sigma\,\sigma^2\int_{-1}^1 dZ\,\frac{(1-Z^2)^{3/2}\,F(Q^2,{Q'}^2)}{(\sigma-t)^2+4\sigma t\,(1-Z^2)} \nonumber \\ & \times\int_{-1}^1 dY\,\frac{1}{\sigma-t +2i\sqrt{\sigma}\sqrt{t+m^2}\sqrt{1-Z^2}\,Y}\,.\end{aligned}$$ The denominator under the $dZ$ integral is equal to $Q^2\,{Q'}^2$ whereas the second denominator corresponds to the lepton pole. The integration in $Y$ can be performed analytically: $$\begin{aligned} \label{rd-log} \int_{-1}^1 dY\,\frac{1}{a+ Y} = \ln\,\frac{a+1}{a-1} \, ,\end{aligned}$$ for all $a \in \mathds{C}$ except $-1<a<1$, in which case the logarithm develops a branch cut. ![Sketch of the overlapping branch cuts in the integrand of $\mathcal{A}(t)$, i.e., the complex $\sigma$ plane, for $t=(-1+i)\,m_\pi^2/4$ and $m=40$ MeV. The cut $\sigma_l$ (solid, red) is generated by the lepton pole and the cut $\sigma_\gamma$ (dashed, blue) by the photon poles; the latter opens at $\sigma=t$ but the former does not. The dotted line shows a possible integration path avoiding all singularities. 
The units are in GeV$^2$.\[fig.epemg:rd-cuts\]](./figures/rare-decay-cuts-m){width="1\linewidth"} After performing all angular integrations, the conditions $Q^2\,{Q'}^2=0$ and $-1<a<1$ produce poles and cuts in the complex $\sigma$ plane which are visualized in Fig. \[fig.epemg:rd-cuts\]. In principle, the $\sigma$ integration goes from zero to infinity but the ‘naive’ Euclidean integration path $\sigma \in \mathds{R}_+$ would cross a singularity, hence causing the problems described above. The vanishing denominator for the photons produces a cut along $$\begin{aligned} \sigma_\gamma^\pm(Z) = t\,(Z \pm i\sqrt{1-Z^2})^2\,,\end{aligned}$$ with $-1<Z<1$, which describes a circle with radius $|t|$. The circle opens at $\sigma=t$ since in this case the remaining $(1-Z^2)$ factor in the denominator of Eq.  cancels with the numerator. On the other hand, the lepton denominator leads to a cut $$\begin{aligned} \sigma_l^\pm(Z) = (t+m^2) \left( \sqrt{Z^2-\frac{m^2}{t+m^2}} \pm i \sqrt{1-Z^2}\right)^2,\end{aligned}$$ which does not open at $\sigma=t$ as this would correspond to $a=0$ in Eq. . Instead, it never crosses the arc that passes through $\sigma=-t$ as shown in Fig. \[fig.epemg:rd-cuts\]. We solve the problem by exploiting Cauchy’s theorem in finding an integration contour that connects $\sigma=0$ and $\sigma\to\infty$ but never crosses any singularity. Such a possible path is shown in Fig. \[fig.epemg:rd-cuts\]: (1) it departs from the origin in the direction opposite to $t$, (2) then navigates between the cuts until it reaches the point $\sigma=t$, (3) returns to the positive real axis at a value $\sigma > |t|$, (4) and from there proceeds to the numerical cutoff at $\sigma\to\infty$. This strategy ensures that for each $\sigma$ along the path the integrand in $Z$ is free of any singularities. 
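The geometry of the cuts is easy to verify numerically (a sketch for the same parameters as in Fig. \[fig.epemg:rd-cuts\], $t=(-1+i)\,m_\pi^2/4$ and $m=40$ MeV): the photon cut indeed traces a circle of radius $|t|$ that closes at $\sigma=t$.

```python
import cmath

m_pi, m = 0.1349768, 0.040                # GeV; m = 40 MeV as in the figure
t = (-1 + 1j) * m_pi**2 / 4

def sigma_gamma(Z, sign=+1):
    """Photon cut sigma_gamma^±(Z)."""
    return t * (Z + sign * 1j * (1 - Z**2)**0.5)**2

def sigma_lepton(Z, sign=+1):
    """Lepton cut sigma_l^±(Z)."""
    r = t + m**2
    return r * (cmath.sqrt(Z**2 - m**2 / r) + sign * 1j * (1 - Z**2)**0.5)**2

Zs = [k / 50 for k in range(-50, 51)]
max_radius_dev = max(abs(abs(sigma_gamma(Z)) - abs(t)) for Z in Zs)
gamma_endpoint = sigma_gamma(1.0)          # the photon circle opens at sigma = t
```

Since $|Z \pm i\sqrt{1-Z^2}|=1$ for real $Z\in[-1,1]$, the deviation of $|\sigma_\gamma|$ from $|t|$ vanishes to machine precision.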
In practice, however, the problem is made worse by the small lepton mass $m \ll m_\pi$: in that case both cuts essentially describe the same circle but open on opposite sides, $\sigma=t$ and $\sigma=-t$, so that the path along segment (2) proceeds along a narrow ridge between the two cuts. As a consequence, the integrand in $Z$ is sharply peaked which requires an enormous number of grid points to obtain stable results. To this end we have optimized the procedure so that path (2) is always equally spaced between the cuts, thus minimizing the numerical error. In addition, we implement an adaptive integration in $-1 < Z < 1$ that accumulates the grid points according to the nearest singularities in the complex $Z$ plane, which are determined by solving $\sigma_\gamma^\pm(Z)$ and $\sigma_l^\pm(Z)$ for $Z$. In that way we gain a factor of $\sim 10^3$ in CPU time while maintaining the same numerical accuracy. ![Result for $\mathcal{A}(t)$ in the complex $t$ plane. The units for $t$ are GeV$^2$ and $\mathcal{A}(t)$ is dimensionless. The on-shell pion point is $t_0 = -m_\pi^2/4 \approx -0.005$ GeV$^2$.\[fig.epemg:At\]](./figures/rare-decay-res){width="0.7\linewidth"} As a result, we are in principle able to determine $\mathcal{A}(t)$ in the whole complex plane, which is shown in Fig. \[fig.epemg:At\] in the momentum region relevant for the $\pi^0\to e^+ e^-$ decay. The real part is sharply peaked at the origin $t=0$, whereas the imaginary part develops the expected branch cut on the timelike axis. The resulting on-shell value is $\mathcal{A}(t_0) = 10.10(3)-17.45(1)\,i$, where the error reflects the uncertainty in the TFF discussed below Eq. . The corresponding branching ratio $$\begin{aligned} \label{rare-decay-br-2} B(\pi^0\rightarrow e^+ e^- )=6.22(3)\times 10^{-8} \, ,\end{aligned}$$ nicely agrees with the result in Eq.  obtained via dispersion relations. 
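The accumulation of $Z$ points near a singularity can be mimicked with a standard variable transformation (a generic sketch, not the production code): mapping $Z = Z_0 + d\sinh u$ with a uniform $u$ grid flattens an integrand peaked at $Z_0$ with width $d$, here tested on a Lorentzian with known antiderivative.

```python
import math

Z0, d = 0.3, 0.05     # illustrative peak position and width of the singularity
N = 400

# a uniform grid in u maps to a Z grid accumulated around Z0
u_lo = math.asinh((-1 - Z0) / d)
u_hi = math.asinh((1 - Z0) / d)
us = [u_lo + (u_hi - u_lo) * k / N for k in range(N + 1)]
Zs = [Z0 + d * math.sinh(u) for u in us]

# trapezoid rule in u for f(Z) = d/((Z-Z0)^2 + d^2); f(Z(u)) dZ/du = sech(u)
fs = [1.0 / math.cosh(u) for u in us]
h = (u_hi - u_lo) / N
approx = h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))
exact = math.atan((1 - Z0) / d) - math.atan((-1 - Z0) / d)
```

The transformed integrand is smooth, so even a simple trapezoid rule converges quickly, while the $Z$ grid spacing near $Z_0$ is an order of magnitude finer than at the interval edges.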
We have thus established a fast and efficient numerical method to calculate Euclidean integrals, which is applicable even in cases where a naive Wick rotation fails. It can be generalized to arbitrary integrals as long as the singularity structure of the integrand is known, which is not restricted to real poles but accommodates complex poles or cuts as well. Perhaps unsurprisingly, this demonstrates that both Euclidean and Minkowski descriptions are completely equivalent as long as the singularities in the integrand are treated correctly: the correct Euclidean expression, such as Eq. , is the one that connects zero with infinity without crossing any singularities. Exploratory calculations using contour deformations have been performed, e.g., in determining quark, gluon and ghost propagators in the complex plane [@Maris:1995ns; @Alkofer:2003jj; @Eichmann:2009zx; @Strauss:2012dg; @Windisch:2012sz]. In those cases the singularity structure of the integrands is usually less intertwined. In turn, one has to deal with integral equations and thus self-consistent problems where the singularities that are dynamically generated by the integration enter again into the integrand and must be accounted for as well. In addition to other methods to calculate propagators in the complex plane [@Fischer:2005en; @Fischer:2008sp; @Krassnigg:2009gd; @Rojas:2014aka], or a direct inclusion of residues as done for example in Ref. [@Oettel:1999gc], contour deformation methods may become an attractive tool for overcoming several long-standing obstacles in the Dyson-Schwinger/Bethe-Salpeter approach, e.g., the calculation of (highly) excited states, form factors at timelike or large spacelike momentum transfer, or the treatment of genuine resonances. Conclusions {#sec5} =========== In this work we determined the branching ratios of various leptonic decays of the neutral pion. The common central element in these calculations is the pion transition form factor. 
We calculated its momentum dependence in a Dyson-Schwinger and Bethe-Salpeter framework employing an underlying quark-gluon interaction that has been successful elsewhere in describing a range of different static and dynamic meson and baryon properties, see e.g. [@Maris:2003vk; @Eichmann:2016yit] for reviews. The resulting form factor dynamically generates vector meson dominance and can be shown to comply with the perturbative scaling behavior at large space-like momenta in the symmetric limit, whereas the coefficient of the scaling limit is modified in the asymmetric limit [@Eichmann:2017wil]. Due to the kinematic constraints imposed by Dalitz decays – as is well-known – only a very small range of time-like momenta around the zero-momentum limit controlled by the anomaly is probed. Consequently, our results for these pion decays very much resemble those of other theoretical approaches that use, e.g., vector meson dominance models for the form factor supplemented with constraints from the anomaly. The results are accordingly compatible with experiment. For the rare decay the form factor is probed in the space-like region and we have been able to confirm theoretical calculations based on dispersion relations through an independent, direct calculation using contour deformations. Thus our result for the rare decay $\pi^0 \rightarrow e^+ e^-$ still leaves a 2$\sigma$ discrepancy between theory and experiment. Acknowledgements ================ We thank R. Alkofer, S. Leupold, A. Szczepaniak and M. T. Hansen for enlightening discussions. This work was supported by the DFG collaborative research centre TR 16, the BMBF grant 05H15RGKBA, the DFG Project No. FI 970/11-1, the FCT Investigator Grant IF/00898/2015, the GSI Helmholtzzentrum fuer Schwerionenforschung, and by the Helmholtz International Center for FAIR. 
![image](figures/sing-domains-m){width="100.00000%"} Singularity restrictions {#app:sing} ======================== Here we return to the problem mentioned in Section \[sec:tff-result\], namely, to find the kinematic regions where a Euclidean integral with singularities in the integrand can be calculated directly without any contour deformations. For generality, consider a Lorentz-invariant integral $$\begin{aligned} \mathcal{I}(p_1, \dots p_n) = \int d^4k_1 \int d^4k_2\, \dots f(q^2) \dots\end{aligned}$$ It depends on a collection of external momenta $p_i^\mu$ and one integrates over the loop momenta $k_j^\mu$. The integrand consists of Lorentz-invariant functions such as $f(q^2)$, where $q^\mu$ is a linear combination of the external and loop momenta. The loop momenta $k_j^\mu$ are always real; however, if an external momentum is time-like ($p_i^2<0$) it will inject imaginary components into $q^\mu$, which is therefore a complex four-vector. Splitting $q^\mu$ into its real and imaginary parts, where $A$, $B\in\mathds{R}$ and $e_\mu$, $e'_\mu$ are Euclidean unit vectors, yields: $$\begin{aligned} q_\mu = A\,e_\mu + i B\,e'_\mu \; \Rightarrow \; q^2 = A^2 - B^2 +2i A B\,(e\cdot e') \,.\end{aligned}$$ Because $e\cdot e' \in [-1,1]$, the Lorentz invariant $q^2$ is tested inside a parabola $(A \pm iB)^2$ with apex $-B^2$ on the real axis; for $B=0$ it becomes the positive real axis. In other words, if $q^\mu$ has imaginary components then $q^2$ is sampled inside a complex parabola. Now, if $f(q^2)$ has singularities in the complex plane, the corresponding parabola passing through the first singularity (e.g., a real pole, complex conjugate poles, or the onset of a cut) defines the ‘contour mass’ $q^2=-m_p^2$. 
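The statement can be checked by brute force (a sketch): sampling $A$, $B$ and $e\cdot e'$ confirms that $q^2 = A^2 - B^2 + 2iAB\,(e\cdot e')$ always satisfies the parabola condition $(\text{Im}\,q^2)^2 \leq 4B^2\,(\text{Re}\,q^2 + B^2)$ with apex $-B^2$.

```python
import random

random.seed(1)
inside = True
for _ in range(1000):
    A = random.uniform(0.0, 3.0)        # magnitude of the real part
    B = random.uniform(0.0, 3.0)        # magnitude of the imaginary part
    c = random.uniform(-1.0, 1.0)       # e . e'
    q2 = complex(A * A - B * B, 2 * A * B * c)
    # parabola with apex -B^2; equality is reached for |e . e'| = 1
    inside = inside and q2.imag**2 <= 4 * B * B * (q2.real + B * B) + 1e-12
```

The inequality is saturated only when $e$ and $e'$ are (anti-)parallel, i.e., on the boundary of the parabola.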
The kinematically safe region is then subject to the restriction $$\begin{aligned} -B^2 > -m_p^2 \quad \Leftrightarrow \quad \left[ \text{Im}\,q \right]^2 = B^2 < m_p^2\,.\end{aligned}$$ Because the imaginary part of $q^\mu$ can only come from the external momenta $p_i^\mu$, this imposes restrictions on the domain of the external Lorentz invariants. We specifically consider the transition matrix element in Eq. , where the quark momenta in the loop are given by $k \pm \Delta/2$ and $k + \Sigma$. The dressing functions of the quark propagator in Eq. , $$\begin{aligned} \frac{Z_f(p^2)}{p^2 + M^2(p^2)} \quad \text{and} \quad \frac{Z_f(p^2)\,M(p^2)}{p^2 + M^2(p^2)}\,,\end{aligned}$$ develop a certain singularity structure in the complex plane. In a rainbow-ladder truncation the nearest singularities are complex conjugate poles with a typical contour mass $m_p \sim 0.5$ GeV for light quarks (see [@Windisch:2016iud] for a detailed investigation). The loop momentum $k$ is always real and thus one arrives at the conditions $$\begin{aligned} \left( \text{Im}\,\frac{\Delta}{2}\right)^2 < m_p^2 \quad \text{and} \quad \left( \text{Im}\,\Sigma\right)^2 < m_p^2\,.\end{aligned}$$ In general these restrictions depend on the chosen frame for $\Sigma$ and $\Delta$. The components of $\Sigma$ and $\Delta$ can be arranged arbitrarily as long as the Lorentz invariants defined in Eqs. (\[li-2\]–\[alt-kinematics\]), $\Sigma^2 = \sigma = \eta_+ + m_\pi^2/4$, $\Delta^2 = -m_\pi^2$ and $\Sigma\cdot\Delta = \omega$, remain unchanged. For any possible choice, however, one can find a linear combination $\Sigma + \alpha\,\Delta$ that has a four-component only, with an arbitrary parameter $\alpha \in \mathds{R}$. 
The general arrangement satisfying these constraints is $$\begin{aligned} \label{general-frame} \Delta = \frac{1}{\mathcal{N}}\left[\begin{array}{c} 0 \\ 0 \\ \!-i\sqrt{\sigma m_\pi^2+\omega^2} \\ \omega-\alpha m_\pi^2 \end{array}\right]\!\!, \quad \Sigma = \frac{1}{\mathcal{N}}\left[\begin{array}{c} 0 \\ 0 \\ i\alpha\sqrt{\sigma m_\pi^2+\omega^2} \\ \sigma + \alpha \omega \end{array}\right]\end{aligned}$$ with $\mathcal{N}=\sqrt{\sigma+2\alpha \omega - \alpha^2 m_\pi^2}$. Take for example the pion’s rest frame, which corresponds to $\alpha\to\infty$: $$\begin{aligned} \Delta = \left[\begin{array}{c} 0 \\ 0 \\ 0 \\ im_\pi \end{array}\right]\!\!, \quad \Sigma = \left[\begin{array}{c} 0 \\ 0 \\ \!\sqrt{\frac{\omega^2}{m_\pi^2} + \sigma} \\ \frac{\omega}{im_\pi} \end{array}\right] = \sqrt{\sigma} \left[\begin{array}{c} 0 \\ 0 \\ \!\sqrt{1-Z^2} \\ Z \end{array}\right].\end{aligned}$$ If $\sigma>0$ and $Z \in [-1,1]$, $\Sigma$ is real and the imaginary part only comes from $\Delta$, so the resulting condition $m_\pi < 2m_p$ is always satisfied. However, the situation is different in the quadrant $\eta_+>0$, $|\omega| < \eta_+$ shown in Fig. \[fig:phasespace-1\], because in this case $Z$ is imaginary. Here both $\Delta$ and $\Sigma$ have imaginary four-components and the resulting condition becomes $\left|\omega\right| < m_\pi m_p$, which defines a narrow strip around the symmetric limit $\omega=0$. The arbitrariness of $\alpha$ can be exploited to optimize the frame, i.e. to reach kinematic regions for the form factor that are not accessible in the pion rest frame. The resulting domains are plotted in Fig. \[fig:sing\] for different values of $\alpha$. The leftmost plot shows $\alpha=0$ and the rightmost plot $\alpha=8$; for $\alpha\to\infty$ one recovers the pion rest frame. 
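A quick numerical check (a sketch; the Euclidean scalar product is the plain component sum, and the four-vectors carry complex entries) confirms that the general arrangement above reproduces the invariants for any admissible $\alpha$:

```python
import cmath

m_pi = 0.1349768
sigma, omega, alpha = 0.5, 0.1, 0.3     # illustrative invariants and frame parameter

r = cmath.sqrt(sigma * m_pi**2 + omega**2)
N = cmath.sqrt(sigma + 2 * alpha * omega - alpha**2 * m_pi**2)

Delta = [0, 0, -1j * r / N, (omega - alpha * m_pi**2) / N]
Sigma = [0, 0, 1j * alpha * r / N, (sigma + alpha * omega) / N]

def dot(a, b):
    """Euclidean scalar product (no metric signs)."""
    return sum(x * y for x, y in zip(a, b))

invariants = (dot(Delta, Delta), dot(Sigma, Sigma), dot(Sigma, Delta))
# expected: (-m_pi^2, sigma, omega) for any alpha with N^2 != 0
```

The three dot products come out as $-m_\pi^2$, $\sigma$ and $\omega$ independently of the chosen $\alpha$, as they must.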
For example, with $\alpha=0$ the momentum $\Sigma$ is the one with a four-component only, whereas $\Delta$ has an imaginary three-component and, for $\sigma>0$, leads to the condition $$\begin{aligned} \omega^2 < (4m_p^2-m_\pi^2)\left( \eta_+ + \frac{m_\pi^2}{4}\right).\end{aligned}$$ For the singly-virtual form factor $F(Q^2,0)$ the optimal choice is $\alpha=1/2$ (second plot in Fig. \[fig:sing\]). In that case it is the photon momentum $Q = \Sigma + \Delta/2$ that has a four-component (resembling the Breit frame in elastic form factor calculations). The denominator $\mathcal{N} = \sqrt{\eta_++\omega}$ is always real if $\eta_+>0$ and $|\omega| < \eta_+$, and therefore the three-components of $\Sigma$ and $\Delta$ are imaginary. The resulting condition is $$\begin{aligned} m_\pi^2\, \eta_+ + \frac{m_\pi^4}{4} + \omega^2 < 4m_p^2 \,(\eta_++\omega)\,.\end{aligned}$$ This region crosses the line $\eta_+=\omega$ at $$\begin{aligned} \eta_+=\omega=\frac{Q^2}{2} = 4m_p^2 \left( 1 - \frac{\varepsilon}{2} + \sqrt{1-\varepsilon}\right) \approx 8m_p^2\,,\end{aligned}$$ with $\varepsilon=m_\pi^2/(4m_p^2)$. Hence, in the asymmetric limit the form factor can be calculated up to $Q^2_\text{max} \approx 16\,m_p^2 \approx 4$ GeV$^2$ without crossing any quark singularities. As a second example we consider the integral $\mathcal{A}(t)$ in Eq.  which describes the $\pi^0\to e^+ e^-$ decay. In that case the external momenta are $\Delta$ and $p$, with Lorentz invariants $\Delta^2 = 4t$, $p^2=-(t+m^2)$ and $p\cdot\Delta=0$, whereas $\Sigma$ is the real loop momentum. The internal photon momenta are $\Sigma \pm \Delta/2$ and the lepton momentum is $\Sigma+p$, so the singularity conditions become $$\begin{aligned} \left( \text{Im}\,\frac{\Delta}{2}\right)^2 < 0 \quad \text{and} \quad \left( \text{Im}\,p\right)^2 < m^2\,,\end{aligned}$$ where $m$ is the lepton mass. 
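Returning briefly to the form-factor domain above: the crossing point of the $\alpha=1/2$ region with the line $\eta_+=\omega$ can be verified by solving the boundary condition as a quadratic in $\omega$ (a sketch with an illustrative contour mass $m_p = 0.5$ GeV):

```python
import math

m_pi, m_p = 0.1349768, 0.5              # GeV; m_p ~ 0.5 GeV for light quarks

# on eta_+ = omega the boundary m_pi^2 eta_+ + m_pi^4/4 + omega^2 = 4 m_p^2 (eta_+ + omega)
# becomes omega^2 + (m_pi^2 - 8 m_p^2) omega + m_pi^4/4 = 0
b, c = m_pi**2 - 8 * m_p**2, m_pi**4 / 4
omega_num = (-b + math.sqrt(b * b - 4 * c)) / 2          # larger root

eps = m_pi**2 / (4 * m_p**2)
omega_closed = 4 * m_p**2 * (1 - eps / 2 + math.sqrt(1 - eps))

Q2_max = 2 * omega_num                  # ~16 m_p^2 ~ 4 GeV^2
```

The numerical root agrees with the closed form, and $Q^2_\text{max}=2\omega$ lands just below $4$ GeV$^2$.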
The analogous arrangement in the general frame is $$\begin{aligned} \label{general-frame-2} \Delta = \frac{1}{\mathcal{N}}\left[\begin{array}{c} 0 \\ 0 \\ \!-2i\sqrt{t}\sqrt{t+m^2} \\ 4\alpha t \end{array}\right]\!\!, \,\, p = \frac{1}{\mathcal{N}}\left[\begin{array}{c} 0 \\ 0 \\ 2i\alpha\sqrt{t}\sqrt{t+m^2} \\ -(t+m^2) \end{array}\right],\end{aligned}$$ with $\mathcal{N}=\sqrt{-(t+m^2)+4\alpha^2 t}$. In this case the maximal domains correspond to the limits $\alpha=0$ or $\alpha\to\infty$ and one arrives at the condition $(\text{Im}\,t)^2 < 4m^2\, \text{Re}\,(-t)$ from the lepton pole and $t>0$ from the photon poles, which are quoted in Eq. . Although in principle these regions are disjoint, the analysis of Eq.  shows that the point $t=0$ contains an integrable singularity and thus $\mathcal{A}(0)$ is well-defined if $m>0$. Four-body phase space {#app:4bphase} ===================== In this appendix we work out the four-body phase-space integral $d\Phi_4$ that enters in the $\pi^0\to e^+ e^- e^+ e^-$ decay of Eq. . The decay width of a particle with momentum $P$ and mass $M$ decaying into $n$ daughter particles with momenta $p_i$ and masses $m_i$ is given by $$\begin{aligned} \Gamma (P\rightarrow p_i) = \frac{1}{S}\,\frac{1}{2 M}\int d \Phi_n \, | \mathcal{M}|^2 \, ,\end{aligned}$$ with $S$ the symmetry factor, $| \mathcal{M}|^2$ the spin summed and squared matrix element, and $d \Phi_n$ the phase-space integral for an $n$-particle final state given by $$\begin{aligned} d \Phi_n = (2\pi)^4 \delta^4\left(P- \sum_{i=1}^n p_i \right) \prod_{i=1}^{n} \frac{d^3 \textbf{p}_i}{(2\pi)^3 \,2 E_i} \, .\end{aligned}$$ Following Ref. [@Byckling:1971vca], we rewrite the integration in terms of invariant mass variables. 
For $n=4$ one obtains $$\label{app.eq:phasespace} d \Phi_4 = \frac{1}{(2\pi)^8} \frac{\pi^2}{32 M^2} \, \frac{ d s_{12}\, d s_{34} \,d s_{124} \,d s_{134} \,d s_{14} }{\sqrt{-\Delta_{(4)}}}$$ where, for degenerate decay products with $m_i=m$, the two- and three-particle Mandelstam variables read $$\label{app.eq:sij} \begin{split} s_{ij} &= -(p_i+p_j)^2 = 2m^2 - 2p_i\cdot p_j\,, \\ s_{ijk} &= -(p_i+p_j+p_k)^2 = s_{ij} + s_{ik} + s_{jk} - 3m^2 \end{split}$$ and the four-dimensional Gram determinant $\Delta_{(4)}$ contains all possible dot products of four-vectors: $$\label{appen:eq.delta4} \renewcommand{\arraystretch}{1.2} \Delta_{(4)} = \det\left[ \begin{array}{cccc} -m^2 & p_1\cdot p_2 & p_1\cdot p_3 & p_1\cdot p_4 \\ p_1\cdot p_2 & -m^2 & p_2\cdot p_3 & p_2\cdot p_4 \\ p_1\cdot p_3 & p_2\cdot p_3 & -m^2 & p_3\cdot p_4 \\ p_1\cdot p_4 & p_2\cdot p_4 & p_3\cdot p_4 & -m^2 \end{array}\right].$$ In contrast to [@Escribano:2015vjz; @Byckling:1971vca; @Nyborg:1965zz] we employ a Euclidean signature, however with $s_{ij}$ and $s_{ijk}$ defined such that they have the same meaning in Minkowski and Euclidean conventions. To work with the invariant mass variables of Eq. , one replaces the $p_i\cdot p_j$ in the Gram determinant according to Eq.  together with $$\begin{aligned} \sum_{i<j} s_{ij} = M^2 + 8m^2 \, .\end{aligned}$$ The physical region of integration is bounded by the surface $\Delta_{(4)}=0$. Following the derivation of Refs. [@Escribano:2015vjz; @byers_physical_1964; @Nyborg:1965zz], we impose this relation on the invariant mass variables and begin by solving $\Delta_{(4)}=0$ for $s_{14}$. 
It yields $$\label{ap.eq.s14b} s_{14}^{\pm} = \frac{b \pm 2 \sqrt{G(s_{124},s_{34},s_{12})\,G(s_{134},s_{12},s_{34})}}{\lambda(s_{12},s_{34},M^2)}\, ,$$ where $\lambda(u,v,w) = u^2+v^2+w^2-2uv-2uw-2vw$ is the Källén function, the $G$ functions are given by $$\begin{split} &G(u,v,w) = m^2(w-M^2)^2 \\ & +v\left[(u-m^2)^2-(u+m^2)(w+M^2)+w M^2+uv\right], \end{split}$$ and $b$ is the coefficient of $-8\Delta_{(4)}$ linear in $s_{14}$: $$\begin{split} b &= G(s_{124},s_{34},s_{12}) + G(s_{134},s_{12},s_{34}) \\ &+ M^2 c d - (c+d)(s_{12}\, d + s_{34}\, c) \end{split}$$ with $c=s_{124}-m^2$ and $d=s_{134}-m^2$. The regions of the $s_{124}$ and $s_{134}$ integrations are bounded by the surfaces satisfying $s_{14}^+= s_{14}^-$, which is fulfilled when either of the $G$ functions in Eq.  vanishes: $G(s_{124},s_{34},s_{12}) = 0$ or $G(s_{134},s_{12},s_{34}) = 0$. Solving $G(u,v,w)=0$ for $u$ yields $$u^\pm = \frac{w-v+M^2+2m^2}{2} \pm \frac{\sqrt{v-4m^2}\sqrt{\lambda(v,w,M^2)}}{2\sqrt{v}}$$ and thus determines the integration boundaries $s_{124}^\pm$ and $s_{134}^\pm$ as functions of the two dilepton invariant masses $s_{12}$ and $s_{34}$. Finally, $s_{34}$ and $s_{12}$ range from the threshold at $4m^2$ to $(M-\sqrt{s_{12}})^2$ and $(M-2m)^2$, respectively; here the ordering is arbitrary and could also be exchanged. A valuable check when rewriting the phase space integral in different coordinates is the massless limit. For massless daughter particles the $n$-body phase space is given by $$\begin{aligned} \Phi_n =\frac{1}{2\,(4\pi)^{2n-3}} \frac{M^{2n-4}}{\Gamma(n)\,\Gamma(n-1)} \, .\end{aligned}$$ Integrating over the phase space volume only, as given in Eq.  with the borders as suggested for $m=0$, reproduces the limit exactly as it should. The final decay rate for the decay of the neutral pion ($M=m_\pi$) into two dileptons ($m=m_e$) is then given by $$\begin{aligned} \label{finalphasespace} \Gamma_{\pi^0 \rightarrow 2e^+ 2e^-} = \frac{1}{2^{16}\,\pi^6\, m_\pi^3} \!\!\!\! 
\int\limits_{4m_e^2}^{(m_\pi-2m_e)^2} \!\!\!\!\!\!\!\! d s_{12} \int\limits_{4m_e^2}^{(m_\pi-\sqrt{s_{12}})^2} \!\!\!\!\!\!\!\! d s_{34} \;\; \int\limits_{s_{124}^-}^{s_{124}^+} d s_{124} \int\limits_{s_{134}^-}^{s_{134}^+} d s_{134} \int\limits_{s_{14}^-}^{s_{14}^+} d s_{14} \,\frac{| \mathcal{M}|^2}{\sqrt{-\Delta_{(4)}}} \,.\end{aligned}$$ References {#references .unnumbered} ========== [^1]: All quantities are color-diagonal and thus the color trace is 3. The flavor matrix of the $\pi^0$ amplitude is $\mathrm{diag}\,(1,-1)$ and that of the quark-photon vertex is $\mathrm{diag}\,(q_u,q_d)$, so the flavor trace is $q_u^2-q_d^2 = 1/3$. Since the overall normalization of the pion amplitude is determined by the canonical Bethe-Salpeter norm, which follows from demanding unit residue at the pion pole in the $q\bar{q}$ scattering matrix, a different color-flavor normalization can always be absorbed by the dressing functions in Eq. . Our choice above is such that $f_1(k^2) = B(k^2)/f_\pi$ in the chiral limit, with $B(k^2) = M(k^2)/Z_f(k^2)$.
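As a cross-check of the massless-limit formula for $\Phi_n$ quoted in the appendix above, the $n=2$ and $n=3$ cases reproduce the textbook values $1/(8\pi)$ and $M^2/(256\pi^3)$ (a sketch):

```python
import math

def phi_massless(n, M=1.0):
    """Massless n-body phase-space volume as in the appendix; Gamma(n) = (n-1)!."""
    return (M**(2 * n - 4)
            / (2 * (4 * math.pi)**(2 * n - 3)
               * math.factorial(n - 1) * math.factorial(n - 2)))

phi2 = phi_massless(2)                  # = 1/(8 pi)
phi3 = phi_massless(3)                  # = 1/(256 pi^3)
```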
--- abstract: 'We demonstrate stable, long-term trapping of fermionic $^6$Li atoms in an optical lattice with MHz trap frequencies for use in a quantum gas microscope. Adiabatic release from the optical lattice in the object plane of a high-numerical-aperture imaging system allows a measurement of the population distribution among the lowest three bands in both radial directions with atom numbers as low as $7\times 10^2$. We measure exponential ground band heating rates as low as 0.014(1) $\mathrm{s^{-1}}$ corresponding to a radial ground state $1/e$ lifetime of $71(5)~\mathrm{s}$, fundamentally limited by scattering of lattice photons. For all lattice depths above 2 recoil, we find radial ground state lifetimes $\ge 1.6 \times 10^6$ recoil times.' author: - 'S. Blatt' - 'A. Mazurenko' - 'M. F. Parsons' - 'C. S. Chiu' - 'F. Huber' - 'M. Greiner' title: 'Low-noise optical lattices for ultracold $^6$Li' --- Trapped quantum particles are used widely in modern atomic physics from quantum information science [@leibfried03; @wineland13] and quantum simulations of many-body physics [@bloch08; @blatt12] to atomic clocks [@ludlow15] and studies of fundamental physics [@gabrielse12; @loh13]. All of these experiments benefit from long motional coherence times, often because they enable coherent rather than statistical averaging of results. Such long times require preparation of the particles in a well-defined motional state of the trap, ideally by ground state cooling. The traps must not only be stable enough to prevent the particles from escaping, but they should preserve the carefully prepared state of motion for as long as possible. Its light mass, $m$, makes fermionic [[$^{6}\mathrm{Li}$]{}]{} particularly suited to quantum simulations in optical lattices [@bloch12]. 
All energy scales in an optical lattice are naturally parametrized by the lattice recoil energy, $h{\ensuremath{\nu_\mathrm{rec}}}{}$, and recoil frequency, ${\ensuremath{\nu_\mathrm{rec}}}{} = h / (8 m a^2)$, associated with the geometric lattice spacing, $a$, where $h$ is Planck’s constant. For the same tunneling rate in recoil units, the absolute tunneling rate is a factor of 14.5 (6.7) faster for [[$^{6}\mathrm{Li}$]{}]{} than for [[$^{87}\mathrm{Rb}$]{}]{} ([[$^{40}\mathrm{K}$]{}]{}) atoms. Assuming typical atomic lifetimes of one minute, it will thus be possible to study thermalization processes and superexchange dynamics on timescales much longer than previously accessible [@esslinger10]. Here, we demonstrate an intensity-stable, high-power optical lattice for [[$^{6}\mathrm{Li}$]{}]{} atoms. The optical lattice is designed for a quantum gas microscope where individual sites of the optical lattice and individual atoms can be resolved in fluorescence microscopy [@bakr09; @sherson10]. Fluorescence imaging of [[$^{6}\mathrm{Li}$]{}]{} with resonant light at $\lambda_p = {\ensuremath{671~\mathrm{{nm}}}}$ is hampered by the large resonant recoil energy $E_p = h^2 / (2 m \lambda_p^2) = h \times {\ensuremath{74~\mathrm{{kHz}}}}$. Each scattering event adds $\ge 2 E_p$ on average, regardless of the atom’s motional state [@wolf00]. This recoil heating makes it challenging to keep the atoms cold enough to suppress tunneling while scattering $\mathcal{O}(10^4)$ resonant photons to form an image. For this reason, we have to combine a laser cooling scheme with deep optical lattices and MHz trap frequencies. Trap frequencies in the MHz regime require a trapping laser with low intensity noise because parametric heating rates due to laser intensity fluctuations increase quadratically with trap frequency [@gehm98; @gehm00]. Trap quality can be degraded further through thermal lensing effects in the lattice optics at high intensities. 
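The quadratic scaling can be made concrete with the standard rate-equation estimate of Refs. [@gehm98; @gehm00], $\Gamma_\text{par} = \pi^2 \nu_\text{tr}^2\, S(2\nu_\text{tr})$, where $S$ is the one-sided relative-intensity-noise power spectral density at twice the trap frequency. The sketch below uses an illustrative noise level, not a value measured in this work:

```python
import math

def parametric_heating_rate(nu_trap, rin_psd):
    """Energy e-folding rate (1/s) from intensity noise:
    Gamma = pi^2 * nu_trap^2 * S(2 nu_trap), with S in 1/Hz."""
    return math.pi**2 * nu_trap**2 * rin_psd

# illustrative: a 1 MHz trap with -130 dBc/Hz relative intensity noise at 2 MHz
S = 10.0**(-130 / 10)                   # 1e-13 per Hz
rate = parametric_heating_rate(1.0e6, S)
```

Even this rather low noise level already gives an e-folding heating rate of order $1~\mathrm{s^{-1}}$ at MHz trap frequencies, which illustrates why a low-noise laser system is essential here.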
To address these challenges, we implemented a high-power and low-noise lattice laser system based on Yb-doped fiber amplifiers seeded with an intensity-stable Nd:YAG laser. In this paper, we use this laser system to demonstrate stable trapping of the two-dimensional ground band in the object plane of our quantum gas microscope with $1/e$ lifetimes exceeding one minute. The corresponding heating rates are measured with a sensitive band mapping technique [@greiner01; @esslinger10]. The high numerical aperture ([$\mathrm{NA}$]{} = 0.87) of our imaging system allows measurement of the band populations for total atom numbers as low as $7\times 10^2$. We find that the measured heating rates are consistent with a rate equation model based on measured intensity-noise spectra and spontaneous scattering rates, which dominate the heating at high trap depths. ![(color online). (a) Simplified [[$^{6}\mathrm{Li}$]{}]{} level structure (not to scale). We use an incoherent mixture of the two lowest states in the [[$^{6}\mathrm{Li}$]{}]{} $^{2}\mathrm{S}_{1/2}$ ground state manifold. (b) Front view of the glass cell. The [$1064~\mathrm{{nm}}$]{} optical lattices reflect off of a superpolished substrate at shallow angle of incidence ($70^\circ$). Fluorescence induced by the probe beam is collected through a high [$\mathrm{NA}$]{} imaging system whose object plane is ${\ensuremath{\mathord{\sim}}}{\ensuremath{10~\mathrm{{\mu m}}}}$ below the substrate. (c) Top view of the glass cell and definition of the lab frame $(X,Y,Z)$. The optical lattices are retroreflected along $X$ and $Y$.[]{data-label="fig:setup"}](Fig1_experiment_schematic){width="\columnwidth"} The lowest two hyperfine manifolds in [[$^{6}\mathrm{Li}$]{}]{} are shown in Fig. \[fig:setup\](a), and the states are commonly labeled [$|{1}\rangle$]{}-[$|{6}\rangle$]{} according to their energy splitting in a magnetic bias field. 
In our experiment, we load an incoherent mixture of atoms in states [$|{1}\rangle$]{} and [$|{2}\rangle$]{} into a high-power optical dipole trap ([$1064~\mathrm{{nm}}$]{}, [$300~\mathrm{{W}}$]{}) and transfer its focus into the center of the fused silica vacuum cell shown in Fig. \[fig:setup\](b) and (c). The atoms are then transferred into a crossed optical dipole trap formed by incoherent light derived from a superluminescent diode at [$780~\mathrm{{nm}}$]{}, whose short coherence length avoids fringing when passing through the imaging optics [@huber14]. The crossed dipole trap is located [$80~\mathrm{{\mu m}}$]{} below the object plane of a high-resolution microscope system and the sample is evaporated further in this trap at a magnetic field of [$300~\mathrm{{Gauss}}$]{}. We then load a single layer of an optical accordion lattice [@huber14] and decrease the accordion incidence angle from $88^\circ$ to $70^\circ$. The angular change simultaneously compresses the sample and transports it to a distance of [$10~\mathrm{{\mu m}}$]{} from the superpolished mirror that is the final lens of the imaging system. We then adiabatically load ${\ensuremath{\mathord{\sim}}}3.5\times 10^3$ atoms into a three-dimensional optical lattice along directions $X$, $Y$, and $Z$. As shown in Fig. \[fig:setup\](c), the $\lambda = {\ensuremath{1064~\mathrm{{nm}}}}$ optical lattice beams reflect off of the superpolished mirror resulting in a standing wave along the axial direction with spacing $a_z = \frac{\lambda}{2 \cos{\theta}} = {\ensuremath{1.56~\mathrm{{\mu m}}}}$ for an incidence angle $\theta = 70^\circ$. By retroreflecting each lattice beam, we obtain a non-interfering optical lattice along the $X$ and $Y$ axes with equal spacings $a_x = a_y = \frac{\lambda}{2\sin\theta} = {\ensuremath{569~\mathrm{{nm}}}}$. The radial (axial) lattice spacing corresponds to a recoil frequency ${\ensuremath{\nu_\mathrm{rec}}}{} = {\ensuremath{25.9~\mathrm{{kHz}}}}$ ([$3.4~\mathrm{{kHz}}$]{}). 
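The quoted spacings follow directly from the reflection geometry. The following sketch assumes $\lambda = 1064$ nm and an incidence angle of exactly $70^\circ$; the small difference from the quoted 569 nm radial spacing presumably reflects the exact calibrated angle in the experiment:

```python
import math

# Surface-reflected lattice geometry: axial standing wave from reflection off
# the superpolished mirror, radial lattices from retroreflected beams.
lam = 1064e-9                       # lattice wavelength (m)
theta = math.radians(70)            # angle of incidence (assumed exact)

a_z = lam / (2 * math.cos(theta))   # axial spacing
a_x = lam / (2 * math.sin(theta))   # radial spacing

h = 6.62607015e-34                  # Planck constant (J s)
m = 6.015 * 1.66053906660e-27       # 6Li mass (kg)
nu_rec_z = h / (8 * m * a_z**2)     # axial recoil frequency

print(a_z * 1e6)        # ~1.56 um
print(a_x * 1e9)        # ~566 nm
print(nu_rec_z / 1e3)   # ~3.4 kHz
```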
To image the atoms *in situ*, we apply the probe beam on the D$_2$ transition – containing two frequencies resonant with both hyperfine manifolds in the ground state – as indicated in Fig. \[fig:setup\](b). The fluorescence is collected on an intensified CCD camera to produce the image in Fig. \[fig:bandmap\](a). Note that we have increased the field of view of our 0.87 [$\mathrm{NA}$]{} infinite conjugate ratio imaging system at the expense of resolution by demagnifying the image. We load the optical lattice by ramping up the lattice powers $P_x$ and $P_y$ to ${\ensuremath{0.8~\mathrm{{W}}}}$ adiabatically, as shown in Fig. \[fig:bandmap\](b). Each lattice beam’s power is controlled by two independent servos, and control over the lattice power is handed over automatically depending on the setpoint. For setpoints below $P_\mathrm{ex} = {\ensuremath{0.95~\mathrm{{W}}}}$, we use a low-power high-bandwidth servo. For setpoints above $P_\mathrm{ex}$, we use a high-power low-bandwidth servo. After loading the lattice, $P_x$ and $P_y$ are changed adiabatically to a holding power and the atoms are held in the corresponding lattice for a variable time, $t_\mathrm{hold}$, during which they are heated by lattice intensity fluctuations and spontaneous scattering of lattice photons. At the end of the experiment, we release the atoms from the lattice using a ramp that is adiabatic with respect to the band gaps of the lattice, but fast compared to the residual harmonic confinement timescales. The high-bandwidth servo allows releasing the atoms within [$200~\mathrm{{\mu s}}$]{}, using the ramp shown in the inset of Fig. \[fig:bandmap\](b). At the end of the ramp, the atoms are allowed to expand ballistically for [$1.7~\mathrm{{ms}}$]{}, after which we apply a short probe pulse to obtain the image in Fig. \[fig:bandmap\](c). ![(color online).
Starting with $3.5\times 10^3$ atoms in the ($X$, $Y$) plane shown in the in-trap fluorescence image (a, dark image subtracted), we apply the adiabatic ramp shown in panel (b) and obtain the band-mapping image (c). These images are fit (d) with a convolution of in-trap distribution and band map (the deconvolved band map is shown in the inset). The fit residuals are shown in panel (e). All images represent the same region, and images (c)-(e) share the same color bar. We show cuts through the center of each image in the margins.[]{data-label="fig:bandmap"}](Fig2_bandmap){width="\columnwidth"} This image shows clear rectangular features due to the band edges of the radial lattice that are convolved with the in-trap density distribution. We model the band map distribution by flat rectangular features with widths set by the radial lattice spacings, convolved with a two-dimensional Gaussian distribution. The resulting fit and its residuals are shown in panels (d) and (e), and cuts through the center of each image are shown in the margins. We extract the radial band populations $C_{ij}$ from the fit amplitude in the corresponding Brillouin zone, shown in the inset of panel (d). ![(color online). Ground band heating rates $\Gamma_{00}$ as a function of total lattice power $P_\mathrm{tot} = P_x + P_y$. The marker color indicates the fractional lattice power mismatch $(P_x - P_y) / P_\mathrm{tot}$. The heating rates are obtained by fitting exponential decays to the ground band populations $C_{00}$ from band mapping images after holding atoms in the lattice for a variable time. The error bars indicate the statistical uncertainty from the fits. We calibrate the lattice trap frequency $\nu_x$ ($\nu_y$) via lattice modulation spectroscopy for different lattice powers $P_x$ ($P_y$), leading to the calibrated scales shown at the top for $P_x = P_y$. 
The shaded areas indicate the results from a rate equation model based on the measured intensity-noise spectra for different angular mismatch $\Delta\theta$. The dashed line indicates the simulated heating rate without intensity control. The solid line shows the asymptotic contribution from spontaneous scattering of lattice light $\propto \sqrt{P_\mathrm{tot}}$.[]{data-label="fig:heating"}](Fig3_heating_rate){width="\columnwidth"} By varying the hold time in the lattice and observing how the radial ground state population $C_{00}$ varies with time, we can measure an exponential heating rate. The resulting heating rates for different lattice powers are shown in Fig. \[fig:heating\]. We find exceptionally low heating rates and correspondingly long lifetimes of the radial ground band of up to [$71(5)~\mathrm{{s}}$]{}. Even at the deepest trap depths and MHz trap frequencies required to implement single-site resolved imaging in a quantum gas microscope, the radial ground band still has a lifetime of [$20~\mathrm{{s}}$]{}. For the deepest trap depths the heating process is dominated by exponential heating due to spontaneous scattering of lattice photons (solid line in Fig. \[fig:heating\]). Because the axial optical lattice is formed by two lattice beams, the position of the atoms is sensitive to intensity fluctuations on either lattice beam. The spacing of the axial lattice formed by each beam depends on the angle of incidence $\theta$ as argued above. If the angles of incidence for $X$ and $Y$ are mismatched by $\Delta\theta \equiv |\theta_x - \theta_y|$, uncorrelated intensity noise on $P_x$ and $P_y$ will lead to fluctuations of the axial trap minimum, causing fast heating that has the same dependence on motional quantum number as spontaneous photon scattering. By including all of these processes in a rate equation model based on measured intensity-noise spectra and a geometric estimate of $\Delta\theta \le 1.2^\circ$, we obtain the shaded region in Fig. 
\[fig:heating\], whose lower boundary corresponds to $\Delta\theta = 0$. In the model, the radial ground state population $C_{00}$ is calculated by integrating over the axial state populations of the three-dimensional trap (see Appendix \[sec:heating-rate-model\]). The model also requires the trap depths along the radial and axial axes, which we calculate from measured trap frequencies under the assumption of a sinusoidal potential. The trap frequencies are calibrated via lattice modulation spectroscopy [@huber14]. The modulation spectra also show cross-modulation peaks at the fundamental of the trap frequency, confirming the cross-modulation mechanism. For lattice powers corresponding to trap depths below $2 h{\ensuremath{\nu_\mathrm{rec}}}{}$, the model deviates from the experimental data because the harmonic oscillator approximation inherent in the rate equation model does not describe shallow lattices well. ![(color online). (a) Lattice optics setup including fiber amplifier, collimation optics, optical isolators, and power actuators as described in the main text. (b) Relative intensity-noise (RIN) spectra for the $X$ lattice laser (the $Y$ laser has similar features) and the seed laser taken as described in Appendix \[sec:rin-measurements\]. (c) For low lattice powers, the high-bandwidth analog servo suppresses intensity noise below [$10~\mathrm{{kHz}}$]{}. The servo is conservatively tuned to work across 1.5 orders of magnitude in setpoint. The servo gain and bandwidth decrease with decreasing setpoint, leading to a larger contribution of noise from the servo electronics. The increased noise at frequencies corresponding to the band gaps of the optical lattice leads to the higher heating rates for low lattice powers in Fig. \[fig:heating\].
At the same time, the servo reduces the noise below the seed laser RIN in the frequency range corresponding to the Hubbard model parameters [@pichler12].[]{data-label="fig:rin"}](Fig4_rin){width="\columnwidth"} The lattice beams are derived from Yb-doped fiber amplifiers (Nufern), seeded by an intensity-stable Nd:YAG non-planar ring oscillator laser (Innolight Mephisto). Because of the high optical power requirements, the beam path schematically shown in Fig. \[fig:rin\](a) is designed to be mechanically and thermally stable. At high intensities, many materials show thermal lensing effects that result in beam pointing and focusing changes on fast timescales. For these reasons, the beam paths (as well as the vacuum chamber) use fused silica optics which are less susceptible to thermal lensing because of the material’s high thermal conductivity, low thermal expansion coefficient, and low index of refraction sensitivity to temperature. Thermal effects in $\mathrm{Tb}_3\mathrm{Ga}_5\mathrm{O}_{12}$ (TGG) optical isolators result in a loss of isolation at high optical powers [@dooley12] which can lead to damage to the fiber amplifiers from backreflections. To provide sufficient isolation, we use two stages of TGG Faraday rotators (FR) with fused silica Brewster polarizers (BP) and $\lambda/2$ waveplates (HWP). To suppress thermal variation in the isolators, the full optical power is always incident on the isolators. The optical power in the lattice beams is controlled in two stages. As a first stage, we use a low-bandwidth, high-dynamic-range actuator built from a z-cut birefringent quartz plate attached to a galvo motor. The quartz plate acts as a variable waveplate (Berek compensator) and in combination with a polarizer becomes a variable attenuator with a dynamic range of ${\ensuremath{\mathord{\sim}}}{\ensuremath{20~\mathrm{{dB}}}}$ over ${\ensuremath{\mathord{\sim}}}4.5^\circ$ of rotation. 
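The action of the variable-waveplate attenuator can be understood with a two-component Jones-calculus model. The sketch below (a simplified model, not the actual device calibration) shows that a waveplate of adjustable retardance $\delta$ with its fast axis at $45^\circ$ between parallel polarizers transmits $T = \cos^2(\delta/2)$:

```python
import numpy as np

# Jones-calculus sketch of a variable waveplate + polarizer used as an
# attenuator (illustrative model of the Berek-compensator stage).
def transmission(delta):
    """Power transmission for retardance delta (radians)."""
    R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    # waveplate with fast axis rotated to 45 degrees
    waveplate = R(np.pi / 4) @ np.diag([1.0, np.exp(1j * delta)]) @ R(-np.pi / 4)
    E_out = waveplate @ np.array([1.0, 0.0])   # input polarized along x
    return float(np.abs(E_out[0]) ** 2)        # analyzer also along x

print(transmission(0.0))      # full transmission
print(transmission(np.pi))    # extinction
```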
For the second stage of intensity stabilization, we use a large-active-area acousto-optical modulator (AOM, Crystal Technology 3080-197) as the actuator. Thermal effects in the $\mathrm{TeO}_2$ AOM crystal can also result in pointing drifts on slow time scales. Since the optical lattices are focused to [$80~\mathrm{{\mu m}}$]{} waists at the position of the atoms, such pointing noise would cause position fluctuations of the trap center. To ameliorate these effects, the lattice beams are focused through the AOM crystal by adjusting the distance between the fiber tip and the air-spaced fused silica collimator (Optosigma 027-0510). We detect the optical power on two independent, shot-noise-limited transimpedance amplifiers using [$1064~\mathrm{{nm}}$]{} enhanced InGaAs photodiodes (PD, Hamamatsu G8370-01) with gain just small enough to cover bandwidths up to [$700~\mathrm{{kHz}}$]{}. The detected photovoltages are then used as the input to two independent servo loops. For optical powers above $P_\mathrm{ex} = {\ensuremath{0.95~\mathrm{{W}}}}$, we do not require fast control over the lattice depth but require that the laser intensity is as passively stable as possible. In this regime, we feed back on the angular rotation of the quartz plate using a slow digital feedback loop. The low-bandwidth (small-signal [$3~\mathrm{{dB}}$]{} point ${\ensuremath{\mathord{\sim}}}{\ensuremath{2~\mathrm{{kHz}}}}$) actuator ensures that we cannot write noise onto the laser at frequencies comparable to the trap frequencies. In Fig. \[fig:rin\](b), we show laser intensity power spectral densities for the $X$ lattice and seed lasers. The spectra are normalized to the DC power and are commonly referred to as relative intensity-noise (RIN) spectra (see Appendix \[sec:rin-measurements\]). The prominent relaxation oscillation peak in the seed laser spectrum becomes well-suppressed when engaging the built-in intensity-noise eater. 
After seeding the fiber amplifiers, the intensity stability is degraded by [$10-20~\mathrm{{dB}}$]{} for Fourier frequencies below [$1~\mathrm{{MHz}}$]{}. Noise spikes from the switching power supplies driving the fiber amplifier pump diodes can be strongly suppressed by low-pass filtering the power supplies (MPE DS26387). Acoustic pointing noise on the fiber tip from power-supply fans is suppressed by removing the power supplies from the amplifier enclosure. For optical powers below $P_\mathrm{ex}$, we use analog feedback on the rf amplitude driving the AOM with an rf mixer and limit the servo’s bandwidth by inserting a steep low-pass filter at [$500~\mathrm{{kHz}}$]{} (LPF-B0R5). The bandwidth limitation ensures that no electronic noise gets written onto the lattice amplitude at frequencies in the MHz regime. For small power setpoints, the AOM servo has a nonlinear transfer function due to the use of a frequency mixer. This nonlinearity results in setpoint-dependent gain and reduced bandwidth for small setpoints, which is partially compensated by a Schottky-diode-based linearization circuit in the controller. By conservatively tuning the loop, we are able to control the lattice depth over 2.5 orders of magnitude at the expense of slightly degraded noise spectra and lifetime for small lattice depths, as seen in Fig. \[fig:heating\]. Here, such a large dynamic range is useful to obtain the adiabatic band mapping images. Additionally, the low-power servo suppresses noise at frequencies relevant for quantum simulation of Hubbard models [@pichler12]. If required, the servo tuning can be reoptimized for even longer lifetimes for lattice depths of interest. In conclusion, we have demonstrated stable trapping of [[$^{6}\mathrm{Li}$]{}]{} atoms in the radial ground band of [$1064~\mathrm{{nm}}$]{} optical lattices spanning 2.5 orders of magnitude in trap depth. 
We measure ground band populations with sensitivity down to $7 \times 10^2$ atoms, and the $1/e$ lifetime $\tau$ in the ground band can exceed one minute for deep lattices and is longer than [$10~\mathrm{{s}}$]{} ($2\pi{\ensuremath{\nu_\mathrm{rec}}}{}\tau > 1.6\times 10^6$) for all lattice depths above $2 h{\ensuremath{\nu_\mathrm{rec}}}{}$. These heating rates are one to two orders of magnitude smaller than in ion traps with comparable trap frequencies [@brownnutt14], despite a smaller distance to the nearest surface ([$10~\mathrm{{\mu m}}$]{} here). In a three-dimensional optical lattice with MHz trap frequencies, we demonstrate radial ground band lifetimes that are comparable with the longest trap lifetimes measured in optical dipole traps [@ohara99] and one-dimensional optical lattices [@gibbons08]. The heating rates are well-explained by a rate equation model based on the measured intensity-noise spectra. These spectra can be further tailored by servo design for application in quantum simulation experiments with [[$^{6}\mathrm{Li}$]{}]{}. Optical traps with MHz frequencies enable ion-trap-like spectral addressability [@wineland79; @leibfried03], are compatible with proximity to surfaces, and may have applications in achieving strong coupling to high-quality-factor mechanical resonators [@camerer11]. We thank W. Setiawan and K. Wooley-Brown for early contributions, G. Jotzu for discussions, and B. Lincoln of Nufern for his hospitality and openness. We acknowledge support by ARO DARPA OLE, ARO MURI, and NSF. A.M., M.F.P., and C.S.C. are supported by NSF GRFP, and S.B. acknowledges support by the Harvard Quantum Optics Center.
RIN Measurements {#sec:rin-measurements} ================ To measure the RIN, we used a shot-noise-limited transimpedance amplifier design [@scott01] based on an InGaAs photodiode (Hamamatsu G8370-01), a fast operational amplifier with [$800~\mathrm{{MHz}}$]{} gain-bandwidth product (OPA843), and a transimpedance gain of [$330~\mathrm{{\Omega}}$]{}. To reduce thermal drift from interference between the photodiode and its cover glass, we removed the photodiode window. Residual light contamination was attenuated by placing an interference filter (Semrock FF01-1020/LP-25, ND = 5 for visible light) before the photodiode. We measured RIN power spectral densities (PSDs) by putting the output of such a photodetector onto a Fourier transform precision voltmeter (SR760, varying resolution bandwidth \[RBW\] [$65~\mathrm{{Hz}}$]{} below [$12.5~\mathrm{{kHz}}$]{}, and [$500~\mathrm{{Hz}}$]{} above) below [$100~\mathrm{{kHz}}$]{} or a battery-powered RF spectrum analyzer (Anritsu MS2721A, RBW = [$10~\mathrm{{Hz}}$]{}) above [$100~\mathrm{{kHz}}$]{} Fourier frequency. The noise spectra were normalized to a frequency bin width of [$1~\mathrm{{Hz}}$]{} (assuming white noise in each bin), and the SR760 spectrum was converted from $\mathrm{dBV_{rms}/\sqrt{Hz}}$ to $\mathrm{dBm/Hz}$, to be comparable with the spectrum analyzer. Such spectra were then further normalized to the optical carrier power using the DC voltage measured with an RMS voltmeter. For RIN measurements, we typically apply [$12~\mathrm{{mW}}$]{} of optical power to the photodiode (resulting in [$2.0~\mathrm{{V}}$]{} DC signal) to reduce the shot noise level below the noise floor $P_\text{floor}$ of the RF spectrum analyzer (typically $P_\text{floor} \simeq {\ensuremath{-155~\mathrm{{dBm}}}}$ at RBW = [$10~\mathrm{{Hz}}$]{}).
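The normalization chain described above can be summarized in a few lines. The sketch below assumes (as is standard for $\mathrm{dBV} \to \mathrm{dBm}$ conversion) a 50 $\Omega$ reference impedance; the numbers are illustrative:

```python
import math

def dbv_to_dbm(dbv, r_load=50.0):
    """Convert dBV_rms to dBm into r_load (used for the SR760 spectra)."""
    return dbv + 10 * math.log10(1000 / r_load)

def rin_dbc_per_hz(noise_dbm, rbw_hz, v_dc, r_load=50.0):
    """RIN in dBc/Hz from a noise reading (dBm at resolution bandwidth
    rbw_hz) and the DC photovoltage v_dc, assuming white noise per bin."""
    noise_dbm_per_hz = noise_dbm - 10 * math.log10(rbw_hz)  # 1 Hz bins
    carrier_dbm = 10 * math.log10(v_dc**2 / r_load * 1000)  # carrier power
    return noise_dbm_per_hz - carrier_dbm

# Analyzer noise floor of -155 dBm at RBW = 10 Hz with a 2.0 V DC signal
# corresponds to a RIN measurement floor near -184 dBc/Hz.
print(rin_dbc_per_hz(-155.0, 10.0, 2.0))
```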
Battery-powered operation reduces noise from external sources and makes the measurements compatible with the SR760 results, allowing us to combine data from both devices in the plots shown in Fig. \[fig:rin\]. Heating Rate Model {#sec:heating-rate-model} ================== From the measured RIN spectra, we calculate single-particle heating rates in deep optical lattices due to trap intensity fluctuations and cross-modulation for mismatched trap centers [@gehm98; @ohara99; @gehm00; @supplemental]. We estimate heating rates from spontaneous scattering of lattice photons for optical lattices in the Lamb-Dicke regime from standard expressions for laser cooling [@cohen-tannoudji92; @wolf00; @leibfried03; @supplemental]. We then combine all heating rates in a three-dimensional rate equation for the probabilities $P_{{\ensuremath{\bm{n}}}}$ to occupy the motional state ${\ensuremath{\bm{n}}} = (n_x, n_y, n_z)$ of the form $$\label{eq:20} \dot{P}_{{\ensuremath{\bm{n}}}}(t) = \sum_{\Delta{\ensuremath{\bm{n}}}} [R_{{\ensuremath{\bm{n}}} \leftarrow {\ensuremath{\bm{n}}} + \Delta{\ensuremath{\bm{n}}}} P_{{\ensuremath{\bm{n}}} + \Delta{\ensuremath{\bm{n}}}}(t) - R_{{\ensuremath{\bm{n}}} + \Delta{\ensuremath{\bm{n}}} \leftarrow {\ensuremath{\bm{n}}}} P_{{\ensuremath{\bm{n}}}}(t)].$$ To compare our measured band mapping data against the heating rate coefficients, we numerically solve Eqn. (\[eq:20\]). We sum $P_{{\ensuremath{\bm{n}}}}$ over the vertical direction to get horizontal band populations $C_{ij}$ which we can then directly compare to the band map fit coefficients. Most of the heating rate coefficients depend on the RIN PSD. For the low power servo settings, we linearly interpolate (in linear units) between measured spectra such as the ones in Fig. \[fig:rin\](c). For the high power servo settings, we use the power-independent RIN spectra from Fig. \[fig:rin\](b). The PSDs are linearly interpolated (in linear units) at the Fourier frequency of interest.
For this interpolation, the Fourier frequencies of interest are multiples of the trap frequencies, $(\nu_x, \nu_y, \nu_z)$. The trap frequencies are calibrated against the measured optical power via lattice modulation spectroscopy. The initial distribution among trap states is assumed to be Boltzmann with a temperature adjusted to match the measured band map pictures at short times. We limit the state space to states with energies below the lowest modulation depth (along the $y$ direction, estimated from $\nu_y$) and assume that an atom is completely lost once it is heated to states with higher quantum numbers. The three-dimensional rate equation is propagated from the initial condition and the vertically averaged populations $C_{ij}$ are fit with exponential loss curves. The resulting exponential decay rates are then compared to the experimental data in Fig. \[fig:heating\]. The parameter with the largest uncertainty is the mismatch between the angles of incidence of the two lattice beams $\Delta\theta$ responsible for the cross-modulation heating contribution. With the simulation, we generate heating rates for several values below a conservative upper limit $\Delta\theta \le 1.2^\circ$, leading to the shaded areas in Fig. \[fig:heating\]. 
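The structure of this model can be illustrated in one dimension. The following sketch keeps only the parametric $\Delta n = \pm 2$ transitions with coefficients $R_\epsilon (n+1\pm 1)(n\pm 1)$, treats population heated past the trap depth as lost, and propagates the master equation by explicit Euler steps; the trap frequency and RIN value are illustrative, not the measured ones:

```python
import numpy as np

nu_t = 1.0e6                     # trap frequency (Hz), assumed
S_eps = 1.0e-13                  # RIN at 2*nu_t (1/Hz), assumed
R_eps = np.pi**2 / 8 * nu_t**2 * S_eps   # parametric rate coefficient (1/s)

N = 30                           # states kept below the assumed trap depth
M = np.zeros((N, N))             # dP/dt = M @ P
for n in range(N):
    up, down = R_eps * (n + 2) * (n + 1), R_eps * n * (n - 1)
    M[n, n] -= up + down         # leaving state n (loss if n + 2 >= N)
    if n + 2 < N:
        M[n + 2, n] += up        # n -> n + 2
    if n >= 2:
        M[n - 2, n] += down      # n -> n - 2

P = np.zeros(N); P[0] = 1.0      # start in the motional ground state
dt, t_hold = 1e-4, 1.0           # Euler step and hold time (s)
for _ in range(int(t_hold / dt)):
    P = P + dt * (M @ P)

print(P[0])                      # ground-state survival after t_hold
```

Fitting such survival curves with exponentials mirrors the extraction of $\Gamma_{00}$ from the data.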
References
==========

[1] doi:10.1103/RevModPhys.75.281
[2] doi:10.1103/RevModPhys.85.1103
[3] doi:10.1103/RevModPhys.80.885
[4] doi:10.1038/nphys2252
[5] doi:10.1103/RevModPhys.87.637
[6] doi:10.1103/PhysRevLett.108.113002
[7] doi:10.1126/science.1243683
[8] doi:10.1038/nphys2259
[9] doi:10.1146/annurev-conmatphys-070909-104059
[10] doi:10.1038/nature08482
[11] doi:10.1038/nature09378
[12] doi:10.1103/PhysRevLett.85.4249
[13] doi:10.1103/PhysRevA.58.3914
[14] doi:10.1103/PhysRevA.61.029902
[15] doi:10.1103/PhysRevLett.87.160405
[16] Ph.D. thesis (no DOI available)
[17] doi:10.1103/PhysRevA.86.051605
[18] doi:10.1063/1.3695405
[19] (no DOI available)
[20] doi:10.1103/PhysRevLett.82.4204
[21] doi:10.1103/PhysRevA.78.043418
[22] doi:10.1103/PhysRevA.20.1521
[23] doi:10.1103/PhysRevLett.107.223001
[24] doi:10.1109/2944.974236
[25] (no DOI available)
[26] (no DOI available)

**Supplemental Material for:\
Low-noise optical lattices for ultracold [[$^{6}\mathrm{Li}$]{}]{}**\
S. Blatt,$^*$ A. Mazurenko, M. F. Parsons, C. S. Chiu, F. Huber,$^\dag$ and M. Greiner\
*Department of Physics, Harvard University, Cambridge, Massachusetts, 02138, USA*\
(Dated: )

Lattice heating mechanisms {#sec:latt-heat-mech .unnumbered}
==========================

We present a short summary of results derived in Refs.
[@Ssavard97; @Sgehm98; @Sgehm00] and extend them to derive an expression for cross-modulation of overlapped but non-interfering optical lattices with incommensurable periodicity in the tight-binding limit. To ensure we account for all numerical factors, we present all equations with frequencies $\nu$ in units of Hz in the sense of $\cos(2\pi\nu t)$. Rates and rate coefficients $\Gamma$ still carry factors of $2\pi$ in the sense of $\exp(-\Gamma t)$ and are presented in units of $\mathrm{s}^{-1}$. Noise-induced heating in harmonic traps {#sec:heat-harm-traps} --------------------------------------- Intensity fluctuations of the dipole trapping laser will result in fluctuations of the trap frequency and produce parametric transitions between states of the same parity. In the limit of small fluctuations in a tight-binding optical lattice we can approximate the potential for an atom with mass $M$ around a lattice minimum $x_0$ as harmonic with trap frequency [$\nu_t$]{}. We model the intensity fluctuations with a small fluctuation $\epsilon$ and write $$\label{eq:3} V(x) = \frac{M}{2} (2\pi{\ensuremath{\nu_t}})^2 (1 + \epsilon) (x-x_0)^2.$$ In this limit, we can also confine ourselves to transitions between harmonic trap states $n$ and $n\pm 2$. The rate coefficients $R_{n\pm2 \leftarrow n}$ for these transitions can be calculated in time-dependent perturbation theory as [@Sgehm98] $$\label{eq:1} \begin{aligned} R_{n\pm 2 \leftarrow n} &= \frac{\pi^2}{8} {\ensuremath{\nu_t}}^2 S_\epsilon(2{\ensuremath{\nu_t}}) (n+1 \pm 1)(n\pm 1) \\ & \equiv R_\epsilon({\ensuremath{\nu_t}}) (n+1 \pm 1)(n\pm 1), \end{aligned}$$ where $S_\epsilon(2{\ensuremath{\nu_t}})$ is the relative power spectral density of the laser intensity in (linear) units of $10^{-\mathrm{dBc}/10} / \mathrm{Hz}$ evaluated at twice the trap frequency.
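For a feel for the magnitudes involved, the exponential heating rate that follows from these coefficients, $\Gamma_\epsilon = \pi^2 {\ensuremath{\nu_t}}^2 S_\epsilon(2{\ensuremath{\nu_t}})$, can be evaluated for illustrative values:

```python
import math

def gamma_eps(nu_t, rin_dbc_per_hz):
    """Exponential heating rate (1/s) from intensity noise: the RIN is given
    in dBc/Hz at twice the trap frequency and converted to a linear PSD."""
    s_eps = 10 ** (rin_dbc_per_hz / 10)      # linear PSD (1/Hz)
    return math.pi**2 * nu_t**2 * s_eps

# At a 1 MHz trap frequency, -130 dBc/Hz already gives ~1/s heating, so the
# 0.01/s-scale rates in the main text require RIN near -150 dBc/Hz or below
# at 2*nu_t (rough estimate; the energy and ground-band rates differ somewhat).
print(gamma_eps(1.0e6, -130))
```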
For an infinitely deep trap, these rate coefficients result in an exponential increase in mean energy with the rate $$\label{eq:2} \Gamma_\epsilon \equiv \langle \dot{E} \rangle / \langle E \rangle = \pi^2 {\ensuremath{\nu_t}}^2 S_\epsilon(2{\ensuremath{\nu_t}}).$$ If the harmonic trap center $x_0$ is subject to fluctuations $\delta$, we write $$\label{eq:4} V(x) = \frac{M}{2} (2\pi{\ensuremath{\nu_t}})^2 [x - (x_0 + \delta)]^2,$$ which induces transitions between trap states of opposite parity. The largest rate coefficients are given by $$\label{eq:5} \begin{aligned} R_{n\pm1 \leftarrow n} & = 4 \pi^4 \frac{M}{h} {\ensuremath{\nu_t}}^3 S_\delta({\ensuremath{\nu_t}}) (n+\frac{1}{2}\pm\frac{1}{2}) \\ & = \pi^2 {\ensuremath{\nu_t}}^2 S_{\delta / a}({\ensuremath{\nu_t}}) (n + \frac{1}{2}\pm\frac{1}{2}) \\ & \equiv R_\delta({\ensuremath{\nu_t}}) (n + \frac{1}{2}\pm\frac{1}{2}), \end{aligned}$$ which shows that we can normalize the position noise by the harmonic oscillator length $a$, defined by $a^2 \equiv \hbar / (2\pi M {\ensuremath{\nu_t}})$, to get a more intuitive result. Assuming an infinitely deep trap again, we find a linear increase in mean energy with rate $$\label{eq:6} \begin{aligned} \langle \dot{E} \rangle / (h {\ensuremath{\nu_t}}) &= (h{\ensuremath{\nu_t}})^{-1} \times 4\pi^4 M {\ensuremath{\nu_t}}^4 S_\delta({\ensuremath{\nu_t}}) \\ &= \pi^2 {\ensuremath{\nu_t}}^2 S_{\delta/a}({\ensuremath{\nu_t}}). \end{aligned}$$ Cross-modulation {#sec:cross-modulation} ---------------- Consider the case of two harmonic traps with different trap centers and different trap frequencies.
The total potential is then again harmonic with $$\label{eq:7} \begin{aligned} V(x) = &\frac{M}{2} \left[( 2\pi\nu_1)^2 (x - x_1)^2 + ( 2\pi\nu_2)^2 (x - x_2)^2 \right] \\ \equiv & \frac{M}{2} (2\pi)^2 (\nu_1^2 + \nu_2^2) \\ & \times \left[(x - x_0)^2 + r(1-r) (x_1 - x_2)^2\right], \\ \end{aligned}$$ where we have defined the trap frequency asymmetry parameter $r = \nu_1^2 / (\nu_1^2 + \nu_2^2)$ and the new trap center $x_0 = r x_1 + (1-r) x_2$. If we allow intensity fluctuations $\epsilon_i$ in each trap frequency as in Eqn. (\[eq:3\]), we see that the trap center $x_0$ also fluctuates and find (to first order in $\epsilon_i$) relative trap frequency and trap center fluctuations with $$\label{eq:8} \begin{aligned} \epsilon & = r\epsilon_1 + (1-r)\epsilon_2, \\ \delta & = r(1-r)(\epsilon_1 - \epsilon_2) (x_1 - x_2). \\ \end{aligned}$$ Clearly, the trap center fluctuates only if both traps have non-zero frequency, if their centers are mismatched, and if the intensity fluctuations on the two beams are not perfectly correlated. Assuming independent noise processes we find power spectral densities $$\label{eq:10} \begin{aligned} S_\epsilon(2 {\ensuremath{\nu_t}}) &= r^2 S_{\epsilon_1}(2{\ensuremath{\nu_t}}) + (1-r)^2 S_{\epsilon_2}(2{\ensuremath{\nu_t}}), \\ S_\delta({\ensuremath{\nu_t}}) & = [r(1-r) \Delta x]^2 [S_{\epsilon_1}({\ensuremath{\nu_t}}) + S_{\epsilon_2}({\ensuremath{\nu_t}})], \\ \end{aligned}$$ with trap mismatch $\Delta x \equiv x_1 - x_2$. Application to surface-reflected lattices {#sec:appl-surf-refl} ----------------------------------------- We are now ready to apply the heating rates derived in the previous Sections to the special case of our surface-reflected optical lattices. We model the site-local potential for a deep optical lattice (in the tight-binding regime) as a three-dimensional harmonic oscillator. Experimentally, we can directly determine the ground to first excited band energy difference using parametric heating measurements as described in the main text.
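The completed square behind Eqn. (\[eq:7\]) can be spot-checked numerically; note that the constant offset involves the squared center mismatch $(x_1 - x_2)^2$. A short check with random parameters (common prefactor $\frac{M}{2}(2\pi)^2$ dropped):

```python
import numpy as np

# Verify: nu1^2 (x-x1)^2 + nu2^2 (x-x2)^2
#       = (nu1^2 + nu2^2) [(x-x0)^2 + r(1-r)(x1-x2)^2]
# with r = nu1^2/(nu1^2+nu2^2) and x0 = r x1 + (1-r) x2.
rng = np.random.default_rng(0)
nu1, nu2, x1, x2 = rng.uniform(0.5, 2.0, size=4)
x = np.linspace(-3, 3, 7)

r = nu1**2 / (nu1**2 + nu2**2)
x0 = r * x1 + (1 - r) * x2
lhs = nu1**2 * (x - x1)**2 + nu2**2 * (x - x2)**2
rhs = (nu1**2 + nu2**2) * ((x - x0)**2 + r * (1 - r) * (x1 - x2)**2)

assert np.allclose(lhs, rhs)
print("identity holds")
```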
The intensity-noise induced heating rates along the horizontal lattice directions $\hat{x}$ and $\hat{y}$ are $$\label{eq:11} \begin{aligned} R^x_\epsilon(\nu_x) &= \frac{\pi^2}{8} \nu_x^2 S_\epsilon^x(2\nu_x), \\ R^y_\epsilon(\nu_y) &= \frac{\pi^2}{8} \nu_y^2 S_\epsilon^y(2\nu_y), \\ \end{aligned}$$ where we have neglected the small contribution of the $x$ ($y$) lattice beam to the other trap frequency $\nu_y$ ($\nu_x$). Along the vertical direction, the surface lattice beams also form an optical lattice with spacing given by the laser wavelength $\lambda$ and the angles $\theta_x$ and $\theta_y$ with respect to the surface. The position of the $n$-th lattice minimum away from the surface is given by $z_n(\theta_i) = n \lambda / (2 \sin\theta_i)$. If we assume a small angle mismatch $\theta_x = \theta - \Delta\theta/2$ and $\theta_y = \theta + \Delta\theta/2$, we find that the minima produced by the lattice beams differ by $\Delta z = z_n(\theta_x) - z_n(\theta_y) \simeq z_n(\theta) \Delta\theta / \tan\theta$. Using the arguments presented in Sec. \[sec:cross-modulation\], we can immediately see that in addition to the intensity noise heating, the position of the $n$-th vertical lattice minimum will shake as well. We find heating rates for the vertical direction as $$\label{eq:12} \begin{aligned} R^z_\epsilon(\nu_z) =& \frac{\pi^2}{8} \nu_z^2 [r^2 S_\epsilon^x(2\nu_z) + (1-r)^2 S_\epsilon^y(2\nu_z)] \\ R^z_\delta(\nu_z) =& 4 \pi^4 \frac{M}{h} \nu_z^3 \left[r(1-r) \frac{\Delta\theta}{\tan\theta} z_n(\theta)\right]^2 \\ & \times [S_\epsilon^x(\nu_z) + S_\epsilon^y(\nu_z)]. \end{aligned}$$ From Eqn. (\[eq:12\]), we can immediately see that the cross-modulation of the vertical lattice center can be problematic because it scales with the cube of the vertical trap frequency and is proportional to the intensity noise power spectral density evaluated at the fundamental of the trap frequency.
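The scaling of the cross-modulation coefficient can be made explicit in code. In the sketch below, all numerical inputs (trap frequency, flat RIN value, and minimum mismatch $\Delta z$) are hypothetical; the checks confirm the cubic dependence on $\nu_z$ and the quadratic dependence on the mismatch, which is why angle matching pays off quadratically:

```python
import math

M_li = 6.015 * 1.66053906660e-27   # 6Li mass (kg)
h = 6.62607015e-34                 # Planck constant (J s)

def r_delta_z(nu_z, s_eps, dz_mismatch, r=0.5):
    """Cross-modulation rate coefficient (1/s) for an axial-minimum
    mismatch dz_mismatch between the two beams' standing waves, assuming
    equal, flat RIN s_eps (1/Hz) on both beams."""
    return (4 * math.pi**4 * M_li / h * nu_z**3
            * (r * (1 - r) * dz_mismatch)**2 * 2 * s_eps)

base = r_delta_z(0.5e6, 1e-13, 50e-9)
print(r_delta_z(1.0e6, 1e-13, 50e-9) / base)    # factor 8: nu_z^3 scaling
print(r_delta_z(0.5e6, 1e-13, 100e-9) / base)   # factor 4: mismatch^2 scaling
```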
However, careful angle-matching of the incident lattice beams should be able to reduce this heating rate dramatically. Single-photon scattering {#sec:single-phot-scatt} ------------------------ The optical dipole trap will also produce heating from lattice photon scattering. Each scattering event will produce an average *total* energy increase of $r_h {\ensuremath{E_\mathrm{rec}}}$, where $r_h = 2$ for an optically thin atomic cloud [@Swineland79; @Scohen-tannoudji92; @Swolf00]. In a motional state resolved picture in the Lamb-Dicke regime, the scattering events will produce heating that is described well by transitions between states $n\pm 1\leftarrow n$ to first order in the Lamb-Dicke parameter squared $\eta^2 = {\ensuremath{E_\mathrm{rec}}}/ (h {\ensuremath{\nu_t}})$. This treatment leads to transition rate coefficients [@Sstenholm86; @Sleibfried03] $$\label{eq:13} \begin{aligned} R_{n\pm 1\leftarrow n} =& \frac{1}{3}\frac{r_h}{2} \eta^2 [\Gamma_\text{sc}(\Delta) + \Gamma_\text{sc}(\Delta \mp 2\pi{\ensuremath{\nu_t}})] \\ & \times \left(n+\frac{1}{2}\pm\frac{1}{2}\right),\\ \equiv & R_\mathrm{sc} \left(n+\frac{1}{2}\pm\frac{1}{2}\right) \end{aligned}$$ with identical dependence on $n$ as the rate coefficient for trap center fluctuations. Here, the scattering rate $\Gamma_\text{sc}(\Delta)$ is to be understood as the steady-state scattering rate solution from the optical Bloch equations for detuning $\Delta$ from atomic resonance. For the far-detuned case, $\Gamma_\text{sc}(\Delta \mp 2\pi{\ensuremath{\nu_t}}) \simeq \Gamma_\text{sc}(\Delta) \equiv \Gamma_\mathrm{sc}$. Since we do not distinguish between scattering events due to individual lasers, we added a factor of $1/3$ in Eqn. 
, which assumes isotropic heating and ensures that the total energy increases as [@Sgehm98] $$\label{eq:18} \begin{aligned} \langle \dot{E} \rangle & = \sum_j \langle \dot{E}_j \rangle = \sum_j \sum_{n=0}^\infty h \nu_j R^j_\mathrm{sc} P_{n_j}(t) = r_h {\ensuremath{E_\mathrm{rec}}}\Gamma_\mathrm{sc}, \end{aligned}$$ where we used that the populations are normalized to $\sum_{n_j} P_{n_j} = 1$. To relate the scattering rate $\Gamma_\text{sc}$ to experimentally accessible observables, we note that the scattering rate is proportional to the overall optical dipole trap depth $U$. The atomic polarizability $\alpha$ for an alkali atom ground state is well described by a single Lorentz oscillator model [@Sgrimm99] with resonant frequency $\omega_0$ and linewidth $\Gamma$: $$\label{eq:15} \alpha = 6\pi\epsilon_0 c^3 \frac{\Gamma / \omega_0^2}{\omega_0^2 - \omega^2 - i(\omega^3/\omega_0^2) \Gamma}.$$ The proportionality coefficient between $\Gamma_\text{sc}$ and $U$ is given by the ratio of the real and imaginary parts of $\alpha$ [@Sgrimm99], and we find for [[$^{6}\mathrm{Li}$]{}]{} ground state atoms in a $\lambda = 2\pi c/\omega = {\ensuremath{1064~\mathrm{{nm}}}}$ optical dipole trap that $$\label{eq:16} \xi \equiv \frac{\hbar \Gamma_\text{sc}}{U} = \frac{2 \operatorname{Im}\alpha}{\operatorname{Re}\alpha} \simeq 1.09 \times 10^{-8}.$$ In summary, the heating rate for lattice photon scattering in the Lamb-Dicke regime is $$\label{eq:17} R^j_\mathrm{sc} = \frac{r_h}{3} \xi \frac{U}{\hbar} \frac{{\ensuremath{E_\mathrm{rec}}}}{h \nu_j}$$ Our best estimate for the total dipole trap depth $U$ is the lattice modulation depth along the vertical direction which for a sinusoidal modulation is related to the measured vertical trap frequency by $$\label{eq:19} \frac{U}{\hbar} = 2\pi \nu^z_\mathrm{rec} \left(\frac{\nu_z}{2\nu^z_\mathrm{rec}}\right)^2,$$ where $\nu^z_\mathrm{rec} = h/(8 M d_z^2)$ is the geometric recoil frequency related to the vertical lattice spacing $d_z = 
\lambda / (2\sin\theta)$.

[9]{}
doi:10.1103/PhysRevA.56.R1095
doi:10.1103/PhysRevA.58.3914
doi:10.1103/PhysRevA.61.029902
doi:10.1103/PhysRevA.20.1521
(book reference)
doi:10.1103/PhysRevLett.85.4249
doi:10.1103/RevModPhys.58.699
doi:10.1103/RevModPhys.75.281
(preprint)
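As a closing numerical cross-check, the value $\xi \simeq 1.09\times10^{-8}$ quoted above can be reproduced from the single-Lorentz-oscillator polarizability of Eq. (eq:15). The sketch assumes the $^6$Li D-line values $\lambda_0 \approx 671$ nm and $\Gamma \approx 2\pi\times 5.87$ MHz; the real prefactor $6\pi\epsilon_0 c^3$ cancels in the ratio:

```python
import math

# xi = hbar*Gamma_sc/U = 2 Im(alpha)/Re(alpha) from the Lorentz-oscillator
# polarizability; 6Li D-line values are assumed here.
c = 2.99792458e8
lam0, Gamma = 671e-9, 2 * math.pi * 5.87e6   # resonance wavelength, linewidth
lam = 1064e-9                                 # trap laser wavelength
w0, w = 2 * math.pi * c / lam0, 2 * math.pi * c / lam

# alpha up to a real prefactor, which drops out of the Im/Re ratio
alpha = 1.0 / (w0**2 - w**2 - 1j * (w**3 / w0**2) * Gamma)
xi = 2 * alpha.imag / alpha.real
```

With these assumed line parameters this evaluates to $\xi \approx 1.1\times10^{-8}$, consistent with the quoted value.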
---
abstract: 'We prove that the only compact convex ancient solutions of the planar affine normal flow are contracting ellipses.'
address: 'Institut für Diskrete Mathematik und Geometrie, Technische Universität Wien, Wiedner Hauptstr. 8–10, 1040 Wien, Austria'
author:
- 'Mohammad N. Ivaki'
title: Classification of compact convex ancient solutions of the planar affine normal flow
---

Introduction
============

The setting of this paper is the two-dimensional Euclidean space, $\mathbb{R}^2.$ A compact convex subset of $\mathbb{R}^2$ with non-empty interior is called a *convex body*. The set of smooth strictly convex bodies in $\mathbb{R}^2$ is denoted by $\mathcal{K}$, and we write $\mathcal{K}_{0}$ for the set of smooth strictly convex bodies whose interiors contain the origin of the plane. Let $K$ be a smooth strictly convex body in $\mathbb{R}^2$ and let $X_K:\partial K\to\mathbb{R}^2$ be a smooth embedding of $\partial K$, the boundary of $K$. Write $\mathbb{S}^1$ for the unit circle and write $\nu:\partial K\to \mathbb{S}^1$ for the Gauss map of $\partial K$. That is, at each point $x\in\partial K$, $\nu(x)$ is the outward unit normal at $x$. The support function of $K\in\mathcal{K}_0 $ as a function on the unit circle is defined by $s_K(z):= \langle X(\nu^{-1}(z)), z \rangle,$ for each $z\in\mathbb{S}^1$. We denote the curvature of $\partial K$ by $\kappa$, which, as a function on $\partial K$, is related to the support function by $$\frac{1}{\kappa(\nu^{-1}(z))}:=\mathfrak{r}(z)=\frac{\partial^2}{\partial \theta^2}s(z)+s(z).$$ Here and hereafter, we identify $z=(\cos \theta, \sin \theta)$ with $\theta$. The function $\mathfrak{r}$ is called the radius of curvature. The affine support function of $K$, defined by $\sigma:\partial K\to \mathbb{R}$, $\sigma(x):=s(\nu(x))\mathfrak{r}^{1/3}(\nu(x))$, is invariant under the group of special linear transformations, $SL(2)$, and it plays a basic role in our argument. Let $K\in \mathcal{K}$.
A family of convex bodies $\{K_t\}_t\subset\mathcal{K}$ given by the embeddings $X:\partial K\times[0,T)\to \mathbb{R}^2$ is said to be a solution to the affine normal flow with the initial data $K_0$ if the following evolution equation is satisfied: $$\label{e: flow0} \partial_{t}X(x,t)=-\kappa^{\frac{1}{3}}(x,t)\, \nu(x,t),~~ X(0,\cdot)=X_{K}.$$ In this equation, $\nu(x,t)$ is the unit normal to the curve $ X(\partial K,t)=\partial K_t$ at $X(x,t).$ The well-known affine normal flow was investigated by Sapiro and Tannenbaum [@ST] and Andrews [@BA5; @BA4]. Andrews proved that the affine normal flow evolves any convex initial bounded open set, after appropriate rescaling, exponentially fast, to an ellipsoid in the $\mathcal{C}^{\infty}$ topology. In another direction, interesting results for the affine normal flow have been obtained in [@LT] by Loftin and Tsui regarding ancient solutions, and existence and regularity of solutions on non-compact strictly convex hypersurfaces. In [@LT], the classification of compact, ancient solutions has been done in all dimensions except in dimension two: The only compact convex ancient solutions of the affine normal flow in $\mathbb{R}^n$, $n\geq3$, are contracting ellipsoids. We recall that a solution of the affine normal flow is called an ancient solution if it exists on $(-\infty, T)$, for some finite time $T$. The main result of the paper is the following: The only compact convex ancient solutions of the planar affine normal flow are contracting ellipses. To prove this theorem, we prove that the backwards limit of the affine support function is constant, hence showing the backwards limit of solution is an ellipse. On the other hand, Andrews showed that the forwards limit is also an ellipse. Therefore, in view of the monotonicity of the area product $A(K_t^{\ast})A(K_t)$, we conclude that $A(K_t^{\ast})A(K_t)=\pi^2$ for all times. This in turn implies that ancient solutions of the flow are contracting ellipses. 
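The central role of the affine support function can be illustrated numerically: on an origin-centered ellipse with semi-axes $a,b$ one has $\mathfrak{r}=a^2b^2/s^3$, so $\sigma = s\,\mathfrak{r}^{1/3} = (ab)^{2/3}$ is constant, as the constant backwards limit of $\sigma$ suggests. The finite-difference check below (arbitrary semi-axes) is only an illustration:

```python
import math

a, b = 2.0, 1.0        # semi-axes of a test ellipse (arbitrary)
dh = 1e-4              # finite-difference step in theta

def s(theta):          # support function of the origin-centered ellipse
    return math.sqrt(a**2 * math.cos(theta)**2 + b**2 * math.sin(theta)**2)

def sigma(theta):      # affine support function s * (s'' + s)^(1/3)
    radius = (s(theta - dh) - 2 * s(theta) + s(theta + dh)) / dh**2 + s(theta)
    return s(theta) * radius ** (1.0 / 3.0)

vals = [sigma(2 * math.pi * k / 100) for k in range(100)]
spread = max(vals) - min(vals)
```

The spread is at the level of the discretization error, while $\sigma \equiv (ab)^{2/3} \approx 1.587$; a non-elliptical convex body gives a genuinely varying $\sigma$.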
A similar classification has also been obtained by S. Chen [@Chen]. Notice that it was proved by Sapiro, Tannenbaum and Andrews that any solution to the flow (\[e: flow0\]) converges to a point in finite time. In the rest of this paper, we assume, without loss of generality, that the limiting point is the origin of the plane.

Harnack estimate
================

We first recall the following Harnack estimate from [@BA4] \[prop: Harnack estimate\] Along the flow (\[e: flow0\]) we have $\partial_{t}\left(\mathfrak{r}^{-\frac{1}{3}}t^{\frac{1}{4}}\right)\geq0$ on $\mathbb{S}^1\times (0,T)$. \[cor: harnack estimate all flow\] Every ancient solution of the flow (\[e: flow0\]) satisfies $\partial_{t}\left(\frac{\mathfrak{r}^{-\frac{1}{3}}}{s}\right)\geq 0.$ By the Harnack estimate, every solution of the flow (\[e: flow0\]) satisfies $$\partial_{t}\mathfrak{r}^{-\frac{1}{3}}+\frac{1}{4t}\mathfrak{r}^{-\frac{1}{3}}\geq0.$$ If we start the flow from a fixed time $t_0<0$, then this last inequality becomes $$\partial_{t}\mathfrak{r}^{-\frac{1}{3}}+\frac{1}{4(t-t_0)}\mathfrak{r}^{-\frac{1}{3}}\geq0$$ for all $t_0<t<T.$ Letting $t_0$ go to $-\infty$ yields $ \partial_{t}\mathfrak{r}^{-\frac{1}{3}}\geq0.$ Moreover, $s(\cdot,t)$ is decreasing on the time interval $(-\infty, T)$. The claim follows.

Affine differential setting
===========================

We will now recall several definitions from affine differential geometry. Let $\gamma:\mathbb{S}^1\to\mathbb{R}^2$ be an embedded strictly convex curve with the curve parameter $\theta$. Define $\mathfrak{g}(\theta):=[\gamma_{\theta},\gamma_{\theta\theta}]^{1/3}$, where for two vectors $u, v$ in $\mathbb{R}^2$, $[u, v]$ denotes the determinant of the matrix with rows $u$ and $v$. The affine arc-length is defined as $$\mathfrak{s}(\theta):=\int_{0}^{\theta}\mathfrak{g}(\alpha)d\alpha.$$ Furthermore, the affine normal vector $\mathfrak{n}$ is given by $ \mathfrak{n}:=\gamma_{\mathfrak{s}\mathfrak{s}}. 
$ In the affine coordinate ${\mathfrak{s}}$, there holds $[\gamma_{\mathfrak{s}},\gamma_{\mathfrak{s}\mathfrak{s}}]=1,$ and $\sigma=[\gamma,\gamma_{\mathfrak{s}} ].$ Let $K$ be a smooth, strictly convex body having the origin of the plane in its interior. We can express the area of $K$, denoted by $A(K)$, in terms of affine invariant quantities: $$A(K)=\frac{1}{2}\int_{\partial K}\sigma d\mathfrak{s}.$$ The $p$-affine perimeter of $K\in \mathcal{K}_0$ (for $p=1$ the assumption $K\in \mathcal{K}_0$ is not necessary and we can take $K\in \mathcal{K}$), denoted by $\Omega_p(K)$, is defined by $$\Omega_p(K):=\int_{\partial K}\sigma^{1-\frac{3p}{p+2}}d\mathfrak{s},$$ [@Lutwak2]. We call the quantity $\frac{\Omega_p^{2+p}(K)}{A^{2-p}(K)},$ the $p$-affine isoperimetric ratio and mention that it is invariant under $GL(2).$ Let $K\in\mathcal{K}_0$. The polar body associated with $K$, denoted by $K^{\ast}$, is a convex body in $\mathcal{K}_0$ defined by $$K^{\ast} = \{ y \in \mathbb{R}^{2} \mid x \cdot y \leq 1,\ \forall x \in K \}.$$ The area of $K^{\ast}$, denoted by $A^{\ast}=A(K^{\ast})$ can also be represented in terms of affine invariant quantities: $$A^{\ast}=\frac{1}{2}\int_{\partial K}\frac{1}{\sigma^2}d\mathfrak{s}= \frac{1}{2}\int_{\mathbb{S}^1}\frac{1}{s^2}d\theta.$$ Let $K\in \mathcal{K}_0$. We consider a family $\{K_t\}_t\subset \mathcal{K}$ given by the smooth embeddings $X:\partial K\times[0,T)\to \mathbb{R}^2$, which are evolving according to (\[e: flow0\]). Then, up to a time-dependent diffeomorphism, $\{K_t\}_t$ evolves according to $$\label{e: affine def of flow} \frac{\partial}{\partial t}X:=\mathfrak{n},~~ X(\cdot,0)=X_{K}(\cdot).$$ Therefore, classification of compact convex ancient solutions of (\[e: flow0\]) is equivalent to the classification of compact convex ancient solutions of (\[e: affine def of flow\]). In what follows, our reference flow will always be the evolution equation (\[e: affine def of flow\]).
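A useful model case to keep in mind is the disk. For a disk of radius $\rho$ the flow (\[e: flow0\]) reduces to $\dot{\rho}=-\rho^{-1/3}$, so $\rho^{4/3}$ decreases at the constant rate $4/3$ and the disk shrinks to its center at time $\frac{3}{4}\rho_0^{4/3}$. A forward-Euler check (with an ad hoc step size) confirms this:

```python
# Integrate rho' = -rho**(-1/3) and compare the time to shrink from
# rho0 to rho_stop with the exact value (3/4)*(rho0**(4/3) - rho_stop**(4/3)).
rho0, rho_stop, dt = 2.0, 1e-2, 1e-5
rho, t = rho0, 0.0
while rho > rho_stop:
    rho -= dt * rho ** (-1.0 / 3.0)
    t += dt
t_exact = 0.75 * (rho0 ** (4.0 / 3.0) - rho_stop ** (4.0 / 3.0))
err = abs(t - t_exact)
```

In particular, with $\rho_0 = 2R$ the extinction time is $\frac{3}{4}(2R)^{4/3}$.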
\[cor: harnack estimate all flow1\] Every ancient solution to the flow (\[e: affine def of flow\]) satisfies $\left(\partial_{t}\sigma\right)\leq 0.$ Assume $Q$ and $\bar{Q}$ are two smooth functions $Q:\partial K\times [0,T)\to \mathbb{R}$, $\bar{Q}:\mathbb{S}^1\times[0,T)\to \mathbb{R}$ that are related by $Q(x,t)=\bar{Q}(\nu(x,t),t)$. To prove the claim, we observe that the identity $\frac{\partial}{\partial t} \nu=0$ (see [@LT] page 123) implies $\partial_{t}\bar{Q}=\partial_{t}Q.$ Therefore, the claim follows from Proposition \[prop: Harnack estimate\]. \[prop: area backward limit\] Every ancient solution to the flow (\[e: affine def of flow\]) satisfies $$\lim\limits_{t\to-\infty}A(K_t)=\infty.$$ For a fixed $N>0$, in view of John’s ellipsoid lemma, we may apply a special linear transformation $\Phi$ to $K_{-N}$, such that $B_R\subseteq\Phi K_{-N}\subseteq B_{2R}.$ Here, $B_{2R}$ and $B_R$ are disks of radii $2R$ and $R$ with the same centers. On the one hand, by comparing areas, $B_R\subseteq\Phi K_{-N}\subseteq B_{2R}$ implies that $R\leq \sqrt{ \frac{A(\Phi K_{-N})}{\pi}}=\sqrt{ \frac{A(K_{-N})}{\pi}}.$ On the other hand, it is easy to see that it takes $B_{2R}$ exactly the time $\frac{3}{4}(2R)^{\frac{4}{3}}$ to shrink to its center under the affine normal flow. Since the solution $\{K_t\}$ exists on $[-N,T)$, by the comparison principle we must have $$T+N\leq \frac{3}{4}(2R)^{\frac{4}{3}}\leq \frac{3}{4}\left(2\sqrt{ \frac{A(K_{-N})}{\pi}}\right)^{\frac{4}{3}}.$$ Therefore, $\lim\limits_{N\to+\infty}A(K_{-N})=\infty.$ Next are two lemmas whose proofs were given in [@Ivaki]. (Lemma 3.1 of [@Ivaki])\[e: basic ev\] Let $\gamma_t:=\partial K_t$ be the boundary of the convex body $K_t$ which is evolving by the flow (\[e: affine def of flow\]). Then the following evolution equations hold: 1. $\displaystyle \frac{\partial}{\partial t}\sigma=-\frac{4}{3} +\frac{1}{3}\sigma_{\mathfrak{s}\mathfrak{s}},$ 2. 
$\displaystyle \frac{d}{dt}A=-\Omega_1.$ (Lemma 6.1 of [@Ivaki])\[lem: controlling derivative of $l$-affine surface area along affine normal flow\] For every $l\geq2$ the $l$-affine perimeter satisfies: $$\label{e: eveq general} \frac{d}{dt}\Omega_l(t)=\frac{2(l-2)}{l+2}\int_{\gamma_t}\sigma^{-\frac{3l}{l+2}}d\mathfrak{s} +\frac{6l}{(l+2)^2}\int_{\gamma_t}\sigma^{-1-\frac{3l}{l+2}}(\sigma_{\mathfrak{s}})^2d\mathfrak{s}.$$ \[rem: rem\] We recall from [@BA5] that $\frac{d}{dt}\frac{\Omega_1^3(K_t)}{A(K_t)}\geq 0$ and $\frac{d}{dt}A(K_t)A(K_t^{\ast})\geq0$. Moreover, equality holds in the latter case if and only if $K_t$ is an origin-centered ellipse. Write $\sigma_M(t)$ and $\sigma_m(t)$ for $\max\limits_{\gamma_t}\sigma$ and $\min\limits_{\gamma_t}\sigma$, respectively. In the next lemma, we will prove that the ratio $\frac{\sigma_{M}(t)}{\sigma_m(t)}$ remains uniformly bounded from above. There is a constant $0<c<\infty$, such that $\frac{\sigma_{M}(t)}{\sigma_m(t)}\leq c$ for all $t\in(-\infty,0].$ Combining Proposition \[cor: harnack estimate all flow1\] and part (1) of Lemma \[e: basic ev\] yields $$\begin{aligned} \label{e: fundamental} 0&\geq\frac{\partial_t\sigma}{\sigma}=-\frac{4}{3\sigma} +\frac{1}{3}\frac{\sigma_{\mathfrak{s}\mathfrak{s}}}{\sigma}. \end{aligned}$$ Integrating (\[e: fundamental\]) against $d\mathfrak{s}$ gives $ 4\int_{\gamma_t}\frac{1}{\sigma}d\mathfrak{s}\geq \int_{\gamma_t} ((\ln \sigma)_{\mathfrak{s}})^2d\mathfrak{s}. 
$ By applying the Hölder inequality to the left-hand side and right-hand side of the inequality we obtain $$\begin{aligned} \left(\int|\left(\ln\sigma\right)_{\mathfrak{s}}|d\mathfrak{s}\right)^2\leq\Omega_1\int_{\gamma_t} ((\ln \sigma)_{\mathfrak{s}})^2d\mathfrak{s}\le 4\Omega_1\int_{\gamma_t}\frac{1}{\sigma}d\mathfrak{s}&\leq\sqrt{32}A^{\ast~\frac{1}{2}}\Omega_1^{\frac{3}{2}}\\ &=\sqrt{32}\frac{A^{\frac{1}{2}}A^{\ast ~\frac{1}{2}}\Omega_1^{\frac{3}{2}}}{A^{\frac{1}{2}}}.\end{aligned}$$ Here, we used the identities $\int_{\gamma_t}\frac{1}{\sigma^2}d\mathfrak{s}=2A^{\ast}$ and $\int_{\gamma_t} d\mathfrak{s}=\Omega_1.$ Consequently, by Remark \[rem: rem\] we get $ \left(\ln\frac{\sigma_{M}(t)}{\sigma_m(t)}\right)^2\leq \sqrt{32}\frac{A^{\frac{1}{2}}(0)A^{\ast ~\frac{1}{2}}(0)\Omega_1^{\frac{3}{2}}(0)}{A^{\frac{1}{2}}(0)}. $ This implies that $$\label{ie: sigma ratio} \frac{\sigma_{M}}{\sigma_m}\leq c,$$ on the time interval $(-\infty,0],$ for some constant $0<c<\infty$. Let $\{K_t\}_t$ be a solution of the flow (\[e: affine def of flow\]). Then the family of convex bodies, $\{\tilde{K}_t\}_t$, defined by $\tilde{K}_t:=\sqrt{\frac{\pi}{A(K_t)}}K_t$ is called a normalized solution to the affine normal flow, equivalently a solution that keeps the area of evolving bodies fixed and equal to $\pi.$ We furnish every quantity associated with the normalized solution with an over-tilde. For example, the support function, the curvature, and the affine support function of $\tilde{K}$ are denoted, in order, by $\tilde{s}$, $\tilde{\kappa}$, and $\tilde{\sigma}$. There is a constant $0<c<\infty$ such that on the time interval $(-\infty,T)$ $$\label{ie: uniform para1} \frac{\tilde{\sigma}_{M}(t)}{\tilde{\sigma}_m(t)}\leq c.$$ The estimate (\[ie: sigma ratio\]) is scaling invariant. Therefore, the same estimate holds for the normalized solution. 
\[cor: upper bound of invers ratio\] There exists a constant $0<b<\infty$ such that on $(-\infty, 0]$ we have $\frac{1}{\Omega_2^{4}(t)}<b.$ By the Hölder inequality: $$\Omega_2(t)\Omega_1^{\frac{1}{2}}(t)(2A(t))^{\frac{1}{2}}\geq \Omega_2(t) \int_{\gamma_t}\sigma^{\frac{1}{2}}d\mathfrak{s}=\int_{\gamma_t}\sigma^{-\frac{1}{2}}d\mathfrak{s} \int_{\gamma_t}\sigma^{\frac{1}{2}}d\mathfrak{s} \geq \Omega_1^2(t),$$ so $$\Omega_2(t)\geq \left(\frac{\Omega_1^3(t)}{2A(t)}\right)^{\frac{1}{2}}=\left(\frac{\tilde{\Omega}_1^3(t)}{2\tilde{A}(t)}\right)^{\frac{1}{2}}.$$ Hence, to bound $\Omega_2(t)$ from below, it is enough to bound $\left(\frac{\Omega_1^3(t)}{2A(t)}\right)^{\frac{1}{2}}$ from below. Since both $\Omega_2(t)$ and $\frac{\Omega_1^3(t)}{2A(t)}$ are invariant under $GL(2)$, we need only to prove the claim after applying appropriate $SL(2)$ transformations to the normalized solution of the flow. By the estimate (\[ie: uniform para1\]) and the facts that $\Omega_1(\tilde{K}_{t})=\pi^{\frac{1}{3}}\frac{\Omega_1(K_{t})}{A^{\frac{1}{3}}(K_t)}$ is non-decreasing (see Remark \[rem: rem\]), and $A(\tilde{K}_t)=\pi$ we have $$\frac{c}{2}\tilde{\sigma}_m(t)\tilde{\Omega}_1(0)\geq\frac{1}{2}\tilde{\sigma}_M(t)\tilde{\Omega}_1(0)\geq\frac{1}{2}\tilde{\sigma}_M(t)\tilde{\Omega}_1(t)\geq \tilde{ A}(t)=\pi.$$ So we get $\tilde{s}\tilde{\mathfrak{r}}^{\frac{1}{3}}(t)\geq \frac{2\pi}{c\tilde{\Omega}_1(0)}$ on $(-\infty,0]$. 
On the other hand, recall that under the area restriction $A(\tilde{K}_t)=\pi$, and in the light of John’s ellipsoid lemma, we can apply a special linear transformation at each time such that the diameter of $\tilde{K}_t$, $D(\tilde{K}_t):=\max\limits_{z\in\mathbb{S}^1}(s_{\tilde{K}_t}(z)+s_{\tilde{K}_t}(-z)),$ is bounded from above by $4.$ Therefore, as $\tilde{s}(t)>0$ (notice that the origin of the plane belongs to int $\tilde{K}_t$), we have, after applying a special linear transformation at each time, that $\tilde{s}(t)<4.$ So, in view of $\tilde{s}\tilde{\mathfrak{r}}^{\frac{1}{3}}(t)\geq \frac{2\pi}{c\tilde{\Omega}_1(0)}$, and that the affine support function is invariant under $SL(2)$, we get $$4\tilde{\mathfrak{r}}^{\frac{1}{3}}(t)\geq\tilde{s}\tilde{\mathfrak{r}}^{\frac{1}{3}}(t)\geq \frac{2\pi}{c\tilde{\Omega}_1(0)}\Rightarrow \tilde{\mathfrak{r}}^{\frac{2}{3}}(t)\geq\left(\frac{\pi}{2c\tilde{\Omega}_1(0)}\right)^2,$$ and thus $$\begin{aligned} \label{e:11} \Omega_2^2(t)\geq\frac{\tilde{\Omega}_1^3(t)}{2\tilde{A}(t)}=\frac{(\int_{\mathbb{S}^1}\tilde{\mathfrak{r}}^{2/3}d\theta)^3}{2\pi}\geq (2\pi)^2\left(\frac{\pi}{2c\tilde{\Omega}_1(0)}\right)^6.\end{aligned}$$ \[cor: liminf idea\] If $K_t$ evolves by (\[e: affine def of flow\]), then the following limit holds: $$\liminf_{t\to -\infty}\left(\frac{A(t)}{\Omega_1(t)\Omega_2^{5}(t)}\right) \int_{\gamma_t}\left(\sigma^{-\frac{1}{4}}\right)_{\mathfrak{s}}^2d\mathfrak{s}=0. 
\label{eq:sup}$$ By Lemma \[lem: controlling derivative of $l$-affine surface area along affine normal flow\] for $l=2$, and by $\frac{d}{dt}A(t)=-\Omega_1(t)$, we find $$\frac{d}{dt}\frac{1}{\Omega_2^{4}(t)}= \delta \frac{d}{dt}\ln(A(t))\left[\left(\frac{A(t)}{\Omega_1(t)\Omega_2^{5}(t)}\right) \int_{\gamma_t}\left(\sigma^{-\frac{1}{4}}\right)_{\mathfrak{s}}^2d\mathfrak{s}\right],$$ for a constant $\delta>0.$ Suppose on the contrary that there exists an $\varepsilon>0$ small enough such that $$\left(\frac{A(t)}{\Omega_1(t)\Omega_2^{5}(t)}\right) \int_{\gamma_t}\left(\sigma^{-\frac{1}{4}}\right)_{\mathfrak{s}}^2d\mathfrak{s}\geq \frac{\varepsilon}{\delta}$$ on $(-\infty, -N]$ for an $N>0$ large enough. Thus $ \frac{d}{dt}\frac{1}{\Omega_2^{4}(t)} \leq\varepsilon\frac{d}{dt}\ln(A(t)). $ Integrating this last inequality on the time interval $[t,-N]$ and using Corollary \[cor: upper bound of invers ratio\] we get $$\begin{aligned} 0<\frac{1}{\Omega_2^{4}(-N)}&\leq \frac{1}{\Omega_2^{4}(t)}+\varepsilon \ln(A(-N))-\varepsilon \ln(A(t))\\ &\leq b+\varepsilon \ln(A(-N))-\varepsilon \ln(A(t)).\end{aligned}$$ We reach a contradiction by letting $t\to-\infty$: since $\lim\limits_{t\to-\infty}A(t)=\infty$ (Proposition \[prop: area backward limit\]), the right-hand side becomes negative. 
\[cor: limit of sigma\] There is a sequence of times $\{t_k\}_{k\in\mathbb{N}}$ and there is a constant $0<\zeta<\infty$ such that as $t_k\to-\infty$, then $\lim\limits_{t_k\to-\infty}\max\limits_{\tilde{\gamma}_{t_k}}|\tilde{\sigma}(t_k)-\zeta|=0.$ Notice that the quantity $\left(\frac{A(t)}{\Omega_1(t)\Omega_2^{5}(t)}\right)\int_{\gamma_t}\left(\sigma^{-\frac{1}{4}}\right)_{\mathfrak{s}}^2d\mathfrak{s}$ is $GL(2)$ invariant and also $\frac{\tilde{A}(t)}{\tilde{\Omega}_1(t)\tilde{\Omega}_2^{5}(t)}$ is bounded from below on $(-\infty,0]$: In fact, $\tilde{A}(t)=\pi$ and both quantities $\tilde{\Omega}_1(t)$ and $\tilde{\Omega}_2^{5}(t)$ are non-decreasing along the normalized flow thus $\tilde{\Omega}_1(t)\leq \tilde{\Omega}_1(0)$ and $\tilde{\Omega}_2^{5}(t)\le \tilde{\Omega}_2^{5}(0).$ Hence, Lemma \[cor: liminf idea\] implies that there exists a sequence of times $\{t_k\}$ such that $\lim\limits_{k\to\infty}t_k=-\infty$ while $\lim\limits_{t_k\to-\infty}\int_{\tilde{\gamma}_{t_k}} \left(\tilde{\sigma}^{-\frac{1}{4}}\right)^2_{\tilde{\mathfrak{s}}}d\tilde{\mathfrak{s}}=0.$ On the other hand, by the Hölder inequality $$\begin{aligned} \frac{\left(\tilde{\sigma}_M^{-\frac{1}{4}}(t_k)-\tilde{\sigma}_m^{-\frac{1}{4}}(t_k)\right)^2} {\tilde{\Omega}_1(t_k)}\leq \int_{\tilde{\gamma}_{t_k}} \left(\tilde{\sigma}^{-\frac{1}{4}}\right)^2_{\tilde{\mathfrak{s}}}d\tilde{\mathfrak{s}}.\end{aligned}$$ So by boundedness of $\tilde{\Omega}_1$ from above we find that $\lim\limits_{t_k\to-\infty}\left(\tilde{\sigma}_M^{-\frac{1}{4}}(t_k)- \tilde{\sigma}_m^{-\frac{1}{4}}(t_k)\right)^2=0.$ To complete the proof, we need to exclude the possibility that $\limsup\limits_{t_k\to-\infty}\tilde{\sigma}_m(t_k)=\infty$ (Recall that in the proof of Corollary \[cor: upper bound of invers ratio\], we have already established the existence of a uniform lower bound for $\tilde{\sigma}_m$.). Suppose on the contrary $\limsup\limits_{t_k\to-\infty}\tilde{\sigma}_m(t_k)=\infty$. 
In this case, (\[e:11\]) gives $$\pi=\tilde{A}(t_k)=\lim_{t_k\to-\infty}\tilde{A}(t_k)\geq \frac{1}{2}\left(\limsup\limits_{t_k\to-\infty}\tilde{\sigma}_m(t_k)\right)\tilde{\Omega}_1(t_k)=\infty.$$ This is a contradiction. So $\{\tilde{\sigma}_m\}$ is uniformly bounded from above and below. By passing to a further subsequence of $\{t_k\}$, if necessary, we obtain the desired result. \[lem: selection\] For each time $t\in (-\infty, T)$, let $\Phi_t\in SL(2)$ be a special linear transformation such that the minimal ellipse containing $\Phi_t\tilde{K}_t$ is a disk. Then for an $N>0$ large enough, there holds $$c_2 \frac{\zeta^{\frac{1}{3}}(2\zeta^{\frac{1}{3}})^{\frac{-3}{4}}}{2}\leq s_{\Phi_{t_k}\tilde{K}_{t_k}}\leq c_1 (2\zeta^{\frac{1}{3}})^{\frac{1}{4}}$$ on the time interval $(-\infty,-N)$, for some absolute constants $0<c_1,c_2<\infty.$ The claim follows from Corollary \[cor: limit of sigma\] and Corollary 2.4 of [@XW].

Proof of the main Theorem
=========================

By Lemma \[lem: selection\] and by the Blaschke selection theorem, there is a subsequence of times, denoted again by $\{t_k\}$, such that $\{\Phi_{t_k}\tilde{K}_{t_k}\}$ converges in the Hausdorff distance to a convex body $\tilde{K}_{-\infty}$, having the origin in its interior, as $t_k\to-\infty.$ By Corollary \[cor: limit of sigma\], and by the weak convergence of the Monge-Ampère measures, the support function of $\tilde{K}_{-\infty}$ is the generalized solution of the following Monge-Ampère equation on $\mathbb{S}^1$: $$s^3(s_{\theta\theta}+s)=\zeta^3.$$ Therefore, by Lemma 8.1 of Petty [@Petty], $\tilde{K}_{-\infty}$ is an origin-centered ellipse. This in turn implies that $A(\tilde{K}_{-\infty})A(\tilde{K}_{-\infty}^{\ast})=\pi^2.$ On the other hand, $\lim_{t\to T}A(\tilde{K}_{t})A(\tilde{K}_{t}^{\ast})=\pi^2$, see Andrews [@BA5]. 
So the monotonicity of $A(\tilde{K}_{t})A (\tilde{K}_{t}^{\ast})$ yields $A(\tilde{K}_{t})A(\tilde{K}_{t}^{\ast})\equiv \pi^2$ on $(-\infty,T)$ which gives $\frac{d}{dt}A(K_{t})A(K_{t}^{\ast})\equiv 0$ on $(-\infty,T).$ From Remark \[rem: rem\] it follows that for every time $t$, $K_t$ is an origin-centered ellipse.

**Acknowledgment** I would like to thank the referee for helpful comments and suggestions.

[20]{}

B. Andrews, *Contraction of convex hypersurfaces by their affine normal*, J. Differential Geom. 43, 207–230, (1996)

B. Andrews, *Motion of hypersurfaces by Gauss curvature*, Pacific J. Math. 195, No.1, (2000)

S. Chen, *Classifying convex compact ancient solutions to the affine curve shortening flow*, J. Geom. Anal. doi:10.1007/s12220-013-9456-z (2013)

C.M. Petty, *Affine isoperimetric problems*, Annals of the New York Academy of Sciences, 440, Discrete Geometry and Convexity 113–127, 1985. doi:10.1111/j.1749-6632.1985.tb14545.x

M.N. Ivaki, *Centro-affine curvature flows on centrally symmetric convex curves*, Trans. Amer. Math. Soc. 366, 5671–5692, (2014)

J. Loftin, M.P. Tsui, *Ancient solutions of the affine normal flow*, J. Differential Geom. 78, 113–162, (2008)

E. Lutwak, *The [B]{}runn-[M]{}inkowski-[F]{}irey theory [II]{}: Affine and geominimal surface areas*, Adv. in Math. 118, 244–294, (1996)

G. Sapiro, A. Tannenbaum, *On affine plane curve evolution*, J. Funct. Anal. 119, 79–120, (1994).

J. Lu, X.J. Wang, *Rotationally symmetric solutions to the $L_p$-Minkowski problem*, J. Differential Equations, 254, 983–1005, (2013)
---
abstract: 'A holonomic constraint is used to enforce a constant instantaneous configurational temperature on an equilibrium system. Three sets of equations of motion are obtained, differing according to the way in which the holonomic constraint is introduced and the phase space distribution function that is preserved by the dynamics. Firstly, Gauss’ principle of least constraint is used, and it is shown that it does not preserve the canonical distribution. Secondly, a modified Hamiltonian is used to find a dynamics that provides a restricted microcanonical distribution. Lastly, we provide equations that are specifically designed to both satisfy the temperature constraint and produce a restricted canonical ensemble.'
author:
- Igor Rychkov
- 'Debra J. Searles'
bibliography:
- 'Bibliography/Configuational\_Thermostat.bib'
title: Isoconfigurational thermostat
---

When simulating a molecular system at equilibrium one would often like to have it at constant temperature rather than at constant energy. In a non-equilibrium simulation controlling the temperature becomes mandatory, as the driving forces would otherwise heat up the system indefinitely. In order to thermostat a system in bulk, away from the walls, some synthetic fields can be added to the equations of motion. Two important questions arise. Firstly, what microscopic measure should one choose for the temperature to be controlled? Secondly, what probability distribution will the resulting equations generate in the system phase space? One popular thermostat widely used in molecular dynamics simulations is given by the isokinetic Gaussian equations of motion, which fix the kinetic energy to a desired value at all times and generate the isokinetic canonical distribution [@Evans1990]. Unfortunately, although the kinetic energy of molecules is a good measure of the temperature, the thermal velocities cannot always be distinguished from the streaming velocity, which is usually unknown beforehand. 
If the streaming velocity profile is presumed, the thermostat will maintain it, acting as a “profile-biased” thermostat (PBT), which is often undesirable [@Delhommelle2003]. There are configurational measures for the temperature which do not require the peculiar, thermal velocities to be known. Several expressions, all equal to one another in the thermodynamic limit, are known [@Owen2000]; the most popular one being (in various notations)$$\begin{gathered} T=T_{conF}\equiv\frac{\left\langle \nabla U\cdot\nabla U\right\rangle }{\left\langle \nabla\cdot\nabla U\right\rangle }=\\ \frac{\left\langle \frac{\partial U}{\partial q}\cdot\frac{\partial U}{\partial q}\right\rangle }{\left\langle \frac{\partial}{\partial q}\cdot\frac{\partial U}{\partial q}\right\rangle }=\frac{\sum_{i}^{Nd}\left\langle \frac{\partial U}{\partial q^{i}}\frac{\partial U}{\partial q^{i}}\right\rangle }{\sum_{i}^{Nd}\left\langle \frac{\partial}{\partial q^{i}}\frac{\partial U}{\partial q^{i}}\right\rangle }\label{eq:Tconf}\end{gathered}$$ where $T_{conF}$ is the configurational temperature in energetic units, $U$ is the interaction potential energy, $q$ is a vector containing the positions of all the particles, $d$ is the Cartesian dimension of the system, $N$ is the number of particles, $i$ labels the degrees of freedom and the angular brackets represent an ensemble average. This expression has been known as the hypervirial theorem [@Gray1984] and has been proved for canonical and microcanonical ensembles [@Landau1980; @Owen2000; @Rugh1998]. The first successful implementation of a thermostat controlling the configurational temperature was with the Nosé-Hoover method. The spurious string phase observed in shear flow simulations with PBT was eliminated when using the configurational thermostat [@Delhommelle2003]. Moreover, in the most recent revision of the method [@Braga2005] the projected phase space distribution was made canonical. 
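The identity above is easily checked by direct sampling. For a single harmonic degree of freedom $U=kq^2/2$ in the canonical ensemble, $\langle U_q U_q\rangle/\langle U_{qq}\rangle = k\langle q^2\rangle = T$; the Monte-Carlo sketch below (arbitrary $k$ and $T$) verifies this:

```python
import random, statistics

random.seed(0)
T, k, N = 1.5, 2.0, 200_000          # target temperature, spring constant
# Exact canonical sampling: q ~ Normal(0, sqrt(T/k)) for U = k q^2 / 2.
qs = [random.gauss(0.0, (T / k) ** 0.5) for _ in range(N)]

num = statistics.fmean((k * q) ** 2 for q in qs)   # <(dU/dq)^2>
den = k                                            # <d^2U/dq^2> = k exactly
T_conf = num / den
```

The same ratio equals $T$ for any confining $U$, by integration by parts against the Boltzmann weight $e^{-U/T}$, which is the content of the hypervirial theorem quoted above.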
In the Nosé-Hoover method the dynamics is extended by a degree of freedom that describes coupling to the thermostat, and the fluctuations of the temperature are governed by an extra parameter. It is of interest to see if a practical, constant configurational temperature thermostat can be developed that constrains the original system to a constant *instantaneous* configurational temperature. We take Eq. without the averaging brackets as a measure of instantaneous configurational temperature, by analogy to the Gaussian isokinetic thermostat which constrains the instantaneous kinetic energy. In this Letter we consider different ways of introducing the constraint into the original equilibrium Hamiltonian equations and the effect on the ensemble probability distribution. In fact, the resulting equations will be valid not only for the configurational thermostat but for any holonomic constraint. A constraint equation is insufficient to determine the constraint reaction forces: an additional principle has to be employed in order to find the constrained equations of motion [@Jose1998]. One can use principles restricting the reaction forces, for instance, d’Alembert’s principle of orthogonal forces or Gauss’ principle of least constraint. Alternatively, there is also the Hamilton variational principle of least action. For holonomic constraints, they all lead to the same equations of motion, $\ddot{q}=-\nabla U-\lambda(q,\dot{q})\,n$, where $\lambda(q,\dot{q})=\dot{q}\cdot|\nabla h|^{-1}\nabla g-n\cdot\nabla U$ and $(\nabla U)_{||}$ is the component of $\nabla U$ tangent to the constraint hypersurface. In the following we assume unit mass, for simplicity. One can move to the Hamiltonian formalism by introducing the generalized momenta simply as $p=\dot{q}$. This, together with the equations of motion above, gives the standard Gaussian equations of motion with an isoconfigurational thermostat. 
The same equations are obtained through Dirac’s general treatment of constrained Hamiltonian systems [@Dirac1950] when the constraint is holonomic:$$\left\{ \begin{aligned}\dot{q} & =p\\ \dot{p} & =-\nabla U-\lambda(q,p)n=-(\nabla U)_{||}-n\left(p\cdot\frac{\nabla g}{|\nabla h|}\right)\end{aligned} \right.\label{eq:dynamics1}$$ Let us consider the phase space properties of this dynamical system. Besides the constrained quantity itself, the energy of the system $E(q,p)\equiv\frac{1}{2}p\cdot p+U$ is also a constant along the actual trajectory: $$\dot{E}(q,p)=\frac{\partial E}{\partial q}\cdot\dot{q}+\frac{\partial E}{\partial p}\cdot\dot{p}=0-\lambda(q,p)(p\cdot n)=0\label{eq:dH/dt}$$ Moreover, if the constraint depends only on intramolecular distances, as the configurational temperature does, then the total momentum $P$ is preserved as well. We now investigate the evolution of the phase space elementary volumes by calculating the phase space compression factor:$$\begin{gathered} \Lambda\equiv\frac{\partial}{\partial q}\cdot\dot{q}+\frac{\partial}{\partial p}\cdot\dot{p}=0-n\cdot\frac{\partial\lambda}{\partial p}=-2n\cdot\frac{\nabla g}{|\nabla h|}=\\ -2\frac{\nabla h\cdot\nabla\dot{h}}{\nabla h\cdot\nabla h}=-2\frac{\nabla h\cdot\frac{d}{dt}\nabla h}{\nabla h\cdot\nabla h}=-\frac{d}{dt}\ln(\nabla h\cdot\nabla h)\label{eq:Lambda}\end{gathered}$$ If we write the Liouville equation with the phase space compression factor [@Evans1990], $$\frac{d\ln f}{dt}=-\Lambda\label{eq:Liouville}$$ the probability distribution density function $f$ is easily found and is written over the whole phase space as follows,$$f=\nabla h\cdot\nabla h\,\delta(h-h_{0})\delta(g)\delta(E-E_{0})\delta(P-P_{0})\label{eq:f1}$$ For some constraints the weight $\nabla h\cdot\nabla h$ amounts to a constant. This is the case for the bond length constraints, for instance. Generally, however, the distribution is non-uniform over equal phase space volumes.
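The conservation of the constraint and of $E$ can be checked directly by integrating the constrained dynamics for a toy choice of our own: motion on the unit sphere, $h(q)=\tfrac{1}{2}|q|^{2}$, in a tilted potential $U(q)=q_{x}$. For this $h$ one has $n=q/|q|$, $\nabla g=p$, and hence $\lambda=|p|^{2}/|q|-n\cdot\nabla U$. The potential and the RK4 integrator are assumptions for illustration only.

```python
import numpy as np

GRAD_U = np.array([1.0, 0.0, 0.0])          # grad U for U(q) = q_x (toy choice)

def rhs(y):
    """Gaussian constrained dynamics for h(q) = |q|^2/2:
    qdot = p, pdot = -grad U - lambda * n."""
    q, p = y[:3], y[3:]
    n = q / np.linalg.norm(q)
    lam = (p @ p) / np.linalg.norm(q) - n @ GRAD_U
    return np.concatenate([p, -GRAD_U - lam * n])

def rk4(y, dt, steps):
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

q0, p0 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.5, 0.0])  # p0 . q0 = 0
y = rk4(np.concatenate([q0, p0]), 1e-3, 2000)
q, p = y[:3], y[3:]
drift_h = abs(0.5 * q @ q - 0.5)            # constraint h - h0
drift_E = abs((0.5 * p @ p + q @ GRAD_U) - (0.5 * p0 @ p0 + q0 @ GRAD_U))
```

Both drifts stay at the level of the integrator error, confirming that $h$ and $E$ are constants of the motion.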
However, the volumes themselves are constantly being stretched and compressed by the dynamics as the phase space compression factor is non-zero (albeit being zero on average). If one uses the so-called invariant measure of the phase space [@Tuckerman1999], which for the dynamics turns out to be just $\nabla h\cdot\nabla h\, dq\,dp$ [@Melchionna2000], then the density function becomes uniform over non-equal but invariant volumes. Whatever the interpretation of this weight, the dynamics will only preserve a non-canonical distribution unless $\nabla h\cdot\nabla h$ is constant, and the values accumulated during the simulation would have to be “reweighted” before any comparison with experimental data is possible. It is preferable to find such isoconfigurational thermostats that would produce a microcanonical, or better still, a portion of the Gibbs canonical distribution – a restricted canonical distribution. In order for the configuration to satisfy a holonomic constraint, the velocities should be tangent to the constraint hypersurface: $\dot{q}\cdot n=0$. Let us take an arbitrary vector $p$ that we can call a peculiar momentum and subtract from it its component perpendicular to the hypersurface, which we name a “convection” term: $\dot{q}=p-n(p\cdot n)$. We now use the internal energy expressed in the coordinates $(q,p)$, $$H=\frac{\dot{q}\cdot\dot{q}}{2}+U=\frac{p\cdot p-(p\cdot n)^{2}}{2}+U\label{eq:ArnoldH}$$ as a Hamiltonian to generate the following equations of motion:$$\left\{ \begin{aligned}\dot{q} & =p-n(p\cdot n)\\ \dot{p} & =-\nabla U+(p\cdot n)\nabla(p\cdot n)\end{aligned} \right.\label{eq:dynamics2}$$ The Hamiltonian is known as the Hamiltonian in redundant coordinates [@ArnoldCelestial].
As for any other Hamiltonian system, the phase space compression factor is zero, and the distribution is the (restricted) microcanonical,$$f_{\mu}=\delta(h-h_{0})\delta(g)\delta(H-H_{0})\delta(P-P_{0}).\label{eq:f2}$$ Finally, we consider a more general family of equations satisfying the holonomic constraint:$$\left\{ \begin{aligned}\dot{q} & =p-n(p\cdot n)\\ \dot{p} & =-(\nabla U)_{||}+R(q)\end{aligned} \right.\label{eq:dynamics3}$$ where we excluded from the unknown reaction force $R$ a term that is to cancel the normal component of the intermolecular interaction force and assumed that $R$ should only depend on the configurational information and not on the momenta. We will demand the restricted canonical distribution,$$f_{c}=e^{-\beta H}\delta(h-h_{0})\delta(g)\delta(P-P_{0})\label{eq:Gibbs}$$ where $\beta=1/T_{0}$. According to the Liouville equation , we must then have $$\Lambda=-\frac{d}{dt}\ln f=\beta\dot{H}$$ Calculating $\Lambda$ and $\dot{H}$ similarly to and for the system , we find the following relation for $R$:$$-\nabla\cdot n(p\cdot n)=\beta p\cdot R$$ where the left-hand side can be rewritten to expose the scalar product with $p$,$$-\nabla\cdot n(p\cdot n)=p\cdot(\nabla\cdot n)n$$ Since this must hold for any $p$, we obtain the solution,$$R=-T_{0}(\nabla\cdot n)n=-T_{0}[n(\nabla\cdot n)+(n\cdot\nabla)n]\label{eq:R}$$ where the first term in the brackets is normal and the second is tangential to $\Omega_{h}$ (because $n\cdot(n\cdot\nabla)n=\frac{1}{2}(n\cdot\nabla)(n\cdot n)=0$). When $T_{conF}=T_{0}$ is chosen as the holonomic constraint $h(q)=h_{0}$, the equations and describe a dynamics that preserves the configurational temperature at each time step and generates a canonical distribution on the constraint hypersurface. In conclusion, three sets of dynamical equations were derived, describing three different isoconfigurational ensembles: non-canonical , ; microcanonical , ; and canonical , .
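Eq. can be sanity-checked numerically for a simple constraint. For $h(q)=\tfrac{1}{2}|q|^{2}$ in $d$ dimensions, $n=q/|q|$ and analytically $\nabla\cdot n=(d-1)/|q|$, while $(n\cdot\nabla)n=0$, so both forms of $R$ coincide. The sketch below is an illustrative check of our own, comparing a finite-difference evaluation against this closed form.

```python
import numpy as np

def n_of(q):
    return q / np.linalg.norm(q)            # unit normal for h = |q|^2/2

def div_n(q, eps=1e-6):
    """Central-difference divergence of the normal field n(q)."""
    d = len(q)
    return sum((n_of(q + eps * e) - n_of(q - eps * e))[i] / (2 * eps)
               for i, e in enumerate(np.eye(d)))

T0 = 1.5                                    # target temperature (toy value)
q = np.array([1.0, 2.0, 2.0])               # |q| = 3
R_num = -T0 * div_n(q) * n_of(q)            # Eq. (R), normal term
R_exact = -T0 * (len(q) - 1) / np.linalg.norm(q) * n_of(q)
```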
The canonical isoconfigurational ensemble is deemed to be the best candidate to simulate a system at a given temperature that is established instantaneously throughout the system based only on the configurational information, which should be especially useful in simulations of flow when the streaming velocity profile is unknown. Work is underway by the authors to numerically test the equations obtained and specifically to resolve the implementation difficulties, such as the algebraic unwieldiness and numerical stiffness of the higher derivatives of the potential energy, and the choice of a starting configuration that corresponds to the desired temperature.
--- abstract: | We consider the problem of constructing fast and small binary adder circuits. Among widely-used adders, the Kogge-Stone adder is often considered the fastest, because it computes the carry bits for two $n$-bit numbers (where $n$ is a power of two) with a depth of $2{\log_{2}}n$ logic gates, size $4 n{\log_{2}}n$, and all fan-outs bounded by two. Fan-outs of more than two are avoided, because they lead to the insertion of repeaters for repowering the signal and additional depth in the physical implementation. However, the depth bound of the Kogge-Stone adder is off by a factor of two from the lower bound of ${\log_{2}}n$. This bound is achieved asymptotically in two separate constructions by Brent and Krapchenko. Brent’s construction gives neither a bound on the fan-out nor the size, while Krapchenko’s adder has linear size, but can have up to linear fan-out. With a fan-out bound of two, neither construction achieves a depth of less than $2 {\log_{2}}n$. In a further approach, Brent and Kung proposed an adder with linear size and fan-out two, but twice the depth of the Kogge-Stone adder. These results are 33-43 years old and no substantial theoretical improvement has been made since then. In this paper we integrate the individual advantages of all previous adder circuits into a new family of full adders, the first to improve on the depth bound of $2{\log_{2}}n$ while maintaining a fan-out bound of two. Our adders achieve an asymptotically optimum logic gate depth of ${\log_{2}}n + o({\log_{2}}n)$ and linear size $\mathcal {O}(n)$.
author: - | Stephan Held\ Research Institute for Discrete Mathematics, University of Bonn\ \ Sophie Theresa Spirkl\ Research Institute for Discrete Mathematics, University of Bonn title: 'Binary Adder Circuits of Asymptotically Minimum Depth, Linear Size, and Fan-Out Two' --- Introduction ============= Given two binary addends $A = (a_n \dots a_1)$ and $B = (b_n \dots b_1)$, where index $n$ denotes the most significant bit, their sum $S = A+B$ has $n+1$ bits. We are looking for a logic circuit, also called an *adder*, that computes $S$. Here, a *logic circuit* is a non-empty connected acyclic directed graph consisting of nodes that are either *gates* with incoming and outgoing edges, *inputs* with at least one outgoing edge and no incoming edges, or *outputs* with exactly one incoming edge and no outgoing edges. Gates represent one or two bit Boolean functions, specifically <span style="font-variant:small-caps;">And</span>, <span style="font-variant:small-caps;">Or</span>, <span style="font-variant:small-caps;">Xor</span>, <span style="font-variant:small-caps;">Not</span> or their negations. A small example is shown on the right side of Figure \[fig:pfx-to-gate3\]. The *fan-in* is the maximum number of incoming edges at a vertex, and it is bounded by two for all gates. The main characteristics in adder design are the [*depth*]{}, the [*size*]{}, and the [*fan-out*]{} of a circuit. The depth is defined as the maximum length of a directed path in the logic circuit and is used as a measure for its speed. The lower the depth, the faster is the adder. The size is the total number of gates in the circuit, and is used as a measure for the space and power consumption of the adder, both of which we aim to minimize. The fan-out is the maximum number of outgoing edges at a vertex. High fan-outs increase the delay and require additional repeater gates (implementing the identity function) in physical design. 
Thus, when comparing the depth of adder circuits, their fan-out should be considered as well; we will focus on the usual fan-out bound of two. Circuits with higher fan-outs can be transformed into fan-out two circuits by replacing each interconnect with high fan-out by a balanced binary [*repeater tree*]{}, i.e. the underlying graph is a tree and all gates are repeater gates. However, this increases the size linearly and the depth logarithmically in the fan-out. Hoover, Klawe, and Pippenger \[1984\] gave a smarter way to bound the fan-out of a given circuit, but it would also triple the size and depth in our case of gates with two inputs. Using logic circuit depth as a measure for speed is a common practice in logic synthesis that simplifies many aspects of physical hardware. In CMOS technology, <span style="font-variant:small-caps;">Nand</span>/<span style="font-variant:small-caps;">Nor</span> gates are faster than <span style="font-variant:small-caps;">And</span>/<span style="font-variant:small-caps;">Or</span> gates and efficient implementations exist for integrated multi-input [And-Or]{}-Inversion gates and [Or-And]{}-Inversion gates. We assume that a [*technology mapping*]{} step [@Chatterjee+techmap2006; @keutzer88] translates the adder circuit after logic synthesis using logic gates that are best for the given technology. Despite its simplicity, the depth-based model is at the core of programs such as [BonnLogic]{} [@bonnlogic] for refining carry bit circuits, which is an integral part of the current IBM microprocessor design flow. Recently, we reduced the running time for computing such carry bit circuits significantly from $\mathcal{O}(n^3)$ to $\mathcal{O}(n \log n)$ [@held+spirkl:2014]. 
As an example, for all newly proposed adder circuits in this paper we will demonstrate how to efficiently transform them into equivalent circuits using only <span style="font-variant:small-caps;">Nand</span>/<span style="font-variant:small-caps;">Nor</span> and <span style="font-variant:small-caps;">Not</span> gates. Like most existing adders, we use the notion of generate and propagate signals, see [@sklansky; @brent; @knowles]. For each position $1 \leq i \leq n$, we compute a *generate signal* $y_i$ and a *propagate signal* $x_i$, which are defined as follows: $$\begin{array}{rl} x_i &= a_i \oplus b_i,\\ y_i &= a_i \wedge b_i, \end{array} \label{eqn:generate-and-propagat}$$ where $\wedge$ and $\oplus$ denote the binary <span style="font-variant:small-caps;">And</span> and <span style="font-variant:small-caps;">Xor</span> functions. The *carry bit* at position $i+1$ can be computed recursively as $c_{i+1} = y_i \vee (x_i \wedge c_i)$, since there is a carry bit at position $i+1$ if the $i$-th bit of both inputs is $1$ or, assuming this is not the case, if at least one (hence exactly one) of these bits is $1$ and there was a carry bit at position $i$. The first carry bit $c_1$ can be used to represent the *carry-in*, but we usually assume $c_1 = 0$. The last carry bit $c_{n+1}$ is also called the *carry-out*. From the carry bits, we can compute the output $S$ via $$s_i = c_i \oplus x_i \text{ for } 1 \leq i \leq n \text{ and } s_{n+1} = c_{n+1}. \label{eqn:sum-computation}$$ With this preparation of constant depth, linear size, and fan-out two at the inputs $a_i, b_i$ and fan-out one at the carry bits $c_{i+1}$ ($i=1,\dots,n$), the binary addition is reduced to the problem of computing all carry bits $c_{i+1}$ from $x_i,y_i$ ($i=1,\dots,n$).
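The generate/propagate preparation and the carry recursion can be transcribed directly and checked against ordinary integer addition (a sketch; the bit ordering and helper name are our own):

```python
def add_via_carries(a, b, n):
    """Add two n-bit numbers using x_i = a_i XOR b_i, y_i = a_i AND b_i
    and the recursion c_{i+1} = y_i OR (x_i AND c_i), with c_1 = 0."""
    ab = [(a >> i) & 1 for i in range(n)]   # list index i = paper index i+1
    bb = [(b >> i) & 1 for i in range(n)]
    x = [p ^ q for p, q in zip(ab, bb)]     # propagate signals
    y = [p & q for p, q in zip(ab, bb)]     # generate signals
    c = [0]                                 # c_1 = 0 (no carry-in)
    for i in range(n):
        c.append(y[i] | (x[i] & c[i]))
    s = [c[i] ^ x[i] for i in range(n)] + [c[n]]   # s_i = c_i XOR x_i
    return sum(bit << i for i, bit in enumerate(s))
```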
[*From now on, we will omit the preparatory steps (\[eqn:generate-and-propagat\]) and (\[eqn:sum-computation\]) and consider a circuit an adder circuit if it computes all $c_{i+1}$ from $x_i,y_i$ ($i=1,\dots,n$).*]{} Expanding the recursive formula for $c_{i+1}$ as in equation (\[eqn:and-or-form\]) results in a logic circuit that is a path of alternating <span style="font-variant:small-caps;">And</span> and <span style="font-variant:small-caps;">Or</span> gates. It corresponds to the long addition method and has linear depth $2(n-1)$. $$\begin{aligned} c_{i+1} = \ &y_i \vee \left(x_i \wedge (y_{i-1} \vee (x_{i-1} \wedge \dots \wedge (y_2 \vee (x_2 \wedge y_1))\dots ))\right) \label{eqn:and-or-form}\end{aligned}$$ Prefix Graph Adders {#sec:prefix-gate-adders} ------------------- For two pairs $z_i = (x_i, y_i)$ and $z_j = (x_j, y_j)$, we define the associative [*prefix operator*]{} $\circ$ as $${x_i \choose y_i} \circ {x_j \choose y_j} = {x_i \wedge x_j \choose y_i \vee (x_i \wedge y_j)}. \label{eqn:prefix-operator}$$ A circuit computing (\[eqn:prefix-operator\]) can be implemented as a logic circuit consisting of three gates and with depth two as shown in Figure \[fig:pfx-to-gate3\]. For $i=1,\dots,n$, the result of the prefix computation $z_i \circ \dots \circ z_1$ of the expression $z_n \circ \dots \circ z_1$ contains the carry bit $c_{i+1}$: $${x_i \wedge x_{i-1} \wedge \dots \wedge x_1 \choose c_{i+1}} = {x_i \choose y_i} \circ {x_{i-1} \choose y_{i-1}} \circ \dots \circ {x_1 \choose y_1}.\label{eqn:prefix-gate-carry}$$ A circuit of $\circ$-gates computing all prefixes $z_i \circ \dots \circ z_1$ ($i=1,\dots,n$) for an associative operator $\circ$ is called a *prefix graph*. A prefix graph yields an adder by expanding each $\circ$-gate as in Figure \[fig:pfx-to-gate3\], and extracting the carry bits as in . Most previous constructions for adders are based on prefix graphs of small depth, size and/or fan-out.
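The operator (\[eqn:prefix-operator\]) and the identity (\[eqn:prefix-gate-carry\]) can be verified in a few lines (a sketch with our own naming; `op(upper, lower)` takes the more significant pair first):

```python
def op(upper, lower):
    """Prefix operator: (x_i, y_i) o (x_j, y_j) with i > j."""
    xi, yi = upper
    xj, yj = lower
    return (xi & xj, yi | (xi & yj))

def carries_via_prefix(x, y):
    """y-components of z_i o ... o z_1, i.e. the carries c_2 .. c_{n+1}."""
    acc = (x[0], y[0])
    out = [acc[1]]
    for zi in zip(x[1:], y[1:]):
        acc = op(zi, acc)
        out.append(acc[1])
    return out

# Cross-check against the sequential recursion c_{i+1} = y_i | (x_i & c_i)
x = [1, 0, 1, 1, 0, 1, 1, 1]
y = [0, 1, 0, 0, 1, 0, 0, 0]
c = [0]
for i in range(len(x)):
    c.append(y[i] | (x[i] & c[i]))
```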
Sklansky \[1960\] developed a prefix graph of minimum depth ${\log_{2}}n$, size $\frac{1}{2}n {\log_{2}}n$, but high fan-out $\frac{1}{2}n+1$. The first prefix graph with logarithmic depth ($2\log n -1$) and linear size ($3n - \log n - 2$) was developed by Ofman \[1962\], exhibiting a non-constant fan-out of $\frac{1}{2}\log n$. Kogge and Stone \[1973\] introduced the [*recursive doubling algorithm*]{} which leads to a prefix graph with depth $\log_2 n$ and fan-out two (see Figure \[fig:kogge-pfx\]). Since we will use variants of it in our construction, we describe it in detail. For $1\le s\le t\le n$, let $Z_{s,t} := z_t\circ\dots\circ z_s,$ and for $x \in {\mathbb{R}}$, let $(x)^+:= \max\{x,0\}$. The graph has ${{\log_{2}}n}$ levels, and on level $i$ it computes for every input $j$ ($1\le j \le n$) the prefix $Z_{1+{\left(j - 2^i\right)^{+}},j} = z_j \circ\dots\circ z_{1+{\left(j - 2^i\right)^{+}}}$ according to the recursive formula $$Z_{1+(j - 2^i)^{+},j} = Z_{1+(j - 2^{i-1})^{+} ,j} \circ Z_{1+(j - 2^i)^{+} ,(j - 2^{i-1})^{+}}, \label{eq:ks}$$ from the prefixes of sequences of $2^{i-1}$ consecutive inputs computed in the previous level. The fan-out is bounded by two, since every intermediate result is used exactly twice: once as the “upper half” and once as the “lower half” of an expression of the form $z_j \circ \dots \circ z_{1+{\left(j - 2^i\right)^{+}}}$. Note that for level $i$ ($1 \leq i \leq {\log_{2}}n$), a repeater gate (which computes the identity function) is used instead of a $\circ$-gate if $j \leq 2^{i-1}$, i.e., in the case that the right input in is empty. Repeaters are shown as blue squares in Figure \[fig:kogge-pfx\]. The Kogge-Stone prefix graph minimizes both depth and fan-out. On the other hand, since there is a linear number of gates at each level, the total size in terms of prefix gates is $n{{\log_{2}}n} - \frac{n}{2}$. Ladner and Fischer \[1980\] constructed a prefix graph of depth $\log_2 n$ but high fan-out.
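The recursive-doubling scheme (\[eq:ks\]) can be sketched as follows (our own transcription; the `width` loop plays the role of the levels, and the `j < width` branch corresponds to the repeaters):

```python
def op(upper, lower):
    """Prefix operator on (propagate, generate) pairs, upper index first."""
    xi, yi = upper
    xj, yj = lower
    return (xi & xj, yi | (xi & yj))

def kogge_stone(z):
    """All prefixes Z_{1,j} of z_n o ... o z_1 by recursive doubling.
    After the level with step `width`, pref[j] covers the last
    min(j+1, 2*width) inputs ending at position j+1."""
    n = len(z)                       # n must be a power of two
    pref = list(z)
    width = 1
    while width < n:
        pref = [op(pref[j], pref[j - width]) if j >= width else pref[j]
                for j in range(n)]   # j < width: repeater (copy)
        width *= 2
    return pref                      # pref[j][1] is the carry c_{j+2}

# Check against the sequential carry recursion
x = [1, 0, 1, 1, 0, 1, 1, 1]
y = [0, 1, 0, 0, 1, 0, 0, 0]
c = [0]
for i in range(len(x)):
    c.append(y[i] | (x[i] & c[i]))
```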
Brent and Kung found a linear-size prefix graph with fan-out two, but twice the depth of the other constructions. Finally, Han and Carlson \[1987\] described a hybrid between a Kogge-Stone adder and a Brent-Kung adder which achieves a trade-off between depth and size. Lower bounds for the trade-off between the depth and size of a prefix graph can be found in [@fich; @sergeev]. The above prefix graphs can be used for prefix computations with respect to any associative operator $\circ$. In fact, we will later use a prefix graph in which the operator $\circ$ represents an [And]{} gate. When turning one of the above prefix graph adders into a logic circuit for addition such that each prefix gate is implemented as in Figure \[fig:pfx-to-gate3\], the depth of the logic circuit is twice the depth of the prefix graph and the number of logic gates is three times the number of prefix gates. The fan-out of the underlying logic circuit can increase by one compared to the prefix graph, because the left propagate signal $x_i$ is used twice within a prefix gate. In Section \[sec:brent-kung-step\], we will see that in the case of the Brent-Kung adder a fan-out of two can be achieved by using reduced prefix gates. Any adder constructed from a prefix graph has a logic gate depth of at least ${\log_{\varphi}}n -1 > 1.44 {\log_{2}}n - 1$, where $\varphi = \frac{1+ \sqrt{5}}{2}$ is the golden section [@held+spirkl:2014], see also [@code-trees]. In [@held+spirkl:2014] an adder of size $\mathcal{O}(n{\log_{2}}{\log_{2}}n)$ asymptotically attaining this depth bound is described, however with a high fan-out of $\sqrt{n}+1$. Non-Prefix Graph Adders ----------------------- Since none of the $2n$ inputs $x_i,y_i$ ($1 \le i\le n$) except for $x_1$ are redundant for $c_{n+1}$, the depth of any adder circuit using 2-input gates is at least ${\log_{2}}n+1$, which would be attained by a balanced binary tree with inputs/leaves $x_i,y_i$ ($1 \le i\le n$). 
With adders that are not based on prefix graphs, this bound is asymptotically tight. Krapchenko showed that any formula (a circuit with tree topology) for computing $c_{n+1}$ has depth at least ${\log_{2}}n +0.15{\log_{2}}{\log_{2}}{\log_{2}}n + \mathcal{O}(1)$ [@krapchenkoLB]. Brent \[1970\] gives an approximation scheme for a single carry bit circuit attaining an asymptotic depth of $(1+{\varepsilon}){\log_{2}}n +o({\log_{2}}n)$ for any given ${\varepsilon}>0$. The best known depth for a single carry bit circuit is ${\log_{2}}n +{\log_{2}}{\log_{2}}n + \mathcal{O}(1)$, due to Grinchuk \[2008\]. However, [@Grinchuk-ShallowCarryBit2009] and [@brent] did not address how to overlay circuits for the different carry bits to bound the size and fan-out of an adder based on their circuits. One problem in sharing intermediate results is that this can create high fan-outs. Krapchenko \[1967\] (see [@wegener pp. 42-46]) presented an adder with asymptotically optimum depth ${\log_{2}}n +o({\log_{2}}n)$ and linear size. It was refined for small $n$ by [@Gashkov+ImprovingKrapchenko2007]. However, the fan-out is almost linear. Our Contribution {#sec:our_contribution} ---------------- In this paper, we present the first family of adders of asymptotically optimum depth, linear size, and fan-out bound two: [theorem]{}[maintheorem]{} \[thm:central-theorem\] Given two $n$-bit numbers $A$,$B$, there is a logic circuit computing the sum $A+B$ that uses gates with fan-in and fan-out at most two, has depth ${\log_{2}}n + o(\log n)$, and has size $\mathcal{O}(n)$. The rest of the paper is organized as follows. In Section \[sec:min-depth-fan-out-two\], we develop a family of adders of asymptotically minimum depth, fan-out two, but super-linear size $\mathcal{O}\left(n{\left\lceil \sqrt{{\log_{2}}n} \right\rceil}^2 2^{\sqrt{{\log_{2}}n}}\right)$.
In Section \[sec:linearizing-size\], using reductions similar to [@krapchenko], this adder is transformed into an adder of linear size with asymptotically the same depth, proving Theorem \[thm:central-theorem\]. While all of the above adders use only <span style="font-variant:small-caps;">And</span>/<span style="font-variant:small-caps;">Or</span> gates and repeaters, we show in Section \[sec:technology-mapping\] that Theorem \[thm:central-theorem\] holds also if only <span style="font-variant:small-caps;">Nand</span>/<span style="font-variant:small-caps;">Nor</span> and <span style="font-variant:small-caps;">Not</span> gates are available. Asymptotically Optimum Depth and Fan-Out Two {#sec:min-depth-fan-out-two} ============================================ For $1\le s\le t \le n$, let $X_{s,t}$ and $Y_{s,t}$ denote the propagate and generate signal for the sequence of indices between $s$ and $t$, i.e.$$\begin{array}{rl} X_{s,t} &= \bigwedge_{i=s}^t x_i\\ Y_{s,t} &= y_t \vee \left(x_t \wedge (y_{t-1} \vee (x_{t-1} \wedge \dots \wedge (y_{s+1} \vee (x_{s+1} \wedge y_s))\dots ))\right)\\ \end{array}$$ The adders based on prefix graphs as in Section \[sec:prefix-gate-adders\] impose a common topological structure on the computation of intermediate results $X_{s,t}$ and $Y_{s,t}$. In the adder described by Brent \[1970\], on the other hand, intermediate results $X_{s,t}$ and $Y_{s,t}$ are computed separately within larger blocks. Let $n=2^{rk}$ for $r \in \mathbb{N}$ and $k \in \mathbb{N}$ to be chosen later. A central idea for obtaining a faster adder is to use multi-fan-in (also called high-radix) subcircuits within a Kogge-Stone prefix graph. While all the prefix gates in Figure \[fig:kogge-pfx\] have fan-in two, we want to use prefix gates with fan-in $2^r$, so that the number of levels reduces from ${\log_{2}}n$ to $\log_{2^r} n = \frac{1}{r}{\log_{2}}n$. Each prefix gate with fan-in $2^r$ represents a logic circuit with fan-in and fan-out bounded by two.
Since the output of each prefix gate will be used in $2^r$ prefix gates at the next level, our approach also requires duplicating the intermediate result at the output of a prefix gate $2^{r-1}$ times. To accomplish this, we consider the computation of generate and propagate sequences separately. Our adder consists of two global Kogge-Stone type prefix graphs. The first such graph uses 2-input <span style="font-variant:small-caps;">And</span>-gates and computes propagate signals used in the other prefix graph. The other graph uses $2^r$-input subcircuits that are arranged in the same way as the Kogge-Stone graph, and it computes the generate (carry) signals. Both graphs are modified to duplicate intermediate generate signals $2^{r-1}$ times and intermediate propagate signals $2^r$ times so that the overall construction obeys the fan-out bound of two. Multi-Input Generate Gates {#sec:multi-input-generate-gate} -------------------------- We now introduce [*multi-input generate gates*]{}, which are the main building block for computing the generate signals. Given $2^r$ propagate and generate pairs $(\tilde{x}_{2^r},\tilde{y}_{2^r}),\dots, (\tilde{x}_{1},\tilde{y}_{1})$, a multi-input generate gate computes the generate signal $$\tilde{Y}_{1,2^r} = \tilde{y}_{2^r} \vee \left(\tilde{x}_{2^r} \wedge (\tilde{y}_{2^r-1} \vee (\tilde{x}_{2^r-1} \wedge \dots \wedge (\tilde{y}_{2} \vee (\tilde{x}_{2} \wedge \tilde{y}_1))\dots ))\right).$$ The input pairs $(\tilde{x}_i,\tilde{y}_i)$ $(i\in \{1,\dots, 2^r\})$ are not necessarily the input pairs of the adder; they can be intermediate results. Each multi-input generate gate has $2^{r-1}$ outputs, each of which provides the result $\tilde{Y}_{1,2^r}$, because later we want to reuse this signal $2^r$ times and bound the fan-out of each output by two.
In contrast to two-input prefix gates computing (\[eqn:prefix-operator\]), multi-input generate gates do not compute the propagate signals $\tilde{X}_{1,2^r}=\bigwedge_{i=1}^{2^r} \tilde{x}_i$ for the given input pairs. All required propagate signals will be computed by the separate [And]{}-prefix graph, described in Section \[sec:and-pfx-graph\]. Figure \[fig:fo-pfx-small\] shows an example of a multi-input generate gate with $8$ inputs. A $2^r$-input prefix gate computes $\tilde{Y}_{1,2^r}$ as in the disjunctive normal form $$\tilde{Y}_{1,2^r} = \displaystyle \bigvee_{j=1}^{2^r} \left( \tilde{y}_j \wedge\left(\bigwedge_{i=j+1}^{2^r} \tilde{x}_i\right)\right),$$ first computing all the minterms $m_j:=\tilde{y}_j \wedge\left(\bigwedge_{i=j+1}^{2^r} \tilde{x}_i\right)$ $(j=1,\dots,2^r)$, and then the disjunction $\bigvee_{j=1}^{2^r} m_j$. The terms $\bigwedge_{i=j+1}^{2^r} \tilde{x}_i$ are computed as a Kogge-Stone [And]{}-suffix graph, which arises from a Kogge-Stone prefix graph by reversing the ordering of the inputs. A single stage of (red) [And]{}-gates and one repeater concludes the computation of the minterms. Each input $\tilde{y}_i$ is used exactly once within this circuit. The repeater is dispensable but simplifies the size formula and will become useful in Section \[sec:technology-mapping\]. Finally, instead of computing the disjunction $\bigvee_{j=1}^{2^r} m_j$ by a balanced binary [Or]{} tree and duplicating the results $2^{r-1}$ times through a balanced repeater tree, the duplication is accomplished by $r$ rows of $2^{r-1}$ [Or]{}-gates as shown in Figure \[fig:fo-pfx-small\]. Formally, let $M_{i,j} = \bigvee_{i'=i}^{j} m_{i'}$ be the disjunction of minterms $i, i+1, \dots, j$. Then, on level $l \in \{1,\dots, r\}$, we compute each signal of the form $M_{i2^{l} + 1, (i+1)2^l}$, $i=0, \dots, 2^{r-l}-1$, from the previous level, and we compute $2^{l-1}$ copies of it.
By using $M_{i2^{l} + 1, (i+1)2^l} = M_{{2i 2^{l-1} + 1, (2i+1)2^{l-1}}} \vee M_{{(2i+1)2^{l-1} + 1, (2i+2)2^{l-1}}}$, and since each preceding signal is available $2^{l-2}$ times ($l\ge 2$), we can ensure that each of them has fan-out two. On the last level, we will have computed $2^{r-1}$ copies of $M_{1,2^r} = \tilde{Y}_{1,2^r}$. Each level uses $2^{r-1}$ <span style="font-variant:small-caps;">Or</span>-gates. Note that a similar construction for reducing fan-out has been used by Lupanov when extending his well-known bounded-size representation of general Boolean functions to circuits with bounded fan-out [@LupanovBoundedBranching62]. The multi-input generate gate has $2^r$ generate/propagate pairs as input and $2^{r-1}$ outputs. Each propagate input has fan-out two and each generate input has fan-out one. The gate consists of $r2^r + (r+1)2^{r-1}$ logic gates which have fan-out at most two. The depth for the propagate inputs $\tilde{x}_i$ is $2r+1$ and the depth for the generate inputs $\tilde{y}_i$ is $r+1$ ($i\in \{1,\dots,2^r\}$). All the terms $\bigwedge_{i=j+1}^{2^r} \tilde{x}_i$ are computed as a Kogge-Stone [And]{}-suffix graph (blue and yellow gates in Figure \[fig:fo-pfx-small\]) of size $$2^r\lceil {\log_{2}}2^r\rceil -\frac{2^r}{2} = (r-1)2^r + 2^{r-1}.$$ Then, there is a level of $2^r-1$ (red) [And]{} gates and one repeater, concluding the computation of the minterms. Finally, there are $r2^{r-1}$ (green) [Or]{}-gates to compute the disjunction $\bigvee_{j=1}^{2^r} m_j$ $2^{r-1}$ times, for a total of $$r2^r + (r+1)2^{r-1}$$ gates. By construction, no gate or propagate input has fan-out larger than two, and all generate inputs have fan-out one. The depth is $r$ for the [And]{}-suffix graph, one for the red gates, and $r$ for the disjunctions, yielding the desired depths of $2r+1$ for the propagate inputs and $r+1$ for the generate inputs.
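That the disjunctive normal form computed by the gate equals the nested generate formula can be checked exhaustively for small $r$ (a sketch of ours; here $2^r=4$ inputs, all $2^{8}$ assignments):

```python
import itertools
from functools import reduce

def nested(xs, ys):
    """Y~_{1,m} = y_m | (x_m & (y_{m-1} | ( ... (y_2 | (x_2 & y_1)) ... )))."""
    acc = ys[0]
    for xi, yi in zip(xs[1:], ys[1:]):
        acc = yi | (xi & acc)
    return acc

def dnf(xs, ys):
    """OR over j of the minterms m_j = y_j & AND_{i > j} x_i."""
    m = len(xs)
    return max(ys[j] & reduce(lambda a, b: a & b, xs[j + 1:], 1)
               for j in range(m))

m = 4                                        # 2^r inputs with r = 2
ok = all(nested(bits[:m], bits[m:]) == dnf(bits[:m], bits[m:])
         for bits in itertools.product((0, 1), repeat=2 * m))
```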
Augmented Kogge-Stone <span style="font-variant:small-caps;">And</span>-Prefix Graph {#sec:and-pfx-graph} ------------------------------------------------------------------------------------ The second important component of our construction is the [*augmented Kogge-Stone <span style="font-variant:small-caps;">And</span>-prefix graph*]{}. It is used to compute $X_{s,t}= \bigwedge_{i=s}^t x_i$ for all $1 \leq t \leq n$ and $s =1+{\left(t-2^{rl}\right)^{+}}$ with $0 \leq l < k$, providing each output $2^r$ times through $2^r$ individual gates. It is constructed as follows. First, we take a Kogge-Stone \[1973\] prefix graph, where the prefix operator is an [And]{}-gate, i.e. $\circ = \wedge$. It consists of ${\log_{2}}n$ levels, and on level $i$ it computes for every input $j$ ($1\le j \le n$) the prefix $X_{1+{\left(j - 2^i\right)^{+}},j}$ from the prefixes of sequences of $2^{i-1}$ consecutive inputs computed in the previous level. Each of the results $X_{s,t}$ from level $rl$ will later be used in $2^r$ multi-input generate gates for all $0\le l < k$, $s =1+ {\left(t-2^{rl}\right)^{+}}$ and $ 1\le t \le n$. In order to achieve a fan-out bound of two, starting at the inputs, we insert one row of $n$ repeaters after every $r$ levels of [And]{}-gates. This allows us to use the repeaters as the inputs for the next level, and to extract the signals $X_{s,t}$ once at the <span style="font-variant:small-caps;">And</span>-gates before the repeaters. The construction is shown in Figure \[fig:kogge-aug\] with the extracted outputs $X_{s,t}$ shown as red arrows. The last block of $r$ rows of gates (hatched gates in Figure \[fig:kogge-aug\]) of the Kogge-Stone prefix graph can be omitted in our construction to reduce the size. Each output signal $X_{s,t}$ will be input to a multi-input generate gate, where it is immediately duplicated.
Thus, each output $X_{s,t}$ of the augmented Kogge-Stone <span style="font-variant:small-caps;">And</span>-prefix graph has to be provided through an individual gate. To this end, at each of the $nk$ outputs, we add $2^{r+1}-1$ repeater gates as the vertices of a balanced binary tree to create $2^r$ copies of the signal with a single repeater serving each leaf. For simplicity these repeaters are hidden in Figure \[fig:kogge-aug\]. \[lem:size-augmented-kogge-stone-and-prefix-graph\] The total size of the augmented Kogge-Stone <span style="font-variant:small-caps;">And</span>-prefix graph is $ nr(k-1) + nk2^{r+1}$. Each binary repeater tree at one of the $nk$ outputs consists of $2^{r+1}-1$ repeaters, summing up to $nk(2^{r+1}-1)$ repeaters in these repeater trees. The remaining circuit consists of $r(k-1)+k$ rows ($r(k-1)$ rows of [And]{}-gates and $k$ rows of repeaters) of $n$ gates each, summing up to $n(r(k-1)+k)$ gates. Altogether, the circuit contains $ nr(k-1) + nk2^{r+1}$ gates. The signal $X_{s,t}$ for $1 \leq t \leq n$ and $s = 1+ {\left(t-2^{rl}\right)^{+}}$ for $0 \leq l < k$ is available $2^r$ times with internal fan-out one at a depth of $(l+1)(r+1)$. The functional correctness is clear by construction. For the depth bound, let $1 \leq t \leq n$ and $0 \leq l < k$. Then, for $s =1+{\left(t-2^{rl}\right)^{+}}$, the signal $X_{s,t}$ is available at the bottom of the $l$-th block at a depth of $l(r+1)$. Subsequently, we create $2^r$ copies of the signal in a repeater tree of depth $r+1$. Together, this gives the desired depth $(l+1)(r+1)$. Multi-Input Generate Adder -------------------------- We now describe the multi-input generate adder for $n = 2^{rk}$. It consists of an augmented Kogge-Stone [And]{}-prefix graph from the previous section and a circuit consisting of multi-input generate gates similar to a radix-$2^r$ Kogge-Stone adder. The construction uses $k$ rows with $n$ multi-input generate gates or repeater trees (see Figure \[fig:fixed-size2\]). 
The $t$-th multi-input generate gate in level $l\in\{1,\dots,k\}$ computes $Y_{1+{\left(t-2^{rl}\right)^{+}},t}$ according to the formula $Y_{1+{\left(t-2^{rl}\right)^{+}},t} = $ $$\bigvee_{j=1}^{2^r} \left( Y_{1+{\left(t-j2^{r(l-1)}\right)^{+}},{\left(t-(j-1)2^{r(l-1)}\right)^{+}}} \wedge \left(\bigwedge_{m=j+1}^{2^r}X_{1+{\left(t-m2^{r(l-1)}\right)^{+}},{\left(t-(m-1)2^{r(l-1)}\right)^{+}}} \right)\right), \label{eqn:multi-input-generate-adder-recursion}$$ where we denote the inner index by $m$ to avoid a clash with the number of levels $k$. If ${\left(t-2^{rl}\right)^{+}} < {\left(t-2^{r(l-1)}\right)^{+}}$ (yellow circuits in Figure \[fig:fixed-size2\]), this computation is carried out using a multi-input generate gate from Section \[sec:multi-input-generate-gate\]. As its inputs, it uses generate signals from the previous level, $l-1$, and propagate signals obtained from the augmented Kogge-Stone <span style="font-variant:small-caps;">And</span>-prefix graph. Except for the last level, each intermediate generate signal will be used $2^r$ times as in (\[eqn:multi-input-generate-adder-recursion\]) in the next level. As the fan-out of each generate input inside a multi-input generate gate is one, we need to provide $2^{r-1}$ copies through individual gates to serve $2^r$ multi-input generate gates with fan-out two. If ${\left(t-2^{rl}\right)^{+}} = {\left(t-2^{r(l-1)}\right)^{+}}$ (blue squares in Figure \[fig:fixed-size2\]), $Y_{1+{\left(t-2^{rl}\right)^{+}},t}$ is already computed in the previous level, and in this level it is sufficient to duplicate the signal $2^{r-1}$ times using a balanced binary repeater tree. The augmented Kogge-Stone [And]{}-prefix graph provides each signal $2^r$ times with individual repeaters. Thus, it can be distributed to $2^r$ multi-input generate gates, where the fan-out of each propagate input is two.
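The recursion can be verified numerically. In the sketch below (illustrative, with names of our choosing), we adopt the index convention suggested by the formula: the carry propagates toward smaller indices, so $Y_{s,t}$ is the carry leaving the window $[s,t]$ at position $s$, and block $j=1$ is the block adjacent to $t$:

```python
from itertools import product

def X(x, s, t):                          # propagate over the window [s, t]
    return all(x[s:t + 1]) if s <= t else True

def Y(x, y, s, t):                       # generate over [s, t], carry out at s
    return any(y[i] and X(x, s, i - 1) for i in range(s, t + 1))

def Y_recursion(x, y, t, R, l):          # right-hand side of the recursion
    S = 2 ** (R * (l - 1))               # block length on level l-1
    val = False
    for j in range(1, 2 ** R + 1):
        hi, lo = max(t - (j - 1) * S, 0), 1 + max(t - j * S, 0)
        if hi < lo:                      # empty (fully clipped) block
            continue
        term = Y(x, y, lo, hi)
        for m in range(j + 1, 2 ** R + 1):
            hi2, lo2 = max(t - (m - 1) * S, 0), 1 + max(t - m * S, 0)
            if hi2 >= lo2:               # And over the blocks between j and s
                term = term and X(x, lo2, hi2)
        val = val or term
    return val

R, n = 2, 16
for trial in product([False, True], repeat=6):
    x = ([None] + list(trial[:3]) + [True, False, True] * 5)[:n + 1]
    y = ([None] + list(trial[3:]) + [False, True, False] * 5)[:n + 1]
    for l in (1, 2):
        for t in range(1, n + 1):
            s = 1 + max(t - 2 ** (R * l), 0)
            assert Y_recursion(x, y, t, R, l) == Y(x, y, s, t)
print("generate recursion verified for r = 2, l = 1, 2")
```

The brute-force definition `Y` plays the role of a reference; the assertion confirms that the block decomposition covers clipped windows at the boundary correctly.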
For the first level of multi-input generate gates, we duplicate each generate signal $y_i$ at an input $i\in \{1,\dots,n\}$ using a balanced binary repeater tree of depth $r-1$ and size $2 + 2^2 + \dots + 2^{r-1} = 2^{r}-2$. Again, we can distribute each copy to two multi-input generate gates, maintaining fan-out two. In the last level of multi-input generate gates, we do not need to duplicate the signals anymore. Instead of the $r$ rows of $2^{r-1}$ [Or]{}-gates each, we can compute the single outputs using a balanced binary tree of $2^r-1$ [Or]{}-gates and depth $r$. [ ]{} The multi-input generate adder for $n=2^{rk}$ bits obeys a fan-out bound of two, contains less than $$3 nk(r+2)2^{r-1} + n2^{r} + nrk$$ gates, and has depth $$kr + 2r + k + 1.$$ \[lem:depth+size-lemma\] Inside each multi-input generate gate, the fan-out of propagate inputs is two and the fan-out of generate inputs is one. Thus, it suffices to observe that in each non-output level there are $2^{r}$ copies of each propagate signal and $2^{r-1}$ copies of each generate signal, and that the fan-out of two holds within the augmented Kogge-Stone graph and within each multi-input generate gate. By Lemma \[lem:size-augmented-kogge-stone-and-prefix-graph\], the size of the augmented Kogge-Stone [And]{}-prefix graph is $nr(k-1) + nk2^{r+1}$. The size of the $n$ balanced binary trees duplicating the input generate signals is $n (2^{r}-2)$. The remainder of the graph consists of $k$ rows of $n$ $2^r$-input multi-input generate gates or repeater trees. The size of a repeater tree (blue boxes in Figure \[fig:fixed-size2\]) is at most $2^{r-1}-1 \le r2^r + (r+1)2^{r-1}$ ($r\ge 1$), which is the size of a multi-input generate gate. Thus, the size of all these multi-input generate gates is at most $nk\left(r2^r + (r+1)2^{r-1}\right)$. 
Summing up, the total size is at most $$\begin{array}{rl} & nr(k-1) + nk 2^{r+1} + n (2^{r}-2) + nk \left(r2^r + (r+1)2^{r-1}\right) \\ = & nk2^{r+1} + nkr2^r + n2^r + nk(r+1)2^{r-1} + nkr - n(r + 2) \\ = & nk\left(4+ 2 r + (r+1)\right)2^{r-1} + n2^{r} + nkr - n(r + 2) \\ < & 3nk\left(r+2\right)2^{r-1} + n2^{r} + nkr.\\ \end{array}$$ For a simpler depth analysis, we assume that the input generate signals $y_i$ arrive delayed at a depth of $r+2$. The generate input signals traverse a binary tree of depth $r-1$ and the propagate input signals traverse a binary tree of depth $r+1$ before reaching the first multi-input generate gate, i.e. generate signals $y_i$ become available at depth $2r+1$ and propagate signals at depth $r+1$. Thus, the first row of multi-input generate gates has depth $$3r+2 = \max \{2r+1 + 1+ r,r+1+r+1+r\},$$ where the first term in the maximum is caused by the delayed generate signals $y_i$ and the second term by the propagate signals $x_i$ ($1\le i \le n$). For the next level, the propagate signals are available at time $2r+2$, and the generate signals at time $3r+2$, and the propagate signals again arrive $r$ time units before the corresponding generate signals, so at the next level, both signals arrive $r+1$ time units later than they did before. Inductively, we know that for each level $2 \le l \le k$, the generate and propagate signals arrive at a depth of $(l-1)(r+1)$ more than they did at the first level. Consequently, the total depth of the adder is $(k-1)(r+1) + 3r + 2 = kr + 2r + k +1$. If ${\sqrt{\log n}}\in {\mathbb{N}}$, we can choose $r=k=\sqrt{\log n}$ and obtain the following result.
If $\sqrt{\log n}\in {\mathbb{N}}$, there is a multi-input generate adder for $n$ bits with fan-out two, size at most $$3 n({\log n}+2\sqrt{\log n})2^{\sqrt{\log n}-1} + n2^{\sqrt{\log n}} + n\log n,$$ and depth $$\log n + 3{\sqrt{\log n}} + 1.$$ \[cor:depth+size-integral-sqrt-log-n\] In general, ${\sqrt{\log n}}\not \in {\mathbb{N}}$, and we get the following result. Let $n\in{\mathbb{N}}$. For input pairs $(x_i,y_i)$ ($i\in\{1,\dots,n\}$), there is a circuit, computing all carry bits with maximum fan-out $2$, depth at most $${\log_{2}}n + 5{\left\lceil \sqrt{{\log_{2}}n} \right\rceil} + 2.$$ The size is at most $$\label{eqn:mulit-input-generate-adder-large-n} 4 n {\left\lceil \sqrt{{\log_{2}}{n}} \right\rceil}^22^{{\left\lceil \sqrt{{\log_{2}}{n}} \right\rceil}}$$ if $n\ge 16$, and at most $$\label{eqn:mulit-input-generate-adder-small-n} 8 n {\left\lceil \sqrt{{\log_{2}}{n}} \right\rceil}^2 2^{{\left\lceil \sqrt{{\log_{2}}{n}} \right\rceil}}$$ if $n\le 15$. \[thm:generalized-adder\] We choose $r = k = {\left\lceil \sqrt{{\log_{2}}n} \right\rceil}$ and apply Lemma \[lem:depth+size-lemma\], obtaining $$\begin{array}{rl} 3nk(r+2)2^{r-1} + n2^{r} + nrk & = n\left(3(r^2+2r)2^{r-1} + 2^{r} + r^2\right).\\ \label{eqn:multi-input-adder-size-common-bound} \end{array}$$ Now, if $n\ge 16$, we have $r=k\ge 2$. Thus, we can use $2r \le r^2$ and $2^{r}+r^2 \le r^22^{r}$ to bound the right hand side by $$\begin{array}{rl} n\left(3\left(r^2 + r^2\right) 2^{r-1} + r^22^{r}\right) & \displaystyle = 4nr^22^{r},\\ \end{array}$$ implying (\[eqn:mulit-input-generate-adder-large-n\]). 
Otherwise, $n\le 15$, $r=k\le 2$, $r^2\le 2r$, $r^2\le 2^r$, and the right hand side of (\[eqn:multi-input-adder-size-common-bound\]) is bounded by $$\begin{array}{rl} \displaystyle n\left(3\left(2r + 2r\right) 2^{r-1} + 2^{r} + 2^{r}\right) & \displaystyle = n\left(6r2^{r} + 2^{r+1}\right) \le 8nr2^{r} \le 8nr^22^r, \end{array}$$ implying (\[eqn:mulit-input-generate-adder-small-n\]). The resulting depth is $$\begin{array}{rl} kr + 2r + k +1 & = {\left\lceil \sqrt{{\log_{2}}n} \right\rceil}^2 + 3 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} +1 \\ & \leq ({\left\lfloor \sqrt{{\log_{2}}n} \right\rfloor}+1)^2 + 3 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} + 1\\ & \le {\log_{2}}n + 5 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} +2. \end{array}$$ If $\sqrt{{\log_{2}}n} \not \in {\mathbb{N}}$, the adder in Theorem \[thm:generalized-adder\] is larger than necessary since it has $n' = 2^{ {\left\lceil \sqrt{{\log_{2}}n} \right\rceil}^2}> n$ inputs. If for example $n=32$, we choose $r=k=3$ and $n'= 512$. Thus, if ${\left\lceil \sqrt{{\log_{2}}n} \right\rceil}^2 \geq {\log_{2}}n + {\left\lceil \sqrt{{\log_{2}}n} \right\rceil}$, choosing $r = {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} - 1$ instead still yields an adder with at least $n$ inputs and outputs and reduces the size and depth significantly. For $n = 32$, we would still obtain a $64$-input adder using this method. The analysis can be refined further by noticing that the columns $n'$ down to $n+1$ in the augmented Kogge-Stone [And]{}-prefix graph and the multi-input gate graph can be omitted, since they are not used for the computations of the first $n$ output bits. This reduces the size of the construction. If $n' > n$, we can omit the left half of the construction and notice that the right half of the lowest row of multi-input generate gates only has $2^{r-1}$ inputs, so we can actually use $2^{r-1}$-input generate gates and reduce the depth by 1.
This process can be iterated until $n' = n$, which decreases the rounding error incurred in Theorem \[thm:generalized-adder\]; the depth is decreased by ${\left\lceil \sqrt{{\log_{2}}n} \right\rceil}^2 - \log_2 n$. In this section, we have achieved a depth bound of $\log_2 n + \mathcal{O}(\sqrt{\log n}) = {\log_{2}}n + o({\log_{2}}n)$, which is asymptotically optimal, since the lower bound is ${\log_{2}}n$. Linearizing the Size of the Adder {#sec:linearizing-size} ================================= To achieve a linear size while keeping the adder asymptotically fastest possible, we adopt a technique similar to the construction by Brent and Kung \[1982\], which was first used as a size-reduction tool by Krapchenko \[1967\] (see [@wegener pp. 42-46]). Brent-Kung Step {#sec:brent-kung-step} --------------- Brent and Kung \[1982\] construct a prefix graph recursively as shown in Figure \[fig:krap-reduc\]. If $n$ is at least two, it computes the $n/2$ intermediate results $z_{n} \circ z_{n-1}; \dots; z_{2} \circ z_{1}$ (see Section \[sec:prefix-gate-adders\] for the definition of $z_i$). A prefix graph for these $n/2$ inputs is used to compute the prefixes $Z_{1,2i}$ at all even positions $2i$, $i\in\{1,\dots, n/2\}$. At odd positions, the prefix needs to be corrected by one more prefix gate as $Z_{1,2i+1} = z_{2i+1} \circ Z_{1,2i}$ ($i\in\{1,\dots, n/2-1\}$). We call this method of input halving and output correction a [*Brent-Kung step*]{}. Note that the propagate signals are not needed after the correction step. Thus, we can use reduced prefix gates (Figure \[fig:pfx-to-bk-output-gate\]) in the output correction step. In these prefix gates, the left propagate signal $x_i$ is used only once. Thus, the underlying logic circuit inherits the fan-out of two from the prefix graph. The Brent-Kung step reduces the instance size by a factor of two, but it increases the depth of the construction by four and the size by $\frac{5}{2}n$ in terms of logic gates.
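The input-halving and output-correction scheme of the Brent-Kung step can be sketched functionally on (generate, propagate) pairs. The following Python fragment is an illustrative reference model (names ours), checked against a direct left-to-right evaluation of the prefix operator:

```python
from itertools import product

def op(hi, lo):                              # the prefix operator "o" on (g, p) pairs
    return (hi[0] or (hi[1] and lo[0]), hi[1] and lo[1])

def brent_kung_prefixes(z):
    """All prefixes Z_{1,i} = z_i o ... o z_1; z is 0-indexed, len(z) a power of two."""
    n = len(z)
    if n == 1:
        return [z[0]]
    half = [op(z[2 * i + 1], z[2 * i]) for i in range(n // 2)]  # pair neighbours
    inner = brent_kung_prefixes(half)                           # recurse on n/2 inputs
    out = []
    for i in range(n):
        if i % 2 == 1:
            out.append(inner[i // 2])                # even positions come from inner
        elif i == 0:
            out.append(z[0])
        else:
            out.append(op(z[i], inner[i // 2 - 1]))  # odd positions: one correction gate
    return out

for bits in product([False, True], repeat=8):
    z = [(bits[2 * i], bits[2 * i + 1]) for i in range(4)]      # (g_i, p_i) pairs
    direct, acc = [], None
    for zi in z:
        acc = zi if acc is None else op(zi, acc)
        direct.append(acc)
    assert brent_kung_prefixes(z) == direct
print("Brent-Kung step verified on all 4-input instances")
```

This models only the function computed, not the gate count; the size and depth accounting is the subject of the surrounding text.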
Applying these Brent-Kung steps recursively, Brent and Kung obtain a prefix graph that has prefix gate depth $2 {\log_{2}}n -1$ and logic gate depth $4 {\log_{2}}n -2$. The prefix gate depth is not optimal anymore, but the adder has a comparatively small size of $\frac{1}{2}(5n - {\log_{2}}n - 8)$ gates, and its fan-out is bounded by two at all inputs and gates. It is shown in Figure \[fig:brent-pfx\]. Brent-Kung steps were actually known before the paper by Brent and Kung \[1982\] e.g.  they were already used in [@krapchenko]. But the Brent-Kung adder is based solely on these steps. Krapchenko’s Adder {#sec:krap} ------------------ Krapchenko’s adder is a non-prefix adder computing all carry bits with asymptotically optimal depth and linear size. Its fan-out, on the other hand, is almost linear as well, which makes it less useful in practice. Krapchenko’s techniques can be used to derive the following reduction, based on Brent-Kung steps. \[lem:krap\] Let $\tau \leq {\log_{2}}n - 1$, then given a family of adders computing $k$ carry bits with depth $d(k)$, maximum fan-out $f(k)$ and size $s(k)$, there is a family of adders computing $n$ carry bits with depth $d({n/2^{\tau}}) + 4\tau$, maximum fan-out $\max\left\{2, f({n/2^\tau})\right\}$ and size $s({n/2^\tau}) + 5n$. With size $s({n/2^\tau}) + {5.5n}$, we can achieve the same depth and a maximum fan-out of at most $\max{\left\{2, f({n/2^\tau})\right\}}$. We apply $\tau$ Brent-Kung steps and construct the remaining adder for ${n/2^{\tau}}$ from the given adder family. Figure \[fig:krap-reduc\] shows the situation for $\tau = 1$. The simple application of $\tau$ Brent-Kung steps would achieve the claimed depth and fan-out result, except with at most $2n$ additional $2$-input prefix gates (because we will never add more prefix gates than are present in the Brent-Kung prefix graph) and thus with $6n$ additional logic gates. 
To see that $5n$ logic gates are enough, we show that we can omit the propagate signal computation for the parity-correcting part of the Brent-Kung step. Such a reduced output prefix gate is shown in Figure \[fig:pfx-to-bk-output-gate\]. With this construction, note that for $i$ even, we have computed $(x, y) = z_i \circ \dots \circ z_1$. For $z_{i+1} = (y_{i+1}, x_{i+1})$, the carry bit arising from position $i+1$ is $c_{i+2} = x_{i+1} \vee (y_{i+1} \wedge y)$, which uses two gates. It follows that a Brent-Kung step uses only the propagate signals at the inputs. For the next Brent-Kung step, the inputs are the $n/2$ pairs $z_n \circ z_{n-1}; \dots; z_2 \circ z_1$, therefore we need three logic gates per prefix gate for the reduction step. Note that in Figure \[fig:brent-pfx\], the propagate signal at a gate is used if and only if there is a vertical line from this gate to another prefix gate (and not to an output or repeater). These lines exist only in the “upper half” of the adder, i.e. the parts with depth $\leq {{\log_{2}}n}$. Since parity correction occurs exclusively in the lower half with depth $> {{\log_{2}}n}$, the propagate signals from parity correction steps are never used. As in the Brent-Kung prefix graph, $\frac{n}{2}$ repeaters can be used to distribute the fan-out and reduce the maximum fan-out of the parity-correcting gates to two (see also Figure \[fig:brent-pfx\]). The fact that the refined Brent-Kung step does not require the inner adder to provide the propagate signals, which a prefix graph adder would provide, allows us to use the multi-input generate adder with the size and depth bounds stated in Theorem \[thm:generalized-adder\], and which omits the last $r$ rows of [And]{} gates (hatched gates in Figure \[fig:kogge-aug\]) in the augmented Kogge-Stone [And]{}-prefix graph. Lemma \[lem:krap\] can be used to achieve different trade-offs.
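One such trade-off can be explored numerically. The sketch below (function names ours) combines the bookkeeping of Lemma \[lem:krap\] with the bounds of Theorem \[thm:generalized-adder\] as the inner adder family, using the weaker $8nr^22^r$ size coefficient throughout for simplicity:

```python
import math

def R(m):                                   # shorthand for ceil(sqrt(log2 m))
    return math.ceil(math.sqrt(math.log2(m)))

def inner_depth(m):                         # depth bound of Theorem [thm:generalized-adder]
    return math.log2(m) + 5 * R(m) + 2

def inner_size(m):                          # weaker size bound 8 m R^2 2^R, valid for all m
    return 8 * m * R(m) ** 2 * 2 ** R(m)

def composed(n, tau):                       # Lemma [lem:krap], fan-out-two variant
    m = n // 2 ** tau                       # inner instance size after tau steps
    return inner_depth(m) + 4 * tau, inner_size(m) + 5.5 * n

for e in range(8, 30):                      # spot-check the constants of Theorem [all carry bits]
    n = 2 ** e
    tau = math.ceil(math.sqrt(e) + 2 * math.log2(R(n)))
    depth, size = composed(n, tau)
    assert size <= 13.5 * n
    assert depth <= e + 8 * R(n) + 6 * math.ceil(math.log2(R(n))) + 2
print("composed bounds within 13.5 n and the stated depth for n = 2^8 .. 2^29")
```

This is an arithmetic check of the stated bounds for sampled $n$, not a proof; the formal argument follows in the next subsection.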
In particular, constructions for all carry bits of size up to $n^{1+o(1)}$ can be turned into linear-size circuits with the same asymptotic depth or depth guarantee, since we could choose $\tau = o(1) {\log_{2}}n$. This works for prefix graphs and logic circuits; for example with $\tau = {\log_{2}}{\log_{2}}n$, the Kogge-Stone prefix graph will have size $3n$, depth ${\log_{2}}n + 2{\log_{2}}{\log_{2}}n$ and fan-out bounded by two in terms of prefix gates [@han+carlson]. While the technique in Lemma \[lem:krap\] is essentially a $2$-input prefix gate construction, the main result of [@krapchenko] cannot be constructed using only prefix gates. Adders with Asymptotically Minimum Depth, Linear Size, and Fan-Out Two ---------------------------------------------------------------------- By combining Theorem \[thm:generalized-adder\] and Lemma \[lem:krap\], we get an adder of asymptotically minimum depth, linear size and with fan-out at most two. There is an adder for $n$ inputs of size bounded by ${13.5 n}$ with depth $${\log_{2}}n + 8 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} + 6 {\left\lceil {\log_{2}}{\left\lceil \sqrt{{\log_{2}}n} \right\rceil} \right\rceil} + 2$$ and maximum fan-out two. If $n\ge 4096$, the size can be bounded by ${9.5}n$. \[all carry bits\] We apply Lemma \[lem:krap\] with $\tau = {\left\lceil \sqrt{{\log_{2}}n}+ 2 {\log_{2}}{{\left\lceil \sqrt{{\log_{2}}n} \right\rceil}} \right\rceil}$ and use an adder for $n/2^{\tau}$ inputs according to Theorem \[thm:generalized-adder\] as an inner adder. From the proof of Lemma \[lem:krap\], we have seen that the output correction of the Brent-Kung step does not require propagate signals from the inner adder. So the fan-out is indeed two. 
Using (\[eqn:mulit-input-generate-adder-small-n\]), this results in an adder of size $$\begin{array}{rl} & {\left\lceil 8\frac{n}{2^\tau} 2^{{\left\lceil \sqrt{{\log_{2}}{\frac{n}{2^\tau}}} \right\rceil}} {\left\lceil \sqrt{{\log_{2}}{\frac{n}{2^\tau}}} \right\rceil}^2 \right\rceil} + {5.5n}\\ \le & {\left\lceil 8\frac{n}{2^{{\left\lceil \sqrt{{\log_{2}}n} \right\rceil}+ 2 {\log_{2}}{{\left\lceil \sqrt{{\log_{2}}n} \right\rceil}}}} 2^{{\left\lceil \sqrt{{\log_{2}}{n}} \right\rceil}} {\left\lceil \sqrt{{\log_{2}}{n} } \right\rceil}^2 \right\rceil} + {5.5n} \\ \le &8n+{5.5n} = {13.5 n}. \end{array}$$ If $n\ge 4096$, we have $n/2^{\tau} \ge 16$, which allows us to apply the alternative bound (\[eqn:mulit-input-generate-adder-large-n\]) to achieve a size bound of $9.5n$. The depth is $$\begin{array}{rl} {\log_{2}}{\frac{n}{2^\tau}} + 5{\left\lceil \sqrt{{\log_{2}}\frac{n}{2^\tau}} \right\rceil} + 2 + 4\tau & ={\log_{2}}{n} + 5{\left\lceil \sqrt{{\log_{2}}\frac{n}{2^\tau}} \right\rceil} + 2 + 3\tau \\ &\le {\log_{2}}n + 8 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} + 6 {\left\lceil {\log_{2}}{\left\lceil \sqrt{{\log_{2}}n} \right\rceil} \right\rceil} + 2, \end{array}$$ where we are using $\tau \le {\left\lceil \sqrt{{\log_{2}}n} \right\rceil}+ 2 {\left\lceil {\log_{2}}{{\left\lceil \sqrt{{\log_{2}}n} \right\rceil}} \right\rceil}$ for the inequality. From Theorem \[all carry bits\], we can easily conclude our main result stated in Section \[sec:our\_contribution\]. Technology Mapping {#sec:technology-mapping} ================== In this section we show that our construction from Theorem \[all carry bits\] can be transformed into an adder using only <span style="font-variant:small-caps;">Nand</span>, <span style="font-variant:small-caps;">Nor</span>, and <span style="font-variant:small-caps;">Not</span> gates, which are faster in current CMOS technologies. This increases the depth by one and the size by a small constant factor.
There is an adder for $n$ inputs using only <span style="font-variant:small-caps;">Nand</span>, <span style="font-variant:small-caps;">Nor</span>, and <span style="font-variant:small-caps;">Not</span> gates. Its size is bounded by ${(18+\frac{1}{3})n}$, the depth is at most $${\log_{2}}n + 8 {\left\lceil \sqrt{{\log_{2}}n} \right\rceil} + 6 {\left\lceil {\log_{2}}{\left\lceil \sqrt{{\log_{2}}n} \right\rceil} \right\rceil} + 3,$$ and the maximum fan-out is two. \[all carry bits inverted\] If $n \ge 4096$, the size is bounded by ${(15+\frac{5}{6})n}$. In the next two sections, we show how to transform the two main components of our construction, the Brent-Kung steps and the multi-input multi-output generate gate adder, into circuits using only the desired gates. Mapping Brent-Kung Steps {#sec:mapping-brent-kung} ------------------------ In the reduction step, Brent-Kung steps can be implemented using <span style="font-variant:small-caps;">Nand</span>/<span style="font-variant:small-caps;">Not</span> prefix gates as shown in Figure \[fig:pfx-nandnot\]. Similarly, the reduced output reduction gate in Figure \[fig:pfx-to-bk-output-gate\] can be realized by two <span style="font-variant:small-caps;">Nand</span> gates and one <span style="font-variant:small-caps;">Not</span> gate, i.e. by eliminating the two rightmost gates in Figure \[fig:pfx-nandnot\]. The modified prefix gates do not increase the depth of the Brent-Kung step, and increase the size by a constant factor less than $\frac{5}{3}$. Mapping Multi-Input Multi-Output Generate Adders ------------------------------------------------ We want to transform a multi-input multi-output generate gate adder using DeMorgan’s laws. For easier understanding, we first insert repeaters so that the gates can be arranged in rows, such that all input signals for gates in odd rows are computed in even rows and vice versa.
This bipartite structure is already given in the augmented Kogge-Stone [And]{}-prefix graph (see Figure \[fig:kogge-aug\]). Let us now consider a multi-input generate gate shown in Figure \[fig:fo-pfx-small\]. By inserting $2^{r-1}$ repeater gates in the last row of the <span style="font-variant:small-caps;">And</span>-suffix graph, we achieve a uniform depth of this first stage. The red row of <span style="font-variant:small-caps;">And</span> gates and the final $2^{r-1}$-output <span style="font-variant:small-caps;">Or</span> already have a uniform depth. The additional repeaters increase the size by less than a factor of $\frac{5}{3}$. Except for the first row of generate gates, the depth of the generate signals equals the depth of the propagate signals when they are merged in the red row of <span style="font-variant:small-caps;">And</span> gates. In the first row of generate gates, the propagate signals arrive there at depth $2r+1$, while the generate signals arrive at depth $r-1$ (see the proof of Lemma \[lem:depth+size-lemma\]). Thus, if $r$ is odd, we add one additional repeater at every generate input signal so that it arrives at an odd depth at the red level of <span style="font-variant:small-caps;">And</span> gates. Note that we can do this without increasing the overall depth, as we already assumed that the generate signals are delayed by $r+1$ in the proof of Lemma \[lem:depth+size-lemma\]. At most $n$ repeaters are inserted this way. Some generate gates of the multi-input generate gate adder are just buffer trees, i.e. blue boxes in Figure \[fig:fixed-size2\]. They have depth $r-1$, which is odd if and only if the depth $r+1$ of the corresponding paths of generate signals through multi-input generate gates is odd. Thus, they preserve the bipartite structure.
Now we can use the bipartite structure to transform the multi-input multi-output generate adder into a circuit consisting of <span style="font-variant:small-caps;">Nand</span>, <span style="font-variant:small-caps;">Nor</span>, and <span style="font-variant:small-caps;">Not</span> gates. In our construction we will maintain the following characteristics. Inputs to an odd row, i.e. outputs of an even row, will be the original function values, while inputs to an even row, i.e. outputs of an odd row, will be the negated original function values. We achieve this by transforming gates as follows: Repeaters are always transformed into <span style="font-variant:small-caps;">Not</span> gates. In odd rows, we translate <span style="font-variant:small-caps;">And</span> gates into <span style="font-variant:small-caps;">Nand</span> gates and <span style="font-variant:small-caps;">Or</span> gates into <span style="font-variant:small-caps;">Nor</span> gates. In even rows, we translate <span style="font-variant:small-caps;">And</span> gates into <span style="font-variant:small-caps;">Nor</span> gates and <span style="font-variant:small-caps;">Or</span> gates into <span style="font-variant:small-caps;">Nand</span> gates. If the number of rows is odd, we add one row of <span style="font-variant:small-caps;">Not</span> gates to correct the otherwise negated outputs of the adder. Together with the $n$ repeaters that we insert behind each generate input signal if $r$ is odd, this makes $2n$ gates that can be accounted for by the size of the augmented Kogge-Stone [And]{}-prefix graph (see Figure \[fig:kogge-aug\]), which is at least $3n$ if $r\ge 1$. Thus, the overall size of the generate adder increases by at most a factor of $\frac{5}{3}$. Together with the mapping of the Brent-Kung step in Section \[sec:mapping-brent-kung\], this proves Theorem \[all carry bits inverted\].
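The row-wise DeMorgan transformation can be checked exhaustively on a toy layered circuit. The sketch below is illustrative (the three-row example circuit and all names are ours); it verifies that the mapped circuit, with a final <span style="font-variant:small-caps;">Not</span> row when the row count is odd, computes the same outputs:

```python
from itertools import product

AND, OR, REP = "and", "or", "rep"

def nand(a): return not all(a)
def nor(a):  return not any(a)

def eval_rows(rows, inputs, mapped):
    """Evaluate a layered circuit; if mapped, apply the odd/even row translation."""
    vals = list(inputs)
    for idx, row in enumerate(rows, start=1):
        odd = (idx % 2 == 1)
        nxt = []
        for gate, srcs in row:
            a = [vals[s] for s in srcs]
            if not mapped:                     # original And/Or/repeater circuit
                nxt.append(all(a) if gate == AND else any(a) if gate == OR else a[0])
            elif gate == REP:
                nxt.append(not a[0])           # repeaters always become Not
            elif gate == AND:
                nxt.append(nand(a) if odd else nor(a))   # And -> Nand (odd) / Nor (even)
            else:
                nxt.append(nor(a) if odd else nand(a))   # Or -> Nor (odd) / Nand (even)
        vals = nxt
    return vals

rows = [
    [(AND, (0, 1)), (OR, (1, 2)), (REP, (3,))],   # row 1 (odd)
    [(OR, (0, 1)), (AND, (1, 2))],                # row 2 (even)
    [(AND, (0, 1))],                              # row 3 (odd)
]
for bits in product([False, True], repeat=4):
    ref = eval_rows(rows, bits, mapped=False)
    got = eval_rows(rows, bits, mapped=True)
    if len(rows) % 2 == 1:                        # odd row count: append a Not row
        got = [not v for v in got]
    assert got == ref
print("DeMorgan row mapping verified")
```

The invariant from the text is visible in the evaluation: odd-row outputs are negated, even-row outputs carry the original values.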
Conclusion {#conclusion .unnumbered} ========== We introduced the first full adder with an asymptotically optimum depth, linear size and a maximum fan-out of two. Asymptotically, this is twice as fast and significantly smaller than the Kogge-Stone adder, which is often considered the fastest adder circuit, as well as most other prefix graph adders. For small $n$, Theorem \[all carry bits\] will not immediately improve upon existing adders. When focusing on speed for small $n$, one would rather omit the size reduction from Section \[sec:linearizing-size\]. Without the size reduction, our results in Lemma \[lem:depth+size-lemma\] match the depth of the Kogge-Stone adder for 512 inputs and improve on it for 2048 inputs, where $r = 3, k = 4$ yields an adder with depth 21 for our construction, but the adder of Kogge-Stone will have depth 22. Today’s microprocessors contain adders for a few hundred bits. However, adders for 2048-bit numbers exist already today on cryptographic chips. Thus we expect that adders based on our ideas will find their way into hardware. R.P. Brent. *On the Addition of Binary Numbers.* IEEE Transactions on Computers 19.8 (1970): 758–759. R.P. Brent and H.-T. Kung. *A regular layout for parallel adders.* IEEE Transactions on Computers 100.3 (1982): 260–264. S. Chatterjee, A. Mishchenko, R. Brayton, X. Wang, and T. Kam. *Reducing structural bias in technology mapping.* IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems 25.12 (2006): 2894–2903. F.E. Fich. *New bounds for parallel prefix circuits.* Proceedings of the 15th Annual ACM Symposium on Theory of Computing (STOC). ACM, 1983. S.B. Gashkov, M.I. Grinchuk, and I.S. Sergeev. *On the construction of schemes for adders of small depth*. Diskretnyi Analiz i Issledovanie Operatsii, Ser. 1, 14.1 (2007): 27–44 (in Russian). English translation in Journal of Applied and Industrial Mathematics 2.2 (2008): 167–178. M.I. Grinchuk.
*Sharpening an upper bound on the adder and comparator depths*. Diskretnyi Analiz i Issledovanie Operatsii, Ser. 1, 15.2 (2008): 12–22 (in Russian). English translation in Journal of Applied and Industrial Mathematics 3.1 (2009): 61–67. T. Han and D.A. Carlson. *Fast Area Efficient VLSI Adders.* 8th IEEE Symposium on Computer Arithmetic (1987): 49–56. S. Held and S. T. Spirkl. *Fast Prefix Adders for Non-Uniform Input Arrival Times.* Algorithmica (2015). H.J. Hoover, M.M. Klawe, and N.J. Pippenger. *Bounding Fan-out in Logical Networks*. Journal of the ACM (JACM) 31.1 (1984): 13–18. K. Keutzer. *DAGON: technology binding and local optimization by DAG matching.* Papers on Twenty-five years of electronic design automation, ACM (1988): 617–624. S. Knowles. *A family of adders.* Proceedings of 14th IEEE Symposium on Computer Arithmetic (1999): 277–281. P.M. Kogge and H.S. Stone. *A parallel algorithm for the efficient solution of a general class of recurrence equations.* IEEE Transactions on Computers C-22.8 (1973): 786–793. V.M. Krapchenko. *Asymptotic estimation of addition time of a parallel adder*. Problemy Kibernetiki 19 (1967): 107–122 (in Russian). English translation in System Theory Res. 19 (1970): 105–122. V.M. Krapchenko. *On Possibility of Refining Bounds for the Delay of a Parallel Adder*. Diskretnyi Analiz i Issledovanie Operatsii, Ser. 1, 14.1 (2007): 87–93. English translation in Journal of Applied and Industrial Mathematics 2.2 (2008): 211–214. R.E. Ladner and M.J. Fischer. *Parallel prefix computation.* Journal of the ACM (JACM) 27.4 (1980): 831–838. O.B. Lupanov. *A class of schemes of functional elements*. Problemy Kibernetiki 7 (1962): 61–114. English translation in Problems of Cybernetics 7 (1963): 68–136. Y.P. Ofman. *The algorithmic complexity of discrete functions*. Doklady Akademii Nauk SSSR 145.1 (1962): 48–51. English translation in Soviet Physics Doklady 7 (1963): 589–591. D. Rautenbach, C. Szegedy and J. Werber.
*On the cost of optimal alphabetic code trees with unequal letter costs.* European Journal of Combinatorics 29.2 (2008): 386–394. I. Sergeev. *On the complexity of parallel prefix circuits.* Electronic Colloquium on Computational Complexity (ECCC). Vol. 20. 2013. J. Sklansky. *Conditional-sum addition logic.* IRE Transactions on Electronic Computers 2 (1960): 226–231. I. Wegener. *The complexity of Boolean functions*. Wiley-Teubner (1987). J. Werber, D. Rautenbach and C. Szegedy. *Timing optimization by restructuring long combinatorial paths.* Proceedings of the 2007 IEEE/ACM international conference on Computer-aided design (2007): 536–543.
--- author: - 'O. Jahn' - 'V. E. Leshchenko' - 'P. Tzallas' - 'A. Kessel' - 'M. Krüger' - 'A. Münzer' - 'S. A. Trushin' - 'M. Schultze' - 'G. D. Tsakiris' - 'S. Kahaly' - 'A. Guggenmos' - 'D. Kormin' - 'L. Veisz' - 'F. Krausz' - 'Zs. Major' - 'S. Karsch' title: 'Supplementary Material for Towards temporal characterization of intense isolated attosecond pulses from relativistic surface high harmonics.' --- Data acquisition and processing for CEP dependence ================================================== In this section we provide the technical details on the data acquisition and processing leading to the CEP dependence of the measured harmonics. Experimental realization of a CEP scan in a low-repetition-rate system with not perfectly stable laser pulse parameters is a challenging but feasible task. The common approach to overcome such a problem is to log all parameters of the laser system and select the shots with an acceptable deviation from the mean values in the later analysis. In our case, the root mean squared (RMS) energy instability of the driving pulses was 4% which can cause a significant error in the f-2f measurements. To exclude noise and uncertainties originating from system instabilities, OPCPA energy and spectrum were logged in parallel to the harmonic spectrum and f-2f signal. Afterwards, the shots for which the OPCPA energy and central wavelength deviated by not more than 1% from the mean values were selected which, in particular, limits the error of the f-2f interferometer originating from input energy instability to less than $\sim$200mrad \[20\]. After this sorting procedure about 20% of the collected data were left for the final analysis. Finally, each shot was assigned with a CEP value according to the measured f-2f phase data. To average over the target instabilities and other uncontrolled generation conditions, the full CEP-range of 0–2$\pi$ was divided into 12 intervals of equal size into which the shots were sorted. 
The spectra averaged over each interval are shown in Fig. 2(a) in the main paper. Temporal delay variation versus harmonic position shift ======================================================= In this section we give an estimate of the average temporal delay variation of the attosecond pulse train resulting from the shift of the harmonic position in the detected spectrum. A train of pulses separated by a delay of $T$ in the time domain results in spectral modulations in the frequency domain with a separation of $F=1/T$ between the maxima (which we will call harmonics in the following). The frequency of the $n$-th harmonic is then $F_n = n\times F$. When the delay $T$ changes by $\Delta T$, the frequency spacing $F$ between the harmonics changes accordingly by $\Delta F=1/T-1/(T+\Delta T)\approx\Delta T/T^2$. For the $n$-th harmonic this results in a spectral shift of $\Delta F_n=n\times \Delta F\approx n\times \Delta T/T^2$. Conversely, a frequency shift of the harmonics can be related to a change in the time delay. In the particular case presented in the paper when the harmonic position is shifted by one harmonic order, the change of the delay can be derived from the following relation: $ n/T=(n+1)/(T+\Delta T), $ which yields the equation (1) used in the paper $$\Delta T=T/n.$$ Denting estimation from the spectral beating period =================================================== Let us now develop the above scenario further and consider a pulse train with an uneven spacing between pulses.
In the simplest case with three pulses and a delay of $T_1=T$ between 1$^{\rm st}$ and 2$^{\rm nd}$ pulses and a delay of $T_2=T+\delta T$ between 2$^{\rm nd}$ and 3$^{\rm rd}$ pulses, the spectral intensity is given by $$\begin{aligned} I(f) &= I_0(f)\times \Big|1+e^{i2\pi fT_1}+e^{i2\pi fT_2}\Big|^2 \nonumber\\ &= I_0(f)\times \Big(3+2\cos\big(2\pi fT\big) + \nonumber\\ &\hspace{0.5cm} +2\cos\big(2\pi f(T+\delta T)\big)+2\cos\big(2\pi f\delta T\big)\Big) \nonumber\\ &\approx I_0(f)\times \Big(3+2\cos\big(2\pi fT\big) + \nonumber\\ &\hspace{0.5cm} + 2\cos\big(2\pi fT\big)\cos\big(2\pi f\delta T\big)+2\cos\big(2\pi f\delta T\big)\Big) \nonumber\\ &=I_0(f)\times\Big(1+8\cos^2\big(\pi fT\big)\cos^2\big(\pi f\delta T\big)\Big) \end{aligned}$$ with $f$ being the frequency. An identical spectrum $I_0(f)$ is assumed for all three pulses stacked with the above defined time delays. In this equation the first cosine-factor can be identified as the previously described harmonic modulation of the spectrum, while the second factor further modulates this spectrum with a beating period of $f_{\text{beating}}=1/\delta T$. For a given beating period, e.g. as extracted from our experimental data, the delay variation can be determined by $$\delta T=1/f_{\text{beating}}.$$ As the next step, we can use this delay variation to estimate the plasma denting, i.e. the relative shift of the reflection point from the plasma mirror between the different events of attosecond pulse generation along the pulse train. As illustrated in Fig. \[fig:denting\], a shift of the mirror surface by $d$ results in an additional optical path length $\Delta L$ which is geometrically determined by $$\Delta L=AB+BC-AD=\frac{2d}{\cos(\alpha)}-2d\tan(\alpha)\sin(\alpha)=2d\cos(\alpha),$$ where $\alpha$ is the angle of incidence.
Solving this equation for $d$ and replacing $\Delta L$ with the delay variation $\delta T$ times the speed of light $c$ results in equation (\[d\]) used in the paper for the denting estimation: $$\label{d} d=\Delta L/(2\cos(\alpha))=c\times \delta T/(2\cos(\alpha)).$$ ![Sketch illustrating the altered optical path lengths for a shifted plasma mirror surface (denting).[]{data-label="fig:denting"}](denting.jpg){width="0.9\linewidth"} CEP dependence of the harmonic spectrum and comparison with theory ================================================================== Here we give details on the origin of the CEP dependence of the harmonic spectrum and put our experimental data into perspective with the established models of relativistic SHHG. ![**(a)** PIC simulations of the dependence of the harmonic spectrum (left panels) and of the integrated harmonic yield (right panels) on the CEP of the driving field for $L_p=0.05\lambda$. **(b)** CEP dependence of the temporal structure of the attosecond pulse train for $L_p=0.05\lambda$ (200 nm thick Al filter is applied). **(c)** Energy (blue-dotted line) and intensity (orange-dotted line) ratio between the main attosecond pulse and the rest of the train. In these simulations $a_0=3$, $\tau=7\thinspace \text{fs}$, $L_p=0.05\lambda$.[]{data-label="fig:temp005"}](temp_str_005.eps){width="\linewidth"} As written in the main text of the paper, there are several publications that describe the basic mechanisms underlying the CEP shift of the harmonics and the generation of an isolated attosecond pulse \[14, 24, 27, 28\], which also predict the key importance of the plasma scale length optimization. In order to investigate the influence of the plasma scale length on the CEP dependence of the spectral and temporal structures of the generated harmonics, we performed a PIC simulation for $L_p=0.05\lambda$, i.e. a smaller scale length compared to the case of $L_p=0.2\lambda$ presented in the main text, at otherwise identical laser parameters.
The result is displayed in Fig. \[fig:temp005\] and shows no CEP dependence of the harmonic spectra for $L_p=0.05\lambda$ and the generation of two nearly equal attosecond pulses for all CEP values. The comparison of these results with the simulations for $L_p=0.2\lambda$ presented in the paper demonstrates the crucial importance of the plasma scale length optimization for the generation of isolated attosecond pulses. In particular, the analysis of the plasma dynamics in PIC simulations reveals clear differences in the electron plasma density. This is shown in Fig. \[fig:ebuch02\], which depicts the electron density right before the generation of the most intense attosecond pulse for two different plasma scale lengths. While for $L_p = 0.2\lambda$ an electron bunch with a layer thickness of 5 nm is created, no such feature is present for $L_p= 0.05\lambda$. This can be related to the steeper density gradient in the latter case, where electrons at the plasma edge are simply pushed into the bulk rather than bunched into a thin layer. A more detailed analysis of this effect can be found in \[27\]. The simulations show that for $L_p = 0.2\lambda$ the position of the electron nano-bunch shifts deeper inside the plasma with every subsequent optical cycle. This effect is known as plasma denting \[26\] and causes an increase of the temporal spacing between the pulses in the generated attosecond pulse train. Since the magnitude of each shift is determined by the field strength of the previous optical cycle, which depends on the CEP, there is a dependence of the attosecond pulse spacing and thus the harmonic spectral structure on the CEP of the driving field. (The connection between the pulse temporal spacing and the harmonic spectral structure was discussed above in section II).
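The spectral relations of the two preceding sections can be checked numerically. The sketch below verifies the one-harmonic-shift relation $\Delta T=T/n$, the exact equivalence of the factored three-pulse expression with the approximated sum of cosines, the size of the neglected cross term, and the denting formula; all delay values and the 45-degree incidence angle are illustrative choices, not the experimental parameters.

```python
import numpy as np

# Uniform train: shifting the n-th harmonic by one order corresponds to a
# delay change Delta T = T / n (equation (1) of the paper).
T, n = 3.3e-15, 10                 # 3.3 fs spacing, 10th harmonic (illustrative)
dT = T / n
assert np.isclose(n / T, (n + 1) / (T + dT))

# Three-pulse train with uneven spacings T and T + dT2.
f = np.linspace(0.0, 20.0 / T, 2001)   # frequency axis up to the 20th harmonic
dT2 = 0.05 * T                         # spacing variation delta T (illustrative)
exact = np.abs(1 + np.exp(2j*np.pi*f*T) + np.exp(2j*np.pi*f*(T + dT2)))**2

# The factored form 1 + 8 cos^2(pi f T) cos^2(pi f dT2) equals the
# approximated sum of cosines identically ...
factored = 1 + 8 * np.cos(np.pi*f*T)**2 * np.cos(np.pi*f*dT2)**2
approx_sum = (3 + 2*np.cos(2*np.pi*f*T)
                + 2*np.cos(2*np.pi*f*T)*np.cos(2*np.pi*f*dT2)
                + 2*np.cos(2*np.pi*f*dT2))
assert np.allclose(factored, approx_sum)

# ... and differs from the exact pattern only by the neglected cross term.
assert np.allclose(exact - approx_sum,
                   -2*np.sin(2*np.pi*f*T)*np.sin(2*np.pi*f*dT2))

# Denting estimate d = c * deltaT / (2 cos(alpha)), assuming 45 deg incidence.
c, alpha = 299792458.0, np.deg2rad(45.0)
d = c * dT2 / (2 * np.cos(alpha))
print(f"denting for deltaT = {dT2*1e18:.0f} as: d = {d*1e9:.1f} nm")
```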
![Plasma electron density right before the generation of the most intense attosecond pulse for CEP=-0.32 rad, $L_p=0.2\lambda$ (a) (temporal structure is presented in the paper in Fig.4) and $L_p=0.05\lambda$ (b).[]{data-label="fig:ebuch02"}](Ebunch02.eps){width="\linewidth"} A further analysis of the results from our PIC simulation at $L_p = 0.2\lambda$ reveals that the amplitude of the reflected field exceeds that of the incoming one by up to 50% (see Fig. \[fig:refE\]). This effect requires a temporary storage of energy and cannot be explained within the relativistic oscillating mirror (ROM) model \[10\]. However, such a mechanism exists in the coherent synchrotron emission (CSE) \[24\] and the relativistic electronic spring (RES) model \[28\] in the form of the already mentioned electron nano-bunch: via the charge separation between the electrons and the quasi-immobile ion background, energy is accumulated in the plasma and released later. If this occurs in a proper phase with the driving field and under ultra-relativistic conditions ($a_0\sim30$), a strong enhancement of the electric field is observed, leading to the generation of a giant attosecond pulse \[24, 28\]. Under our experimental conditions ($a_0\approx3$) this enhancement is less pronounced but still improves the degree of isolation of the strongest XUV pulse. The relatively low intensity is a probable reason why our experimental results do not follow the spectral scaling of the harmonics predicted by the CSE model. One of the most important differences between the models is the exponent $n$ in the spectral intensity decay power law ($I_\text{XUV}\propto\omega^{-n}$). The ROM model \[10\] predicts $n=8/3$ whereas in the CSE model \[24\] it is $n=4/3$. A fit to the experimental data presented in the paper yields an exponent of $n=2.8\pm0.3$, which fits the predictions of the ROM model well.
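Extracting such a decay exponent amounts to a straight-line fit in log-log coordinates. A minimal sketch with synthetic data; the harmonic fit window, the deterministic "noise" model, and the use of the ROM value $8/3$ to generate the data are all assumptions for illustration, not the actual analysis:

```python
import numpy as np

# Synthetic spectrum obeying I(omega) ~ omega^(-n) with a small deterministic
# perturbation; n_true is set to the ROM prediction 8/3 purely for illustration.
n_true = 8.0 / 3.0
omega = np.arange(10.0, 31.0)              # harmonic orders in the fit window
I = omega**(-n_true) * (1 + 0.05 * np.sin(omega))

# A power law is a straight line in log-log coordinates; the exponent is -slope.
slope, intercept = np.polyfit(np.log(omega), np.log(I), 1)
n_fit = -slope
print(f"fitted exponent n = {n_fit:.2f}")
assert abs(n_fit - n_true) < 0.15
```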
In summary, although the frequency scaling law of the measured harmonic spectra fits the predictions of the ROM model well, the detected CEP dependence of the generated harmonics and the demonstrated possibility to generate an isolated attosecond pulse using a 3-cycle driving field are clear indications of a CSE-like behavior. ![Incoming electric field and the field reflected from the plasma mirror. In these simulations $a_0=3$, $\tau=7\thinspace \text{fs}$, $L_p=0.2\lambda$, CEP=-0.32rad.[]{data-label="fig:refE"}](refE.eps){width="0.7\linewidth"} Technical details on the streaking setup and simulations ======================================================== ![NIR field used in the streaking simulations.[]{data-label="fig:NIR"}](NIR_field.eps){width="0.7\linewidth"} ![Results of a streaking simulation assuming an attosecond pulse train with equal pulses separated by 3.3fs, using the NIR field presented in Fig. \[fig:NIR\].[]{data-label="fig:streaking"}](streaking.eps){width="0.9\linewidth"} ![Goodness-of-fit parameter (coefficient of determination, $R^2$) between experimental data and simulated streak traces for the temporal structure obtained from PIC simulation (blue dots) and for traces consisting of pulses with equal amplitude and temporal spacing (red square dots). Note that the result for another XUV pulse train model with a Gaussian envelope is similar to the presented one (red point) when the envelope of the train has a FWHM duration of $N\times T$, where $N$ is the number of pulses and $T$ is the pulse separation.[]{data-label="fig:fit"}](fit_accuracy.eps){width="0.9\linewidth"} In our streaking setup, which is schematically presented in Fig. 1 in the paper, the beam reflected from the target is recollimated and sent into an assembly consisting of a split mirror arrangement, a neon gas jet, and a time-of-flight (TOF) photoelectron (PE) spectrometer.
To record a streaking trace, the NIR and XUV pulses have to be first separated and then overlapped in space and time inside the gas jet. This is achieved with a split-mirror that consists of a stationary annular outer mirror and an 8-mm-diameter inner mirror on a piezo-electric delay stage. The outer mirror has a standard silver coating to reflect the NIR beam while the inner mirror is a specially designed multilayer XUV bandpass reflector with 3eV bandwidth centered at 38eV (XUV38BW3 from Ultrafast Innovations GmbH). Both mirrors have a focal length of $f$=25cm and are confocally aligned with piezo-electric actuators. In front of the split-mirror two filters were mounted: first, a pellicle with an aluminum foil in the center to block any NIR radiation incident onto the XUV mirror; second, an RG780 glass with a hole in the center to block any second harmonic radiation, containing about 30% of the total energy, incident onto the silver mirror. Passing the pellicle (2$\mu$m thickness) and the RG780 filter (0.5mm thickness) stretched the NIR pulse to a FWHM duration of 14.5fs. Note that this NIR pulse elongation results in a broadening of the streaking trace but does not affect the number of generated attosecond pulses as it takes place after the SHHG process. The temporal structure of the NIR field is shown in Fig. \[fig:NIR\] and is used in the streaking simulations. It was calculated using the measured spectrum after reflection from the target and the dispersion of the filter provided by the supplier. The streaking traces were simulated in the strong field approximation \[32\] assuming a laser intensity of $10^{11}\thinspace \text{W/cm}^2$. The results for different CEP values of the NIR field were averaged in order to be able to compare the simulation with the experimental results obtained under random CEP conditions.
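The classical picture behind such a streaking simulation can be sketched in a few lines: an electron released at time $t$ with asymptotic momentum $p_0$ is shifted by the NIR vector potential, $p_{\rm final}=p_0-A(t)$ (atomic units). This is only a minimal illustration, not the SFA code used for the simulations; the neon ionization potential and the monochromatic 800 nm field at $10^{11}\,\mathrm{W/cm^2}$ are standard values inserted as assumptions.

```python
import numpy as np

au_time = 2.4188843e-17                 # atomic unit of time [s]
Ip_ne = 21.56 / 27.2114                 # neon ionization potential [a.u.]
E0 = 38.0 / 27.2114 - Ip_ne             # central photoelectron energy [a.u.]
p0 = np.sqrt(2.0 * E0)

w_nir = 0.057                           # ~800 nm NIR frequency [a.u.]
E_peak = np.sqrt(1e11 / 3.51e16)        # field amplitude from I = 3.51e16 E^2
A0 = E_peak / w_nir                     # vector-potential amplitude [a.u.]

# Central streaked energy versus XUV-NIR delay (monochromatic NIR for simplicity)
delay = np.linspace(-10e-15, 10e-15, 400) / au_time
E_streak = 0.5 * (p0 - A0 * np.sin(w_nir * delay))**2 * 27.2114   # [eV]
print(f"streaking modulation ~ {(E_streak.max() - E_streak.min())/2:.2f} eV")
```

The sub-eV modulation obtained here for a weak dressing field is the reason the measured traces are sensitive to the temporal substructure of the XUV train rather than to a strong energy shear.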
The simulations presented in the paper were performed using the attosecond-pulse trains obtained from the PIC simulations with laser and plasma parameters corresponding to the experimental conditions. Here we additionally present (Fig. \[fig:streaking\]) the comparison with a simple model of an XUV train consisting of pulses with equal intensity and a constant temporal spacing of 3.3fs. In order to identify the number of pulses whose streaking trace is most consistent with our experimental data, we used the coefficient of determination ($R^2$) as the “goodness-of-fit” parameter. $R^2$ is defined as follows: $$R^2=1-\frac{\sum_i\big(y_i-f_i\big)^2}{\sum_i\big(y_i-\overline{y}\big)^2},$$ where the $y_i$ are the experimental data points, the $f_i$ are the corresponding fit values, and $\overline{y}$ is the mean value of the data. The fit accuracy of the simulated streaking traces to the experimental data for both the PIC-simulation case and the model-pulse-train case is shown in Fig. \[fig:fit\]. The best fit accuracy of about 0.99 is provided by the XUV trains with less than 4 pulses. The simulations for more than 3 pulses in the train, for example 4 or 6, significantly deviate from the measured data and result in a worse fit accuracy of 0.96 and 0.9, respectively, which is outside the few-percent uncertainty of the $R^2$ estimation determined by the RMS error bars of the experimental data. Thus, we can conclude that the attosecond pulse train in our experiment most probably consists of not more than 3 pulses, resulting in an overall XUV train duration of $<7\thinspace \text{fs}$. Compared to common streaking measurements with gas harmonics and kHz-repetition-rate driving lasers we were facing several challenges: I) the lower repetition rate of 10Hz; II) the limited total number of continuous shots due to the limited target size; III) a further reduction of the amount of useful data due to the necessity to select good shots (with identical performance of the laser system as described in the first section). However, the above issues can be solved in the future.
Namely, points I and III can be addressed with more stable, higher-repetition-rate sources based on thin-disc technology, while quasi-unlimited target types such as spooling tapes should be able to solve point II. During the experimental campaign, however, the limited number of shots prevented us from successfully recording a full streaking trace within the available beam time. Nevertheless, the measured data are sufficient to determine an upper limit for the XUV pulse duration as described in the analysis above.
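For completeness, the $R^2$ goodness-of-fit parameter used in the analysis above can be implemented generically as follows (a minimal sketch, not the actual analysis code; the sample arrays are illustrative):

```python
import numpy as np

def r_squared(y, f):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot between
    experimental data y and fit values f."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Sanity checks: a perfect fit gives 1, predicting the mean gives 0.
y = np.array([1.0, 2.0, 3.0, 4.0])
assert r_squared(y, y) == 1.0
assert r_squared(y, np.full_like(y, y.mean())) == 0.0
```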
--- abstract: 'We determine the transverse system size of the initial non-equilibrium Glasma state and of the hydrodynamically evolving fireball as a function of produced charged particles in p+p, p+Pb and Pb+Pb collisions at the Large Hadron Collider. Our results are consistent with recent measurements of Hanbury-Brown-Twiss (HBT) radii by the ALICE collaboration. Azimuthal anisotropy coefficients $v_n$ generated by combining the early time Glasma dynamics with viscous fluid dynamics in Pb+Pb collisions are in excellent agreement with experimental data for a wide range of centralities. In particular, event-by-event distributions of the $v_n$ agree with the experimental data out to fairly peripheral centrality bins. In striking contrast, our results for p+Pb collisions significantly underestimate the magnitude and do not reproduce the centrality dependence of data for $v_2$ and $v_3$ coefficients. We argue that the measured $v_n$ data and HBT radii strongly constrain the shapes of initial parton distributions across system sizes that would be compatible with a flow interpretation in p+Pb collisions. Alternately, additional sources of correlations may be required to describe the systematics of long range rapidity correlations in p+p and p+Pb collisions.' author: - Björn Schenke - Raju Venugopalan bibliography: - 'spires.bib' title: | Eccentric protons? Sensitivity of flow to system size and shape\ in p+p, p+Pb and Pb+Pb collisions --- The description of ultra-relativistic heavy-ion (A+A) collisions with event-by-event viscous fluid-dynamic models has been extremely successful [@Gale:2013da]. 
In particular the color-glass-condensate (CGC) [@Gelis:2010nm] based IP-Glasma model [@Schenke:2012wb; @Schenke:2012hg] in combination with the viscous fluid dynamic simulation <span style="font-variant:small-caps;">music</span> [@Schenke:2010nt; @Schenke:2010rr; @Schenke:2011bn] provides a consistent description of particle spectra, the anisotropic flow coefficients $v_n$ and their event-by-event distributions [@Gale:2012rq]. Recent measurements at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at CERN have shown striking similarities in the structure of long range pseudo-rapidity correlations between high-multiplicity deuteron-gold (d+Au) [@Adare:2013piz] and proton-lead (p+Pb) collisions [@CMS:2012qk; @Abelev:2012ola; @Aad:2012gla] and peripheral heavy-ion collisions with similar multiplicity. One may thus conclude that the small collision systems are dominated by the same physics, namely collective flow of the produced matter. Indeed, first fluid dynamic calculations have been able to describe certain features of the experimental data in d+Au and p+Pb collisions [@Bozek:2011if; @Bozek:2012gr; @Bozek:2013df; @Bozek:2013uha]. In particular, the observed mass splitting of elliptic flow has been at least qualitatively explained within the fluid dynamic framework [@Bozek:2013ska; @Werner:2013ipa]. The observed long range correlations in pseudo-rapidity are an input in the fluid dynamic framework while the azimuthal structure follows from the system’s collective response to the transverse geometry as in A+A collisions. An explanation of the long range correlations in all collision systems is given in the color-glass-condensate based description of multi-particle production in high energy nuclear collisions [@Gelis:2008sz; @Dusling:2009ni]. 
In addition, this description produces a collimation in azimuth that is compatible with experimental data on the associated yield in p+p and p+A collisions, without any final state interactions [@Dumitru:2010iy; @Dusling:2012iga; @Dusling:2012cg; @Dusling:2012wy; @Dusling:2013oia]. The important question that needs to be answered is whether the physics responsible for the observed anisotropic flow in A+A collisions is qualitatively different from that in high-multiplicity p+p and p+A (d+A) collisions, or whether collective effects are always dominant. We argue that in order to conclude the latter, a systematic quantitative description from central to peripheral A+A to p/d+A to p+p collisions needs to be given within the same theoretical framework. In this letter, we first demonstrate that the IP-Glasma+<span style="font-variant:small-caps;">music</span> model that provided an excellent description of data for central and mid-central A+A collisions at RHIC and LHC continues to provide a good description of the data as we study more and more peripheral heavy-ion events. This holds not only for the mean values of $v_n$ but also their event-by-event distributions. These results are an important validation of the applicability of our model to A+A collisions especially since a recent study concludes that the $v_n$ distributions are not well described by most other initial state models [@Renk:2014jja]. We further demonstrate that the system sizes predicted in the IP-Glasma+<span style="font-variant:small-caps;">music</span> model for p+p, p+A, and A+A collisions are compatible with the experimentally measured Hanbury-Brown-Twiss (HBT) radii [@Abelev:2014pja]. It is however not possible to determine from the radii alone whether fluid dynamic expansion is present in p+p or p+A collisions. We study finally the multiplicity dependence of elliptic and triangular flow in A+A and p+A collisions. 
This requires a proper description of the multiplicity distribution for both systems [@Schenke:2013dpa; @Schenke:2012hg]. We find that while the description of $v_2$ and $v_3$ in peripheral A+A collisions is fairly good, the theoretical results for p+A collisions underpredict the experimental data by factors of up to 4. Given the excellent results of the model for A+A collisions and the various system sizes, the result for p+Pb collisions has dramatic implications. Two equally exciting explanations for the disagreement are possible. Previously discussed multi-particle correlations present in initial gluon production have been ignored in this and all other calculations that are based on collective final state effects. One explanation of our p+Pb results is that these initial state contributions could significantly modify the result for $v_2$ and $v_3$ if final state effects are not able to overpower them–the latter seems to be the case in A+A collisions [@Dusling:2012iga]. Alternatively, the disagreement with the measured $v_2$ and $v_3$ could stem from simplified assumptions about the (spherical) shape of gluon distributions in the proton[^1]. Deformed parton distributions in the proton would lead to larger initial eccentricities within our model and could generate significantly larger anisotropic flow. This implies that the new measurements at RHIC and LHC could provide unprecedented insight into the detailed shape of a proton at high energy [@Miller:2008sq; @Bjorken:2013boa]. We shall later comment on open questions that both these explanations will have to address. We begin our systematic study by demonstrating that, for a fixed shear viscosity to entropy density ratio $\eta/s=0.18$, anisotropic flow data from heavy-ion collisions at LHC is well described by fluid dynamic simulations using the IP-Glasma initial state described in [@Schenke:2013dpa]. 
The IP-Glasma energy density and flow velocities serve as input to the fluid dynamic simulation <span style="font-variant:small-caps;">music</span> as described in [@Gale:2012rq]. Here we choose the initial time $\tau_0=0.4\,{\rm fm}/c$ for the fluid dynamic simulation.[^2] We select centralities based on the gluon multiplicity distribution at $\tau_0$, obtained from $\sim 40,000$ IP-Glasma events. This centrality selection method neglects possible corrections due to entropy production during the fluid dynamic evolution and effects from hadronization. It is however close to the experimental procedure and avoids having to simulate the fluid dynamic evolution for tens of thousands of events. After kinetic freeze-out at $T_{\rm kin~ fo} = 135\,{\rm MeV}$ (chemical freeze-out occurs at $T_{\rm chem~ fo} = 150\,{\rm MeV}$) and resonance decays, we determine $v_n$ for $n\in \{2,3,4,5 \}$ of charged hadrons in every event by first determining the event-plane angle $ \psi_n=(1/n)\arctan( \langle \sin(n\phi)\rangle/\langle \cos(n\phi)\rangle)\,,$ and then computing $v_n=\langle \cos(n(\phi-\psi_n)) \rangle\,,$ where $\langle \cdot \rangle$ are averages over the charged hadron distribution functions. In Fig.\[fig:vnCentMean-gtr0-cut\] we present results for the mean $\langle v_n\rangle$ as a function of centrality compared to experimental results from the ATLAS collaboration [@Aad:2013xma]. Here we study significantly more peripheral events than in previous studies [@Gale:2012rq]. The agreement is excellent from the most central to 50% central events. For more peripheral events our results are up to 10% larger than the experimental data, with differences being largest for $v_2$. Between 0% and 20%, the calculated $v_3$ slightly underestimates the experimental result. ![(Color online) The event averaged $p_T$-integrated $\langle v_n \rangle$ as a function of centrality compared to experimental data from the ATLAS collaboration [@Aad:2013xma]. 
\[fig:vnCentMean-gtr0-cut\]](vnCentMean-gtr0-cut "fig:"){width="48.00000%"}\ We next present the computed event-by-event distributions of $v_2$, $v_3$, and $v_4$ and the corresponding initial state eccentricities defined as $\varepsilon_n = \sqrt{\langle r^n \cos(n\phi)\rangle^2+\langle r^n \sin(n\phi)\rangle^2}/\langle r^n \rangle$, where $\langle \cdot \rangle$ is the average weighted by the deposited energy density. We compare to data in the respective maximally peripheral bin measured by the ATLAS collaboration [@Aad:2013xma]. All distributions are scaled by their mean value. More central bins have been studied previously in [@Gale:2012rq]. The $\varepsilon_3$ distribution already provides a good description of the measured $v_3$ distribution, while the $\varepsilon_2$ and $\varepsilon_4$ distributions are significantly narrower. However, non-linear effects in the fluid dynamic evolution modify the shape of the distributions such that the calculated $v_n$ distributions agree with the experimental result. This result strongly supports the importance of fluid dynamics in heavy-ion collisions. We have checked that the scaled distributions are only weakly dependent on the value of $\eta/s$, as was previously found in [@Niemi:2012aj]. ![(Color online) Data points correspond to the event-by-event distribution of $v_2$, $v_3$, and $v_4$ in the respective maximal peripheral bin measured by the ATLAS collaboration [@Aad:2013xma]. These are compared to the distributions of initial eccentricities in the IP-Glasma model and the distributions of $v_n$ from fluid dynamic evolution with IP-Glasma initial conditions. \[fig:vnenDist-maxCent\] ](vnenDist-maxCent){width="45.00000%"} Having established that even fairly peripheral events are well described by the IP-Glasma+<span style="font-variant:small-caps;">music</span> model, we now move on to applying the model to p+Pb and p+p collisions.
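The event-plane $v_n$ and the eccentricity $\varepsilon_n$ defined above translate directly into code. The following is a minimal discretized sketch; the sampled azimuthal distribution and the elliptic Gaussian test profile are illustrative assumptions, unrelated to the actual IP-Glasma+<span style="font-variant:small-caps;">music</span> analysis:

```python
import numpy as np

def event_plane_vn(phi, n):
    """v_n of one event from particle azimuths phi, via the event-plane angle
    psi_n = (1/n) arctan(<sin n phi>/<cos n phi>) (arctan2 for robustness)."""
    psi_n = np.arctan2(np.mean(np.sin(n * phi)), np.mean(np.cos(n * phi))) / n
    return np.mean(np.cos(n * (phi - psi_n)))

def eccentricity(x, y, e_density, n):
    """epsilon_n of an energy-density distribution sampled on points (x, y)
    with weights e_density, recentered on the weighted centroid."""
    w = e_density / e_density.sum()
    xc, yc = np.sum(w * x), np.sum(w * y)
    r = np.hypot(x - xc, y - yc)
    phi = np.arctan2(y - yc, x - xc)
    num = np.hypot(np.sum(w * r**n * np.cos(n * phi)),
                   np.sum(w * r**n * np.sin(n * phi)))
    return num / np.sum(w * r**n)

# Check v_2: sample dN/dphi ~ 1 + 2 v2 cos(2 phi) by accept-reject.
rng = np.random.default_rng(1)
v2_in = 0.1
phi = rng.uniform(-np.pi, np.pi, 400_000)
keep = rng.uniform(0, 1 + 2 * v2_in, phi.size) < 1 + 2 * v2_in * np.cos(2 * phi)
v2_out = event_plane_vn(phi[keep], 2)
assert abs(v2_out - v2_in) < 0.01

# Check eps_2: an elliptic Gaussian with widths (1, 2) has eps_2 = 3/5 = 0.6.
xs = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(xs, xs)
dens = np.exp(-X**2 / 2 - Y**2 / 8)
e2 = eccentricity(X.ravel(), Y.ravel(), dens.ravel(), 2)
assert abs(e2 - 0.6) < 0.01
```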
We shall first determine whether the predicted system size (with and without fluid dynamical expansion) is consistent with HBT measurements for all systems. To be able to compare the initial size as well as the maximal size of the system during the evolution to the measured HBT radii, we define $r_{\rm max}$ as the (angle-averaged) radius where the system reaches the minimal threshold energy density $\varepsilon_{\rm min}$. This defines a size equivalent to the size of the system at freeze-out at a given energy density. This radius by definition depends on the choice of $\varepsilon_{\rm min}$. This choice however only affects the overall normalization of $r_{\rm max}$; it does not affect the dependence of $r_{\rm max}$ on the number of charged particles $N_{\rm ch}$ [@Bzdak:2013zma]. There is also some uncertainty in the radii coming from the choice of the infrared scale $m$ that regulates the long distance tail of the gluon distribution (see [@Schenke:2012wb; @Schenke:2012hg; @Schenke:2013dpa]). It can be mostly compensated for by adjusting a normalization constant $K$. In Fig. \[fig:r-Nch-scaled\] we show the result for $r_{\rm max}$ in p+p, p+Pb, and Pb+Pb collisions and compare to $R_{\rm inv}$ from the Edgeworth fit to the two-pion correlation function measured by the ALICE collaboration [@Abelev:2014pja]. We adjust $K$ to match to the p+p results. We determine centrality classes in the model and assign the $N_{\rm ch}$ value quoted by ALICE [@Abelev:2014pja] for each centrality class. ![(Color online) $R_{\rm inv}$ measured by the ALICE collaboration [@Abelev:2014pja] compared to $K r_{\rm max}$ determined using the IP-Glasma model and fluid dynamic expansion. The lower end of the band indicates the size of the initial state, the upper end the maximal value of $r_{\rm max}$ during the hydrodynamic evolution. 
\[fig:r-Nch-scaled\]](r-Nch-scaled "fig:"){width="45.00000%"}\ Because the emission of pions occurs throughout the evolution, $R_{\rm inv}$ lies somewhere between the initial radius and the maximal radius reached during evolution. We indicate the range of radii between these two extrema by a band in Fig.\[fig:r-Nch-scaled\]. We find that our estimate of the system size is compatible with the experimental HBT measurement for all systems simultaneously. The Pb+Pb result clearly favors the presence of hydrodynamic expansion. For events with the same multiplicity (for example at $\langle N_{\rm ch}\rangle ^{1/3}\approx 4$), p+Pb collisions in the hydrodynamic framework show a much more significant expansion compared to Pb+Pb collisions. For these high multiplicities, hydrodynamic expansion in p+Pb collisions appears to be necessary to explain the experimental data. However, using $m=0.1\,{\rm GeV}$ instead of $m=0.2\,{\rm GeV}$ leads to larger initial radii that are also compatible with the experimental data. We have established that the details of the bulk properties in Pb+Pb collisions as well as the systematics of the system size from p+p to Pb+Pb collisions are well reproduced in the IP-Glasma (+fluid dynamics) model. We turn now to address anisotropic flow in p+Pb collisions. Using the same method as in Pb+Pb collisions, we determine $v_2$ and $v_3$ as a function of $N_{\rm trk}^{\rm offline}$, measured by the CMS collaboration.[^3] Fig.\[fig:v2Cent-pPb-CMS\] shows the calculated $v_2$ in peripheral Pb+Pb collisions and central p+Pb collisions with the same $N_{\rm trk}^{\rm offline}$ in comparison to experimental data by the CMS collaboration [@Chatrchyan:2013nka]. While the Pb+Pb result reproduces the experimental data within 10-15%, the computed $v_2$ in p+Pb collisions underestimates the data by a factor of approximately 3.5. We have checked that even in the ideal case ($\eta/s=0$) the data is still underestimated by approximately a factor of 2. 
We also varied the freeze-out temperature and switching time $\tau_0$, but no choice of parameters could achieve much better agreement with the experimental data. For $v_3$, shown in Fig.\[fig:v3Cent-pPb-CMS\], we find a similar result: Pb+Pb data are well described, while p+Pb data are underestimated for $N_{\rm trk}^{\rm offline}>60$. Ideal fluid dynamics (not shown) increases the $v_3$ significantly by nearly a factor of 4. Its $N_{\rm trk}^{\rm offline}$ dependence is rather flat, slightly decreasing with increasing $N_{\rm trk}^{\rm offline}$, opposite to the trend seen in the experimental data. ![(Color online) Multiplicity dependence of the root-mean-square elliptic flow coefficient $v_2$ in Pb+Pb (open symbols) and p+Pb collisions (filled symbols) from the IP-Glasma+<span style="font-variant:small-caps;">music</span> model (connected triangles) compared to experimental data by the CMS collaboration [@Chatrchyan:2013nka]. \[fig:v2Cent-pPb-CMS\]](v2Cent-pPb-CMS "fig:"){width="45.00000%"}\ ![(Color online) Multiplicity dependence of the root-mean-square triangular flow coefficient $v_3$ in Pb+Pb (open symbols) and p+Pb collisions (filled symbols) from the IP-Glasma+<span style="font-variant:small-caps;">music</span> model (connected triangles) compared to experimental data by the CMS collaboration [@Chatrchyan:2013nka]. \[fig:v3Cent-pPb-CMS\]](v3Cent-pPb-CMS "fig:"){width="45.00000%"}\ The primary reason for the small $v_n$ in p+Pb collisions is that the initial shape of the system closely follows the shape of the proton (see [@Bzdak:2013zma]), which is spherical in our model. The subnucleonic fluctuations included generate non-zero values of the $v_n$, but they do not fully account for the larger experimentally observed values. As noted above, modifications of the (fluctuating) proton shape are necessary to account for the larger observed $v_2$ and $v_3$ in p+Pb collisions. 
If the hydrodynamic paradigm is valid, the results of the high-multiplicity p+Pb and p+p collisions could then in principle be used to extract detailed information on the spatial gluon distribution in the proton. There are hydrodynamical models that describe aspects of the p+Pb data. These models should also describe key features of Pb+Pb collisions where hydrodynamics is more robust. A model where the spatial geometry of p+Pb collisions is different from ours is that of [@Bozek:2011if; @Bozek:2012gr; @Bozek:2013df; @Bozek:2013uha; @Bozek:2013ska], where the interaction region is determined from the geometric positions of participant nucleons. However, as noted, this model falls into the class of models that are claimed [@Renk:2014jja] not to be able to reproduce the data on event-by-event $v_n$ distributions in A+A collisions. Whether this particular model can do so needs to be examined. We also note that the $v_2$ centrality dependence in the model differs from the CMS data for p+Pb collisions [@Bozek:2013uha]. Another model which claims large $v_2$ and $v_3$ in p+Pb collisions determines the system size from the position of “cut pomerons” and strings [@Werner:2013ipa; @Werner:2013tya]. The multiplicity dependence of the $v_n$ in this model has not yet been shown. The $v_n$ distributions in A+A collisions should also provide a stringent test of this model. In addition to the important quantitative tests imposed on different hydrodynamical models by the experimental data, there are conceptual issues that arise due to the possible breakdown of the hydrodynamic paradigm when extended to very small systems. As shown in recent quantitative studies, viscous corrections can be very significant in p+Pb collisions but play a much smaller role in Pb+Pb collisions [@Bzdak:2013zma; @Niemi:2014wta]. 
In particular, an analysis of Knudsen numbers reached during the evolution in A+A and p+A collisions finds that viscous hydrodynamics breaks down for $\eta/s\geq 0.08$ in p+A collisions [@Niemi:2014wta]. An alternative to the hydrodynamic picture and its sensitivity to the proton shape is provided within the Glasma framework itself by initial state correlations of gluons that show a distinct elliptic modulation in relative azimuthal angle [@Dumitru:2010iy; @Dusling:2012iga; @Dusling:2012cg; @Dusling:2012wy; @Dusling:2013oia]. If these are not overwhelmed in p+Pb collisions by final state effects, as they are in A+A collisions, they can contribute significantly to the observed $v_2$, and possibly $v_3$. The initial state correlations are those of gluons and do not address features of the data such as the mass ordering in particle spectra. While natural in hydrodynamical models, mass ordering may also emerge due to universal hadronization effects, as demonstrated in a string model [@Ortiz:2013yxa]. In summary, we have shown that the IP-Glasma model in combination with fluid dynamics describes very well the Fourier coefficients of the azimuthal particle distributions in Pb+Pb collisions out to fairly peripheral collisions. Both the experimental mean values and event-by-event distributions are well reproduced. The systematics of HBT radii in p+p, p+A, and A+A collisions are also well described. The discrepancy of our results with the experimental $v_2$ and $v_3$ data in p+Pb collisions therefore poses a challenge to applying the hydrodynamic paradigm to such small systems. A possible solution within the hydrodynamic framework could result from the inclusion of additional shape fluctuations of gluon distributions in both p+p and p+Pb collisions. Initial state effects that are also present in the Glasma framework may provide an alternative explanation of the noted discrepancy between experimental data and theory. 
These conclusions point to the importance of a deeper understanding of spatial shapes, sizes and correlations of gluon distributions in high energy QCD. Acknowledgments {#acknowledgments .unnumbered} =============== This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. BPS and RV are supported under DOE Contract No. DE-AC02-98CH10886. [^1]: Gluon distributions in the proton are extracted from fits of model parameters to combined H1 and ZEUS data on inclusive structure functions. These give excellent $\chi$-squared fits to diffractive and exclusive HERA data [@Rezaeian:2012ji]. However, these data may not fully capture the shapes of gluon distributions. [^2]: The effects of varying $\tau_0$ have been studied previously. They are small even if $\tau_0$ is decreased by a factor of two [@Gale:2012rq]. Increasing $\tau_0$ beyond the quoted value will, in this framework, impact the amount of flow generated. [^3]: To obtain $N_{\rm trk}^{\rm offline}$ we determine the centrality class in the IP-Glasma simulations and match to the $N_{\rm trk}^{\rm offline}$ quoted for that centrality class by the CMS collaboration in [@Chatrchyan:2013nka]. $N_{\rm trk}^{\rm offline}\approx 132$ corresponds to 65-70% central Pb+Pb events, the most peripheral bin shown for the ATLAS data in Fig.\[fig:vnCentMean-gtr0-cut\].
--- author: - Kengo Hirachi title: | Construction of boundary invariants\ and the logarithmic singularity\ of the Bergman kernel --- This paper studies Fefferman’s program [@F3] of expressing the singularity of the Bergman kernel, for smoothly bounded strictly pseudoconvex domains $\Omega\subset{{\Bbb C}}^n$, in terms of local biholomorphic invariants of the boundary. By [@F1], the Bergman kernel on the diagonal $K(z,{\overline}z)$ is written in the form $$K=\varphi\, r^{-n-1}+\psi \log r {\quad\hbox{with}\ \ } \varphi,\psi\in C^\infty({\overline}\Omega),$$ where $r$ is a (smooth) defining function of $\Omega$. Recently, Bailey, Eastwood and Graham [@BEG], building on Fefferman’s earlier work [@F3], obtained a full invariant expression of the strong singularity $\varphi\, r^{-n-1}$. The purpose of this paper is to give a full invariant expression of the weak singularity $\psi\log r$. Fefferman’s program is modeled on the heat kernel asymptotics for Riemannian manifolds, $$K_t(x,x)\sim t^{-n/2}\sum_{j=0}^\infty a_j(x)\,t^j \quad \hbox{as} \ t \to+0,$$ in which case the coefficients $a_j$ are expressed, by the Weyl invariant theory, in terms of the Riemannian curvature tensor and its covariant derivatives. The Bergman kernel’s counterpart of the time variable $t$ is a defining function $r$ of the domain $\Omega$. By [@F1] and [@BS], the formal singularity of $K$ at a boundary point $p$ is uniquely determined by the Taylor expansion of $r$ at $p$. Thus one has hope of expressing $\varphi$ modulo $O^{n+1}(r)$ and $\psi$ modulo $O^{\infty}(r)$ in terms of local biholomorphic invariants of the boundary, provided $r$ is appropriately chosen. 
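A standard explicit example, included here for orientation (it is classical, not a result of this paper): for the unit ball $\Omega=\{|z|<1\}\subset{{\Bbb C}}^n$ with defining function $r=1-|z|^2$, the Bergman kernel is $$K(z,{\overline}z)=\frac{n!}{\pi^n}\,\frac{1}{(1-|z|^2)^{n+1}},$$ so that $\varphi\equiv n!/\pi^n$ and $\psi\equiv0$. Both coefficient functions are constants and the log term vanishes identically, reflecting the fact that the sphere is the flat model among strictly pseudoconvex boundaries (all normal form coefficients $A=0$).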
In [@F3], Fefferman proposed to find such expressions by reducing the problem to an algebraic one in invariant theory associated with CR geometry, and indeed expressed $\varphi$ modulo $O^{n-19}(r)$ invariantly by solving the reduced problem partially. The solution in [@F3] was then completed in [@BEG] to give a full invariant expression of $\varphi$ modulo $O^{n+1}(r)$, but the reduction is still obstructed at finite order so that the procedure does not apply to the log term $\psi$. We thus modify the invariant-theoretic problem in [@F3], [@BEG] and solve the modified problem to extend the reduction. In the heat kernel case, the reduction to the algebraic problem is done by using normal coordinates, and the coefficient functions $a_k$ at a point of reference are ${{\rm O}}(n)$-invariant polynomials in jets of the metric. The CR geometry counterpart of the normal coordinates has been given by Moser [@CM]. If ${\partial}\Omega\in C^\omega$ (real-analytic) then, after a change of local coordinates, ${\partial}\Omega$ is locally placed in Moser’s normal form: $$\label{mnf-rho} N(A): \ \ \rho(z,{\overline}z)=2u-|z'|^2-\sum_{|\alpha|,|\beta|\ge2,l\ge0} A_{\alpha{\overline}\beta}^l z'{}^\alpha{\overline}z'{}^\beta v^l=0,$$ where $z'=(z^1,\dots,z^{n-1})$, $z^n=u+i v$, $A=(A_{\alpha{\overline}\beta}^l)$, and the coefficients $A_{\alpha{\overline}\beta}^l$ satisfy trace conditions which are linear (see Section 3). For each $p\in{\partial}\Omega$, Moser’s local coordinate system as above is uniquely determined up to an action of a parabolic subgroup $H$ of ${{\rm SU}}(1,n)$. Thus $H$-invariant functions of $A$ give rise to local biholomorphic invariants at the point $p$. Among these invariants, we define [*CR invariants of weight $w$*]{} to be polynomials $I(A)$ in $A$ such that $$\label{CRinv-transf} I({\widetilde}A)=|\det\Phi'(0)|^{-2w/(n+1)}I(A)$$ for biholomorphic maps $\Phi$ such that $\Phi(0)=0$ and $\Phi(N(A))=N({\widetilde}A)$. 
A CR invariant $I(A)$ defines an assignment, to each strictly pseudoconvex hypersurface $M\in C^\omega$, of a function $I_M\in C^\omega(M)$, which is also called a CR invariant. Here $I_M(p)$, $p\in M$, is given by taking a biholomorphic map such that $\Phi(p)=0$, $\Phi(M)=N(A)$ and then setting $$\label{CRinv-functional} I_M(p)=|\det\Phi'(p)|^{2w/(n+1)}I(A).$$ This value is independent of the choice of $\Phi$ and $N(A)$ because of \eqref{CRinv-transf}. If $M\in C^\infty$ then \eqref{CRinv-functional} gives $I_M\in C^\infty(M)$, though a normal form of $M$ can be a formal surface. The difficulty of the whole problem comes from the ambiguity of the choice of defining functions $r$, and this has already appeared in the problem for $\varphi$, that is, the problem of finding an expression for $\varphi$ of the form $$\label{exp-varphi} \varphi=\sum_{j=0}^n \varphi_j\,r^j+O^{n+1}(r) \quad \hbox{with} \ \ \varphi_j\in C^\infty(\overline\Omega),$$ such that the boundary value $\varphi_j|_{{\partial}\Omega}$ is a CR invariant of weight $j$. Though this expansion looks similar to that of the heat kernel, the situation is much more intricate. It is impossible to choose an exactly invariant defining function $r$, and thus the extension of CR invariants $\varphi_j|_{{\partial}\Omega}$ to $\Omega$ near ${\partial}\Omega$, which is crucial, is inevitably approximate. Fefferman [@F3] employed an approximately invariant defining function $r=r^{{{{\rm F}}}}$, which was constructed in [@F2] as a smooth approximate solution to the (complex) [Monge-Ampère ]{}equation (with zero Dirichlet condition). This defining function is uniquely determined with error of order $n+2$ along the boundary, and approximately invariant under biholomorphic maps $\Phi\colon\Omega\to{\widetilde}\Omega$ in the sense that $$\label{trans-r0} {\widetilde}r\circ\Phi=|\det\Phi'|^{2/(n+1)}r+O^{n+2}(r),$$ for $r=r^{{{{\rm F}}}}$ and ${\widetilde}r={\widetilde}r^{{{{\rm F}}}}$ associated with $\Omega$ and ${\widetilde}\Omega$, respectively. 
The defining function $r=r^{{{{\rm F}}}}$ was used by [@F3] and [@BEG] also in the ambient metric construction of the coefficient functions $\varphi_j$ explained as follows. Let $g[r]$ be the Lorentz-Kähler metric on ${{\Bbb C}}^*\times{\overline}\Omega\subset{{\Bbb C}}^{n+1}$ near ${{\Bbb C}}^*\times{\partial}\Omega$ defined by the potential $|z^0|^2r$ ($z^0\in{{\Bbb C}}^*$). Then scalar functions are obtained as complete contractions of tensor products of covariant derivatives of the curvature tensor of $g[r]$. By [@F3] and [@BEG], such complete contractions generate all CR invariants of weight $\le n$, and each $\varphi_j$ in the expansion \eqref{exp-varphi} of $\varphi$ is realized by linear combinations of these complete contractions. The approximately invariant defining function $r=r^{{{{\rm F}}}}$ is too rough for obtaining an expansion for $\psi$ analogous to that for $\varphi$, while there is no hope of making $r$ exactly invariant. Instead, we consider a family ${{\cal F}}_M$ of defining functions of the germ $M$ of ${\partial}\Omega$ at a point $p$ of reference such that ${{\cal F}}_M$ is invariant under local biholomorphic maps $\Phi\colon M\to{\widetilde}M$, that is, $r\in{{\cal F}}_M$ if and only if ${\widetilde}r\in{{\cal F}}_{{\widetilde}M}$, where ${\widetilde}r\circ\Phi=|\det\Phi'|^{2/(n+1)}r$. We also require that ${{\cal F}}_M$ is parametrized formally by $C^\infty(M)$. More precisely, $M$ is a formal surface, $r$ is a formal function, and $C^\infty(M)$ should be replaced by a space $C^\infty_{\rm formal}(M)$ of formal power series. If $M$ is in normal form $N(A)$ with $p=0$, then $f\in C^\infty_{\rm formal}(M)$ is identified with the Taylor coefficients $C=(C_{\alpha{\overline}\beta}^l)$ of $f(z',{{\overline}{z}}',v)$ as in \eqref{pa-r}, so that the corresponding $r\in{{\cal F}}_M$ has the parametrization $r=r[A,C]$. 
Specific construction of ${{\cal F}}_M$ is done by lifting the [Monge-Ampère ]{}equation to ${{\Bbb C}}^*\times\Omega$ near ${{\Bbb C}}^*\times{\partial}\Omega$ and considering a family of local (or formal) asymptotic solutions, say ${{\cal F}}_M^{\rm aux}$, which is parametrized by $C^\infty_{{\rm formal}}(M)$. This is a refinement of Graham’s construction [@G2] of asymptotic solutions to the [Monge-Ampère ]{}equation in $\Omega$. Then, ${{\cal F}}_M$ consists of the smooth parts of elements of ${{\cal F}}_M^{\rm aux}$, and the parametrization $C^\infty_{\rm formal}(M)\to{{\cal F}}_M$ for $M=N(A)$ is given by the inverse map of $r\mapsto{\partial}^{n+2}_\rho r|_{\rho=0}$, which comes from the parametrization of ${{\cal F}}_M^{\rm aux}$. Biholomorphic invariance of ${{\cal F}}_M$ gives rise to an extension of the $H$-action on the normal form coefficients $A$ to that on the pairs $(A,C)$. In fact, a natural generalization of the CR invariant is obtained by considering polynomials $I(A,C)$ in the variables $A_{\alpha{\overline}\beta}^l$ and $C_{\alpha{\overline}\beta}^l$ such that $$I({\widetilde}A,{\widetilde}C)=|\det\Phi'(0)|^{-2w/(n+1)}I(A,C)$$ as in \eqref{CRinv-transf}, for biholomorphic maps $\Phi$ and $({\widetilde}A,{\widetilde}C)$ satisfying $r[{\widetilde}A,{\widetilde}C]\circ\Phi=|\det\Phi'|^{2/(n+1)}r[A,C]$. Such a polynomial defines an assignment, to each pair $(M,r)$ with $r\in{{\cal F}}_M$, of a function $I[r]\in C^\infty(M)$: $$\label{general-CR-functional} I[r](p)=|\det\Phi'(p)|^{2w/(n+1)}I(A,C),$$ with $\Phi$ as in \eqref{CRinv-functional} and $(A,C)$ parametrizing ${\widetilde}r$ such that ${\widetilde}r\circ\Phi=|\det\Phi'|^{2/(n+1)}r$. We thus refer to $I(A,C)$ as an [*invariant of the pair $(M,r)$ of weight*]{} $w$. 
The problem for $\psi$ is then formulated as that of finding an asymptotic expansion of $\psi$ in powers of $r\in{{\cal F}}_{{\partial}\Omega}$ of the form $$\label{exp-psi-intro} \psi=\sum_{j=0}^\infty\psi_j[r]\,r^j+O^\infty(r) \quad \hbox{with} \ \ \psi_j[r]\in C^\infty(\overline\Omega),$$ such that each $\psi_j[r]|_{{\partial}\Omega}$ is an invariant of the pair $({\partial}\Omega,r)$ of weight $j+n+1$. As in the CR invariant case, a class of invariants of the pair $({\partial}\Omega,r)$ is obtained by taking the boundary value of linear combinations of complete contractions of tensor products of covariant derivatives of the curvature of the metric $g[r]$. Elements of this class are called [*Weyl invariants*]{}. We prove that all invariants of the pair $(M,r)$ are Weyl invariants (see Theorems 4 and 5), so that the expansion \eqref{exp-psi-intro} holds with $\psi_j[r]|_{{\partial}\Omega}$ given by Weyl invariants of weight $j+n+1$ (see Theorem 1). A CR invariant $I(A)$ is the same as an invariant of the pair $(M,r)$ which is independent of the parameter $C$, so that $I(A)$ is a Weyl invariant independent of $C$ (the converse also holds). That is, CR invariants are the same as Weyl invariants independent of the parameter $C$ (see Theorem 2 which follows from Theorems 4 and 5). For Weyl invariants of low weight, it is easy to examine the dependence on $C$. We have that all Weyl invariants of weight $\leq n+2$ are independent of $C$ (see Theorem 3). This improves the result of [@F3] and [@BEG] described above by weight $2$. If $n=2$, we have a better estimate (see Theorem 3 again) which is consistent with the results in [@HKN2]. Introducing the parameter $C$ was inspired by the work of Graham [@G2] on local determination of the asymptotic solution to the [Monge-Ampère ]{}equation in $\Omega$. He proved approximate invariance, under local biholomorphic maps, of the log term coefficients of the asymptotic solution, and gave a construction of CR invariants of arbitrarily high weight. 
In our terminology of Weyl invariants, these CR invariants are characterized as complete contractions which contain the Ricci tensor of $g[r]$ (see Remark \[rem-graham\] for the precise statement). This paper is organized as follows. In Section 1, we define the family ${{\cal F}}_M$ of defining functions and state our main results, Theorems 1, 2 and 3. Section 2 is devoted to the construction of the family ${{\cal F}}_M$ and the proof of its biholomorphic invariance. After reviewing the definition of Moser’s normal form, we reformulate, in Section 3, CR invariants and invariants of the pair $(M,r)$ as polynomials in $(A,C)$ which are invariant under the action of $H$. Then we relate these $H$-invariant polynomials with those in the variables $R_{i\,{{{\overline}\jmath}}\, k\,{\overline}l;ab\dots c}$ on which $H$ acts tensorially, where $R_{i\,{{{\overline}\jmath}}\, k\,{\overline}l;ab\dots c}$ are the components of the curvature of $g[r]$ and its covariant derivatives. Using this relation, we reduce our main Theorems 1–3 to the assertion that all invariants of the pair $(M,r)$ are Weyl invariants. This assertion is proved in two steps in Sections 4 and 5. In Section 4, we express all invariants of the pair $(M,r)$ as $H$-invariant polynomials in $R_{i\,{{{\overline}\jmath}}\, k\,{\overline}l;ab\dots c}$. In Section 5, we show that all such $H$-invariant polynomials come from Weyl invariants, where invariant theory of $H$ in [@BEG] is used essentially. In the final Section 6, we study the dependence of Weyl invariants on the parameter $C$. I am grateful to Professor Gen Komatsu, who introduced me to the analysis of the Bergman kernel, for many discussions and encouragement along the way. Statement of the results ======================== Our concern is a refinement of the ambient metric construction as in [@F3], [@BEG]. 
Let $\Omega\subset{{\Bbb C}}^n$ be a smoothly bounded strictly pseudoconvex domain and $$J(u)=(-1)^n\det \left( \begin{array}{cc} u & u_{{{{\overline}\jmath}}}\\ u_i & u_{i\,{{{\overline}\jmath}}} \end{array} \right)_{1\le i,\,j\le n} {\quad\hbox{where}\ \ } u_{i\,{{{\overline}\jmath}}}={\partial}_{z^i}{\partial}_{{\overline}z^j}u.$$ In [@F3], [@BEG], the construction started by the choice of a defining function $r$, with $r>0$ in $\Omega$, satisfying $J(r)=1+O^{n+1}({\partial}\Omega)$, where $O^{n+1}({\partial}\Omega)$ stands for a term which is smoothly divisible by $r^{n+1}$. Such an $r$ is unique modulo $O^{n+2}({\partial}\Omega)$ and we denote the equivalence class by ${{\cal F}}^{{{\rm F}}}_{{\partial}\Omega}$. We here consider a subclass ${{\cal F}}_{{\partial}\Omega}$ of ${{\cal F}}^{{{\rm F}}}_{{\partial}\Omega}$, which is defined by lifting the (complex) [Monge-Ampère ]{}equation (with Dirichlet boundary condition) $$\label{MA-J} J(u)=1 \hbox{ and } u>0\hbox{ in }\Omega, \quad u=0 \hbox{ on }{\partial}\Omega.$$ For a function $U(z^0,z)$ on ${{\Bbb C}}^*\times{\overline}\Omega$, we set $$J_\#(U)=(-1)^n\det\left(U_{i\,{{{\overline}\jmath}}}\right)_{0\le i,j\le n}$$ and consider a [Monge-Ampère ]{}equation on ${{\Bbb C}}^*\times\Omega$: $$\label{MA} J_\#(U)=|z^0|^{2n} \hbox{ with } U>0 \hbox{ in }{{\Bbb C}}^*\times \Omega, \hbox{ and } \quad U=0 \hbox{ on }{{\Bbb C}}^*\times{\partial}\Omega. \qquad$$ If $U$ is written as $U(z^0,z)=|z^0|^2 u(z)$ with a function $u(z)$ on $\Omega$, then \eqref{MA} is reduced to \eqref{MA-J} because $J_\#(U)=|z^0|^{2n}J(u)$. 
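As a concrete check on the operator $J$ (our illustration, not part of the paper): for the unit ball, the function $u=1-|z|^2$ is the classical exact solution of $J(u)=1$. Treating $z_i$ and $\overline z_i$ as independent variables (Wirtinger calculus), this is a routine determinant computation, which can be verified symbolically, e.g. with sympy for $n=2$:

```python
import sympy as sp

# Wirtinger calculus: treat z_i and its conjugate (here w_i) as independent
# symbols, so d/dz and d/dzbar become ordinary partial derivatives.
n = 2
z = sp.symbols('z1 z2')
w = sp.symbols('w1 w2')

u = 1 - sum(zi * wi for zi, wi in zip(z, w))  # u = 1 - |z|^2 on the ball

# The (n+1) x (n+1) matrix  ( u     u_{jbar}   )
#                           ( u_i   u_{i jbar} )  from the definition of J(u)
M = sp.zeros(n + 1, n + 1)
M[0, 0] = u
for j in range(n):
    M[0, j + 1] = sp.diff(u, w[j])                # u_{jbar}
for i in range(n):
    M[i + 1, 0] = sp.diff(u, z[i])                # u_i
    for j in range(n):
        M[i + 1, j + 1] = sp.diff(u, z[i], w[j])  # u_{i jbar}

J = (-1) ** n * M.det()
print(sp.simplify(J))  # -> 1, i.e. u solves J(u) = 1 exactly
```

The same computation works for every $n$ by a block-determinant argument: the Schur complement of the lower-right block $-I$ equals $(1-|z|^2)+|z|^2=1$.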
In [@G2], Graham fixed $r\in{{\cal F}}_{{\partial}\Omega}^{{{\rm F}}}$ arbitrarily and constructed asymptotic solutions $u^{{{\rm G}}}$ to \eqref{MA-J} of the form $$\label{formal-solution-u} u^{{{\rm G}}}=r\sum_{k=0}^\infty\eta_k^{{{\rm G}}}\left(r^{n+1}\log r\right)^k {\quad\hbox{with}\ \ } \eta_k^{{{\rm G}}}\in C^\infty({\overline}\Omega),$$ which are parametrized by the space $C^\infty({\partial}\Omega)$ of initial data (see Remark \[rem-asymptotic-solution\] below). Then $U^{{{\rm G}}}=|z^0|^2u^{{{\rm G}}}$ are asymptotic solutions to \eqref{MA}. We here modify these asymptotic solutions and consider another class of asymptotic solutions of the form $$\label{formal-solution} U=r_\#+r_\#\sum_{k=1}^\infty\eta_k\left(r^{n+1}\log r_\#\right)^k {\quad\hbox{with}\ \ } \eta_k\in C^\infty({\overline}\Omega),$$ again parametrized by $C^\infty({\partial}\Omega)$, where $r_\#=|z^0|^2r$ with $r\in {{\cal F}}_{{\partial}\Omega}^{{{{\rm F}}}}$. It should be emphasized that $r$ is not prescribed but determined by $U$. Note also that $U$ is not of the form $|z^0|^2u$ because $\log r_\#$ is not homogeneous in $z^0$. We call $r$ in \eqref{formal-solution} the [*smooth part*]{} of $U$ and define ${{\cal F}}_{{\partial}\Omega}$ to be the totality of the smooth parts of asymptotic solutions to \eqref{MA} for ${\partial}\Omega$. We identify two asymptotic solutions of the form \eqref{formal-solution} if the corresponding functions $r$ and $\eta_k$ agree to infinite order along ${\partial}\Omega$. Then the unique existence of the asymptotic solution $U$ as in \eqref{formal-solution} holds once the initial data are given in $C^\infty({\partial}\Omega)$. Let $X$ be a real vector field on ${\overline}\Omega$ which is transversal to ${\partial}\Omega$[.]{} Then for any $a\in C^\infty({\partial}\Omega)$[,]{} there exists a unique asymptotic solution $U$ to \eqref{MA} for ${\partial}\Omega$ such that the smooth part $r$ satisfies $$X^{n+2}r|_{{\partial}\Omega}=a. 
\label{initial-condition}$$ The lifted [Monge-Ampère ]{}equation and the asymptotic solutions of the form \eqref{formal-solution} are introduced in order to obtain the following exact transformation law for the smooth part $r$. Let $\Phi\colon\Omega\to{\widetilde}\Omega$ be a biholomorphic map[.]{} Then $r\in{{\cal F}}_{{\partial}\Omega}$ if and only if ${\widetilde}r\in{{\cal F}}_{{\partial}{\widetilde}\Omega}$[,]{} where ${\widetilde}r$ is given by $$\label{trans-r} {\widetilde}r\circ\Phi=|\det\Phi'|^{2/(n+1)}r.$$ Here $\det\Phi'$ is the holomorphic Jacobian of $\Phi$[.]{} \[rem-asymptotic-solution\] For $u^{{{\rm G}}}$ in \eqref{formal-solution-u}, $\eta_0^{{{\rm G}}}=1+O^{n+1}({\partial}\Omega)$ holds. To make $u^{{{\rm G}}}$ unique, Graham [@G2] used the boundary value of $(\eta_0^{{{\rm G}}}-1)/r^{n+1}|_{{\partial}\Omega}$ as the initial data $a\in C^\infty({\partial}\Omega)$, where $r$ is arbitrarily fixed. It is also possible to make $u^{{{\rm G}}}$ unique by requiring $\eta_0^{{{\rm G}}}=1$ in \eqref{formal-solution-u}, in which case $r$ is determined by $u^{{{\rm G}}}$ (cf. Lemma \[chose-r\]). Then we may write $r=r[u^{{{\rm G}}}]$ and consider the totality of these, say ${{\cal F}}_{{\partial}\Omega}^{{{\rm G}}}$. However, ${{\cal F}}_{{\partial}\Omega}^{{{\rm G}}}$ does not satisfy the transformation law in Proposition 2; it is not the case that every ${\widetilde}r=r[{\widetilde}u^{{{\rm G}}}]\in{{\cal F}}_{{\partial}{\widetilde}\Omega}^{{{\rm G}}}$ is given by \eqref{trans-r} with some $r=r[u^{{{\rm G}}}]\in{{\cal F}}^{{{\rm G}}}_{{\partial}\Omega}$. Though the proof requires some preparation (cf. Remark \[rem-proof\]), this is roughly seen by the fact that \eqref{trans-r} implies $(\log {\widetilde}r)\circ\Phi=\log r+\log |\det\Phi'|^{2/(n+1)}$, which destroys the condition ${\widetilde}\eta_0^{{{\rm G}}}=\eta_0^{{{\rm G}}}[{\widetilde}u^{{{\rm G}}}]=1$ (cf. subsection 2.1). 
For each defining function $r\in{{\cal F}}_{{\partial}\Omega}$, we define a Lorentz-Kähler metric $$g[r]=\sum_{i,j=0}^n\frac{{\partial}^2\, r_\#}{{\partial}z^i{\partial}{\overline}z^j}\,dz^i\,d{\overline}z^j {\quad\hbox{on}\ \ } {{\Bbb C}}^*\times{\overline}\Omega \hbox{ near }{{\Bbb C}}^*\times{\partial}\Omega.$$ We call this metric $g=g[r]$ an [*ambient metric*]{} associated with ${\partial}\Omega$. From the ambient metric, we construct scalar functions as follows. Let $R$ denote the curvature tensor of $g$ and ${R^{(p,q)}}={\overline}\nabla{}^{q-2}\nabla^{p-2}R$ the successive covariant derivatives, where $\nabla$ (resp. ${\overline}\nabla$) stands for the covariant differentiation of type $(1,0)$ (resp. $(0,1)$). Then a complete contraction of the form $$\label{weyl-def} W_\#={{\rm contr}}({R^{(p_1,q_1)}}\otimes\cdots\otimes{R^{(p_d,q_d)}})$$ gives rise to a function $W_\#[r]$ on ${{\Bbb C}}^*\times{\overline}\Omega$ near ${{\Bbb C}}^*\times{\partial}\Omega$ once $r\in{{\cal F}}_{{\partial}\Omega}$ is specified. Here contractions are taken with respect to the ambient metric for some pairing of holomorphic and antiholomorphic indices. The [*weight*]{} of $W_\#$ is defined by $ w=-d+\sum_{j=1}^d (p_j+q_j)/2, $ which is an integer because $\sum p_j=\sum q_j$ holds. By a [*Weyl polynomial*]{}, we mean a linear combination of $W_\#$ of the form \eqref{weyl-def} of homogeneous weight. A Weyl polynomial gives a functional for the pair $({\partial}\Omega,r)$ which satisfies a transformation law under biholomorphic maps. To state this precisely, we make the following definition. A Weyl polynomial $W_\#$ of weight $w$ assigns, to each pair $({\partial}\Omega,r)$ with $r\in{{\cal F}}_{{\partial}\Omega}$, a function $W[r]=W_\#[r]|_{z^0=1}$ on ${\overline}\Omega$ near ${\partial}\Omega$. We call this assignment $W\colon r\mapsto W[r]$ a [*Weyl functional of weight*]{} $w$ associated with $W_\#$. 
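An elementary illustration of the weight count (our example, with the notation above): for $W_\#={{\rm contr}}({R^{(2,2)}}\otimes{R^{(2,2)}})$, the squared norm of the ambient curvature tensor, one has $d=2$ and $(p_j+q_j)/2=2$ for $j=1,2$, so $$w=-2+2+2=2;$$ likewise the full contraction of a single ${R^{(2,2)}}$ (the ambient scalar curvature) has $d=1$ and weight $w=-1+2=1$.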
Let $W$ be a Weyl functional of weight $w$[.]{} Then[,]{} for $r$ and ${\widetilde}r$ as in \eqref{trans-r}[,]{} $$\label{WF-inv-transf} W[{\widetilde}r]\circ\Phi=|\det\Phi'|^{-2w/(n+1)}W[r].$$ We refer to the relation \eqref{WF-inv-transf} as a transformation law of weight $w$ for $W$. Without change of the proof, Proposition 1 can be localized near a boundary point $p$. That is, we may replace ${\partial}\Omega$ by a germ $M$ of ${\partial}\Omega$ at $p$ or a formal surface, and $r$, $\eta_k$, $a$ by germs of smooth functions or formal power series about $p$. Then ${{\cal F}}_{{\partial}\Omega}$ is a sheaf $({{\cal F}}_{p,{\overline}\Omega})_{p\in{\partial}\Omega}$. Abusing notation, we write ${{\cal F}}_M$ in place of ${{\cal F}}_{p,{\overline}\Omega}$. Then Propositions 2 and 3 also have localization, where $\Phi$ is a (formal) biholomorphic map such that $\Phi(M)={\widetilde}M$ with ${\widetilde}M$ associated to ${\widetilde}\Omega$. For each $r\in{{\cal F}}_{{\partial}\Omega}$, we write the asymptotic expansion of the Bergman kernel of $\Omega$ on the diagonal $K(z)=K(z,{\overline}z)$ as follows: $$\label{bergman} K=\varphi[r]\,r^{-n-1}+\psi[r]\log r {\quad\hbox{with}\ \ } \varphi[r],\psi[r]\in C^\infty({\overline}\Omega),$$ where we regard $\varphi=\varphi[r]$ and $\psi=\psi[r]$ as functionals of the pair $({\partial}\Omega,r)$. Note that $\varphi[r]$ mod $O^{n+1}({\partial}\Omega)$ and $\psi[r]$ mod $O^\infty({\partial}\Omega)$ are independent of the choice of $r$. In our first main theorem, we express these functionals in terms of Weyl functionals. 
For $n\ge2$[,]{} there exist Weyl functionals $W_k$ of weight $k$ for $k=0,1,2,\dots$ such that $$\begin{aligned} \noalign{\vskip4pt} \varphi[r]& = &\ \sum_{k=0}^n \ \, W_k[r] \,r^k+O^{n+1}({\partial}\Omega), \label{exp-phi} \\ \noalign{\vskip4pt} \psi[r]& =&\sum_{k=0}^\infty W_{k+n+1}[r] \,r^k+O^\infty({\partial}\Omega), \label{exp-psi}\\ \nonumber\end{aligned}$$ for any strictly pseudoconvex domain $\Omega\subset{{\Bbb C}}^n$ and any $r\in{{\cal F}}_{{\partial}\Omega}$[.]{} Here \eqref{exp-psi} means that $\psi[r]=\sum_{k=0}^m W_{k+n+1}[r]\,r^k+O^{m+1}({\partial}\Omega)$ for any $m\ge 0$[.]{} The expansion \eqref{exp-phi} has been obtained in [@F3] and [@BEG], where $r$ is any defining function satisfying $J(r)=1+O^{n+1}({\partial}\Omega)$. This condition is fulfilled by our $r\in{{\cal F}}_{{\partial}\Omega}$. Suppose ${\partial}\Omega$ is in Moser’s normal form \eqref{mnf-rho} near $0$. With the real coordinates $(z',{\overline}z',v,\rho)$, we write the Taylor series about $0$ of ${\partial}^{n+2}_\rho r|_{\rho=0}$ for $r\in{{\cal F}}_{{\partial}\Omega}$ as $$\label{pa-r} {\partial}^{n+2}_\rho r|_{\rho=0}=\sum_{|\alpha|, |\beta|, l\ge0} C_{\alpha{\overline}\beta}^l z'{}^\alpha{\overline}z'{}^\beta v^l.$$ Then for a Weyl functional $W$, the value $W[r](0)$ is expressed as a universal polynomial $I_W(A,C)$ in the variables $A_{\alpha{\overline}\beta}^l, C_{\alpha{\overline}\beta}^l$. We call this polynomial a [*Weyl invariant*]{} and say that $I_W$ is [*${{\cal C}}$-independent*]{} if it is independent of the variables $C_{\alpha{\overline}\beta}^l$. Our second main theorem asserts that ${{\cal C}}$-independent Weyl invariants give all CR invariants. \[thm-weyl-general\] All ${{\cal C}}$[-]{}independent Weyl invariants are [CR]{} invariants[,]{} and vice versa[.]{} It is not easy to determine which Weyl invariant $I_W$ is ${{\cal C}}$-independent when the weight $w$ of $I_W$ is high. If $w\le n+2$ (resp. $w\le5$) for $n\ge3$ (resp. 
$n=2$), then we can show that $W$ is ${{\cal C}}$-independent (Proposition \[prop-c-dependence\]). Thus Theorem \[thm-weyl-general\] yields: \[thm-weyl\] For weight $\le n+2$[,]{} all Weyl invariants are [CR]{} invariants and vice versa[.]{} Moreover[,]{} for $n=2$[,]{} the same is true for weight $\le5$[.]{} In this theorem, the restriction on weight is optimal. In fact, there exists a ${{\cal C}}$-dependent Weyl invariant of weight $n+3$, or weight $6$ when $n=2$ (Proposition \[prop-c-dependence\]). Thus, to obtain a complete list of CR invariants for this or higher weights, one really needs to select ${{\cal C}}$-independent Weyl invariants. This is a problem yet to be studied. \[rem-weyl\] In the introduction, we defined a Weyl invariant to be the boundary value of a Weyl functional. This definition is consistent with the one given here as a polynomial $I_W(A,C)$. In fact, $I_W(A,C)$ defines via \eqref{general-CR-functional} an assignment, to each pair $({\partial}\Omega,r)$, of a function $I_W[r]\in C^\infty({\partial}\Omega)$ which coincides with $W[r]|_{{\partial}\Omega}$. This corresponds to the identification of a CR invariant $I(A)$ with the boundary functional induced by $I(A)$. Asymptotic solutions of the\ complex Monge-Ampère equation ============================= In this section we prove Propositions 1, 2 and 3. We first assume Proposition 1 and prove Propositions 2 and 3, the transformation laws of ${{\cal F}}_{{\partial}\Omega}$ and Weyl functionals. For a biholomorphic map $\Phi\colon\Omega\to{\widetilde}\Omega$, we define the lift $\Phi_\#\colon{{\Bbb C}}^*\times\Omega\to{{\Bbb C}}^*\times{\widetilde}\Omega$ by $$\Phi_\#(z^0,z)=\left(z^0\cdot[\det \Phi '(z)]^{{-1/(n+1)}},\Phi(z)\right), \label{Philift}$$ where a branch of $[\det \Phi ']^{{-1/(n+1)}}$ is arbitrarily chosen. 
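Since the first component of $\Phi_\#$ in \eqref{Philift} is linear in $z^0$ while $\Phi(z)$ does not depend on $z^0$, the Jacobian matrix of $\Phi_\#$ is block triangular; a one-line check, included here for convenience, gives $$\det\Phi'_\#(z^0,z)=[\det \Phi '(z)]^{-1/(n+1)}\cdot\det \Phi '(z)=[\det \Phi '(z)]^{n/(n+1)}.$$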
Then $\det\Phi'_\#(z^0,z)=[\det \Phi '(z)]^{{n/(n+1)}}$, so that $$(|z^0|^{-2n}\det({\widetilde}U_{i\,{{{\overline}\jmath}}}))\circ\Phi_\# =|z^0|^{-2n}\det(({\widetilde}U\circ\Phi_\#)_{i\,{{{\overline}\jmath}}})$$ for any function ${\widetilde}U$ on ${{\Bbb C}}^*\times{\widetilde}\Omega$. In particular, if $U$ is an asymptotic solution of \eqref{MA} for ${\partial}\Omega$, so is ${\widetilde}U=U\circ\Phi^{-1}_\#$ for ${\partial}{\widetilde}\Omega$. The expansion of ${\widetilde}U$ is given by $${\widetilde}U={\widetilde}r_\#+{\widetilde}r_\# \sum_{k=1}^\infty {\widetilde}\eta_k\,({{\widetilde}r}^{\,n+1}\log{\widetilde}r_\#)^k,$$ where ${\widetilde}r\circ\Phi=|\det \Phi'|^{2/(n+1)}r$ and ${\widetilde}\eta_k\circ\Phi=|\det\Phi'|^{-2k}\eta_k$. It follows that ${\widetilde}r$ is the smooth part of ${\widetilde}U$ if and only if $r=|\det\Phi'|^{-2/(n+1)}{\widetilde}r\circ\Phi$ is the smooth part of $U={\widetilde}U\circ\Phi_\#$. This proves Proposition 2. We next prove Proposition 3. Writing the transformation law \eqref{trans-r} as ${\widetilde}r_\#\circ\Phi_\#=r_\#$ and applying ${\partial}{\overline}{\partial}$ to it, we see that $\Phi_\#\colon({{\Bbb C}}^*\times\Omega,g[r])\to({{\Bbb C}}^*\times{\widetilde}\Omega, g[{\widetilde}r])$ is an isometry. If $W_\#$ is a Weyl polynomial of weight $w$, then $$\label{invariance-W} W_\#[{\widetilde}r]\circ\Phi_\#=W_\#[r],$$ while the homogeneity of the ambient metric in $z^0$ implies $$W_\#[r]=|z^0|^{-2w}W[r].$$ Thus \eqref{invariance-W} is rewritten as \eqref{WF-inv-transf}, and Proposition 3 is proved. We fix a defining function $\rho$ satisfying $J(\rho)=1+O^{n+1}({\partial}\Omega)$ and introduce a nonlinear differential operator for functions $f$ on ${{\Bbb C}}^*\times\Omega$: $${{\cal M}}(f)=\det(U_{i\,{{{\overline}\jmath}}})/\det((\rho_\#)_{i\,{{{\overline}\jmath}}}) \quad\hbox{with } U=\rho_\#\,(1+f).$$ Then $J_\#(U)=|z^0|^{2n}$ is written as $${{\cal M}}(f)=J(\rho)^{-1}. 
\label{eqMf}$$ If $U$ is a series of the form \eqref{formal-solution}, then $f$ admits an expansion $$f=\sum_{k=0}^\infty \eta_k (\rho^{n+1}\log\rho_\#)^k, \quad\hbox{where } \eta_k\in C^\infty({\overline}\Omega).$$ Denoting by $\cal A$ the space of all formal series of this form, we shall construct solutions to \eqref{eqMf} in $\cal A$. We first study the degeneracy of the equation at the surface ${{\Bbb C}}^*\times{\partial}\Omega$. Following [@G2], we use a local frame $Z_0,\dots,Z_n$ of $T^{(1,0)}({{\Bbb C}}^*\times{\overline}\Omega)$ near ${{\Bbb C}}^*\times{\partial}\Omega$ satisfying: - $Z_0=z^0({\partial}/{\partial}z^0)$; - $Z_1,\dots,Z_{n-1}$ are orthonormal vector fields on ${\overline}\Omega$ with respect to the Levi form ${\partial}{\overline}{\partial}\rho$ such that $Z_j\rho=0$; - $Z_n$ is a vector field on ${\overline}\Omega$ such that $Z_n{ \hskip2pt\vbox{\hbox{\phantom{2}{\vrule}}\hrule}\hskip2pt}{\partial}{\overline}{\partial}\rho=\gamma\,{\overline}{\partial}\rho$ for some $\gamma\in C^\infty({\overline}\Omega)$, $N\rho=1$ and $T\rho=0$, where $N={{\rm Re}}Z_n$, $T={{\rm Im}}Z_n$. Using this frame, we introduce a ring ${{\cal P}}_{{\partial}\Omega}$ of differential operators on ${{\Bbb C}}^*\times{\overline}\Omega$ that are written as polynomials of $Z_0,\dots,Z_{n-1},{\overline}Z_0,\dots,{\overline}Z_{n-1}, T,\rho N$ with coefficients in $C^\infty({\overline}\Omega,{{\Bbb C}})$, the space of complex-valued smooth functions on ${\overline}\Omega$. In other words, ${{\cal P}}_{{\partial}\Omega}$ is a ring generated by $Z_0,{\overline}Z_0$ and totally characteristic operators on ${\overline}\Omega$ in the sense of [@LM]. We first express ${{\cal M}}$ as a nonlinear operator generated by ${{\cal P}}_{{\partial}\Omega}$. 
\[MAlin\] Let $E=-(\rho N+1)(\rho N-2Z_0-n-1)$[.]{} Then[,]{} $${{\cal M}}(f)=1+Ef+\rho P_0f+Q(P_1f,\dots,P_l f){\quad\hbox{for}\ \ }f\in{{\cal A}}, \label{Mfexp}$$ where $P_0,P_1,\dots,P_l\in{{\cal P}}_{{\partial}\Omega}$[,]{} and $Q$ is a polynomial without constant and linear terms[.]{} Taking the dual frame $\omega^0,\dots,\omega^n$ of $Z_0,\dots,Z_n$, we set $\theta^j=z^0\omega^j$. Then, the conditions (1)–(3) imply $\theta^0=dz^0$, $\theta^n=z^0{\partial}\rho$ and $${\partial}{\overline}{\partial}\rho_\#=\rho\theta^0\wedge{\overline}{\theta^0} +\theta^0\wedge{\overline}{\theta^n}+\theta^n\wedge{\overline}{\theta^0} -\sum_{i=1}^{n-1}\theta^i\wedge{\overline}{\theta^i}+ \gamma\theta^n\wedge{\overline}{\theta^n}. \hskip.4in \label{pprho}$$ Using the coframe $\theta^0,\dots,\theta^n$, we define a Hermitian matrix $A(f)=(A_{i\,{{{\overline}\jmath}}}(f))$ by $${\partial}{\overline}{\partial}\,\left(\rho_\#\,(1+f)\right)=\sum_{i,\, j=0}^n A_{i\,{{{\overline}\jmath}}}(f)\theta^i\wedge{\overline}{\theta^j},$$ so that ${{\cal M}}(f)=\det A(f)/\det A(0)$ holds. Let us compute $A(f)$. First, $${\partial}{\overline}{\partial}\,\left(\rho_\#\,(1+f)\right)=(1+f){\partial}{\overline}{\partial}\,\rho_\#+ {\partial}f\wedge{\overline}{\partial}\rho_\#+ {\partial}\rho_\#\wedge{\overline}{\partial}f +\rho_\#{\partial}{\overline}{\partial}\,f.$$ For the first term on the right-hand side, we use \eqref{pprho}. The second and the third terms are respectively given by $${\partial}f\wedge{\overline}{\partial}\rho_\#=\sum_{j=0}^{n} Z_j f\,\theta^j\wedge(\rho\,{\overline}{\theta^0}+{\overline}{\theta^n})$$ and its complex conjugate. Finally, for the last term, $$\rho_\#{\partial}{\overline}{\partial}f=\rho \sum_{i,j=0}^{n} (Z_i{\overline}{Z_j}+E_{i{{{\overline}\jmath}}})f\, \theta^i\wedge{\overline}{\theta^j}+\rho_\# Nf{\partial}{\overline}{\partial}\rho,$$ where $E_{i\,{{{\overline}\jmath}}}\in{{\cal P}}_{{\partial}\Omega}$ with $E_{0{{{\overline}\jmath}}}=E_{j\,{\overline}0}=0$ for any $j$. 
Therefore, $A(f)$ modulo functions of the form $\rho Pf$, $P\in{\cal P}_{{\partial}\Omega}$, is given by $$\left( \begin{array}{ccc} \rho& 0& 1+P_{0{\overline}n}f\\ 0& -\delta_{i{{{\overline}\jmath}}}(1+f+\rho Nf)& *\\ 1+P_{n{\overline}0}f& * & \gamma+P_{n{\overline}n}f \end{array} \right),$$ where $*$ stands for a function of the form $Pf, P\in{{\cal P}}_{{\partial}\Omega}$, and $$\begin{aligned} P_{0{\overline}n}&=&{\overline}{P_{n{\overline}0}}=1+\rho {\overline}{Z_n}+Z_0+\rho Z_0{\overline}{Z_n},\\ P_{n{\overline}n}&=&\gamma+Z_n+{\overline}{Z_n}+\rho Z_n{\overline}{Z_n}+\gamma\rho N\\ &=&\rho N^2+2N\quad\hbox{mod}\ {{\cal P}}_{{\partial}\Omega}.\end{aligned}$$ Let $B(f)$ denote the matrix obtained from $A(f)$ by dividing the first column by $\rho$ and multiplying the last row by $\rho$. Then $B(f)$ modulo functions of the form $\rho Pf, P\in{\cal P}_{{\partial}\Omega}$, is given by $$\left(\begin{array}{ccc} 1 & 0 & 1+P_{0{\overline}n}f\\ {*} & -\delta_{i{{{\overline}\jmath}}}(1+f+\rho Nf) &*\\ 1+P_{n{\overline}0}f &0 & \gamma\rho+\rho^2N^2f+2\rho Nf \end{array}\right).$$ Noting that $\det A(f)=\det B(f)$, we get $$\begin{aligned} {{\cal M}}(f) &=& 1-\rho^2 N^2f-2\rho Nf+(n-1)(1+\rho N)f\\ & & +\ P_{n{\overline}0}f+P_{0{\overline}n}f+\rho P_0f+Q(P_1f,\dots,P_l f).\end{aligned}$$ Using $Z_0f={\overline}{Z_0}f$, we obtain \eqref{Mfexp}. To construct solutions to \eqref{eqMf} inductively, we introduce a filtration $${{\cal A}}={{\cal A}}_0\supset{{\cal A}}_1\supset{{\cal A}}_2\supset\cdots,$$ where ${\cal A}_s$ denotes the space of all asymptotic series in ${{\cal A}}$ of the form $$\rho^s\sum_{k=0}^\infty \alpha_k (\log\rho_\#)^k {\quad\hbox{with}\ \ }\alpha_k\in C^\infty({\overline}\Omega).$$ This filtration makes ${{\cal A}}$ a filtered ring which is preserved by the action of ${{\cal P}}_{{\partial}\Omega}$. That is, ${{\cal A}}_j{{\cal A}}_k\subset{{\cal A}}_{j+k}$ and $Pf\in{{\cal A}}_j$ for each $(P,f)\in{{\cal P}}_{{\partial}\Omega}\times{{\cal A}}_j$.
Hence \eqref{Mfexp} yields ${{\cal M}}(f+g)={{\cal M}}(f)+{{\cal A}}_s$ for any $g\in{{\cal A}}_s$. In particular, if $f\in{{\cal A}}$ is a solution to the equation $${{\cal M}}(f)=J(\rho)^{-1}{\quad\hbox{mod}\ \ }{{\cal A}}_{s+1}, \leqno{\hbox{\eqref{eqMf}}{}_s}$$ then so is ${\widetilde}f=f+g$ for any $g\in{{\cal A}}_{s+1}$. We shall show that this equation admits a unique solution modulo ${{\cal A}}_{s+1}$ if an initial condition corresponding to is imposed. \[f-solution\] [(i)]{} An asymptotic series $f\in{{\cal A}}$ satisfies \eqref{eqMf}${}_n$ if and only if $f\in{{\cal A}}_{n+1}$[.]{} [(ii)]{} Let $s\ge n+1$[.]{} Then[,]{} for any $a\in C^\infty({\partial}\Omega)$[,]{} the equation \eqref{eqMf}${}_s$ admits a solution $f_s$[,]{} which is unique modulo ${{\cal A}}_{s+1}$ under the condition $$\eta_0=a\rho^{n+1}+O^{n+2}({\partial}\Omega). \label{init-eta0}$$ Since $f\in{{\cal A}}$ satisfies $f=\eta_0$ mod ${{\cal A}}_{n+1}$, it follows that ${{\cal M}}(f)={{\cal M}}(\eta_0)+{{\cal A}}_{n+1}$. Thus, recalling ${{\cal M}}(\eta_0)=J(\rho(1+\eta_0))/J(\rho)$, we see that \eqref{eqMf}${}_n$ is reduced to $$J(\rho(1+\eta_0))=1+O^{n+1}({\partial}\Omega).$$ This is satisfied if and only if $\eta_0=O^{n+1}({\partial}\Omega)$, which is equivalent to $f\in{{\cal A}}_{n+1}$. Thus (i) is proved. To prove (ii), we first consider \eqref{eqMf}${}_{s}$ for $s=n+1$. If $f\in{{\cal A}}_{n+1}$, then ${{\cal M}}(f)=1+Ef+{{\cal A}}_{n+2}$. Thus \eqref{eqMf}${}_{n+1}$ is equivalent to $$Ef=J(\rho)^{-1}-1 {\quad\hbox{mod}\ \ }{{\cal A}}_{n+2}. \label{MAn+1}$$ Writing $f=\rho^{n+1}(\alpha_0+\alpha_1\log\rho_\#)$ mod ${{\cal A}}_{n+2}$, we have $Ef=(n+2)\alpha_1\,\rho^{n+1}+{{\cal A}}_{n+2}$. Hence, \eqref{MAn+1} holds if and only if $(n+2)\alpha_1=(J(\rho)^{-1}-1)\rho^{-n-1}+O({\partial}\Omega)$. Noting that $\alpha_0|_{{\partial}\Omega}$ is determined by \eqref{init-eta0}, we get the unique existence of $f_{n+1}$ modulo ${{\cal A}}_{n+2}$. For $s>n+1$, we construct $f_s$ by induction on $s$. Assume that $f_{s-1}$ exists uniquely modulo ${{\cal A}}_s$.
Then we have ${{\cal M}}(f_{s-1}+g)={{\cal M}}(f_{s-1})+Eg+{{\cal A}}_{s+1}$ for $g\in{{\cal A}}_s$, so that \eqref{eqMf}${}_{s}$ is reduced to $$E[g]_s=[h]_s{\quad\hbox{with}\ \ }h=J(\rho)^{-1}-{{\cal M}}(f_{s-1})\in{{\cal A}}_s, \label{linears}$$ where $[g]_s$ and $[h]_s$ stand for the cosets in ${{\cal A}}_s/{{\cal A}}_{s+1}$. To solve this equation, we introduce a filtration of ${{\cal A}}_s/{{\cal A}}_{s+1}$: $${{\cal A}}_s/{{\cal A}}_{s+1}={{\cal A}}_s^{(l)}\supset {{\cal A}}_s^{(l-1)}\supset\cdots\supset{{\cal A}}_s^{(0)} \supset{{\cal A}}_s^{(-1)}=\{0\},$$ where $l=[s/(n+1)]$ and $${{\cal A}}_s^{(t)}=\Bigl\{[g]_s\in{{\cal A}}_s/{{\cal A}}_{s+1}: g=\sum_{k=0}^t\eta_k(\rho^{n+1}\log\rho_\#)^k\in{{\cal A}}_s\Bigr\}.$$ Clearly, $\rho N{{\cal A}}_s^{(t)}\!\!\subset\!{{\cal A}}_s^{(t)}$ and $Z_0{{\cal A}}_s^{(t)}\!\!\subset\!{{\cal A}}_s^{(t-1)}$. Consequently, if we write $[g]_s\!\in~\!{{\cal A}}_s^{(m)}$ as $[g]_s=[\alpha_m\rho^s(\log\rho_\#)^m]_s+{{\cal A}}_s^{(m-1)}$, then $$E[g]_s=I(s)[\alpha_m\rho^s(\log\rho_\#)^m]_s+{{\cal A}}_s^{(m-1)},$$ where $I(x)=-(x+1)(x-n-1)$. Thus, setting $F=1-I(s)^{-1}E$, we have $F[g]_s\in{\cal A}_s^{(m-1)}$ so that $F{{\cal A}}_s^{(m)}\subset{{\cal A}}_s^{(m-1)}$. In particular, $F^{l+1}=0$ on ${{\cal A}}_s^{(l)}$. Since $E=I(s)(1-F)$, the linear operator $L=I(s)^{-1}\sum_{k=0}^{l} F^k$ satisfies $LE=EL=\hbox{id}$ on ${{\cal A}}_s^{(l)}$. Therefore, \eqref{linears} admits a unique solution $[g]_s$, which gives a unique solution $f_s=f_{s-1}+g$ modulo ${{\cal A}}_{s+1}$ of \eqref{eqMf}${}_s$. The unique solution of \eqref{eqMf} with the condition \eqref{init-eta0} is obtained by taking the limit of $f_s$ as $s\to\infty$. More precisely, we argue as follows. For $a\in C^\infty({\partial}\Omega)$, we take a sequence $\{f_s\}$ in Lemma \[f-solution\], and write $f_s=\sum\eta_k^{(s)}(\rho^{n+1}\log\rho_\#)^k$. Then the uniqueness of $f_s$ mod ${\cal A}_{s+1}$ yields $\eta_k^{(s+1)}=\eta_k^{(s)}$ mod $O^{s-k(n+1)}({\partial}\Omega)$.
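The computations that drive this induction can be checked symbolically. The sketch below is our own illustration, not part of the construction: it treats $\alpha_0,\alpha_1$ as constants, models $N$ by $d/d\rho$ (since $N\rho=1$ and only the normal derivative matters here) and $Z_0=z^0{\partial}/{\partial}z^0$ by $w\,d/dw$ with $w=|z^0|^2$, so that $\log\rho_\#=\log(w\rho)$; it also tests the finite Neumann sum inverting $E=I(s)(1-F)$ on a nilpotent matrix model of $F$.

```python
import sympy as sp
import numpy as np

rho, w, s, n = sp.symbols('rho w s n', positive=True)
a0, a1 = sp.symbols('alpha0 alpha1')       # treated as constants (a simplification)

log_rho_sharp = sp.log(w*rho)              # log rho_# = log(|z^0|^2 rho)
N  = lambda f: sp.diff(f, rho)             # N, normalized by N(rho) = 1
Z0 = lambda f: w*sp.diff(f, w)             # Z_0 = z^0 d/dz^0, acting through w

def E(f):
    # E = -(rho N + 1)(rho N - 2 Z_0 - n - 1), as in the lemma above
    g = rho*N(f) - 2*Z0(f) - (n + 1)*f
    return -(rho*N(g) + g)

# Indicial relation: E rho^s = I(s) rho^s with I(x) = -(x+1)(x-n-1)
I = lambda x: -(x + 1)*(x - n - 1)
assert sp.simplify(E(rho**s) - I(s)*rho**s) == 0

# Degenerate case s = n+1 (where I(n+1) = 0): E kills alpha_0 rho^{n+1}
# and picks out the log coefficient, E f = (n+2) alpha_1 rho^{n+1}.
f = rho**(n + 1)*(a0 + a1*log_rho_sharp)
assert sp.simplify(E(f) - (n + 2)*a1*rho**(n + 1)) == 0

# For s > n+1: F lowers the filtration degree by one, hence is nilpotent,
# and the finite Neumann sum L = I(s)^{-1} sum_k F^k inverts E = I(s)(1 - F).
l, Is = 3, 7.0                             # sample values; any l >= 1 and Is != 0
F = np.diag(np.ones(l), k=-1)              # strictly lower triangular model of F
Em = Is*(np.eye(l + 1) - F)
Lm = sum(np.linalg.matrix_power(F, k) for k in range(l + 1))/Is
assert np.allclose(Lm @ Em, np.eye(l + 1))
```

The matrix $F$ here is only a stand-in for the filtration-lowering operator; the last assertion mirrors the identity $LE=EL={\rm id}$ on ${{\cal A}}_s^{(l)}$.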
This implies the existence of $\eta_k\in C^\infty({\overline}\Omega)$ such that $$\eta_k=\eta_k^{(s)} \hbox{ mod } O^{s-k(n+1)}({\partial}\Omega)$$ for any $s$. Therefore, the formal series $f=\sum_{k=0}^\infty\eta_k(\rho^{n+1}\log\rho_\#)^k$ satisfies ${{\cal M}}(f)=J(\rho)^{-1}$ and \eqref{init-eta0}. The uniqueness follows from that for each \eqref{eqMf}${}_{s}$. We have constructed a solution $f\in {{\cal A}}$ of \eqref{eqMf} and hence obtained a formal series $$\label{Utemp} U=\rho_\#(1+f)=\rho_\#+\rho_\#\sum_{k=0}^\infty \eta_k (\rho^{n+1}\log\rho_\#)^k,$$ which solves to infinite order along ${{\Bbb C}}^*\times{\partial}\Omega$. In general, the series is not in the form because $\eta_0$ may not vanish. We next construct a unique defining function $r$ such that $U$ is written in the form . In the following, we write $f=g$ mod $O^\infty({\partial}\Omega)$ if $f-g$ vanishes to infinite order along ${\partial}\Omega$. \[chose-r\] Let $f\in{{\cal A}}_{n+1}$[.]{} Then there exists a unique defining function $r$ [mod]{} $O^\infty({\partial}\Omega)$ such that $U=\rho_\#(1+f)$ is written in the form [. ]{} Starting from $r_1=\rho$, we define a sequence of defining functions $r_s$, $s=1,2,\dots$, by setting $r_{s+1}=r_s(1+\eta_{s,0})$, where $\eta_{s,0}$ is the coefficient in the expansion $U=r_{s\#}+r_{s\#}\sum_{k=0}^\infty\eta_{s,k}(r_s^{n+1}\log r_{s\#})^k$. It then follows from $\log(r_{s+1\#})=\log(r_{s\#})+O^{s(n+1)}({\partial}\Omega)$ that $\eta_{s,0}=O^{s(n+1)}({\partial}\Omega)$, so that $r_{s+1}=r_s+O^{s(n+1)+1}({\partial}\Omega)$. We can then construct a defining function $r$ such that $r=r_s$ mod $O^{s(n+1)+1}({\partial}\Omega)$ for any $s$. With this $r$, the series $U$ is written as $U=r_{\#}+r_{\#}\sum_{k=1}^\infty\eta_k( r^{n+1}\log r_{\#})^k.$ Let us next prove the uniqueness of $r$. We take another defining function ${\widetilde}r$ with the required property and write $U={\widetilde}r_{\#}\sum_{k=0}^\infty{\widetilde}\eta_k({\widetilde}r^{n+1}\log{\widetilde}r_{\#})^k$.
Setting $\phi= r/{\widetilde}r\in C^\infty({\overline}\Omega)$, we then have ${\widetilde}\eta_0=\phi(1+\sum_{k=1}^\infty\eta_k(r^{n+1}\log\phi)^k)$. Since ${\widetilde}\eta_0=1$, $$\frac{1}{\phi}=1+\sum_{k=1}^\infty\eta_k(r^{n+1}\log\phi)^k. \label{phi-relation}$$ This implies that if $\phi=1+O^m({\partial}\Omega)$ then $\phi=1+O^{m+n+1}({\partial}\Omega)$. Therefore, $\phi=1$ mod $O^\infty({\partial}\Omega)$; that is, ${\widetilde}r=r$ mod $O^\infty({\partial}\Omega)$. We next examine the relation between the conditions and . Writing $U=\rho_\#(1+f)$ in the form , we have $$r=\rho+\eta_0\,\rho+O^{2(n+1)}({\partial}\Omega) =\rho+\rho^{n+2}a+O^{n+3}({\partial}\Omega).$$ Applying $X^{n+2}$, we get $$X^{n+2}r=X^{n+2}\rho+(n+2)!\,(X\rho)^{n+2}a+O({\partial}\Omega).$$ Since $X$ is transversal to ${\partial}\Omega$, that is, $X\rho|_{{\partial}\Omega}\ne 0$, it follows that specifying $X^{n+2}r|_{{\partial}\Omega}$ is equivalent to specifying $a$ in \eqref{init-eta0} when $\rho$ is prescribed. Therefore, $f$ and thus $U=\rho_\#(1+f)$ are uniquely determined by the condition . This completes the proof of the first statement of Proposition 1. It remains to prove $r\in{{\cal F}}^{{{\rm F}}}_{{\partial}\Omega}$; that is, $J(r)=1+O^{n+1}({\partial}\Omega)$. If we write $U=r_\#(1+f)$ then ${{\cal M}}(f)=J(r)^{-1}$, where ${{\cal M}}$ is defined with respect to $\rho=r$. On the other hand, we have by Lemma \[f-solution\], (i) that $f\in{{\cal A}}_{n+1}$ and thus ${{\cal M}}(f)={{\cal M}}(0)=1$ mod ${{\cal A}}_{n+1}$. Therefore, $J(r)^{-1}=1$ mod ${{\cal A}}_{n+1}$; that is, $J(r)=1+O^{n+1}({\partial}\Omega)$.

Reformulation of the main theorems
==================================

We first recall the definition and basic properties of Moser’s normal form [@CM].
A real-analytic hypersurface $M\subset{{\Bbb C}}^n$ is said to be in [*normal form*]{} if it admits a defining function of the form $$\rho = 2u - |z'|^2-\sum _{|\alpha|, |\beta|\ge 2,\, l\ge 0} A_{\alpha{\overline}\beta}^l\, {z'}^\alpha {{\overline}z'}^\beta v^l, \label{mnf}$$ where the coefficients $(A_{\alpha{\overline}\beta}^l)$ satisfy the following three conditions:

(N1) For each $p,\, q\ge 2$ and $l\ge0$, ${{\bf A}}_{p{\overline}q}^l=(A_{\alpha{\overline}\beta}^l)_{|\alpha|=p,\, |\beta|=q}$ is a bisymmetric tensor of type $(p, q)$ on ${{\Bbb C}}^{n-1}$;

(N2) $A_{\alpha{\overline}\beta}^l={\overline}{A_{\beta{\overline}\alpha}^l}$;

(N3) $ {{\rm tr}}{{\bf A}}_{2{\overline}2}^l=0,\ {{\rm tr}}^2{{\bf A}}_{2{\overline}3}^l=0,\ {{\rm tr}}^3{{\bf A}}_{3{\overline}3}^l=0; $ here ${{\rm tr}}$ denotes the usual tensorial trace with respect to $\delta^{i\,{{{\overline}\jmath}}}$.

We denote this surface by $N(A)$ with $A=(A_{\alpha{\overline}\beta}^l)$. In particular, $N(0)$ is the hyperquadric $2\,u=|z'|^2$. For any real-analytic strictly pseudoconvex surface $M$ and $p\in M$, there exist holomorphic local coordinates near $p$ such that $M$ is in normal form and $p=0$. Moreover, if $M$ is tangent to the hyperquadric to the second order at $0$, a local coordinate change $S(z)=w$ for which $S(M)$ is in normal form is unique under the normalization $$S(0)=0,\quad S'(0)=\hbox{id},\quad {{\rm Im}}\frac{{\partial}^2 w^n}{{\partial}(z^n)^2}(0)=0. \label{normalization}$$ Even if $M$ is not real-analytic but merely $C^\infty$, there exists a formal change of coordinates such that $M$ is given by a formal surface $N(A)$, a surface which is defined by a formal power series of the form \eqref{mnf}. In this case, \eqref{normalization} uniquely determines $S$ as a formal power series. We sometimes identify a formal surface $N(A)$ in normal form with the collection of coefficients $A=(A_{\alpha{\overline}\beta}^l)$ and denote by $\cal N$ the real vector space of all $A$ satisfying (N1–3).
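For orientation, here is the simplest instance, a standard consequence of (N1–3) in Chern–Moser theory [@CM] which we record as an aside and do not use below. For $n=2$ one has $z'\in{{\Bbb C}}$, every bisymmetric tensor on ${{\Bbb C}}^{1}$ is a scalar, and the traces in (N3) reduce to the identity, so for every $l$ $${{\bf A}}_{2{\overline}2}^l=0,\qquad {{\bf A}}_{2{\overline}3}^l=0,\qquad {{\bf A}}_{3{\overline}3}^l=0,$$ and ${{\bf A}}_{3{\overline}2}^l=0$ as well by (N2). Hence a normal form in ${{\Bbb C}}^2$ begins $$\rho=2u-|z'|^2-\bigl(A_{2{\overline}4}^0\,{z'}^2{{\overline}{z'}}^4+A_{4{\overline}2}^0\,{z'}^4{{\overline}{z'}}^2\bigr)-\cdots,$$ the lowest coefficients that can survive being $A_{2{\overline}4}^0={\overline}{A_{4{\overline}2}^0}$.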
The conditions (N1–3) do not determine uniquely the normal form of a surface: two different surfaces in normal form may be (formally) biholomorphically equivalent. The equivalence classes of normal forms can be written as orbits in ${{\cal N}}$ of an action of the group of all fractional linear transformations which preserve the hyperquadric and the origin. To describe this action, let us first delineate the group explicitly. In projective coordinates $(\zeta^0,\dots,\zeta^n)\in{{\Bbb C}}^{n+1}$ defined by $z^j=\zeta^j/\zeta^0$, the hyperquadric is given by $\zeta^0{{\overline}\zeta^n}+\zeta^n{{\overline}\zeta^0}-|\zeta'|^2=0$. Let $g_0$ denote the matrix $$g_0= \left( \begin{array}{ccc} 0& 0 &1\\ 0&-I_{n-1}&0\\ 1& 0 &0 \end{array} \right). \label{g0matrix}$$ Then, the hyperquadric is preserved by the induced linear fractional transformation $\phi_h$ of ${{\Bbb C}}^n$ for $h\in{{\rm SU}}(g_0)$. Clearly, $\phi_h$ leaves the origin $0\in{{\Bbb C}}^n$ fixed if and only if $h$ is in the subgroup $$H=\{ h\in {{\rm SU}}(g_0):\ h\,e_0=\lambda \,e_0,\ \lambda\in{{\Bbb C}}^*\},$$ where $e_0$ is the column vector ${}^t(1,0,\dots,0)$. For $(h,A)\in H\times{{\cal N}}$, the surface $\phi_h(N(A))$ always fits the hyperquadric to the second order. Hence there is a unique transformation $S$ normalized by such that $S$ sends $\phi_h(N(A))$ to a surface $N({\widetilde}A)= S\circ\phi_h(N(A))$ in normal form. We set $\Phi_{(h,A)}=S\circ\phi_h$ and $h.A={\widetilde}A$. Then the uniqueness of $S$ implies $$\Phi_{({\widetilde}h h,A)}=\Phi_{({\widetilde}h,h.A)}\circ\Phi_{(h,A)} {\quad\hbox{for any}\ \ }h,{\widetilde}h\in H. \label{transPhi}$$ Thus $H\times{{\cal N}}\ni(h,A)\mapsto h.A\in{{\cal N}}$ defines a left action of $H$ on ${{\cal N}}$. Moreover, any biholomorphic map $\Phi$ such that $\Phi(0)=0$ and $\Phi(N(A))=N({\widetilde}A)$ is written as $\Phi=\Phi_{(h,A)}$ for some $h$ satisfying $h.A={\widetilde}A$. 
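To make the group concrete, the following numerical sanity check (our own illustration, with $n=3$ and the particular elements $h={\rm diag}(\lambda,U,1/{\overline}\lambda)$, $U$ unitary, which satisfy $h^*g_0h=g_0$ and $h\,e_0=\lambda\,e_0$; to land in ${{\rm SU}}(g_0)$ one would additionally normalize $\det h=1$) verifies that the induced fractional linear map $\phi_h$ preserves the hyperquadric and the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # z = (z^1, z^2, z^3), so z' lives in C^2

# g_0 as in the text: 1's in the corners, -I_{n-1} in the middle block
g0 = np.zeros((n + 1, n + 1), dtype=complex)
g0[0, n] = g0[n, 0] = 1.0
g0[1:n, 1:n] = -np.eye(n - 1)

# A sample element h = diag(lam, U, 1/conj(lam)) with U unitary
lam = 2.0 + 1.0j
M = rng.normal(size=(n - 1, n - 1)) + 1j*rng.normal(size=(n - 1, n - 1))
U, _ = np.linalg.qr(M)                       # unitary factor
h = np.zeros((n + 1, n + 1), dtype=complex)
h[0, 0], h[1:n, 1:n], h[n, n] = lam, U, 1.0/np.conj(lam)

# h preserves the Hermitian form g_0 ...
assert np.allclose(h.conj().T @ g0 @ h, g0)

# ... hence phi_h preserves the hyperquadric 2u = |z'|^2 (the null set of g_0)
def quadric_defect(z):                       # 2 Re z^n - |z'|^2
    zeta = np.concatenate(([1.0], z))        # projective lift with zeta^0 = 1
    return np.real(zeta.conj() @ g0 @ zeta)

def phi_h(z):
    zeta = h @ np.concatenate(([1.0], z))
    return zeta[1:]/zeta[0]

zp = rng.normal(size=n - 1) + 1j*rng.normal(size=n - 1)
v = rng.normal()
z = np.concatenate((zp, [0.5*np.vdot(zp, zp).real + 1j*v]))   # on the quadric
assert abs(quadric_defect(z)) < 1e-12
assert abs(quadric_defect(phi_h(z))) < 1e-10
assert np.allclose(phi_h(np.zeros(n, dtype=complex)), 0)      # fixes the origin
```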
Therefore, the orbits of this $H$-action are the equivalence classes of the normal form. We now rewrite the definition of CR invariants in terms of this $H$-action. An $H$-[*invariant of ${{\cal N}}$ of weight $w$*]{} is a polynomial $I(A)$ in the components of $A=(A_{\alpha{\overline}\beta}^l)$ satisfying $$h.I=\sigma_{w,w}(h)I\quad\hbox{for}\ h\in H, \label{invarianceP}$$ where $(h.I)(A)=I(h^{-1}.A)$ and $\sigma_{p,q}$ is the character of $H$ given by $$\sigma_{p,q}(h)=\lambda^{-p}{\overline}\lambda{}^{-q} \quad\hbox{when}\quad h\,e_0=\lambda\,e_0.$$ We denote by $I^w({{\cal N}})$ the vector space of all invariants of ${{\cal N}}$ of weight $w$. Let us observe that the transformation law of CR invariants is equivalent to \eqref{invarianceP}, and hence $I(A)$ is an $H$-invariant if and only if it is a CR invariant. If $\Phi(N(A))=N({\widetilde}A)$ and $\Phi(0)=0$, then there exists an $h\in H$ such that ${\widetilde}A=h.A$ and $\Phi=\Phi_{(h,A)}$. Since $\Phi'(0)=\phi_h'(0)$ and $\det\phi_h'(0)=\lambda^{-n-1}$, $$|\det\Phi'(0)|^{{-2w}/(n+1)}=|\det\phi'_h(0)|^{-2w/(n+1)}=\sigma_{-w,-w}(h). \label{det-sigma}$$ Thus the transformation law is written as $I(h.A)=\sigma_{-w,-w}(h)I(A)$, which is equivalent to \eqref{invarianceP}. \[H-action-to-r\] In the previous subsection, we expressed the transformation law of CR invariants in terms of the $H$-action on ${{\cal N}}$. Proceeding similarly, let us express the transformation law of defining functions by using the group $H$. We consider asymptotic solutions to for a surface $N(A)$ in normal form, where $r$ and $\eta_k$ are regarded as real formal power series about $0\in{{\Bbb C}}^n$. Let ${{\cal F}}_{N(A)}$ denote the totality of the smooth parts of asymptotic solutions for $N(A)$.
Then, applying the argument in the proof of Proposition 1, we can find for any real formal power series $$\label{h-c} h_C(z',{\overline}z',v)=\sum_{|\alpha|,|\beta|,l\ge0}C_{\alpha{\overline}\beta}^l z'{}^\alpha{\overline}z'{}^\beta v^l,$$ a unique formal power series $r\in{{\cal F}}_{N(A)}$ such that $$\label{init} {\partial}^{n+2}_\rho r|_{\rho=0}=h_C,$$ where ${\partial}_\rho$ is the differentiation with respect to $\rho$ in the (formal) coordinates $(z', {\overline}z',v,\rho)$ given by . Thus, denoting by ${{\cal C}}$ the totality of the collections of coefficients $C=(C_{\alpha{\overline}\beta}^l)$ of \eqref{h-c}, we can define a map $$\iota_1\colon{{\cal N}}\times{{\cal C}}\to{{\cal F}}=\bigcup_{A\in{{\cal N}}}{{\cal F}}_{N(A)}$$ which assigns to each $(A,C)\in{{\cal N}}\times{{\cal C}}$ the element $r\in{{\cal F}}_{N(A)}$ satisfying \eqref{init}. For $h\in H$ and $r\in{{\cal F}}_{N(A)}$, we define $h.r={\widetilde}r\in{{\cal F}}_{N(h.A)}$ by $${\widetilde}r\circ\Phi_{(h,A)}=|\det\Phi_{(h,A)}'|^{2/(n+1)}r.$$ Then the map $H\times{{\cal F}}\ni(h,r)\mapsto h.r\in{{\cal F}}$ gives an $H$-action on ${{\cal F}}$ in virtue of \eqref{transPhi}, and it induces an $H$-action on ${{\cal N}}\times{{\cal C}}$ via the bijection $\iota_1$. With respect to this $H$-action, we can characterize invariants of the pair $(M,r)$ as $H$-invariant polynomials of ${{\cal N}}\times{{\cal C}}$ defined as follows. An [*$H$-invariant of ${{\cal N}}\times{{\cal C}}$ of weight $w$*]{} is a polynomial $I(A,C)$ in the variables $A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l$ satisfying \eqref{invarianceP} with $$(h.I)(A,C)= I(h^{-1}.(A,C)).$$ The [*totality*]{} of such polynomials is denoted by $I^w({{\cal N}}\times{{\cal C}})$. Observe that the projection ${{\cal N}}\times{{\cal C}}\to{{\cal N}}$ is $H$-equivariant. Hence an $H$-invariant of ${{\cal N}}$ is regarded as an $H$-invariant of ${{\cal N}}\times{{\cal C}}$, and $I^w({{\cal N}})$ is identified with a subspace of $I^w({{\cal N}}\times{{\cal C}})$.
We next embed the $H$-space ${{\cal F}}$, and also ${{\cal N}}\times{{\cal C}}$, into a tensor $H$-module by using the curvatures of the ambient metrics $g[r]$. Recall that, for $r\in{{\cal F}}$, the ambient metric is defined by the Kähler potential $r_\#$, and $r_\#$ is a formal power series in $z^0,z$ (and ${\overline}z^0,{\overline}z$) about $e_0=(1,0)\in{{\Bbb C}}^*\times{{\Bbb C}}^n$. Since $r_\#$ is homogeneous in $z^0$, it follows that the ambient metric, the curvature tensor $R_{j\,{\overline}k\,l\,{\overline}m}$ and the covariant derivatives $R_{j\,{\overline}k\,l\,{\overline}m,\gamma_1\cdots\gamma_p}$ are defined at each point $\lambda\,e_0\in{{\Bbb C}}^*\times{{\Bbb C}}^n$, $\lambda\in{{\Bbb C}}^*$. For simplicity of the notation, we write $R^{(p,q)}=(R_{\alpha{\overline}\beta})_{|\alpha|=p,|\beta|=q}$, where $$R_{\alpha{\overline}\beta}= \left\{ \begin{array}{cl} R_{\alpha_1{\overline}\beta_1\alpha_2{\overline}\beta_2, \alpha_3\cdots\alpha_p{\overline}\beta_3\cdots{\overline}\beta_q} & \hbox{if}\ |\alpha|\ge2\hbox{ and }|\beta|\ge2;\\ 0 & \hbox{otherwise.} \end{array} \right.$$ Here, components of tensors are written with respect to the coordinates $\zeta=(\zeta^0,\zeta^1,\dots,\zeta^n)\in{{\Bbb C}}^{n+1}$ given by $$\zeta^0=z^0,\ \zeta^1=z^0 z^1,\ \dots,\ \zeta^n=z^0 z^n.$$ Then we have, at the point $e_0$, $$\label{R-reduction0} \begin{array}{rl} R_{\alpha'0\alpha''{\overline}\beta} &=(1-|\alpha'\alpha''|)R_{\alpha'\alpha''{\overline}\beta}, \\ R_{\alpha\,{\overline}{\beta'}\,{\overline}0\,{\overline}{\beta''}} &=(1-|\beta'\beta''|) R_{\alpha\,{\overline}{\beta'}\,{\overline}{\beta''}}, \end{array}$$ for all lists $\alpha,\alpha',\alpha'',\beta,\beta',\beta''$ of indices in $\{0,1,\dots,n\}$ with $|\alpha'|,|\beta'|>1$. This fact is a consequence of the homogeneity of the Kähler potential $r_\#(\zeta,{\overline}\zeta)$ in $\zeta$ and ${\overline}\zeta$; see the tensor restriction lemma in [@F3].
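The mechanism behind \eqref{R-reduction0} is simply Euler's identity for homogeneous functions; the following scalar model (our paraphrase — the precise statement for tensors is the restriction lemma cited above) shows the pattern. If $f(\zeta,{\overline}\zeta)$ is homogeneous of degree $(s,s)$, then ${\partial}_\alpha{\overline}{\partial}_\beta f$ is homogeneous of degree $s-|\alpha|$ in $\zeta$, so $$\sum_{j=0}^{n}\zeta^j{\partial}_j\bigl({\partial}_\alpha{\overline}{\partial}_\beta f\bigr)=(s-|\alpha|)\,{\partial}_\alpha{\overline}{\partial}_\beta f.$$ Evaluating at $e_0=(1,0,\dots,0)$, only the $j=0$ term survives, and with $T_{\alpha{\overline}\beta}={\partial}_\alpha{\overline}{\partial}_\beta f(e_0)$ this reads $$T_{0\alpha{\overline}\beta}=(s-|\alpha|)\,T_{\alpha{\overline}\beta},$$ which is the shape of the first identity in \eqref{R-reduction0}; the second is its complex conjugate analogue.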
We write down the transformation law of $R_{\alpha{\overline}\beta}$ which comes from the $H$-action on ${{\cal F}}$. Let $W={{\Bbb C}}^{n+1}$ denote the defining representation of ${{\rm SU}}(g_0)$, hence also of $H$, by left multiplication on the column vectors. We define $H$-modules $${{\bf T}}_s^{p,q}=(\otimes^{p,q} W^*)\otimes\sigma_{s-p,s-q}, \quad\hbox{for }p,q,s\in{\Bbb Z}\hbox{ with } p,q\ge0,$$ where $\mathbold{\otimes}^{p,q} W^*=(\mathbold{\otimes}^p W^*)\otimes(\mathbold{\otimes}^q{\overline}{W^*})$. We denote by ${{\bf T}}_s$ the $H$-submodule of $\prod_{p,q\ge 0}{{\bf T}}_s^{p,q}$ consisting of all $T=(T_{\alpha{\overline}\beta})_{|\alpha|,|\beta|\ge0}$ satisfying $$\begin{array}{rl} T_{\alpha'0\alpha''{\overline}\beta} &=(s-|\alpha'\alpha''|)T_{\alpha'\alpha''{\overline}\beta}, \\ T_{\alpha\,{\overline}{\beta'}\,{\overline}0\,{\overline}{\beta''}} &=(s-|\beta'\beta''|) T_{\alpha\,{\overline}{\beta'}\,{\overline}{\beta''}}, \end{array} \eqno{\hbox{\eqref{R-reduction0}}{}_s}$$ where $\alpha,\alpha',\alpha'',\beta,\beta',\beta''$ are lists of indices with $|\alpha'|,|\beta'|> s$. Then, \eqref{R-reduction0} permits us to define a map $\iota_2\colon{{{\cal F}}}\to{{\bf T}}_1$ by setting $\iota_2(r)=(R_{\alpha{\overline}\beta}|_{e_0})_{|\alpha|,|\beta|\ge0}$, where $R_{\alpha{\overline}\beta}$ are the components of the covariant derivatives of the curvature of $g[r]$. The map $\iota_2$ is $H$[-]{}equivariant[.]{} In particular[,]{} the image ${{\cal R}}=\iota_2({{\cal F}})$ is an $H$[-]{}invariant subset of ${{\bf T}}_1$[. ]{} For $r\in{{\cal F}}_{N(A)}$ and $h\in H$, set ${\widetilde}r=h.r$. Then $g[{\widetilde}r]=F_*g[r]$, where $F=(\Phi_{(h,A)})_\#$, so that $${\widetilde}R^{(p,q)}=F_*R^{(p,q)}\quad\hbox{for any $p,q\ge0$},$$ where $R^{(p,q)}$ and ${\widetilde}R^{(p,q)}$ are curvatures of $g[r]$ and $g[{\widetilde}r]$, respectively.
Evaluating this formula at $e_0$, we have $$\iota_2(h.r)=\left((F_*R^{(p,q)})|_{e_0}\right)_{p,q\ge0}.$$ Note that the right-hand side is independent of the choice of the lift $F$. We shall fix $F$ as in the next lemma and express $(F_*R^{(p,q)})|_{e_0}$ in terms of $R^{(p,q)}|_{e_0}$ and $h$. \[jacobim\] There exists a lift $F$ of $\Phi_{(h,A)}$ satisfying $F(e_0)=\lambda\,e_0$[,]{} where $\lambda=\sigma_{-1,0}(h)$[.]{} For such a lift $F$[,]{} the Jacobian matrix $F'$ at $\lambda^{-1} e_0$ with respect to $\zeta$ is $h$[.]{} We first note that the linear map $\zeta\mapsto h\,\zeta$ gives a lift of $\phi_h$. This lift satisfies $(\phi_h)_\#(e_0)=\lambda\,e_0$ and $(\phi_h)_\#'(\nu\,e_0)=h$ for any $\nu\in{{\Bbb C}}^*$. On the other hand, for a map $S$ normalized by , we can define $S_\#$ such that $S_\#(e_0)=e_0$ and $S_\#'(e_0)={{\rm id}}$; see Lemma N1 of [@F3]. Hence the composition $F=S_\#\circ(\phi_h)_\#$ gives a lift of $\Phi_{(h,A)}=S\circ\phi_h$ satisfying $F(e_0)=\lambda\,e_0$. The Jacobian matrix of $F$ at $\lambda^{-1}e_0$ is given by $F'(\lambda^{-1} e_0)=S_\#'(e_0)\cdot(\phi_h)_\#'(\lambda^{-1}e_0)=h$. By this lemma, we see that $(F_*R^{(p,q)})|_{e_0}$ is given by the usual tensorial action of $h$ on $R^{(p,q)}|_{\lambda^{-1} e_0}\in\mathbold{\otimes}^{p,q} W^*$. To compare $R^{(p,q)}|_{\lambda^{-1} e_0}$ with $R^{(p,q)}|_{e_0}$, we next consider the scaling map $\phi(\zeta)=\lambda \zeta$. Then, from the homogeneity of $g$, we have $\phi_*g=|\lambda|^{-2}g$, so that $\phi_*R^{(p,q)}=|\lambda|^{-2}R^{(p,q)}$. Thus we get $R^{(p,q)}|_{\lambda^{-1}e_0}= \lambda^{p-1}{{\overline}\lambda}{}^{q-1}R^{(p,q)}|_{e_0}$. Summing up, we obtain $(F_*R^{(p,q)})|_{e_0}=h.(R^{(p,q)}|_{e_0})\in{{\bf T}}_1^{p,q}$. We have defined the following $H$-equivariant maps: $${{\cal N}}\times{{\cal C}}\stackrel{\iota_1}{\longrightarrow}{{\cal F}}\stackrel{\iota_2}{\longrightarrow}{{\bf T}}_1.$$ We set $\iota=\iota_2\circ\iota_1$. 
Inspecting the construction of $\iota_1$ and $\iota_2$, we see that $\iota$ is a polynomial map in the sense that each component of $\iota(A,C)=(T_{\alpha{\overline}\beta}(A,C))$ is a polynomial in the variables $(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l)$. We now define $H$-invariants of ${{\cal R}}=\iota({{\cal N}}\times{{\cal C}})\subset{{\bf T}}_1$. An $H$-[*invariant of ${{\cal R}}$ of weight $w$*]{} is a polynomial $I(T)$ in the components of $(T_{\alpha{\overline}\beta})\in{{\bf T}}_1$ satisfying $$I(h^{-1}.T)=\sigma_{w,w}(h)I(T) \quad\hbox{for any} \ (h,T)\in H\times{{\cal R}}.$$ Identifying two $H$-invariants which agree on ${{\cal R}}$, we denote by $I^w({{{\cal R}}})$ the totality of the equivalence classes of all $H$-invariants of ${{\cal R}}$ of weight $w$. This definition implies that $\iota$ induces an injection $$\label{iota-star} \iota^*\colon I^w({{\cal R}})\ni I(T)\mapsto I(\iota(A,C))\in I^w({{\cal N}}\times{{\cal C}}).$$ This map is also surjective by the following theorem. There exists a polynomial map $\tau\colon{{\bf T}}_1\to{{\cal N}}\times{{\cal C}}$ such that $\tau\circ\iota={{\rm id}}$[. ]{} In particular[,]{} $\iota$ is injective and thus the map \eqref{iota-star} is an isomorphism[.]{} On the tensor space ${{\bf T}}_1$, we can construct $H$-invariants by making complete contractions of the form $${{\rm contr}}\left(T^{(p_1,q_1)}\otimes\cdots\otimes T^{(p_d,q_d)}\right),$$ where $T^{(p,q)}\in{{\bf T}}^{p,q}_1$ and the contraction is taken with respect to the metric $g_0$. We call such $H$-invariants [*elementary invariants*]{}. The next theorem asserts that $I^w({{\cal R}})$ is spanned by the elementary invariants of weight $w$. Every $H$[-]{}invariant of ${{\cal R}}$ coincides on ${{\cal R}}$ with a linear combination of elementary invariants of ${{\bf T}}_1$[. ]{} Once we know Theorems 4 and 5, we can easily prove the main theorems stated in Section 1. We here consider only the log term $\psi[r]$.
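As an illustration of elementary invariants (our own weight count, using only the definition of ${{\bf T}}_1^{p,q}$): since ${{\bf T}}_1^{p,q}=(\otimes^{p,q}W^*)\otimes\sigma_{1-p,1-q}$ and contraction with respect to $g_0$ is ${{\rm SU}}(g_0)$-invariant, a complete contraction of $T^{(p_1,q_1)}\otimes\cdots\otimes T^{(p_d,q_d)}$ transforms by the character $\sigma_{d-P,d-Q}$ with $P=\sum p_j$, $Q=\sum q_j$, and hence is an invariant of weight $P-d$ whenever $P=Q$. The simplest instance is $$\|R\|^2={{\rm contr}}\bigl(R^{(2,2)}\otimes R^{(2,2)}\bigr),$$ of weight $P-d=4-2=2$.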
The expansion of $\varphi[r]$ is obtained exactly in the same manner if we note that $\varphi[r]$ is defined only up to $O^{n+1}({\partial}\Omega)$ and hence one should keep track of the ambiguity in each step of the proof; see [@F3]. We prove by induction on $m$ that there exist Weyl functionals $W_k$ of weight $k$ such that $$\psi[r]=\sum_{k=0}^{m-1} W_{k+n+1}[r]\, r^k+ J_m[r]\, r^m {\quad\hbox{for}\ \ } J_{m}[r]\in C^\infty({\overline}\Omega). \leqno{(3.12)_m}$$ We interpret $(3.12)_0$ as $\psi[r]\in C^\infty({\overline}\Omega)$. Assuming $(3.12)_m$, we seek a Weyl functional $W_{m+n+1}$ such that $J_m[r]=W_{m+n+1}[r]$ on ${\partial}\Omega$ for any $\Omega$ and $r\in{{\cal F}}_{{\partial}\Omega}$. Then, $(3.12)_{m+1}$ is obtained by setting $J_{m+1}[r]=(J_m[r]-W_{m+n+1}[r])/r$. We first study the transformation law of $J_m$ under a biholomorphic map $\Phi\colon\Omega\to{\widetilde}\Omega$. Let ${\widetilde}K$ be the Bergman kernel of ${\widetilde}\Omega$ and $\psi[{\widetilde}r]$ its log term. It then follows from ${\widetilde}K\circ\Phi=|\det\Phi'|^{-2}K$ that $$\label{trans-psi} \psi[{\widetilde}r]\circ\Phi=|\det\Phi'|^{-2}\psi[r]{\quad\hbox{mod}\ \ } O^\infty({\partial}\Omega).$$ Choosing ${\widetilde}r$ as in Proposition 2, we obtain $$\label{trans-J} J_m[{\widetilde}r]\circ\Phi=|\det\Phi'|^{-2({m+n+1})/({n+1})}J_m[r] \quad\hbox{mod }O^\infty({\partial}\Omega). \hskip.5in$$ That is, $J_m$ satisfies a transformation law of weight $m+n+1$. If ${\partial}\Omega$ and ${\partial}{\widetilde}\Omega$ are locally written in normal form $N(h^{-1}.A)$ and $N(A)$, respectively, then the restriction of \eqref{trans-J} to $z=0$ gives $$J_m[{\widetilde}r](0)=\sigma_{m+n+1,m+n+1}(h)J_m[r](0).$$ From this formula, we can conclude $J_m[r](0)\in I^{m+n+1}({{\cal N}}\times{{\cal C}})$ if we know that $J_m[r](0)$ is a polynomial in the components of $(A,C)\in{{\cal N}}\times{{\cal C}}$.
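The exponent in \eqref{trans-J} can be checked directly (a verification we include for the reader; it uses $(3.12)_m$ for both $r$ and ${\widetilde}r$, the weight-$(k+n+1)$ transformation law of the Weyl functionals $W_{k+n+1}$, and ${\widetilde}r\circ\Phi=|\det\Phi'|^{2/(n+1)}r$). Each Weyl term satisfies $$\bigl(W_{k+n+1}[{\widetilde}r]\,{\widetilde}r^{\,k}\bigr)\circ\Phi =|\det\Phi'|^{-\frac{2(k+n+1)}{n+1}+\frac{2k}{n+1}}\,W_{k+n+1}[r]\,r^k =|\det\Phi'|^{-2}\,W_{k+n+1}[r]\,r^k,$$ so comparing the remainders in \eqref{trans-psi} gives $$J_m[{\widetilde}r]\circ\Phi\cdot|\det\Phi'|^{\frac{2m}{n+1}}\,r^m =|\det\Phi'|^{-2}J_m[r]\,r^m,$$ and the arithmetic $2+2m/(n+1)=2(m+n+1)/(n+1)$ yields \eqref{trans-J}.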
Now we have: Assume that ${\partial}\Omega$ is locally written in normal form $N(A)$[,]{} $A\in{{\cal N}}$ and that $r=\iota_1(A,C)$[.]{} Then $J_m[r](0)$ is a universal polynomial in the components of $(A,C)$[.]{} Inspecting the proof of the existence of asymptotic expansion in [@F1] or [@BS], we see that the Taylor coefficients of $\psi$ about $0$ are universal polynomials in $A$ (direct methods of computing these universal polynomials are given in \[B1,2\] and \[HKN1,2\]). On the other hand, by the constructions of $r$ and Weyl invariants, we can show that the Taylor coefficients of $r$ and of $W_k[r]$ about $0$ are universal polynomials in $(A,C)$. Therefore, we see by the relation $(3.12)_m$ that $J_m[r](0)$ is a universal polynomial in $(A,C)$. We now apply Theorems 3 and 4 to obtain a Weyl functional $W_{m+n+1}$ such that $J_m[r](0)=W_{m+n+1}[r](0)$ for any ${\partial}\Omega$ in normal form and any $r\in{{\cal F}}_{{\partial}\Omega}$. Noting that $J_m$ and $W_{m+n+1}$ satisfy the same transformation law under biholomorphic maps, we conclude $J_m[r]=W_{m+n+1}[r]$ on ${\partial}\Omega$ for any $\Omega$ by locally transforming ${\partial}\Omega$ into normal form. We thus complete the inductive step. Note that the isomorphism $\iota^*\colon I^w({{\cal R}})\to I^w({{\cal N}}\times{{\cal C}})$ of Theorem 4 is given by $W(R)\mapsto I_W(A,C)$. Thus Theorem 5 guarantees that each element of $I^w({{\cal N}}\times{{\cal C}})$ is expressed as a Weyl invariant $I_W(A,C)$ of weight $w$. Noting that $I_W\in I^w({{\cal N}})$ if and only if $I_W$ is ${{\cal C}}$-independent, we obtain Theorem 2.

Proof of Theorem 4
==================

We first write the map $\iota$ as a projective limit of maps $\iota_m$, $m=1,2,\dots$, on finite-dimensional $H$-spaces. Then, we can reduce the proof of Theorem 4 to that of an analogous assertion for each $\iota_m$ (Proposition 4.1 below).
To define the maps $\iota_m$, we introduce weights for the components of ${{\cal N}}$, ${{\cal C}}$ and ${{\bf T}}_s$ by considering the action of the matrix $$\delta_t= \left( \begin{array}{ccc} t&0 &0 \\ 0&I_{n-1}&0 \\ 0&0 &1/t \end{array} \right) \in H,\quad t>0.$$ The actions of $\delta_t$ on $(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l)\in{{\cal N}}\times{{\cal C}}$ and $(T_{\alpha{\overline}\beta})\in{{\bf T}}_s$ are given by $$\begin{aligned} \delta_t.(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l) &=& (t^{|\alpha|+|\beta|+2l-2}A_{\alpha{\overline}\beta}^l, t^{|\alpha|+|\beta|+2l+2(n+1)}C_{\alpha{\overline}\beta}^l), \\ \delta_t.(T_{\alpha{\overline}\beta}) &=& (t^{\|\alpha{\overline}\beta\|-2s}T_{\alpha{\overline}\beta}).\end{aligned}$$ Here $\|\alpha{\overline}\beta\|$ is the strength of the indices defined by setting $\| a_1\dots a_k\| =\| a_1\| +\cdots+\| a_k\|$ with $\|0\|=\|{\overline}0\|=0$, $\| j\|=\|{{{\overline}\jmath}}\|=1$ for $1\le j\le n-1$ and $\| n\|=\|{\overline}n\|=2$. The [*weights*]{} of the components $A_{\alpha{\overline}\beta}^l$, $C_{\alpha{\overline}\beta}^l$ and $T_{\alpha{\overline}\beta}$ are defined to be the halves of degrees in $t$ with respect to the action of $\delta_t$: $$\frac{1}{2}(|\alpha|+|\beta|)+l-1, \quad \frac{1}{2}(|\alpha|+|\beta|)+l+n+1 \quad\hbox{and}\quad \frac{1}{2}\|\alpha{\overline}\beta\|-s,$$ respectively. We also define the weight for a monomial of $(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l)$ or $(T_{\alpha{\overline}\beta})$ to be the half of the degree with respect to $\delta_t$. Then the notion of weight is consistent with that for $H$-invariants. 
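As quick worked instances of these conventions (our own arithmetic, included for orientation): $$\hbox{wt}\bigl(A_{2{\overline}2}^0\bigr)=\tfrac{2+2}{2}+0-1=1,\qquad \hbox{wt}\bigl(C_{0{\overline}0}^0\bigr)=\tfrac{0+0}{2}+0+n+1=n+1,\qquad \hbox{wt}\bigl(R_{i{\overline}\jmath\,k{\overline}l}\bigr)=\tfrac{4}{2}-1=1,$$ the last for a curvature component with $i,j,k,l\in\{1,\dots,n-1\}$, so that $\|\alpha{\overline}\beta\|=4$ and $s=1$. In particular every component of $(A,C)$ has positive weight, which is what makes the truncations by weight finite dimensional.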
Let $[{{\cal N}}]_m$ denote the vector space of all components $A_{\alpha{\overline}\beta}^l$ of weight $\le m$: $$[{{\cal N}}]_m= \left\{ (A_{\alpha{\overline}\beta}^l)_{(|\alpha|+|\beta|)/2+l-1\le m}: (A_{\alpha{\overline}\beta}^l)\in{{\cal N}}\right\}.$$ We also define $[{{\cal C}}]_m$ and $[{{\bf T}}_s]_m$ similarly. Then $[{{\cal N}}]_m$, $[{{\cal C}}]_m$ and $[{{\bf T}}_s]_m$ are finite-dimensional vector spaces such that their projective limits as $m\to\infty$ are ${{\cal N}}$, ${{\cal C}}$ and ${{\bf T}}_s$, respectively. Since $\iota$ is compatible with the action of $\delta_t$, each component $T_{\alpha{\overline}\beta}(A,C)$ of $\iota(A,C)$ is a homogeneous polynomial of weight $||\alpha{\overline}\beta||/2-1$. It follows that, if $||\alpha{\overline}\beta||/2-1\le m$, then $T_{\alpha{\overline}\beta}(A,C)$ depends only on the variables $(A,C)\in [{\cal N}]_m\times [{\cal C}]_m$, because all components of $(A,C)\in{\cal N}\times{\cal C}$ have positive weight. We can thus define maps $$\iota_m\colon[{\cal N}]_m\times[{\cal C}]_m\to[{{\bf T}}_1]_m \quad\hbox{by}\ \ \iota_m(A,C)=(T_{\alpha{\overline}\beta}(A,C))_{||\alpha{\overline}\beta||/2-1\le m}.$$ The projective limit of $\iota_m$ as $m\to\infty$ gives $\iota$. Therefore, Theorem 4 is reduced to the following finite-dimensional proposition. \[prop-iota-m\] There exist polynomial maps $\tau_m\colon[{{\bf T}}_1]_m\to [{{\cal N}}]_m \times[{{\cal C}}]_m$[,]{} for $m=0,1,2\ldots\,$[,]{} such that $\tau_m\circ\iota_m={{\rm id}}$ and $\pi_m\circ\tau_{m}=\tau_{m-1}\circ\pi_{m}'$. Here $\pi_m\colon[{{\cal N}}]_m\times[{{\cal C}}]_m\to[{{\cal N}}]_{m-1}\times[{{\cal C}}]_{m-1}$ and $\pi_{m}'\colon[{{\bf T}}_1]_{m}\to[{{\bf T}}_1]_{m-1}$ are the natural surjections[.]{} The projective limit of $\tau_m$ is a polynomial map $\tau\colon{{\bf T}}_1\to{{\cal N}}\times{{\cal C}}$ such that $\tau\circ\iota={{\rm id}}$. Thus Theorem 4 follows from Proposition \[prop-iota-m\].
\[Linearization-of-MA\] We prove Proposition \[prop-iota-m\] by using the inverse function theorem. Our first task is to show the injectivity of $d\iota_m|_0\colon T_0([{{\cal N}}]_m\times[{{\cal C}}]_m)\to[{{\bf T}}_1]_m$, the differential of $\iota_m$ at $0=(0,0)$. Identifying $T_0([{{\cal N}}]_m\times[{{\cal C}}]_m)$ with $[{{\cal N}}]_m\oplus [{{\cal C}}]_m$, we define a linear map $d\iota|_0\colon{{\cal N}}\oplus{{\cal C}}\to{{\bf T}}_1$ by the projective limit of $d\iota_m|_0$. Then we can prove the injectivity of each $d\iota_m|_0$ by proving that of $d\iota|_0$. Before starting the proof, we introduce some vector spaces of formal power series, which will be used in the rest of this paper. For $s\in{\Bbb Z}$, let ${{\cal E}}(s)$ denote the vector space of real formal power series $f(\zeta,{\overline}\zeta)$ of $\zeta,{\overline}\zeta$ about $e_0\in{{\Bbb C}}^{n+1}$ of homogeneous degree $(s, s)$. Here we say that $f$ is homogeneous of degree $(s, s')$ if $Zf=s\,f$ and ${\overline}Zf=s'\,f$, where $Z=\sum_{j=0}^n\zeta^j{\partial}_{\zeta^j}$ and ${\overline}Z=\sum_{j=0}^n{\overline}\zeta^j{\partial}_{{\overline}\zeta^j}$. The space ${{\cal E}}(s)$ admits a natural $H$-action $(h.f)(\zeta,{\overline}\zeta)=f(h^{-1}\zeta,{\overline}{h^{-1}\zeta})$, and is embedded as an $H$-submodule of ${{\bf T}}_s$ by $f\mapsto(T_{\alpha{\overline}\beta})_{|\alpha|,|\beta|\ge0}$, where $T_{\alpha{\overline}\beta}={\partial}_\zeta^\alpha{\partial}_{{\overline}\zeta}^\beta f(e_0)$.
Using this expression, we define $H$-submodules of ${{\cal E}}(s)$ for $s\ge0$: $$\begin{aligned} {{\cal E}}_s &=&\{f\in{{\cal E}}(s): T_{\alpha{\overline}\beta}=0 \hbox{ if }|\alpha|\le s\,\,\hbox{ or }\,\,|\beta|\le s\}, \\ {{\cal E}}^s &=&\{f\in{{\cal E}}(s): T_{\alpha{\overline}\beta}=0 \hbox{ if }|\alpha|> s\hbox{ and }|\beta|> s\}.\end{aligned}$$ Then we have a decomposition of ${{\cal E}}(s)$ as $H$-modules $$\label{calE-decomp} {{\cal E}}(s)={{\cal E}}_s\oplus{{\cal E}}^s.$$ We next consider the restrictions of $f\in{{\cal E}}(s)$ to the null cone $${{\cal Q}}=\{\mu=\zeta^0{\overline}\zeta^n+\zeta^n{\overline}\zeta{}^0-|\zeta'|^2=0\}\subset{{\Bbb C}}^{n+ 1}$$ associated with $g_0$ and set $$\label{def-calJ} {{\cal J}}(s)=\{f|_{{\cal Q}}:f\in{{\cal E}}(s)\},\quad {{\cal J}}^s =\{f|_{{\cal Q}}:f\in{{\cal E}}^s\}.$$ If we employ $(z^0,{\overline}z^0,z',{\overline}z',v)$ as coordinates of ${{\cal Q}}$, then each $f\in{{\cal J}}(s)$ is written as $$f=|z^0|^{2s}\sum_{|\alpha|,|\beta|,l\ge0} B_{\alpha{\overline}\beta}^l z'{}^\alpha{\overline}z'{}^\beta v^l. \label{relation-J}$$ Thus we may identify ${{\cal J}}(s)$ with the space of real formal power series in $(z',{\overline}z',v)$. Using this identification, we embed ${{\cal N}}$ as a subspace of ${{\cal J}}(1)$ and identify ${{\cal C}}$ with ${{\cal J}}(-n-1)$, so that the actions of $\delta_t$ on ${{\cal N}}$ and ${{\cal C}}$ are compatible with those on ${{\cal J}}(1)$ and ${{\cal J}}(-n-1)$, respectively. It then follows from [@CM] that $$\label{moser-lem} {{\cal J}}(1)={{\cal N}}\oplus{{\cal J}}^1\quad \hbox{(direct sum of vector spaces).}$$ This is the key equation in the proof of the uniqueness of the normal form. \(i) In the decomposition , the vector space ${{\cal J}}^1$, as well as ${{\cal N}}$, is realized by a subspace of ${{\cal J}}(1)$. 
In [@CM], elements of ${{\cal J}}(1)$ are expressed by formal power series of $(z',{\overline}z',v)$, where ${{\cal J}}^1$ is identified via with the range of a linear operator $L$ defined by $$L(F)={{\rm Re}}\left({\overline}z^1 f_1+\cdots+{\overline}z^{n-1}f_{n-1}+f_n \right)\Big|_{u=|z'|^2/2}$$ for ${{\Bbb C}}^n$-valued formal power series $F=(f_1,\dots,f_n)$ of $z$. \(ii) The $H$-action on ${{\cal J}}(1)$ is linear, whereas that on ${{\cal N}}$ is nonlinear. These are defined differently, and ${{\cal N}}$ in is not an $H$-invariant subspace of ${{\cal J}}(1)$. Nevertheless, the tangent space $T_0{{\cal N}}$ at the origin is $H$-isomorphic to ${{\cal J}}(1)/{{\cal J}}^1$ as follows. Let us tentatively introduce an $H$-submodule ${{\cal J}}_3(1)$ of ${{\cal J}}(1)$ representing surfaces close to those in normal form; ${{\cal J}}_3(1)$ consists of elements of ${{\cal J}}(1)$ vanishing to the third order at $e_0$, and each $f\in{{\cal J}}_3(1)$ is identified with the surface $N(f)=N(B)$ via . Then the $H$-action $(h,N(f))\mapsto\phi_h(N(f))$ on ${{\cal J}}_3(1)$ is nonlinear, while the linearization coincides with the action $\rho(h)$ on ${{\cal J}}(1)$. Now ${{\cal N}}$ is a subset of ${{\cal J}}_3(1)$, and the linearization of the $H$-action on ${{\cal N}}$ is given by $(h,f)\mapsto p(\rho(h)f)$, where $p\colon{{\cal J}}(1)\to{{\cal N}}$ denotes the projection associated with the decomposition . This is the $H$-action on $T_0{{\cal N}}$, so that $T_0{{\cal N}}$ is $H$-isomorphic to the quotient space ${{\cal J}}(1)/{{\cal J}}^1$. \(iii) In [@CM], surfaces in normal form are constructed within ${{\cal J}}_3(1)$ by induction on weight. It should be noted that the grading by weight is different from the linearization. More precisely, the lowest weight part in the deviation from normal form is determined by the projection $p\colon{{\cal J}}(1)\to{{\cal N}}$, whereas higher weight parts are affected nonlinearly by lower weight terms. 
We use a similar procedure in the remaining part of this section, where the induction is implicit in the inverse function theorem. Using these power series, we shall write down $d\iota |_0(A,C)$ explicitly. By definition, $$d\iota |_0(A,C)=\frac{d}{d{\varepsilon}}\iota({\varepsilon}\,A,{\varepsilon}\,C)\Big|_{{\varepsilon}=0}.$$ To compute the right-hand side, we consider a family of asymptotic solutions $$\label{U-e} U_{\varepsilon}=r_{{\varepsilon}\#}+r_{{\varepsilon}\#}\sum_{k=1}^\infty\eta_{{\varepsilon},k} \left(r_{\varepsilon}^{n+1}\log r_{{\varepsilon}\#}\right)^k$$ with $r_{\varepsilon}=\iota_1({\varepsilon}\,A,{\varepsilon}\,C)$, and derive an equation such that $(d U_{\varepsilon}/d{\varepsilon})|_{{\varepsilon}=0}$ is a unique solution. \[proplinMA\] [(i)]{} $F=(d U_{\varepsilon}/d{\varepsilon})|_{{\varepsilon}=0}$ admits an expansion of the form $$F=\varphi+\eta\, \mu^{n+2}\log \mu, \quad\hbox{where }\varphi\in{{\cal E}}(1),\ \eta\in{{\cal E}}(-n-1). \label{expf}$$ [(ii)]{} Let ${\partial}_{\mu}$ denote the differentiation with respect to $\mu$ for the coordinates $(z^0,{\overline}z^0,z',{\overline}z',v,\mu)$[.]{} Then $$\begin{aligned} \varphi|_{{{\cal Q}}}=-f_A, &\hbox{where}& f_A=|z^0|^2\sum A_{\alpha{\overline}\beta}^l {z'}^\alpha{{\overline}z'}^\beta v^l, \label{bcA}\\ {\partial}_{\mu}^{n+2}\varphi|_{{{\cal Q}}}=h_C, &\hbox{where}& h_C=|z^0|^{-2n-2}\sum C_{\alpha{\overline}\beta}^l {z'}^\alpha{{\overline}z'}^\beta v^l. 
\label{bcC}\end{aligned}$$ [(iii)]{} Let $$\Delta={\partial}_{\zeta^0}{\partial}_{{\overline}\zeta^n}+{\partial}_{\zeta^n}{\partial}_{{\overline}\zeta^0} -\sum_{j=1}^{n-1}{\partial}_{\zeta^j}{\partial}_{{\overline}\zeta^j}$$ be the Laplacian for the ambient metric $g_0$ with potential $\mu$[.]{} Then $\Delta F=0.$ [(iv)]{} For each $(f,h)\in{{\cal J}}(1)\oplus{{\cal J}}(-n-1)$[,]{} there exists a unique function $F$ of the form satisfying $\Delta F=0$ and $$\label{initial-fg} \varphi|_{{\cal Q}}=-f, \quad {\partial}_\mu^{n+2}\varphi|_{{\cal Q}}=h.$$ In the proof of (i), we use the following lemma, which implies, in particular, that the first variation of the higher log terms in vanishes at ${\varepsilon}=0$. \[e-dependence\] Each log term coefficient of $U_{\varepsilon}$ satisfies $\eta_{{\varepsilon},k}=O({\varepsilon}^k)$[.]{} Setting $U_{\varepsilon}=r_{{\varepsilon}\#}(1+f_{\varepsilon})$, we shall show that $f_{\varepsilon}\in O_{{\cal A}}({\varepsilon})$, where $O_{{\cal A}}({\varepsilon})$ is the space of all formal series of the form $$\sum_{k=1}^\infty\eta_{{\varepsilon},k}\left(r_{\varepsilon}^{n+1}\log r_{{\varepsilon}\#}\right)^k {\quad\hbox{with}\ \ } \eta_{{\varepsilon},k}=O({\varepsilon}^k).$$ Neglecting higher log terms in $f_{\varepsilon}$, we set ${\widetilde}f_{\varepsilon}=\eta_{{\varepsilon},1}\,r_{\varepsilon}^{n+1}\log r_{{\varepsilon}\#}$. Then we have ${\widetilde}f_{\varepsilon}\in O_{{\cal A}}({\varepsilon})$ and ${{\cal M}}_{\varepsilon}({\widetilde}f_{\varepsilon})=J(r_{\varepsilon})^{-1}$ mod ${{\cal A}}_{2(n+1)}^{\varepsilon}$, where ${{\cal M}}_{\varepsilon}$ and ${{\cal A}}_{2(n+1)}^{\varepsilon}$ are defined with respect to $\rho=r_{\varepsilon}$. We can recover $f_{\varepsilon}$ from ${\widetilde}f_{\varepsilon}$ by the procedure of constructing asymptotic solutions used in the proof of Lemma \[f-solution\]. 
This procedure consists of the operations of applying ${{\cal M}}_{\varepsilon}$ and solving the equation $E_{\varepsilon}[g_{\varepsilon}]_s=[h_{\varepsilon}]_s$ for $s>n+1$. These operations preserve the class $O_{{\cal A}}({\varepsilon})$, so that ${\widetilde}f_{\varepsilon}\in O_{{\cal A}}({\varepsilon})$ implies $f_{\varepsilon}\in O_{{\cal A}}({\varepsilon})$ as desired. \(i) By Lemma \[e-dependence\] above, we have $$U_{\varepsilon}=r_{{\varepsilon}\#}+\mu^{n+2}|z^0|^{-2(n+1)}\eta_{{\varepsilon},1}\log\mu+O({\varepsilon}^2).$$ We thus get with $$\varphi=|z^0|^2(d\,r_{\varepsilon}/d{\varepsilon})|_{{\varepsilon}=0} \quad\hbox{and}\quad \eta=|z^0|^{-2(n+1)}(d\,\eta_{{\varepsilon},1}/d{\varepsilon})|_{{\varepsilon}=0}.$$ \(ii) We now regard $f_A$ as an element of ${{\cal E}}(1)$ which is independent of $\mu$ in the coordinates $(z^0,{\overline}z^0,z',{\overline}z',v,\mu)$. Setting $\mu_{\varepsilon}=\mu-{\varepsilon}\,f_A$, we then consider the Taylor expansion of $r_{{\varepsilon}\#}$ with respect to $\mu_{\varepsilon}$ in the coordinates $(z^0,{\overline}z^0,z',{\overline}z',v,\mu_{\varepsilon})$: $$r_{{\varepsilon}\#}=\mu_{\varepsilon}+\sum_{k=1}^\infty a_{{\varepsilon},k}(z^0,{\overline}z^0,z',{\overline}z',v)\,\mu_{\varepsilon}^k. \label{r-e-expand}$$ Since $r_{{\varepsilon}\#}=\mu+O({\varepsilon})$, we have $a_{{\varepsilon},k}=O({\varepsilon})$. Differentiating both sides of with respect to ${\varepsilon}$ and then setting ${\varepsilon}=0$, we get $$\varphi=-f_A+\sum_{k=1}^\infty a'_k\,\mu^k, {\quad\hbox{where}\ \ } a'_k=\left.\frac{d\,a_{{\varepsilon},k}}{d\,{\varepsilon}}\right|_{{\varepsilon}=0}. \label{varphiexp}$$ Restricting this formula to $\mu=0$, we obtain . We also have by that ${\partial}_\mu^{n+2}\varphi |_{\mu=0}=(n+2)!\,a'_{n+2}$. Therefore, noting that is equivalent to $a_{{\varepsilon},n+2}={\varepsilon}\,h_C/(n+2)!$, we obtain . 
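The computations in (iv) below, and several more in the rest of the paper, repeatedly use the commutator identity $[\Delta,\mu^m]=m\,\mu^{m-1}(Z+{\overline}Z+n+m)$, whose $m=1$ case reads $[\Delta,\mu]=Z+{\overline}Z+n+1$. For a fixed small $n$ the latter can be checked symbolically; the following sketch (our illustration, assuming the sympy library, with $w_j$ treated as an independent variable standing for ${\overline}\zeta^j$) verifies it on a sample polynomial for $n=2$:

```python
import sympy as sp

n = 2
z = sp.symbols('z0:3')   # zeta^0, zeta^1, zeta^2
w = sp.symbols('w0:3')   # w_j plays the role of conj(zeta^j)

# mu = zeta^0 w^n + zeta^n w^0 - |zeta'|^2 and the ambient Laplacian Delta
mu = z[0]*w[n] + z[n]*w[0] - z[1]*w[1]

def Delta(f):
    return (sp.diff(f, z[0], w[n]) + sp.diff(f, z[n], w[0])
            - sum(sp.diff(f, z[j], w[j]) for j in range(1, n)))

def ZZbar(f):
    """Z + Zbar: the sum of the holomorphic and antiholomorphic Euler operators."""
    return sum(z[j]*sp.diff(f, z[j]) + w[j]*sp.diff(f, w[j])
               for j in range(n + 1))

phi = z[0]**2*w[1] + z[1]*z[2]*w[0]*w[2]         # arbitrary test polynomial
lhs = sp.expand(Delta(mu*phi) - mu*Delta(phi))   # [Delta, mu] phi
rhs = sp.expand(ZZbar(phi) + (n + 1)*phi)
print(sp.simplify(lhs - rhs))   # 0
```

Iterating the $m=1$ identity gives $[\Delta,\mu^m]=m\,\mu^{m-1}(Z+{\overline}Z+n+m)$; acting on an element of ${{\cal E}}(1-m)$, on which $Z+{\overline}Z=2-2m$, this produces the coefficient $m(n+2-m)$ appearing below.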
\(iii) Recalling that $g_0$ is the flat metric given by the matrix , we have $\Delta F={{\rm tr}}(g_0^{-1}({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}F))$. Thus, by $({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}U_{\varepsilon})= g_0+{\varepsilon}\,({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}F)+O({\varepsilon}^2)$, we get $$\begin{aligned} \det( {\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}U_{\varepsilon}) &=&\det\left(g_0\,(I_{n+1}+ {\varepsilon}\, g_0^{-1}\,( {\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}F))\right)+O({\varepsilon}^2)\\ &=&\det g_0\,\det\left(I_{n+1}+ {\varepsilon}\,g_0^{-1}\,({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}F)\right)+O({\varepsilon}^2)\\ &=&(-1)^n \left(1+{\varepsilon}\,{{\rm tr}}(g_0^{-1}\,({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}F))\right)+O({\varepsilon}^2)\\ &=&(-1)^n (1+{\varepsilon}\,\Delta F) +O({\varepsilon}^2).\end{aligned}$$ Noting that $\det({\partial}_{\zeta^i}{\partial}_{{\overline}\zeta^j}U_{\varepsilon})$ is independent of ${\varepsilon}$, we obtain $\Delta F=0$. \(iv) By induction on $m$, we construct $F_m$ of the form satisfying $\Delta F_m=O(\mu^m)$, where $O(\mu^m)$ stands for an expression of the form $\mu^m(\varphi+\psi\log\mu)$ with $\varphi,\psi\in{{\cal E}}(1-m)$. For $m=0$, we may take $F_0$ to be an arbitrary extension of $f$ to ${{\cal E}}(1)$. Assume we have constructed $F_{m-1}$ for some $m>0$. When $m\le n+1$, we set $F_m=F_{m-1}+\varphi_m\,\mu^m$ with $\varphi_m\in{{\cal E}}(1-m)$, and get $$\begin{aligned} \Delta F_m &=&\Delta F_{m-1}+ [\Delta,\mu^m]\varphi_m +\mu^{m}\Delta \varphi_m\\ &=&\Delta F_{m-1} +m\,\mu^{m-1}(Z+{\overline}Z+n+m)\varphi_m+O(\mu^m)\\ &=&\Delta F_{m-1} +m(n+2-m)\mu ^{m-1}\varphi_m+O(\mu^m).\end{aligned}$$ Thus $\Delta F_m=O(\mu^m)$ holds with $\varphi_m=-\mu^{1-m}\Delta F_{m-1}/[m(n+2-m)]$. When $m\ge n+2$, we set $F_m=F_{m-1}+\mu^m(\varphi_m+\eta_m\log\mu)$ with $\varphi_m,\eta_m\in{{\cal E}}(1-m)$.
Using $$\label{commutator} \begin{array}{rl} [\Delta,\mu^m\log\mu]= &m\mu^{m-1}\log\mu\,(Z+{\overline}Z+n+m) \\ &+\ \mu^{m-1}(Z+{\overline}Z+2m+n), \end{array}$$ we then obtain $$\begin{aligned} \Delta F_m &=&\Delta F_{m-1} +m(n+2-m)\mu^{m-1}\left(\varphi_m+\eta_m\log\mu\right)\\ & & +\ (n+2)\mu^{m-1}\eta_m+O(\mu^m).\end{aligned}$$ If $m=n+2$, then $\Delta F_{n+2}=\Delta F_{n+1}+(n+2)\mu^{n+1}\eta_{n+2}+O(\mu^{n+2})$, so that $\varphi_{n+2}$ and $\eta_{n+2}$ are determined by and $\Delta F_{n+2}=O(\mu^{n+2})$, respectively. If $m>n+2$, then $n+2-m\ne0$ and thus $\eta_m,\varphi_m$ are uniquely determined by $\Delta F_m=O(\mu^m)$ as in the case $m\le n+1$. This completes the inductive step. The solution to $\Delta F=0$ is then given by the limit of $F_m$ as $m\to\infty$. The uniqueness assertion is clear by the construction. Using (iv), we define a linear map $$\label{def-calF} {{\cal L}}\colon{{\cal J}}(1)\oplus{{\cal J}}(-n-1)\to{{\cal E}}(1),$$ where ${{\cal L}}(f,h)=\varphi$ is the smooth part of the solution to $\Delta(\varphi+\eta\mu^{n+2}\log\mu)=0$ satisfying . Setting $\varphi={{\cal L}}(f_A,h_C)$ for $(A,C)\in{{\cal N}}\oplus{{\cal C}}$, we get, by (i) and (ii), $$r_{{\varepsilon}\#}={\mu}+{\varepsilon}\,\varphi+O({\varepsilon}^2). \label{r-e-expansion}$$ Applying ${\partial}{\overline}{\partial}$, we obtain $$g_{\varepsilon}=g_0+{\varepsilon}\,\left(\varphi_{i\,{{{\overline}\jmath}}}\right)+O({\varepsilon}^2).$$ Since $g_0$ is a flat metric, the curvature $R^{\varepsilon}_{\alpha{\overline}\beta}$ of $g_{\varepsilon}$ satisfies $$R^{\varepsilon}_{\alpha{\overline}\beta} ={\varepsilon}\,{\partial}^\alpha_\zeta{\partial}^\beta_{{\overline}\zeta}\,\varphi+O({\varepsilon}^2) {\quad\hbox{for}\ \ }|\alpha|,|\beta|\ge2, \label{R-linear}$$ where ${\partial}^{i\cdots j}_\zeta={\partial}_{\zeta^i}\cdots{\partial}_{\zeta^j}$. 
Hence $d\iota|_0(A,C)=(S_{\alpha{\overline}\beta})$ is given by $$S_{\alpha{\overline}\beta}= \left\{ \begin{array}{ll} {\partial}^\alpha_\zeta{\partial}^\beta_{{\overline}\zeta}\,\varphi(e_0) & \hbox{if $|\alpha|\ge2$ and $|\beta|\ge2$;}\\ 0 & \hbox{otherwise.} \end{array}\right.$$ Consequently, if we identify $(S_{\alpha{\overline}\beta})$ with a series in ${{\cal E}}_1$, then $$\label{composition} d\iota|_0(A,C)=\pi\circ {{\cal L}}(f_A,h_C),$$ where $\pi\colon{{\cal E}}(1)\to{{\cal E}}_1$ is the projection with respect to the decomposition . Using this expression, we now prove: \[injectivity-of-diota\] The map $d\iota|_0$ is injective[. ]{} [*Proof*]{}. Assuming $d\iota|_0(A,C)=0$ for $(A,C)\in{{\cal N}}\oplus{{\cal C}}$, we shall prove $(A,C)=(0,0)$. By , this assumption is equivalent to ${{\cal L}}(f_A,h_C)\in{{\cal E}}^1$. We first write down the set $({{\rm Range}}\,{{\cal L}})\cap{{\cal E}}^1$ explicitly. \[lem-RangeL\] [(i)]{} ${{\rm Range}}\,{{\cal L}}={\widetilde}{{\cal H}}(1)$[,]{} where $${\widetilde}{{\cal H}}(1)= \left\{ \varphi\in{{\cal E}}(1): \Delta\varphi=c_n\,\mu^{n+1}\Delta^{n+2}\varphi, \ \Delta^{n+3}\varphi=0 \right\}$$ with $c_n=(-1)^{n+1}\left((n+1)!\right)^{-2}$[. ]{} [(ii)]{} Let ${{\cal H}}^1=\{\varphi\in{{\cal E}}^1:\Delta\varphi=0\}$[.]{} Then $$\label{range-L} {\widetilde}{{\cal H}}(1)\cap{{\cal E}}^1={{\cal L}}({{\cal J}}^1\oplus\{0\}) ={{\cal H}}^1.$$ \(i) For each $\varphi\in{{\rm Range}}\,{{\cal L}}$, there exists $\eta\in{{\cal E}}(-n-1)$ such that $F=\varphi+\eta\,\mu^{n+2}\log\mu$ satisfies $\Delta F=0$. Using , we then get $$\begin{aligned} \Delta F &=&\Delta\varphi +\mu^{n+2}\log\mu\cdot \Delta\eta+ [\Delta,\mu^{n+2}\log\mu]\,\eta\\ &=&\Delta\varphi +(n+2)\mu ^{n+1}\eta+ \mu ^{n+2}\log \mu\cdot\Delta\eta\end{aligned}$$ so that $\Delta F=0$ is reduced to a system $$\label{system-varphi-eta} \left\{ \begin{array}{c} \Delta\varphi+(n+2)\mu^{n+1}\eta=0,\\ \Delta \eta=0. 
\end{array} \right.$$ Noting that this system implies $\Delta^{n+2}\varphi=(-1)^n\,(n+1)!\,(n+2)!\,\eta$, we replace $\eta$ in by $(-1)^{n}\Delta^{n+2}\varphi/[(n+1)!\,(n+2)!]$. Then $$\label{eq-varphi} \left\{ \begin{array}{c} \Delta\varphi= c_n\,\mu^{n+1}\Delta^{n+2}\varphi,\\ \Delta^{n+3}\varphi=0. \end{array} \right.$$ Thus we have ${{\rm Range}}\,{{\cal L}}={\widetilde}{{\cal H}}(1)$. \(ii) We first show that ${\widetilde}{{\cal H}}(1)\cap{{\cal E}}^1\supset{{\cal L}}({{\cal J}}^1\oplus\{0\})$. Since ${\widetilde}{{\cal H}}(1)={{\rm Range}}\,{{\cal L}}$, it suffices to prove ${{\cal E}}^1\supset{{\cal L}}({{\cal J}}^1\oplus\{0\})$. For $f\in{{\cal J}}^1$, take its extension ${\widetilde}f\in{{\cal E}}^1$ such that ${\partial}_\mu{\widetilde}f=0$, and set $$\varphi={\widetilde}f-\mu\Delta{\widetilde}f/(n+1)\in{{\cal E}}^1.$$ Then, $\varphi|_{{\cal Q}}={\widetilde}f|_{{\cal Q}}=f$, ${\partial}_\mu^{n+2}\varphi|_{{\cal Q}}=0$ and $$(n+1)\Delta\varphi=(n+1)\Delta{\widetilde}f-[\Delta,\mu]\Delta{\widetilde}f-\mu\Delta^2{\widetilde}f =-\mu\Delta^2{\widetilde}f=0,$$ so that ${{\cal L}}(f,0)=\varphi\in{{\cal E}}^1$. We next show ${\widetilde}{{\cal H}}(1)\cap{{\cal E}}^1\subset{{\cal H}}^1$. For $\varphi\in{\widetilde}{{\cal H}}(1)$, we have $\Delta\varphi=c_n\mu^{n+1}\Delta^{n+2}\varphi$, while $\varphi\in{{\cal E}}^1$ implies $\Delta^2\varphi=0$. Therefore, if $\varphi\in{\widetilde}{{\cal H}}(1)\cap{{\cal E}}^1$, then $\Delta\varphi=0$ and thus $\varphi\in{{\cal H}}^1$. It remains to prove ${{\cal H}}^1\subset{{\cal L}}({{\cal J}}^1\oplus\{0\})$. But this is clear since each $\varphi\in{{\cal H}}^1$ satisfies $\varphi={{\cal L}}(\varphi|_{{\cal Q}},0)\in{{\cal L}}({{\cal J}}^1\oplus\{0\})$. We now return to the proof of Proposition \[injectivity-of-diota\]. By , there exists $f_1\in{{\cal J}}^1$ such that ${{\cal L}}(f_A-f_1,h_C)=0$.
The injectivity of ${{\cal L}}$ then implies $f_A-f_1=0$ and $h_C=0$. Noting that forces $f_A=f_1=0$, we get $(A,C)=(0,0)$ as desired. By virtue of Proposition \[injectivity-of-diota\], we can apply the inverse function theorem to $\iota_m$, and obtain a neighborhood $V$ of $(0,0)\in [{{\cal N}}]_m\times[{{\cal C}}]_m$ such that $\iota_m|_V$ is an embedding. Noting that $\iota_m$ is compatible with the action of $\delta_t$, we see that $\iota_m|_{V_t}$ is also an embedding, where $V_t=\{\delta_t.(A,C):(A,C)\in V\}$. Since $t>0$ is arbitrary, it follows that $\iota_m\colon [{{\cal N}}]_m\times[{{\cal C}}]_m\to [{{\bf T}}_1]_m$ itself is an embedding. Thus $[{{\cal R}}]_m=\iota_m([{{\cal N}}]_m\times[{{\cal C}}]_m)$ is a real-analytic submanifold of $[{{\bf T}}_1]_m$, and there exists a real-analytic inverse map $\iota_m^{-1}\colon[{{\cal R}}]_m\to[{{\cal N}}]_m\times[{{\cal C}}]_m$. We now construct $\tau_m$ inductively. The case $m=0$ is trivial because $[{{\cal N}}]_0\times[{{\cal C}}]_0=\{(0,0)\}$. Assume we have gotten $\tau_{m-1}(T)$. We construct $\tau_m(T)$ as follows. Denote the components of $\iota_m^{-1}(R)$, $R\in [{{\cal R}}]_m$, by $$(P_{\alpha{\overline}\beta}^l(R),Q_{\alpha{\overline}\beta}^l(R)) \in[{{\cal N}}]_m\times[{{\cal C}}]_m.$$ For $(P_{\alpha{\overline}\beta}^l(R),Q_{\alpha{\overline}\beta}^l(R)) \in[{{\cal N}}]_{m-1}\times[{{\cal C}}]_{m-1}$, define their polynomial extensions to $[{{\bf T}}_1]_m$ by the components of $\tau_{m-1}(T)=(P_{\alpha{\overline}\beta}^l(T),Q_{\alpha{\overline}\beta}^l(T))$. For the components of weight $>m-1$, we construct their polynomial extensions in two steps. First, extend $P_{\alpha{\overline}\beta}^l(R)$, $Q_{\alpha{\overline}\beta}^l(R)$ to real-analytic functions ${\widetilde}P_{\alpha{\overline}\beta}^l(T)$, ${\widetilde}Q_{\alpha{\overline}\beta}^l(T)$ on $[{{\bf T}}_1]_m$ in such a way that they have homogeneous weight. 
Next, neglect the monomials of degrees $>m$ in ${\widetilde}P_{\alpha{\overline}\beta}^l(T)$, ${\widetilde}Q_{\alpha{\overline}\beta}^l(T)$ and define polynomials $P_{\alpha{\overline}\beta}^l(T)$, $Q_{\alpha{\overline}\beta}^l(T)$. These polynomials are extensions of $P_{\alpha{\overline}\beta}^l(R)$, $Q_{\alpha{\overline}\beta}^l(R)$ because of the following lemma. \[weight-of-R\] A monomial $P(T)$ on ${{\bf T}}_1$ vanishes on ${{\cal R}}$ provided the weight is less than the degree. Let $Q(A,C)=P(\iota(A,C))$ be of weight $w$. Then, by the assumption on $P(T)$, each monomial constituting $Q(A,C)$ has degree $>w$. But such a monomial must be $0$ because all the variables $A_{\alpha{\overline}\beta}^l$ and $C_{\alpha{\overline}\beta}^l$ have weight $\ge1$. Thus we have $Q(A,C)=0$, which is equivalent to $P(T)=0$ on ${{\cal R}}$. The collection $(P_{\alpha{\overline}\beta}^l(T), Q_{\alpha{\overline}\beta}^l(T))$ gives a polynomial map $\tau_m(T)$ satisfying $\pi_m\circ\tau_m=\tau_{m-1}\circ\pi_m'$ and $\tau_m\circ\iota_m={{\rm id}}$. This completes the inductive step. \[rem-proof\] Using the method of linearization in this section, we can now prove the statement in Remark \[rem-asymptotic-solution\]. Given $(A,C)\in{{\cal N}}\oplus{{\cal C}}$, let $u_{\varepsilon}^{{{\rm G}}}$ be the asymptotic solution to of the form (with $\eta_1^{{{\rm G}}}=1$) for the surface $N({\varepsilon}A)$ satisfying with ${\varepsilon}C$ in place of $C$. Then $F=|z^0|^2 (d u^{{{\rm G}}}_{\varepsilon}/d{\varepsilon})|_{{\varepsilon}=0}$ can be written in the form $$\label{G-lin} F=\varphi+\mu^{n+2}\eta\log(\mu|\zeta^0|^{-2}), {\quad\hbox{where}\ \ } \varphi\in{{\cal E}}(1),\ \eta\in{{\cal E}}(-n-1),$$ which satisfies , and $\Delta F=0$. We denote $\varphi$ in by $\varphi[A,C]$, and set ${{\cal H}}^{{{\rm G}}}=\{\varphi[A,C]:(A,C)\in{{\cal N}}\oplus{{\cal C}}\}$.
Then, for $\varphi,{\widetilde}\varphi\in{{\cal H}}^{{{\rm G}}}$, $\varphi-{\widetilde}\varphi=O(\mu)$ if and only if $\varphi-{\widetilde}\varphi=\mu^{n+2}\psi$ with $\psi\in{{\cal E}}(-n-1)$ satisfying $\Delta\psi=0$. Let ${{\cal F}}_{{\partial}\Omega}^{{{\rm G}}}$ be the space of defining functions in Remark \[rem-asymptotic-solution\], and assume that ${{\cal F}}_{{\partial}\Omega}^{{{\rm G}}}$ satisfies the transformation law . Linearizing , we then see that, for each $h\in{{\rm SU}}(g_0)$, $$\label{transf-HG} \varphi\in {{\cal H}}^{{{\rm G}}}\quad\hbox{if and only if}\quad {\widetilde}\varphi(\zeta):=\varphi(h\zeta)\in {{\cal H}}^{{{\rm G}}}.$$ We next set ${\widetilde}F(\zeta)=F(h\zeta)$ for $F$ in . Then $\Delta{\widetilde}F=0$ and ${\widetilde}F=\widehat\varphi+\mu^{n+2}{\widetilde}\eta\log(\mu |\zeta^0|^{-2})$, where $$\label{whvarphi} \widehat\varphi(\zeta)={\widetilde}\varphi(\zeta)+ \mu^{n+2}{\widetilde}\eta\log(|\zeta^0/{\widetilde}\zeta^0|^2),\quad {\widetilde}\eta(\zeta)=\eta({\widetilde}\zeta)\quad ({\widetilde}\zeta=h\zeta).$$ Thus ${\widetilde}\varphi,\widehat\varphi\in{{\cal H}}^{{{\rm G}}}$ whenever $\varphi\in{{\cal H}}^{{{\rm G}}}$. It then follows from $\widehat\varphi-{\widetilde}\varphi=O(\mu)$ and that $\Delta\bigl({\widetilde}\eta\log(|\zeta^0/{\widetilde}\zeta^0|^2)\bigr)=0$. But this equation is not satisfied, e.g., by $h$ with ${\widetilde}\zeta^0=\zeta^0+i\zeta^n$, ${\widetilde}\zeta'=\zeta'$, ${\widetilde}\zeta^n=\zeta^n$ and $\varphi$ satisfying $\varphi=|\zeta^0|^2|\zeta^1/\zeta^0|^{2(n+2)}+O(\mu)$, in which case $\eta=(-1)^{n+1}(n+2) |\zeta^0|^{-2(n+1)}$. This is a contradiction, and we have proved the statement in Remark \[rem-asymptotic-solution\].

Proof of Theorem 5
==================

[5.1.]{} [*Linearization*]{}.
We have seen in Theorem 4 that ${{\cal R}}$ is a submanifold of ${{\bf T}}_1$ with a system of polynomial defining equations $\iota\circ\tau(T)-T=0$, where $T=(T_{\alpha{\overline}\beta})\in{{\bf T}}_1$. Using this fact, we first reduce the study of $H$-invariants of ${{\cal R}}$ to that of the invariants of the $H$-module $T_0{{\cal R}}$. That is, we reduce Theorem 5 to the following: Every $H$-invariant of $T_0{{\cal R}}$ is the restriction to $T_0{{\cal R}}$ of a linear combination of elementary invariants of ${{\bf T}}_1$. [*Proof of Theorem $5$ using Theorem $5'$*]{}. We follow the argument of [@F3]. Taking an $H$-invariant $I$ of ${{\cal R}}$ of weight $w$, we shall show that, for any $N$, there exists a finite list of elementary invariants $W_j$ such that $I$ is written in the form $$I=\sum c_j W_j+Q_N{\quad\hbox{on}\ \ } {{\cal R}}{\quad\hbox{with}\ \ }Q_N(T)=O(T^N), \label{Pinduction}$$ where $O(T^m)$ stands for a polynomial which contains no monomials of degree $<m$. Once this is proved, Theorem 5 follows. In fact, by taking $N$ so that $N>w$, we obtain by Lemma \[weight-of-R\] that $Q_N=0$ on ${{\cal R}}$, that is, $I=\sum c_j W_j$ on ${{\cal R}}$. To prove , we start by writing $I(T)=O(T^m)$ so that $$I(T)=S^m(T)+O(T^{m+1}),$$ where $S^m$ is homogeneous of degree $m$. Then $S^m$ is an $H$-invariant of $T_0{{\cal R}}$. In fact, if we take a curve $\gamma_{\varepsilon}$ in ${{\cal R}}$ such that $\gamma_0=0$ and $(d\gamma_{\varepsilon}/d{\varepsilon})|_{{\varepsilon}=0}=T\in T_0{{\cal R}}$, then we have $S^m(T)=\lim_{{\varepsilon}\to 0}I(\gamma_{\varepsilon})/{\varepsilon}^m$. Since the right-hand side is $H$-invariant, so is $S^m$ as claimed. Therefore, we can find, by using Theorem $5'$, elementary invariants $W_j$ such that $$S^m=\sum c_j W_j+U, \label{Sm-tmp}$$ where $U$ is homogeneous of degree $m$ and vanishes on $T_0{{\cal R}}$. We now examine the remainder $U$.
Let $\{P_i(T)\}_{i=1}^\infty$ be a system of polynomials in the variables $T_{\alpha{\overline}\beta}$ which defines ${{\cal R}}$, i.e., ${{\cal R}}=\bigcap_{i=1}^\infty\{P_i=0\}$, and let $p_i$ be the linear part of $P_i$ so that $T_0{{\cal R}}=\bigcap_{i=1}^\infty\ker p_i$. We write $U$ as a finite sum $U=\sum U_i\,p_i$, where $U_i$ are homogeneous of degree $m-1$. Then $$U =\sum U_i\,(p_i-P_i)+\sum U_i P_i =\sum U_i\,(p_i-P_i){\quad\hbox{on}\ \ }{{\cal R}}.$$ Noting $\sum U_i\,(p_i-P_i)=O(T^{m+1})$ and using , we obtain for $N=m+1$. Repeating the same procedure for the remainder $Q_{m+1}$, we obtain the expression inductively for arbitrary $N$. We further reduce Theorem $5'$ to an analogous theorem for a simpler $H$-module of trace-free tensors. This is done by writing down a system of equations of $T_0{{\cal R}}$ explicitly and giving a short exact sequence which characterizes $T_0{{\cal R}}$, where $T_0{{\cal R}}=d\iota|_0({{\cal N}}\oplus{{\cal C}})$ is regarded as a subspace of ${{\cal E}}_1$ as in subsection 4.2. \[def-eq-T0R\] [(i)]{} The tangent space $T_0{{\cal R}}$ of ${{\cal R}}$ at $0$ is given by $${\widetilde}{{\cal H}}_1:= {\widetilde}{{\cal H}}(1)\cap{{\cal E}}_1= \left\{ \varphi\in{{\cal E}}_1: \Delta\varphi=c_n\,\mu^{n+1}\Delta^{n+2}\varphi, \ \Delta^{n+3}\varphi=0 \right\}.$$ [(ii)]{} The following sequence is exact: $$0\to{{\cal H}}_1\hookrightarrow{\widetilde}{{\cal H}}_1 \stackrel{\Delta^{n+2}}{\longrightarrow}{{\cal H}}(-n-1)\to 0, \label{short-exactsequence}$$ where ${{\cal H}}(k)=\{\varphi\in{{\cal E}}(k):\Delta\varphi=0\}$ and ${{\cal H}}_k={{\cal H}}(k)\cap{{\cal E}}_k$. [*Proof*]{}.
(i) Since implies $\pi\circ{{\cal L}}({{\cal J}}^1\oplus\{0\})=\{0\}$, it follows from that $$\pi\circ{{\cal L}}\left({{\cal N}}\oplus{{\cal J}}(-n-1)\right)= \pi\circ{{\cal L}}\left({{\cal J}}(1)\oplus{{\cal J}}(-n-1)\right).$$ Using Lemma \[lem-RangeL\] (i), we then get $$\begin{aligned} T_0{{\cal R}}&=&\pi\circ{{\cal L}}\left({{\cal N}}\oplus{{\cal J}}(-n-1)\right)\\ &=&\pi({{\rm Range}}\,{{\cal L}})\\ &=&\pi({\widetilde}{{\cal H}}(1))={\widetilde}{{\cal H}}_1.\end{aligned}$$ \(ii) It is clear from the definition of ${\widetilde}{{\cal H}}_1$ that $ 0\to{{\cal H}}_1\hookrightarrow{\widetilde}{{\cal H}}_1 \stackrel{\Delta^{n+2}}{\longrightarrow}{{\cal H}}(-n-1)$ is exact. It then remains only to prove the surjectivity of $\Delta^{n+2}$. To show this, we solve the equation $$\label{Delta-eq} \Delta^{n+2}\varphi=\eta \quad\hbox{for $\eta\in{{\cal H}}(-n-1)$ given}.$$ We need to find a solution $\varphi\in{\widetilde}{{\cal H}}_1$. But it suffices to find $\varphi$ in ${\widetilde}{{\cal H}}(1)={\widetilde}{{\cal H}}_1\oplus{{\cal H}}^1$, because all $\varphi\in{{\cal H}}^1$ satisfy $\Delta^{n+2}\varphi=0$. Next, we follow the argument of [@EG2 Prop. 4.5]. We first recall a lemma in [@EG1]. For $k\le0$[,]{} the restriction ${{\cal H}}(k)\ni\eta\mapsto \eta|_{{\cal Q}}\in{{\cal J}}(k)$ is an isomorphism[.]{} This amounts to proving the unique existence of a solution $\eta\in{{\cal H}}(k)$ to the equation $\Delta\eta=0$ under the condition $\eta|_{{\cal Q}}=f\in{{\cal J}}(k)$. The proof is a straightforward modification of that of our Proposition \[proplinMA\], (iv). 
By this lemma, we can reduce to an equation for $f\in{{\cal J}}(1)$: $$\label{Delta-eq-2} \Delta^{n+2}{{\cal L}}(f,0)|_{{\cal Q}}=g, \quad\hbox{where $g=\eta|_{{\cal Q}}\in{{\cal J}}(-n-1)$}.$$ We write down the left-hand side with the real coordinates $(t,x)=(t,x^1,\dots,$ $x^{2n})$ of ${{\cal Q}}$, where $2\zeta^0=t+i\,x^1$, $2\zeta^j=x^{2j}+i\,x^{2j+1}$, $j=1,\dots,n-1$, and $x^{2n}=2\,{{\rm Im}}\zeta^n$. Let $\varphi$ be a formal power series about $e_0\in{{\Bbb C}}^{n+1}$ of homogeneous degree $2$ in the sense that $(Z+{\overline}Z)\varphi=2\varphi$[.]{} Then $$\label{Delta-Delta} (\Delta^{n+2}\varphi)|_{{\cal Q}}=\Delta_x^{n+2}(\varphi |_{{\cal Q}}), \quad\hbox{where } \Delta_x=2{\partial}_{x^1}{\partial}_{x^{2n}}-\sum_{j=2}^{2n-1}{\partial}_{x^j}^2. \hskip.5in$$ In terms of the coordinates $(t,x,\mu)$, the Laplacian $\Delta$ is written as $$\Delta=\Delta_x+ \left( \mu{\partial}_\mu+E +n+1 \right){\partial}_\mu, \quad\hbox{where }E=t{\partial}_t+\sum_{j=1}^{2n}x^j{\partial}_{x^j}.$$ Writing $\varphi(t,x,\mu)=\varphi_0(t,x)+\mu\,\psi(t,x,\mu)$, we have $$\Delta^{n+2}\varphi=\Delta^{n+2}_x\varphi_0+\Delta^{n+2}(\mu\,\psi).$$ Noting that $\psi$ is homogeneous of degree $0$, we have $$\begin{aligned} \Delta^{n+2}(\mu\,\psi) &=&[\Delta^{n+2},\mu]\psi+O(\mu)\\ \noalign{\vskip4pt} &=&(n+2)(Z+{\overline}Z+2n+2)\Delta^{n+1}\psi+O(\mu)\\ \noalign{\vskip4pt} &=&O(\mu).\end{aligned}$$ Therefore, $\Delta^{n+2}\varphi=\Delta^{n+2}_x\varphi_0+O(\mu)$, which is equivalent to . Since implies $\Delta^{n+2}{{\cal L}}(f,0)|_{{\cal Q}}=\Delta_x^{n+2}f$, we can reduce to $$\label{equation-x} \Delta_x^{n+2}f=g.$$ It is a standard fact of harmonic polynomials that, for each polynomial $q(x)$ of homogeneous degree $k$, there exists a polynomial $p(x)$ of homogeneous degree $k$ such that $$\Delta_x^{n+2}\mu_x^{n+2}p=q,\quad\hbox{where } \mu_x=2 x^1 x^{2n}-\sum_{j=2}^{2n-1}(x^j)^2.$$ We apply this fact to solving . 
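As a quick sanity check of this standard fact (our illustration, assuming the sympy library; it is not part of the argument), one can verify the case $n=2$, $k=0$ symbolically: applying $\Delta_x^{n+2}$ to $\mu_x^{n+2}$ yields a nonzero constant $c$, so that $p=q/c$ works for constant $q$:

```python
import sympy as sp

n = 2
x = sp.symbols('x1:5')                   # x^1, ..., x^{2n} with 2n = 4
mu_x = 2*x[0]*x[3] - x[1]**2 - x[2]**2   # mu_x = 2 x^1 x^{2n} - sum_j (x^j)^2

def Delta_x(f):
    """Delta_x = 2 d_{x^1} d_{x^{2n}} - sum_{j=2}^{2n-1} d_{x^j}^2."""
    return (2*sp.diff(f, x[0], x[3])
            - sp.diff(f, x[1], 2) - sp.diff(f, x[2], 2))

g = sp.expand(mu_x**(n + 2))
for _ in range(n + 2):                   # apply Delta_x^{n+2}
    g = sp.expand(Delta_x(g))

# g is now a nonzero constant c, so p = q/c solves
# Delta_x^{n+2}(mu_x^{n+2} p) = q for any constant q
print(g)
```

For $q$ of higher homogeneous degree $k$ the analogous computation on the monomials of degree $k$ sets up a linear system, whose solvability is the content of the fact quoted above.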
Writing $g(t,x)=\sum_{j=2n+2}^\infty q_j(x)\,t^{-j}$ with polynomials $q_j(x)$ of homogeneous degree $j-2n-2$, we take, for each $j$, a polynomial $p_j(x)$ such that $\Delta_x^{n+2}\mu_x^{n+2}p_j=q_j$. Then $$\Delta^{n+2}_x{\widetilde}f=g,\quad\hbox{where } {\widetilde}f=\mu_x^{n+2}\sum_{j=2n+2}^\infty p_j(x)\,t^{-j}.$$ It is clear that ${\widetilde}f$ is homogeneous of degree $2$, though ${\widetilde}f$ may not be contained in ${{\cal J}}(1)$. Let us write ${\widetilde}f=\sum_{p+q=2}f^{(p,q)}$, where $f^{(p,q)}$ is homogeneous of degree $(p,q)$. We then see by that $\Delta_x^{n+2}f^{(p,q)}$ is homogeneous of degree $(p-n-2,q-n-2)$. Setting $f=f^{(1,1)}\in{{\cal J}}(1)$, we thus obtain . The proof of Proposition \[def-eq-T0R\] is complete. Now we use the exact sequence and reduce Theorem $5'$ to: Every $H$-invariant of ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$ is realized by the restriction to ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$ of a linear combination of elementary invariants of ${{\cal E}}_1\oplus{{\cal E}}(-n-1)$. Here, an elementary invariant of ${{\cal E}}_1\oplus{{\cal E}}(-n-1)$ is defined to be a complete contraction of the form $${{\rm contr}}\left( R^{(p_1,q_1)}\otimes\cdots\otimes R^{(p_d,q_d)}\otimes E^{(p'_1,q'_1)}\otimes\cdots\otimes E^{(p'_{d'},q'_{d'})} \right),$$ where $R^{(p,q)}=(R_{\alpha{\overline}\beta})_{|\alpha|=p,|\beta|=q}$ and $E^{(p,q)}=(E_{\alpha{\overline}\beta})_{|\alpha|=p,|\beta|=q}$ with $(R_{\alpha{\overline}\beta},E_{\alpha{\overline}\beta})\in{{\cal E}}_1\oplus{{\cal E}}(-n-1) \subset{{\bf T}}_1\oplus{{\bf T}}_{-n-1}$. [*Proof of Theorem $5'$ using Theorem $5''$*]{}. We embed $T_0{{\cal R}}$ into ${{\cal E}}_1\oplus{{\cal E}}(-n-1)$ as a subspace ${{\cal H}}=\{(R,\Delta^{n+2}R):R\in T_0{{\cal R}}\}$ by identifying $R=(R_{\alpha{\overline}\beta})$ with a formal power series in ${{\cal E}}_1$.
We wish to find, for any $N$, a list of elementary invariants $\{W_j\}$ of ${{\bf T}}_1$ such that $I$ is written in the form $$I=\sum c_j W_j+O(E^N){\quad\hbox{on}\ \ } {{\cal H}}, \label{Pexp-tmp}$$ where $O(E^N)$ is a polynomial in $(R_{\alpha{\overline}\beta},E_{\alpha{\overline}\beta})$ consisting of monomials of degree $\ge N$ in $E$. If $(n+1)N$ is greater than the weight of $I$, then the error term $O(E^N)$ vanishes, because each component $E_{\alpha{\overline}\beta}$ has weight $\ge n+1$, and the proof is thereby reduced to showing \eqref{Pexp-tmp}. The proof of \eqref{Pexp-tmp} is analogous to the linearization procedure. Writing $$I(R,E)=S^m(R,E)+O(E^{m+1}),$$ where $S^m$ is homogeneous of degree $m$ in $E$, we show that $S^m$ is an $H$-invariant of ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$. For any $(R,E)\in{{\cal H}}_1\oplus{{\cal H}}(-n-1)$, we choose ${\widetilde}R$ such that $({\widetilde}R,E)\in {{\cal H}}$. Then we have $S^m(R,E)=\lim_{{\varepsilon}\to0}I(R+{\varepsilon}{\widetilde}R,{\varepsilon}E)/{\varepsilon}^m$. Since the right-hand side is $H$-invariant, so is $S^m$ as claimed. Therefore we can find, by Theorem $5''$, a list of elementary invariants $W_j(R,E)$ of ${{\cal E}}_1\oplus{{\cal E}}(-n-1)$ and write $S^m$ as $$S^m=\sum c_j W_j+U,$$ where $U$ vanishes on ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$ and is homogeneous of degree $m$ in $E$. Note that each elementary invariant $W_j(R,E)$ coincides on ${{\cal H}}$ with the elementary invariant $W_j(R,\Delta^{n+2}R)$ of ${{\bf T}}_1$. We next study the remainder $U$.
Recall by Proposition \[def-eq-T0R\] that ${{\cal H}}$ is written as $${{\cal H}}=\{(R,E): P_i(R)={\widetilde}Q_i(E), Q_i(E)=0\hbox{ and } E=\Delta^{n+2}R\},$$ where $\{P_i(R)\}_{i=1}^\infty$ and $\{Q_i(E), {\widetilde}Q_i(E)\}_{i=1}^\infty$ are systems of linear functions on ${{\cal E}}_1$ and ${{\cal E}}(-n-1)$, respectively, such that $${{\cal H}}_1=\cap_i\ker P_i \quad\hbox{and}\quad {{\cal H}}(-n-1)=\cap_i\ker Q_i.$$ Using the defining functions $\{P_i,Q_i\}$ of ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$, we can express $U$ as a finite sum $U=\sum U_i P_i+\sum V_i Q_i$, where $U_i$ (resp. $V_i$) are homogeneous of degree $m$ (resp. $m-1$) in $E$. We thus write $U$ in the form $$U=\sum U_i\,{\widetilde}Q_i+\sum U_i(P_i-{\widetilde}Q_i)+\sum V_i Q_i \label{Uexpression}$$ and find that $U=\sum U_i{\widetilde}Q_i=O(E^{m+1})$ on ${{\cal H}}$, because the last two sums in \eqref{Uexpression} vanish on ${{\cal H}}$. We have shown \eqref{Pexp-tmp} for $N=m+1$. Repeating the same procedure for the remainder, we obtain \eqref{Pexp-tmp} for arbitrary $N$. Since $H$ acts on ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$ by linear transformations, we may restrict our attention to the $H$-invariants $I(R,E)$ which are homogeneous of degrees $d_R$ and $d_E$ in $R$ and $E$, respectively. If $d_E=0$, we may regard $I(R,E)$ as an invariant $I(R)$ of ${{\cal H}}_1$. For $I(R)$, we can apply Theorem C of [@BEG] and express it as a linear combination of elementary invariants of ${{\cal H}}_1$. We thus assume $d_E\ge1$, and again follow the arguments of [@BEG]. The first step of the proof is to express $I(R,E)$ as a component of a linear combination of partial contractions. We denote by $\odot^{p,q}W^*$ the space of bisymmetric tensors of type $(p,q)$ on $W^*$ and by $\odot_0^{p,q}W^*$ the subspace of $\odot^{p,q}W^*$ consisting of trace-free tensors. Let $e^*$ be the row vector $(0,\dots,0,1)\in W^*\otimes\sigma_{1,0}$.
Then we have: \[sm\] For some integer $m\le w-d_R-(n+1)d_E$, a map $$C\colon{{\cal H}}_1\oplus{{\cal H}}(-n-1)\to\odot{}_0^{m,m}W^* \otimes\sigma_{m-w,m-w}$$ is defined by making a linear combination of partial contractions of the tensors $R^{(p,q)}, E^{(p,q)}$ and $e^*$, ${\overline}e^*$ such that $$C_{n\cdots n\,{\overline}n\cdots{\overline}n}=I. \label{CI}$$ Since $R_{\alpha{\overline}\beta}$ and $E_{\alpha{\overline}\beta}$ satisfy $$\label{link0} \begin{array}{ll} R_{\alpha0{\overline}\beta}=(1-|\alpha|)R_{\alpha{\overline}\beta},&\ R_{\alpha\,{\overline}{\beta}\,{\overline}0}=(1-|\beta|) R_{\alpha\,{\overline}{\beta}}, \\ E_{\alpha0{\overline}\beta}=(-n-1-|\alpha|)E_{\alpha{\overline}\beta},&\ E_{\alpha\,{\overline}{\beta}\,{\overline}0}=(-n-1-|\beta|) E_{\alpha\,{\overline}{\beta}}, \end{array}$$ we can write $I(R,E)$ as a polynomial in the components of the form $${{\widehat}R}^{k{\overline}l}_{\alpha{\overline}\beta}= R_{\alpha \underbrace{\scriptstyle{ n\cdots n}}_{k}{\overline}\beta\, \underbrace{\scriptstyle{ {\overline}n\cdots {\overline}n}}_{l}},\quad {{\widehat}E}^{k{\overline}l}_{\alpha{\overline}\beta}= E_{\alpha \underbrace{\scriptstyle{ n\cdots n}}_{k}{\overline}\beta\, \underbrace{\scriptstyle{ {\overline}n\cdots {\overline}n}}_{l}},$$ where $\alpha,\beta$ are lists of indices between 1 and $n-1$. We now regard ${{\widehat}R}^{k{\overline}l}_{\alpha{\overline}\beta},{{\widehat}E}^{k{\overline}l}_{\alpha{\overline}\beta}$ as tensors on ${{\Bbb C}}^{n-1}$ by fixing $k,{\overline}l$. For these tensors, the Levi factor $$L= \left\{ \left( \begin{array}{ccc} \lambda & 0 & 0 \\ 0 & u & 0 \\ 0 & 0 & 1/{\overline}\lambda \end{array} \right): u\in {{\rm U}}(n-1),\ \lambda{\overline}\lambda^{-1}\det u=1 \right\}$$ of $H$ acts as the usual tensorial action of $u$ up to a scale factor depending on $\lambda$. Thus we may regard $I(R,E)$ as a ${{\rm U}}(n-1)$-invariant polynomial.
By Weyl’s classical invariant theory for ${{\rm U}}(n-1)$, we then see that $I$ is expressed as a linear combination of complete contractions of the tensors ${{\widehat}R}^{k{\overline}l}_{\alpha{\overline}\beta},{{\widehat}E}^{k{\overline}l}_{\alpha{\overline}\beta}$ for the standard metric $\delta^{i\,{{{\overline}\jmath}}}$ on ${{\Bbb C}}^{n-1}$. (See Lemma 7.4 of [@BEG] for details of this discussion.) We next replace the contractions with the metric $\delta^{i\,{{{\overline}\jmath}}}$ by those with the metric $g_0$. This is done by using the relation $$\sum_{j=1}^{n-1}T_{j\,{{{\overline}\jmath}}}=- \sum_{i,j=0}^{n}g_0^{i\,{{{\overline}\jmath}}}T_{i\,{{{\overline}\jmath}}}+T_{0\,{\overline}n}+T_{n{\overline}0} {\quad\hbox{for}\ \ } (T_{i\,{{{\overline}\jmath}}})\in W^*\otimes{\overline}W^*.$$ Then several $0$ and ${\overline}0$ come out as indices. These can be eliminated by using \eqref{link0}, and there remain only $n$ and ${\overline}n$ as indices. We then get an expression of $I$ as a linear combination $$I=\sum_{j=1}^k c_j C^{(j)}_{ \underbrace{\scriptstyle{ n\cdots n}}_{p_j} \underbrace{\scriptstyle{{\overline}n\cdots{\overline}n}}_{q_j}},$$ where each $C^{(j)}\in{{\bf T}}^{p_j,q_j}_w$ is given by partial contraction of the tensor products of several $R^{(p,q)}$ and $E^{(p,q)}$. In general, $p_j,q_j$ $(1\le j\le k)$ are different. Denoting by $m$ the maximum of $p_j,q_j$ $(1\le j\le k)$, we define a tensor $$C'=\sum_{j=1}^k c_j (\otimes^{m-p_j}e^*)\otimes C^{(j)}\otimes(\otimes^{m-q_j}{\overline}e^*)\in {{\bf T}}^{m,m}_w.$$ Then we have $I=C'_{n\cdots n{\overline}n\cdots{\overline}n}$ because $e^*_n={\overline}e^*_n=1$. The map $C$ is now given by taking the trace-free bisymmetric part of $C'$. To obtain the estimate $m\le w-d_R-(n+1)d_E$, we note that $C$ contains at least one partial contraction which has no $e^*$ or no ${\overline}e^*$. If such a term consists of $R^{(p_j,q_j)}$, $E^{(p'_j,q'_j)}$ and several $e^*$ (resp.
${\overline}e^*$), then $C$ takes values in $\odot^{m,m}_0 W^*\otimes\sigma_{\kappa,\kappa}$ with $\kappa=\sum_{j=1}^{d_R}(1-q_j)+\sum_{j=1}^{d_E}(-n-1-q'_j)$ (resp. the same relation with $p$ in place of $q$). Hence noting $\kappa=m-w$ and $p_j,q_j\ge2$, we obtain $m-w\le\sum_{j=1}^{d_R}(-1)+\sum_{j=1}^{d_E}(-n-1)$, which is equivalent to the desired estimate for $m$. Now we regard ${{\cal H}}_1\oplus{{\cal H}}(-n-1)$ as the space of pairs of formal power series $(\varphi,\eta)$ about $e_0\in W$ and write $I(\varphi,\eta)$ for $I(R,E)$ and $C(\varphi,\eta)$ for $C(R,E)$. If we replace the tensors $R_{\alpha{\overline}\beta}$ (resp. $E_{\alpha{\overline}\beta}$) in the partial contractions in $C$ by the formal power series ${\partial}_\zeta^\alpha{\partial}_{{\overline}\zeta}^\beta\varphi$ (resp. ${\partial}_\zeta^\alpha{\partial}_{{\overline}\zeta}^\beta\eta$), we obtain a formal power series about $e_0\in W$ which takes values in $\odot{}^{m,m}_0W^*$. Restricting this power series to ${{\cal Q}}$ and raising all indices by using $g_0$, we obtain a map $${\widetilde}C\colon{{\cal H}}_1\oplus{{\cal H}}(-n-1) \to \odot{}^{m,m}_0W\otimes{{\cal J}}(m-w),$$ which satisfies ${\widetilde}C(\varphi,\eta)|_{e_0}=C(\varphi,\eta)$ when all indices are raised. Note that ${{\cal H}}(k)$, ${{\cal H}}^k$, ${{\cal H}}_k$ and ${{\cal J}}(k)$ admit a natural structure of $({{\frak s\frak u}}(g_0),H)$-modules, where ${{\frak s\frak u}}(g_0)$ is the Lie algebra of ${{\rm SU}}(g_0)$. For ${{\cal H}}(k)$, ${{\cal H}}^k$ and ${{\cal J}}(k)$ there are natural $({{\frak s\frak u}}(g_0),H)$-actions induced from the action of ${{\rm SU}}(g_0)$ on $W$ and ${{\cal Q}}$. For ${{\cal H}}_k$, a $({{\frak s\frak u}}(g_0),H)$-action is induced via the $H$-isomorphism ${{\cal H}}_k\cong{{\cal H}}(k)/{{\cal H}}^k$.
We also consider the complexification of these spaces and denote them, e.g., by ${{\cal H}}^{{\Bbb C}}(k)$, ${{\cal J}}^{{\Bbb C}}(k)$. Now we have: [(i)]{} There exists a unique $({{\frak s\frak u}}(g_0),H)$-equivariant map $${\widetilde}I\colon{{{\cal H}}_1}\oplus{{\cal H}}(-n-1)\to{{\cal J}}^{{\Bbb C}}(-w)$$ such that ${\widetilde}I(\varphi,\eta)|_{e_0}=I(\varphi,\eta)$ for any $(\varphi,\eta)\in{{{\cal H}}}_1\oplus{{\cal H}}(-n-1)$. [(ii)]{} For any $(\varphi,\eta)\in{{\cal H}}_1\oplus{{\cal H}}(-n-1)$, $${\widetilde}C(\varphi,\eta)^{\alpha{\overline}\beta}= \zeta ^{\alpha_1}\cdots \zeta ^{\alpha_m} {\overline}\zeta{}^{\beta_1}\cdots{\overline}\zeta{}^{\beta_m} {\widetilde}I(\varphi,\eta). \label{wtCI}$$ The proof of this lemma goes exactly the same way as those of Propositions 8.1 and 8.5 of [@BEG], where the $({{\frak s\frak u}}(g_0),H)$-equivariance of ${\widetilde}C$ is used essentially. The final step in the proof of Theorem $5''$ is to obtain an explicit expression of ${\widetilde}I$ in terms of ${\widetilde}C$ by differentiating both sides of the equation \eqref{wtCI}. We first introduce differential operators $D_{i\,{{{\overline}\jmath}}}\colon{{\cal E}}^{{\Bbb C}}(s+1)\to{{\cal E}}^{{\Bbb C}}(s)$ for $(n+2s)(n+2s+1)\ne0$ by $$D_{i\,{{{\overline}\jmath}}} f= \left({\partial}_{\zeta^i}-\frac{\zeta_i\Delta}{n+2s}\right) \left({\partial}_{{{\overline}\zeta}^j}-\frac{{\overline}\zeta_j\Delta}{n+2s+1}\right)f,$$ where the index for $\zeta$ is lowered with $g_0$. Then one can easily check the following facts: (i) $D_{i\,{{{\overline}\jmath}}}(\mu\,f)=O(\mu)$, so that $(D_{i\,{{{\overline}\jmath}}}f)|_{\cal Q}$ depends only on $f|_{\cal Q}$; (ii) For any $f\in{{\cal E}}^{{\Bbb C}}(s)$, $$D_{i\,{{{\overline}\jmath}}}(\zeta^i{\overline}\zeta{}^j f)=c_s\,f, {\quad\hbox{where}\ \ } c_s=\frac{(n+s)^2(n+2s+2)}{n+2s}$$ and the repeated indices are summed over $0,1,\dots,n$; see Lemma 8.7 of [@BEG].
In view of (i), (ii) and taking an arbitrary extension of ${\widetilde}I(\varphi,\eta)$ to ${{\cal E}}(-w)$, we get $$\begin{array}{rl} D_{\alpha_1{\overline}\beta_1}D_{\alpha_2{\overline}\beta_2} &\cdots D_{\alpha_m{\overline}\beta_m}{\widetilde}C^{\alpha{\overline}\beta}(\varphi,\eta)\\ &=D_{\alpha_1{\overline}\beta_1}D_{\alpha_2{\overline}\beta_2}\cdots D_{\alpha_m{\overline}\beta_m} (\zeta^{\alpha}{\overline}\zeta{}^{\beta}{\widetilde}I(\varphi,\eta))\\ &=c_{-w}c_{-w+1}\cdots c_{-w+m-1}\,{\widetilde}I(\varphi,\eta) {\quad\hbox{on}\ \ }{{\cal Q}}. \end{array}$$ Since $m-w\le-d_R-(n+1)d_E$ and $1\le d_E$, we have $m-w\le-n-1$. Hence, all $D_{i{{{\overline}\jmath}}}$ appearing above are well-defined and all $c_s\ne 0$. Therefore, $$I(\varphi,\eta)= \frac{1}{c_{-w}\cdots c_{-w+m-1}} D_{\alpha_1{\overline}\beta_1}\cdots D_{\alpha_m{\overline}\beta_m} {\widetilde}C^{\alpha{\overline}\beta}(\varphi,\eta)|_{e_0},$$ and $I$ is expressed as a linear combination of complete contractions. The assumption $d_E\ge1$ was used only in the final step of the proof to ensure $c_s\ne0$. The argument above is valid even if $d_E=0$ as long as $d_R\ge n$. This is exactly the proof of Theorem C of [@BEG] for invariants of high degrees. To treat the invariants of low degrees on ${{\cal H}}_1$, the authors used an entirely different argument. \[rem-graham\] The tensors $E^{(p,q)}$ used in this section are modeled on the biholomorphically invariant tensors $$E^{(p,q)}_k=\nabla{}^p{\overline}\nabla{}^q(|z^0|^{-2k(n+1)}\eta_k),$$ which were introduced by Graham [@G2]. He used these tensors to construct CR invariants from the complete contractions of the form $${{\rm contr}}\big(R^{(p_1,q_1)}\otimes\cdots R^{(p_d,q_d)}\otimes E_{k_1}^{(p_1',q_1')}\otimes\cdots E_{k_{d'}}^{(p'_{d'},q'_{d'})}\big).$$ Such complete contractions give rise to CR invariants if, for example, $p_j,q_j< n+2$ and $p'_j,q_j'<n+1$. 
This class of CR invariants corresponds to ${{\cal C}}$-independent Weyl invariants which contain the covariant derivatives of the Ricci tensor. In fact, $g[r]$ is Ricci flat if and only if $\eta_k=0$ for all $k\ge1$, because the Ricci form of $g[r]$ is given by ${\partial}{\overline}{\partial}\log J[r]$. Thus, a CR invariant depending on $E^{(p,q)}_k$ must contain the covariant derivatives of the Ricci tensor when it is expressed as a Weyl invariant. Proof of Theorem 3 ================== By virtue of Theorem 2, it suffices to prove: \[prop-c-dependence\] Let $n\ge3$ $($resp. $n=2)$. Then every Weyl invariant of weight $w\le n+2$ $($resp. $w\le5)$ is ${{\cal C}}$-independent. For $w=n+3$ $($resp. $w=6)$, there exists a ${{\cal C}}$-dependent Weyl invariant of weight $w$. [*Proof of Proposition $\ref{prop-c-dependence}$*]{}. Take a Weyl polynomial $W_\#$ of weight $w$ and set $I(A,C)=I_W(A,C)$. We begin by inspecting the linear part of $I(A,C)$. \[lem-linear-term\] If $I(A,C)$ has nonzero linear part, then $w=n+2$ and the linear part is a constant multiple of $\Delta_x^{n+2}f_A(e_0)$ with $f_A$ as defined earlier. If $W_\#(R)$ has no linear terms, neither does $I(A,C)$. Thus it suffices to consider the case where $W_\#(R)$ is a linear complete contraction ${{\rm contr}}(R^{(p,p)})$. In this case, we have $$\label{linear-weyl} I(A,C)=\Delta^p\varphi(e_0)+Q(A,C),$$ where $Q(A,C)$ is a polynomial in $(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l)$ without linear terms. It follows that $\Delta^p\varphi(e_0)=0$ if $p\ne n+2$ and $\Delta^{n+2}\varphi(e_0)=-\Delta_x^{n+2}f_A(e_0)$. Thus we obtain the lemma. We next consider nonlinear terms in $I(A,C)$.
Since $I(A,C)$ is invariant under the action of ${{\rm U}}(n-1)$, it is written as a linear combination of complete contractions of the form $$\label{contrAC} {{\rm contr}}' \left( {{\bf A}}^{l_1}_{p_1{\overline}q_1}\otimes\cdots\otimes {{\bf A}}^{l_d}_{p_d{\overline}q_d}\otimes {\bf C}^{l'_1}_{p'_1{\overline}q'_1} \otimes\cdots\otimes{\bf C}^{l'_{d'}}_{p'_{d'}{\overline}q'_{d'}} \right)$$ with $\sum_{j=1}^d(p_j+q_j+2l_j-2)+\sum_{j=1}^{d'}(p'_j+q'_j+2l'_j+2n+2)=2w$. Here ${{\bf C}}^l_{p{\overline}q}=(C_{\alpha{\overline}\beta}^l)_{|\alpha|=p,|\beta|=q}$ is regarded as a tensor of type $(p,q)$ on ${{\Bbb C}}^{n-1}$ and the contraction is taken with respect to $\delta^{i{{{\overline}\jmath}}}$ for some pairing of lower indices. Suppose \eqref{contrAC} is nonlinear and contains the variables $C_{\alpha{\overline}\beta}^l$, so that $d+d'\ge2$ and $d'\ge1$. Then $p_j+q_j\ge4$ implies $w\ge n+2$. The equality $w=n+2$ holds only for ${{\rm contr}}'({{\bf A}}^0_{2{\overline}2}\otimes{{{\bf C}}}^0_{0{\overline}0})$, while by (N2), $${{\rm contr}}'({{\bf A}}^0_{2{\overline}2}\otimes{{{\bf C}}}^0_{0{\overline}0}) ={{\rm contr}}'({{\bf A}}^0_{2{\overline}2})C^0=0,$$ where $C^0$ is the only component of ${{\bf C}}^0_{0{\overline}0}$. Thus $I(A,C)$ containing $C_{\alpha{\overline}\beta}^l$ has weight $\ge n+3$. In case $w=n+3$, there are only two types of contractions of the form \eqref{contrAC}, namely, $$\label{contrAAC} {{\rm contr}}'( {{\bf A}}^0_{2{\overline}2}\otimes{{\bf A}}^0_{2{\overline}2} \otimes{{\bf C}}^0_{0{\overline}0})\quad\hbox{and}\quad {{\rm contr}}'({{\bf A}}^l_{p{\overline}q}\otimes{{\bf C}}^{l'}_{p'{\overline}q'}),$$ where $p+p'=q+q'=3-l-l'$. The contractions of the second type always vanish by (N2) (see §3), and the first ones vanish except for the case $\|{{\bf A}}^0_{2{\overline}2}\|^2\,C^0 = \sum_{i,j,k,l=1}^{n-1}|A_{ij\,{\overline}k\,{\overline}l}^0|^2\,C^0$; this also vanishes for $n=2$ because $A_{11\,{\overline}1{\overline}1}^0={{\rm tr}}\,{{\bf A}}^0_{2{\overline}2}=0$.
This completes the proof of the first statement of Proposition \[prop-c-dependence\]. To prove the second statement, we consider, for $n=2$, a complete contraction of weight $6$: $$W_{2}=\sum_{|\alpha|=6,|\beta|=2} R_{\alpha{\overline}\beta}R^{{\overline}\beta\alpha}$$ and, for $n\ge 3$, $$W_{n}=\sum_{|\alpha|=|\beta|=2,|\gamma|=n+2} R_{\alpha\,{\overline}\beta}R^{{\overline}\beta}{}^\gamma R_\gamma{}^\alpha,$$ which has weight $n+3$. Here indices are raised by using $g_0$. These complete contractions give ${{\cal C}}$-dependent Weyl invariants. In fact: Let $I_n(A,C)=I_{W_n}(A,C)$. Then $$\label{p2} I_2(A,C)=72\cdot6!\,(C^0)^2+Q_2(A,C),$$ where $Q_2$ is a polynomial in $(A_{\alpha{\overline}\beta}^l,C_{\alpha{\overline}\beta}^l)$ such that $Q_2(0,C)=0$. For $n\ge3$, $$\label{pn} I_n(A,C)=(-1)^n 64\,(n+2)!\,\|{{\bf A}}_{2{\overline}2}^0\|^2 C^0+Q_n(A),$$ where $Q_n(A)$ is a polynomial in $A_{\alpha{\overline}\beta}^l$. We first prove \eqref{pn}. Since $I_n(A,C)=c_n\,\|{{\bf A}}_{2{\overline}2}^0\|^2 C^0+Q_n(A)$ for some constant $c_n$, to determine $c_n$, we consider a family of surfaces with real parameter $s$ $$2u=|z'|^2+f_s(z',{\overline}z'),{\quad\hbox{where}\ \ } f_s(z',{\overline}z')=2s\,{{\rm Re}}\,(z^1)^2({\overline}z^2)^2.$$ Let $A_s\in{{\cal N}}$ denote the list of normal form coefficients of this surface and $C_t\in{{\cal C}}$ the element such that $C^0=t$ and all the other components vanish. Then $$\label{Pn-1} I_n(A_s,C_t)=c_n\,s^2t+Q_n(A_s)=c_n\,s^2t+O(s^{n+3}).$$ On the other hand, we see that the components $R_{\alpha{\overline}\beta}(s,t)$ of $\iota(A_s,C_t)$ satisfy $$R_{\alpha{\overline}\beta}(s,t)=S_{\alpha{\overline}\beta}+O(s^2+t^2),$$ where $S_{\alpha{\overline}\beta}={\partial}_\zeta^\alpha{\partial}_{{\overline}\zeta}^\beta\varphi(e_0)$ with $\varphi={{\cal L}}(f_s,t)$.
Thus $$\label{Pn-2} I_n(A_s,C_t)=W'_n+O((s^2+t^2)^2),$$ where $$\label{Wn} W'_n=\sum_{|\alpha|=|\beta|=2,|\gamma|=n+2} S_{\alpha{\overline}\beta}\,S^{{\overline}\beta\gamma}\,S_\gamma{}^\alpha.$$ Comparing \eqref{Pn-2} with \eqref{Pn-1}, we get $$W_n'=c_n\,s^2 t.$$ Since $$\varphi=-|\zeta^0|^2 f_s+t\,|\zeta^0|^{-2(n+1)}\mu^{n+2}/(n+2)!,$$ the term $S_{\alpha{\overline}\beta}$ in the sum of \eqref{Wn} vanishes except for $S_{11{\overline}2{\overline}2}=S_{22{\overline}1{\overline}1}=-4s$. Using $$\sum_{|\gamma|=n+2} S^{{\overline}2{\overline}2\,\gamma}S_\gamma{}^{11}= \sum_{|\gamma|=n+2} S_{{\overline}1{\overline}1\,\gamma}S^\gamma{}_{22}= {\overline}{\sum_{|\gamma|=n+2} S^{{\overline}1{\overline}1\,\gamma}S_\gamma{}^{22}},$$ we then obtain $$\begin{aligned} W'_n&=&-4s\sum_{|\gamma|=n+2} \left( S^{{\overline}2{\overline}2\,\gamma}S_\gamma{}^{11}+ S^{{\overline}1{\overline}1\,\gamma}S_\gamma{}^{22} \right)\\ &=&-8s\,{{\rm Re}}\sum_{|\gamma|=n+2} S_{\gamma\,{\overline}1{\overline}1}S^\gamma{}_{22}.\end{aligned}$$ In the last sum, $S_{\gamma\,{\overline}1{\overline}1}$ vanishes unless $\gamma$ is a permutation of $0\cdots022$ or $11n\cdots n$, while $$S_{0\cdots022\,{\overline}1{\overline}1}= S_{22}{}^{11n\cdots n}=(-1)^{n+1}4\cdot n!\,s,$$ $$S_{11n\cdots n\,{\overline}1{\overline}1}=S_{22}{}^{0\cdots022}=2\,t.$$ Therefore $W_n'=(-1)^n 64\, (n+2)!\,s^2\,t$, so that $c_n=(-1)^n 64\,(n+2)!$. We next prove \eqref{p2}. Since $I_2(A,C)$ contains no linear term, we have $$I_2(A,C)=c_2\,(C^0)^2+Q_2(A,C)$$ for a constant $c_2$.
Restriction of this formula to $(A,C)=(0,C_t)$ yields $$\label{P2t} I_2(0,C_t)=c_2\,t^2+O(t^3),$$ while, by the expression $\varphi={{\cal L}}(0,t)=t\,|\zeta^0|^{-6}\mu^4/4!$, $$I_2(0,C_t)=W_2'+O(t^3),\quad W'_2=\sum_{|\alpha|=6,|\beta|=2} S_{\alpha\,{\overline}\beta}\,S^{{\overline}\beta\,\alpha}.$$ Since $S_{\alpha{\overline}0{\overline}k}=S^{{\overline}2\,{\overline}k\,\alpha}=0$ for $k=0,1,2$ and any list $\alpha$, we have $$W'_2=\sum_{|\alpha|=6}S_{\alpha\,{\overline}1{\overline}1}\,S^{{\overline}1{\overline}1\,\alpha}.$$ In this sum, $S_{\alpha\,{\overline}1{\overline}1}$ vanishes unless $\alpha$ is a permutation of $001122$, while $$S_{001122{\overline}1{\overline}1}=S^{{\overline}1{\overline}1\,001122}=4!\,t.$$ Thus $W'_2=72\cdot 6!\,t^2$. This together with \eqref{P2t} yields $c_2=72\cdot 6!$. As a consequence of Lemma \[lem-linear-term\], we see that a CR invariant $I(A)$ of weight $w$ can contain linear terms only when $w=n+1$ and that the linear part must be a constant multiple of $\Delta_x^{n+2}f_A(e_0)$. This fact is equivalent to Theorem 2.3 of Graham [@G2]. T. N. Bailey, M. G. Eastwood and C. R. Graham, Invariant theory for conformal and CR geometry, [*Ann. of Math.*]{} [**139**]{} (1994), 491–552. L. Boutet de Monvel, Complément sur le noyau de Bergman, Sém. EDP, École Polytech., 1985–86, exposé n$^\circ$ 20, 1986. L. Boutet de Monvel, Le noyau de Bergman en dimension $2$, Sém. EDP, École Polytech., 1987–88, exposé n$^\circ$ 22, 1988. L. Boutet de Monvel and J. Sjöstrand, Sur la singularité des noyaux de Bergman et de Szegö, [*Astérisque*]{} [**34-35**]{} (1976), 123–164. S. S. Chern and J. K. Moser, Real hypersurfaces in complex manifolds, [*Acta Math.*]{} [**133**]{} (1974), 219–271; Erratum: [*Acta Math.*]{} [**150**]{} (1983), 297. M. G. Eastwood and C. R. Graham, Invariants of CR densities, [*Proc. Sympos. Pure Math.*]{} [**52**]{} (1991), 117–133. M. G. Eastwood and C. R. Graham, Invariants of conformal densities, [*Duke Math. J.*]{} [**63**]{} (1991), 633–671. C. Fefferman, The Bergman kernel and biholomorphic mappings of pseudoconvex domains, [*Invent. Math.*]{} [**26**]{} (1974), 1–65.
C. Fefferman, Monge-Ampère equations, the Bergman kernel, and geometry of pseudoconvex domains, [*Ann. of Math.*]{} [**103**]{} (1976), 395–416; Correction: [*Ann. of Math.*]{} [**104**]{} (1976), 393–394. C. Fefferman, Parabolic invariant theory in complex analysis, [*Adv. in Math.*]{} [**31**]{} (1979), 131–262. C. R. Graham, Scalar boundary invariants and the Bergman kernel, in: [*Complex Analysis*]{} II, [*Lecture Notes in Math.*]{} [**1276**]{} (1987), 108–135, Springer-Verlag, New York. C. R. Graham, Higher asymptotics of the complex Monge-Ampère equation, [*Compos. Math.*]{} [**64**]{} (1987), 133–155. K. Hirachi, G. Komatsu and N. Nakazawa, Two methods of determining local invariants in the Szegö kernel, in: [*Complex Geometry*]{}, [*Lecture Notes in Pure Appl. Math.*]{} (1993), 77–96, Dekker, New York. K. Hirachi, CR invariants of weight five in the Bergman kernel, [*Adv. in Math.*]{} [**143**]{} (1999), 185–250. J. M. Lee and R. Melrose, Boundary behaviour of the complex Monge-Ampère equation, [*Acta Math.*]{} [**148**]{} (1982), 159–192.
--- abstract: 'Non-Markovian effects can speed up the dynamics of quantum systems while the limits of the evolution time can be derived by quantifiers of quantum statistical speed. We introduce a measure for characterizing the non-Markovianity of quantum evolutions through the Hilbert-Schmidt speed (HSS), which is a special type of quantum statistical speed. This measure has the advantage of not requiring diagonalization of evolved density matrix. Its sensitivity is investigated by considering several paradigmatic instances of open quantum systems, such as one qubit subject to phase-covariant noise and Pauli channel, two independent qubits locally interacting with leaky cavities, V-type and $\Lambda $-type three-level atom (qutrit) in a dissipative cavity. We show that the proposed HSS-based non-Markovianity measure detects memory effects in perfect agreement with the well-established trace distance-based measure, being sensitive to system-environment information backflows.' author: - Hossein Rangani Jahromi - Kobra Mahdavipour - Mahshid Khazaei Shadfar - Rosario Lo Franco title: 'Measuring non-Markovian effects of quantum processes through Hilbert-Schmidt speed' --- Introduction\[introduction\] ============================ The interaction of quantum systems with the surrounding environment leads to dissipating energy and losing quantum coherence [@breuer2002theory]. Nevertheless, the process does not need to be monotonic and the quantum system may recover temporarily some of the lost energy or information due to memory effects during the evolution [@NM1; @breuer2016colloquium; @rivas2014quantum; @lofrancoreview; @mortezapourOpt; @RevModPhys.86.1203; @GholipourAnnPhys; @darrigo2012AOP; @LoFrancoNatCom; @origin2; @origin1; @NMexp5; @NMexp4; @NMexp3; @NMexp2; @NMexp1]. 
This dynamical behavior, named non-Markovianity, can then act as a resource in various quantum information tasks such as teleportation with mixed states [@laine2014nonlocal], improvement of capacity for long quantum channels [@bylicka2014non], efficient entangling protocols [@xiang2014entanglement; @mirkin1; @mirkin2], and work extraction from an Otto cycle [@thomas2018thermodynamics]. Characterization and quantification of non-Markovianity have been a subject of intense study [@rivas2014quantum; @breuer2016colloquium; @teittinen2018revealing; @naikoo2019facets]. One route is to investigate temporary increases of the entanglement shared by the open quantum system with an isolated ancilla, which amounts to measuring the deviation from complete positivity (CP-divisibility) of the dynamical map describing the evolution of the system [@rivas2010entanglement]. Another approach [@breuer2009measure; @laine2010measure] relies on measuring the distinguishability of two optimal initial states evolving through the same quantum channel and detecting any non-monotonicity (information backflows).
Further witnesses of non-Markovianity have been proposed, based on different dynamical figures of merit, such as: channel capacities [@bylicka2014non], quantum mutual information [@luo2012quantifying], local quantum uncertainty [@he2014measuring], quantum interferometric power [@dhar2015characterizing; @girolami2014quantum; @fanchini2017lectures; @jahromi2019multiparameter], coherence [@chanda2016delineating; @he2017non], state fidelity [@jahromi2019multiparameter; @rajagopal2010kraus; @farajollahi2018estimation], change of volume of the set of accessible states of the evolved system [@lorenzo2013geometrical], Fisher information flow [@lu2010quantum; @rangani2017relation], spectral analysis [@zhang2012general], entropy production rates [@strasberg2019non; @jahromi2015precision], correlation measures [@de2019correlation], Choi state [@zheng2020detecting] and quantum evolution speedup [@deffner2013quantum; @xu2014quantum; @xu2018hierarchical]. This variety of witnesses and approaches highlights the multifaceted nature of non-Markovian behavior, which hence cannot be attributed to a unique feature of the system-environment interaction and thus cannot be characterized by means of a single tool. CP-divisibility is the most common definition for Markovianity in open quantum systems [@breuer2002theory; @rivas2014quantum]. A dynamical map $\{\mathcal{E}_{t}\}_{t\geq 0} $ is defined as a family of completely positive (CP) and trace-preserving (TP) maps acting on the system Hilbert space $ \mathcal{H} $. Generally speaking, one calls a map $k$-positive if the composite map $\mathcal{E}_{t}\otimes \mathbb{I}_{k} $ is positive, where $ k $, $ \mathbb{I}_{k} $ denote the dimensionality of the ancillary Hilbert space and its identity operator, respectively [@darek2019]. If $\mathcal{E}_{t}\otimes \mathbb{I}_{k} $ is positive for all $ k\geq 0 $ and for all $t$, then the dynamical map is completely positive.
One then says that the dynamical map $ \mathcal{E}_{t} $ is CP-divisible (P-divisible) when the propagator $ V_{t,s} $, defined by $\mathcal{E}_{t}=V_{t,s}\circ \mathcal{E}_{s} $, is completely positive (positive) for all $ t\geq s \geq 0 $ [@breuer2002theory]. According to the non-Markovianity measure introduced by Rivas-Huelga-Plenio (RHP) [@rivas2010entanglement], the quantum evolution is considered Markovian if and only if the corresponding dynamical map $ \mathcal{E}_{t} $ is CP-divisible. The non-Markovian character of the system dynamics can be identified through another well-known perspective proposed by Breuer-Laine-Piilo (BLP), namely the distinguishability of two evolving quantum states of the same system [@breuer2009measure; @laine2010measure]. This distinguishability is quantified by the trace distance, a commonly used distance measure for two arbitrary states $ \rho_{1} $ and $ \rho_{2} $, defined as $D(\rho_{1},\rho_{2})=\frac{1}{2}\text{Tr}|\rho_{1}-\rho_{2}|$, where $ |A|=\sqrt{A^{\dagger}A} $ for some operator $ A $. The trace distance $ D(\rho_{1},\rho_{2}) $ is contractive under CPTP maps, i.e. $ D(\mathcal{E}_{t}(\rho_{1}),\mathcal{E}_{t}(\rho_{2}))\leq D(\rho_{1},\rho_{2})$. Nevertheless, this does not mean generally that $ D(\mathcal{E}_{t}(\rho_{1}),\mathcal{E}_{t}(\rho_{2})) $ is a monotonically decreasing function of time. In fact, $\frac{\mathrm{d}}{\mathrm{d}t}D(\mathcal{E}_{t}(\rho_{1}),\mathcal{E}_{t}(\rho_{2})) > 0$ implies violation of P-divisibility and therefore of CP-divisibility [@breuer2009measure; @PhysRevA.90.022110]. In other words, under any Markovian evolution of the quantum system, one gets $ \mathrm{d} D(\mathcal{E}_{t}(\rho_{1}),\mathcal{E}_{t}(\rho_{2}))/{\mathrm{d}t}\leq 0$, owing to the contraction property. Therefore, its non-monotonicity can be understood as a witness of non-Markovianity due to system-environment backflows of information. 
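The contraction property just quoted is easy to see numerically. The sketch below is an illustrative example with an assumed single-qubit depolarizing channel (not one of the models studied later in the paper): it computes the trace distance of two qubit states before and after the CPTP map.

```python
# Illustrative check: the trace distance D(rho1, rho2) = (1/2) Tr|rho1 - rho2|
# contracts under a CPTP map, here an assumed depolarizing channel.
import numpy as np

def trace_distance(r1, r2):
    # Tr|A| = sum of |eigenvalues| of the Hermitian difference
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))

def depolarize(rho, p):
    # E(rho) = (1 - p) rho + p I/2, CPTP for 0 <= p <= 1
    return (1 - p) * rho + p * np.eye(2) / 2

rho1 = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

d_before = trace_distance(rho1, rho2)                     # = 1/sqrt(2)
d_after = trace_distance(depolarize(rho1, 0.3), depolarize(rho2, 0.3))
print(d_before, d_after)   # d_after <= d_before, as contractivity requires
```

Non-Markovianity witnesses look precisely for violations of this monotone decrease along the actual (time-dependent) dynamics.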
Studies on the role of typical figures of merit for quantum metrology, based on quantum Fisher information metric, to witness non-Markovianity have been also reported [@lu2010quantum; @adesso2017]. On the other hand, non-Markovian effects can speed up the quantum evolution of a system [@deffner2013quantum; @PhysRevA.94.052125; @PhysRevA.93.020105; @Ahansaz2019; @adesso2017]. It is known that quantifiers of statistical speed in the system Hilbert space may be associated with measures adopted in quantum metrology to investigate the ultimate limit of precision in estimating a given physical quantity [@braunstein1994statistical]. The sensitivity of an initial quantum state to changes of the parameter (e.g., an unknown phase shift) of a dynamical evolution can be then determined by measures of quantum statistical speed [@gessner2018statistical]. A higher sensitivity implies higher precision in the estimation of the parameter of interest [@braunstein1994statistical; @giovannetti2006quantum]. These arguments naturally motivate one to inquire whether measures of quantum statistical speed can conveniently quantify the non-Markovian character of the system dynamics, a problem which has remained unexplored. Here, we address this issue introducing a method for witnessing and measuring non-Markovianity by means of the Hilbert-Schmidt speed (HSS) [@gessner2018statistical], a type of quantum statistical speed which has the advantage of avoiding diagonalization of the evolved density matrix. We check the efficiency of the proposed HSS-based witness in several typical situations of open quantum systems made of qubits and qutrits. 
In particular, we consider: one qubit subject to phase-covariant noise [@lankinen2016complete], especially the so-called eternal non-Markovianity model [@he2017non; @chruscinski2013non; @chruscinski2015non; @hall2014canonical; @teittinen2019there]; a single qubit undergoing the Pauli channel [@breuer2016colloquium; @chruscinski2013non; @song2017dynamics]; two independent qubits locally interacting with leaky cavities; V-type and $\Lambda $-type three-level atom (qutrit) in a dissipative cavity. We find that the HSS-based non-Markovianity measure identifies memory effects in total agreement with the trace distance-based BLP measure, thus detecting system-environment information backflows. The paper is organized as follows. In Sec. \[sec:HSS\] we briefly review the definition of the Hilbert-Schmidt speed. In Sec. \[Witness\] we introduce the measure of quantum non-Markovianity via the HSS. Through various examples, the sensitivity of this measure in detecting memory effects is studied in Sec. \[Example\]. Finally, Sec. \[cunclusion\] summarizes the main results and prospects. Hilbert-Schmidt speed (HSS) {#sec:HSS} =========================== We start by recalling the general framework leading to the definition of quantum statistical speed, of which the HSS is a particular case. Let us consider the family of distance measures $$\label{dis} [d_{\alpha}(p,q)]^{\alpha}=\dfrac{1}{2}\sum\limits_{x}^{}|p_{x}-q_{x}|^{\alpha},$$ with $ \alpha\geq 1 $ and where $ p = \{p_{x}\}_{x} $ and $ q = \{q_{x}\}_{x} $ are probability distributions. Here it is assumed that the random variable $ x $ takes only discrete values; in the case of a continuum of values, the sum is replaced by an integral.
These distances satisfy the following basic properties: (i) non-negativity and normalization $0 \leq d_{\alpha}(p,q)\leq 1 $, where $ d_{\alpha}(p,q)=0~\leftrightarrow p\equiv q $; (ii) triangle inequality $ d_{\alpha}(p_{1},p_{3})\leq d_{\alpha}(p_{1},p_{2})+d_{\alpha}(p_{2},p_{3}) $; (iii) symmetry $ d_{\alpha}(p,q)=d_{\alpha}(q,p) $. Generally, in order to obtain the statistical speed from any statistical distance, one should quantify the distance between infinitesimally close distributions taken from a one-parameter family $ p_{x}(\varphi) $ with parameter $ \varphi $. Then, the classical statistical speed is given by $$\label{classicalspeed} \text{s}_{\alpha}\big[p(\varphi_{0})\big]=\dfrac{\mathrm{d}}{\mathrm{d}\varphi}d_{\alpha}\big(p(\varphi_{0}+\varphi),p(\varphi_{0})\big).$$ Considering now a given pair of quantum states $ \rho $ and $ \sigma $, one can extend these classical notions to the quantum case by taking $ p_{x} = \text{Tr}\{E_{x}\rho\} $ and $ q_{x} = \text{Tr}\{E_{x}\sigma\} $ as the measurement probabilities associated with the positive-operator-valued measure (POVM) defined by the set of $ \{E_{x}\geq 0\} $ satisfying $\sum_{x}^{} E_{x} = \mathbb{I} $, where $\mathbb{I}$ is the identity operator. Maximizing the classical distance over all possible choices of POVMs, one obtains the corresponding quantum distance $$\label{qdis} D_{\alpha}(\rho,\sigma)=\max_{\{E_{x}\}}d_{\alpha}(p,q),$$ which leads to the expression [@gessner2018statistical] $$\label{qqdis} [D_{\alpha}(\rho,\sigma)]^{\alpha}=\frac{1}{2}\text{Tr}|\rho-\sigma|^{\alpha},$$ where $ |X|^{\alpha} $ can be computed using the spectral decomposition $ X\equiv\sum_{i}^{}\lambda_{i}|\lambda_{i} \rangle \langle \lambda_{i}| $, i.e., $ |X|^{\alpha}=\sum_{i}^{}|\lambda_{i}|^{\alpha}|\lambda_{i} \rangle \langle \lambda_{i}| $, so that $\text{Tr}|X|^{\alpha}=\sum_{i}^{}|\lambda_{i}|^{\alpha} $. 
For $ \alpha = 1 $, the trace distance $D(\rho_{1},\rho_{2})=\frac{1}{2}\text{Tr}|\rho_{1}-\rho_{2}|$ is retrieved, while for $ \alpha = 2 $ one gets the so-called Hilbert-Schmidt distance $D_{2}(\rho,\sigma)$, which allows for a simple evaluation because it does not require diagonalization of the argument operator. This distance is of Riemannian type and is bounded by the following inequality $$0 \leq D_{2}(\rho,\sigma) \leq 2D(\rho,\sigma).$$ The Hilbert-Schmidt distance generally does not possess the contractivity property, although quantum systems such as qubits constitute useful exceptions. Necessary and sufficient conditions for contractivity of the Hilbert-Schmidt distance under Lindblad dynamics have been discussed [@wang2009contractivity]. For a single qubit, it is straightforward to show that the trace and Hilbert-Schmidt distances are proportional, namely $$D_{2}(\rho,\sigma)=\sqrt{2} D(\rho,\sigma),$$ so that contractivity of the trace distance implies contractivity of the Hilbert-Schmidt distance. However, it is worth noticing that this argument cannot be generalized to systems with Hilbert space dimension larger than two [@wang2009contractivity]. Extending Eq. (\[classicalspeed\]) to the quantum case, one then obtains the quantum statistical speed as [@gessner2018statistical] $$\label{quantumspeed} \text{S}_{\alpha}\big[\rho(\varphi)\big]=\max_{\{E_{x}\}} \text{s}_{\alpha}\big[p(\varphi)\big]=\bigg(\frac{1}{2}\text{Tr}\bigg|\dfrac{d\rho(\varphi)}{d\varphi}\bigg|^{\alpha}\bigg)^{1/\alpha}.$$ In the special case $ \alpha = 2 $, the quantum statistical speed is given by the Hilbert-Schmidt speed (HSS) [@gessner2018statistical] $$\label{HSS} HSS(\rho_{\varphi})=\sqrt{\frac{1}{2}\text{Tr}\bigg[\bigg(\dfrac{\text{d}\rho_{\varphi}}{\text{d}\varphi}\bigg)^2\bigg]},$$ which, in analogy with the Hilbert-Schmidt distance, does not require the diagonalization of $ \text{d}\rho_{\varphi}/\text{d}\varphi $.
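As a minimal numerical sketch (not part of the derivation above), Eq. (\[HSS\]) can be evaluated for any one-parameter family of density matrices by approximating $\text{d}\rho_{\varphi}/\text{d}\varphi$ with a central finite difference; the function names and the illustrative pure-state family below are our own choices.

```python
import numpy as np

def hss(rho_of_phi, phi, eps=1e-6):
    # Hilbert-Schmidt speed sqrt( Tr[(d rho/d phi)^2] / 2 ),
    # with the derivative approximated by a central finite difference.
    drho = (rho_of_phi(phi + eps) - rho_of_phi(phi - eps)) / (2 * eps)
    return np.sqrt(0.5 * np.trace(drho @ drho).real)

def pure_qubit(phi):
    # rho for |psi> = (e^{i phi}|0> + |1>)/sqrt(2)  (illustrative family)
    psi = np.array([np.exp(1j * phi), 1.0]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

print(hss(pure_qubit, 0.3))  # ~ 0.5, independent of phi
```

For this pure-state family one finds $HSS=1/2$ for every $\varphi$, since the phase only rotates the coherence term at constant magnitude.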
In the following we shall introduce a non-Markovianity quantifier based on this quantity. HSS-based non-Markovianity measure \[Witness\] ============================================== It is known that non-Markovian effects are responsible for the speedup of quantum evolutions from an initial state to a subsequent one [@deffner2013quantum; @PhysRevA.94.052125; @PhysRevA.93.020105; @Ahansaz2019]. It thus seems natural that measures of quantum speed limits may play the role of proper quantifiers of memory effects occurring during a system dynamics. Some works along this direction, based on the quantum Fisher information metric, have been reported [@lu2010quantum; @adesso2017]. Here we aim at exploiting a convenient quantum statistical speed [@gessner2018statistical] as a figure of merit of the non-Markovian character of quantum evolutions, one which avoids diagonalization of the system density matrix, with consequent practical advantages in the analysis. We stress that such a quantifier would be particularly useful for detecting the memory effects of high-dimensional and multipartite open quantum systems. Looking at the various possible choices among the quantum statistical speeds of Eq. (\[quantumspeed\]), the most natural candidate for this aim is precisely the one obtained for $\alpha=2$, corresponding to the Hilbert-Schmidt speed (HSS). In this regard, for a quantum system with $n$-dimensional Hilbert space $\mathcal{H}$, let us take an initial state defined as $$|\psi_{0}\rangle=\dfrac{1}{\sqrt{n}}\big(\text{e}^{i\varphi}|\psi_{1}\rangle+...+|\psi_{n}\rangle\big),$$ where $\varphi$ is an unknown phase shift and $\{|\psi_{1}\rangle,...,|\psi_{n}\rangle\}$ constitutes a complete orthonormal basis for $ \mathcal{H} $.
With the idea that an increasing speed of quantum evolutions is a signature of memory effects in the system dynamics, we then introduce the HSS-based witness of non-Markovianity as $$\label{chit} \chi(t):= \dfrac{\text{d}HSS \big(\rho_{\varphi}(t)\big)}{\text{d}t} > 0,$$ where $ \rho_{\varphi}(t) $ denotes the evolved state of the system and $HSS (\rho_{\varphi}(t))$ is defined in Eq. (\[HSS\]). In analogy with what has been done for other measures [@breuer2009measure; @laine2010measure], a quantifier of the degree of non-Markovianity follows as $$\label{NHSS} \mathcal{N}_\mathrm{HSS}:=\max_{{\varphi,\{|\psi_{1}\rangle,...,|\psi_{n}\rangle\}}} \int\limits_{\chi(t)>0}^{}\chi(t)\text{dt},$$ where the maximization is taken over all the possible parametrizations of the initial states. The sanity check of these quantities as faithful indicators of non-Markovianity is performed in the following section. Notice that, to this aim, it is sufficient to study the time behavior of the witness $\chi(t)$, verifying that it is positive precisely in correspondence with backflows of information from the environment to the system. The maximization giving the optimal initial state shall be performed numerically over a large sample of randomly generated initial states. Qualitative analysis of non-Markovianity {#Example} ======================================== In this section, we consider several typical examples of open quantum systems of both theoretical and experimental interest to qualitatively analyze the faithfulness of the HSS-based non-Markovianity witness. This is performed by comparing the behavior of $\chi(t)$ of Eq. (\[chit\]) with that of the BLP (trace distance-based) witness $\sigma(t)\equiv \frac{\mathrm{d}}{\mathrm{d}t}D(\rho_{1}(t),\rho_{2}(t))$ [@breuer2009measure]. If $\chi(t)>0$ whenever $\sigma(t)>0$, then we can claim that the proposed HSS-based measure is a bona fide quantifier of non-Markovianity, being sensitive to information backflows.
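In practice, for a fixed initial state the integral in Eq. (\[NHSS\]) reduces to summing the positive increments of a sampled $HSS(t)$ curve. The sketch below illustrates this discrete evaluation on two hypothetical test profiles (not data from the models of this section).

```python
import numpy as np

def nm_measure(hss_values):
    # Discrete version of Eq. (NHSS) for a fixed initial state:
    # sum the positive increments of HSS(t), i.e. integrate chi(t)
    # over the time intervals where chi(t) > 0.
    inc = np.diff(np.asarray(hss_values))
    return float(inc[inc > 0].sum())

t = np.linspace(0.0, 10.0, 2001)
print(nm_measure(np.exp(-t)))                                    # 0.0 (no backflow)
print(nm_measure(np.exp(-0.3 * t) * np.abs(np.cos(2 * t))) > 0)  # True (revivals)
```

A monotonically decaying profile yields a vanishing measure, while any revival of $HSS(t)$ contributes positively, mirroring the role of $\chi(t)>0$.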
One-qubit systems ----------------- ### Phase-covariant noise We start by considering a single qubit undergoing so-called phase-covariant noise. The general time-local master equation for its density matrix $\rho$, in the interaction picture (in units of $\hbar$), is written as [@lankinen2016complete; @smirnePRL; @Teittinen_2018] $$\label{Mastercovaiant} \dfrac{\mathrm{d}\rho}{\mathrm{d}t}=-i\omega (t)[\sigma_{z},\rho]+\dfrac{\gamma_{1}(t)}{2}L_{1}(\rho)+\dfrac{\gamma_{2}(t)}{2}L_{2}(\rho)+\dfrac{\gamma_{3}(t)}{2}L_{3}(\rho),$$ where $ \omega(t) $ represents a time-dependent frequency shift and $ \gamma_{i}(t) $ ($i=1,2,3$) denotes the time-dependent rate associated with each dissipator $ L_{i}(\rho) $, whose expressions are [@lankinen2016complete] $$\begin{aligned} &&L_{1}(\rho)=\sigma_{+}\rho\sigma_{-}-\frac{1}{2}\{\sigma_{-}\sigma_{+},\rho\},\nonumber\\ && L_{2}(\rho)=\sigma_{-}\rho\sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho\},\nonumber \\ && L_{3}(\rho)=\sigma_{z}\rho\sigma_{z}-\rho.\end{aligned}$$ In the above equations, $ \sigma_{\pm} = \frac{1}{2}(\sigma_{x} \pm i\sigma_{y}) $ denote the qubit raising and lowering operators and $ \sigma_{i}$'s ($i = x,y,z$) are the Pauli operators. Moreover, the three dissipators $L_{1},~ L_{2}$, and $ L_{3} $ describe, respectively, heating, dissipation, and dephasing. Special cases of master equations of the form of Eq. (\[Mastercovaiant\]), describing phase-covariant noise, are the amplitude damping model, obtained for $ \gamma_{1}(t)=\gamma_{3}(t)=0 $, and the pure dephasing model, obtained for $ \gamma_{1}(t)=\gamma_{2}(t)=0 $ [@breuer2016colloquium; @nielsen00; @he2019non]. Indicating with $ |0\rangle $ and $ |1\rangle $ the ground and excited states of the qubit, respectively, one can show that the solution of the master equation of Eq.
(\[Mastercovaiant\]) is given by [@lankinen2016complete] $$\mathcal{E}_{t}(\rho(0))=\rho(t)= \left( \begin {array}{cc} P_{1}(t)&Q(t)\\ \noalign{\medskip} Q^{*}(t)&1-P_{1}(t)\end {array} \right),$$ where $$P_{1}(t)=\text{e}^{-\Gamma(t)}[G(t)+P_{1}(0)],\ Q(t)=\alpha(0)\text{e}^{i\Omega(t)-\Gamma(t)/2-\tilde{\Gamma}(t)},$$ with the time-dependent functions $$\begin{aligned} &&\Gamma(t)=\int_{0}^{t}dt'[\gamma_{1}(t')+\gamma_{2}(t')]/2,\quad \tilde{\Gamma}(t)=\int_{0}^{t}dt'\gamma_{3}(t'),\nonumber \\ &&\Omega(t)=\int_{0}^{t}dt'2\omega(t'),\ G(t)=\int\limits_{0}^{t}dt'\text{e}^{\Gamma(t')}\gamma_{2}(t')/2.\end{aligned}$$ The master equation of Eq. (\[Mastercovaiant\]) leads to commutative dynamics, meaning $\mathcal{E}_{t}\circ\mathcal{E}_{s}=\mathcal{E}_{s}\circ\mathcal{E}_{t} $ for any $ s $, $ t\geq 0 $, iff $ \gamma_{1}(t)=\gamma(t) $ and $ \gamma_{2}(t)=\kappa \gamma(t) $, in which $0 \leq \kappa \leq 1 $. Moreover, the dynamics is unital, i.e. the corresponding channel $ \mathcal{E}_{t} $ satisfies $\mathcal{E}_{t}(\mathbb{I})=\mathbb{I} $ ($ \mathbb{I} $ denotes the identity operator), when it is commutative and $\kappa =1 $. Preparing the qubit in the initial state $$\label{newpar} |\psi_{0}\rangle=\dfrac{1}{\sqrt{2}}(\text{e}^{i\varphi}|+\rangle+|-\rangle),$$ where $|\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)$, the time derivative of the HSS, that is the quantity $\chi(t)$ of Eq.
(\[chit\]), turns out to be $$\begin{aligned} \chi(t)&=& -\frac{1}{8}{\rm e}^{-2\tilde{\Gamma} (t) } \frac { \left( \gamma_{1}(t)+\gamma_{2}(t)+4\gamma_{3}(t) \right) \cos^{2}\varphi }{\sqrt {{\rm e}^{\Gamma(t)-2\tilde{\Gamma} (t)} \cos^{2}\varphi + \sin^{2} \varphi }} \nonumber\\ &&-\frac{1}{4}{\rm e}^{-\Gamma (t) } \frac { \left( \gamma_{1}(t)+\gamma_{2}(t)\right) \sin^{2}\varphi }{\sqrt {{\rm e}^{\Gamma(t)-2\tilde{\Gamma} (t)} \cos^{2}\varphi + \sin^{2} \varphi }} .\end{aligned}$$ Accordingly, choosing $ \varphi = 0 $, the HSS-based witness $\chi(t)>0$ tells us that the process is non-Markovian when $\gamma_{1}(t)+\gamma_{2}(t)+4\gamma_{3}(t)<0$. On the other hand, choosing $ \varphi = \frac{\pi}{2} $, the dynamics is non-Markovian by the HSS-based witness when $ \gamma_{1} (t) + \gamma_{2} (t)< 0 $. In other words, the dynamics is detected as non-Markovian if either of the conditions above holds. This is exactly the same result obtained with the BLP witness $\sigma(t)$ for the same dynamical instance [@teittinen2018revealing]. Notice that the sensitivity of the witness $\chi(t)$ is investigated by considering general conditions for the phase-covariant noise, which encompass many of the most studied qubit dynamics such as pure dephasing, amplitude damping noise, depolarizing noise and the so-called eternal non-Markovianity [@hall2014canonical]. As a general insight from this first example, we thus observe that the HSS-based witness performs in perfect agreement with the BLP measure. It is known that the BLP measure, for which breaking CP-divisibility is a consequence of breaking P-divisibility [@breuer2009measure; @PhysRevA.90.022110], is tighter than other proposed non-Markovianity measures [@teittinen2018revealing]. On the basis of the above results, the same property holds for the HSS-based witness.
### Pauli channel In this section, we consider a qubit subject to a Pauli channel, whose corresponding master equation is [@chruscinski2013non; @jiang2013comparing] $$\dfrac{\mathrm{d}\rho}{\mathrm{d}t}=\sum_{i=1}^{3}\gamma_{i}(t)(\sigma_{i}\rho \sigma_{i}-\rho),$$ where $ \gamma_{i}(t) $ ($i =1,2, 3$) denotes the decoherence rate associated with the $i$-th channel. The dynamics may be rewritten in the following equivalent form [@chruscinski2013non; @jiang2013comparing] $$\rho(t)= \mathcal{E}_{t} [\rho(0)] =\sum\limits_{i=0}^{3}p_{i}(t)\sigma_{i}\rho(0)\sigma_{i},~~t\geq 0$$ where $ \sigma_{0} = \mathbb{I} $ (identity operator), $ \sigma_{i}$'s are the Pauli matrices, and the $ p_{i}(t) $'s form a time-dependent probability distribution. Notice that $ p_{0}(0) = 1 $ and $p_i(0)=0$ ($i=1,2,3$), guaranteeing that $ \mathcal{E}_{0} = \mathbf{I} $ (identity channel). The explicit expressions of the time-dependent probabilities of the Pauli channel are $$\begin{aligned} &&p_{0}(t)=\dfrac{1}{4}[1+\lambda_{1}(t)+\lambda_{2}(t)+\lambda_{3}(t)],\nonumber \\ && p_{1}(t)=\dfrac{1}{4}[1+\lambda_{1}(t)-\lambda_{2}(t)-\lambda_{3}(t)],\nonumber\\ && p_{2}(t)=\dfrac{1}{4}[1+\lambda_{2}(t)-\lambda_{1}(t)-\lambda_{3}(t)],\nonumber\\ &&p_{3}(t)=\dfrac{1}{4}[1+\lambda_{3}(t)-\lambda_{2}(t)-\lambda_{1}(t)],\end{aligned}$$ where $\lambda_{1}(t)=\text{e}^{-2(\Gamma_{2}(t)+\Gamma_{3}(t))}$, $\lambda_{2}(t)=\text{e}^{-2(\Gamma_{1}(t)+\Gamma_{3}(t))}$, and $\lambda_{3}(t)=\text{e}^{-2(\Gamma_{1}(t)+\Gamma_{2}(t))}$, with $$\Gamma_{i}(t)=\int_{0}^{t}\gamma_{i}(\tau)\mathrm{d}\tau\qquad (i=1,2,3).$$ It is straightforward to show that this dynamics is unital ($ \mathcal{E}_{t} (\mathbb{I}) = \mathbb{I}$). When $ \gamma_{1}{(t)} = \gamma_{2}{(t)} $, the unital case of the phase-covariant master equation and the Pauli channel with the same decay rates coincide with each other.
It should be noted that the general Pauli channel includes a larger set of dynamics than the unital phase-covariant noise, such as the bit-flip and bit-phase-flip channels. Before analysing the HSS-based witness, we recall useful results valid for the Pauli channel extracted from previous works [@chruscinski2013non; @jiang2013comparing]. In particular: (i) according to the RHP non-Markovianity criterion, the dynamics is Markovian if and only if all the decoherence rates remain nonnegative for all $ t \geq 0 $, i.e., $ \gamma_{i}(t) \geq 0$ for all $i = 1,2,3$; (ii) according to the BLP non-Markovianity criterion, the dynamics is Markovian if and only if the sum of each pair of distinct decoherence rates remains nonnegative, i.e., $ \gamma_{i}(t) + \gamma_{j}(t) \geq 0$ for all $ j\neq i $. With this in mind, we calculate the HSS-based witness $\chi(t)$ introduced in Eq. (\[chit\]). The qubit is initially prepared in a state parametrized as $$|\psi_{0}^\pm(\varphi)\rangle=\frac{1}{\sqrt{2}}(\text{e}^{i\varphi}|0\rangle\pm |1\rangle).$$ For three different optimal initial parametrizations, given by the set $\{\ket{\psi_{0}^+(0)},\ket{\psi_{0}^+(\pi/2)},\ket{\psi_{0}^-(\pi/2)}\}$, one easily finds, respectively, $$\begin{aligned} &&\chi(t)= - \left( \gamma_{1}(t)+\gamma_{3}(t) \right) {{\rm e}^{-2{ \Gamma_{1}(t)}-2{ \Gamma_{3}(t)}}} , \nonumber \\ && \chi(t)= - \left( \gamma_{1}(t)+\gamma_{2}(t) \right) {{\rm e}^{-2{ \Gamma_{1}(t)}-2{ \Gamma_{2}(t)}}} , \nonumber\\ && \chi(t)=- \left( \gamma_{2}(t)+\gamma_{3}(t) \right) {{\rm e}^{-2{ \Gamma_{2}(t)}-2{ \Gamma_{3}(t)}}}. \end{aligned}$$ Therefore, according to the HSS-based criterion, the dynamics is deemed Markovian if and only if $ \gamma_{1} (t) + \gamma_{2} (t) \geq 0$, $\gamma_{1} (t) + \gamma_{3} (t) \geq 0$ and $\gamma_{2} (t) + \gamma_{3} (t) \geq 0 $ for all $ t\geq 0 $, exactly the same as condition (ii) above obtained with the BLP measure.
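These closed-form expressions are easy to check numerically. The sketch below (with illustrative constant rates, so that $\Gamma_i(t)=\gamma_i t$) verifies that the $p_{i}(t)$ always sum to one and that all three witness values are negative when all rates are positive.

```python
import numpy as np

def pauli_channel(g, t):
    # p_i(t) and the three chi(t) values of the Pauli channel for
    # constant decoherence rates g = (g1, g2, g3), so Gamma_i(t) = g_i t.
    G1, G2, G3 = g[0] * t, g[1] * t, g[2] * t
    l1, l2, l3 = np.exp(-2 * (G2 + G3)), np.exp(-2 * (G1 + G3)), np.exp(-2 * (G1 + G2))
    p = np.array([1 + l1 + l2 + l3,
                  1 + l1 - l2 - l3,
                  1 + l2 - l1 - l3,
                  1 + l3 - l2 - l1]) / 4
    chi = np.array([-(g[0] + g[2]) * np.exp(-2 * (G1 + G3)),
                    -(g[0] + g[1]) * np.exp(-2 * (G1 + G2)),
                    -(g[1] + g[2]) * np.exp(-2 * (G2 + G3))])
    return p, chi

p, chi = pauli_channel((0.5, 0.3, 0.2), t=1.0)
print(abs(p.sum() - 1.0) < 1e-12)  # True: probabilities sum to one
print(np.all(chi < 0))             # True: all rates positive -> Markovian
```

Making one pairwise sum negative, e.g. $\gamma_3=-0.6$, flips the sign of the corresponding witness value, signalling non-Markovianity exactly as condition (ii) prescribes.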
Whenever at least one of the three conditions above is not satisfied, that is $ \gamma_{i}(t) + \gamma_{j}(t) < 0$ for some $ j\neq i $, the qubit dynamics exhibits memory effects and is non-Markovian. Two-qubit system {#Two-qubit example} ---------------- We now investigate a composite quantum system consisting of two separated qubits, A and B, which independently interact with their own dissipative reservoir (leaky cavity). The total Hamiltonian is therefore written as $H=H_A+H_B$. The single qubit-reservoir Hamiltonian is ($\hbar\equiv 1$) [@breuer2002theory] $$H=\omega_{0}~\sigma_{+}\sigma_{-}+\sum\limits_{k}^{}\omega_{k}b^{\dagger}_{k}b_{k}+(\sigma_{+}B+\sigma_{-}B^{\dagger}),$$ where $ \omega_{0} $ represents the transition frequency of the qubit, $\sigma_{\pm} $ are the system raising and lowering operators, $ \omega_{k} $ is the frequency of the $k$-th field mode of the reservoir, $ b_{k} $ and $ b^{\dagger}_{k} $ denote, respectively, the $k$-mode annihilation and creation operators, and $ B=\sum_{k}^{}g_{k}b_{k} $ with $ g_{k} $ being the coupling constant with the $k$-th mode. At zero temperature and in the basis $ \{|1\rangle,|0\rangle\} $, from the above Hamiltonian with a Lorentzian spectral density for the cavity modes, one finds that the dynamics of the qubit can be described by the evolved reduced density matrix [@breuer2002theory; @bellomo2007non] $$\rho_\mathrm{q}(t)=\left( \begin {array}{cc} \rho_{11}^{S}(0)P(t)&\rho_{10}^{S}(0)\sqrt{P(t)}\\ \noalign{\medskip} \rho_{01}^{S}(0)\sqrt{P(t)}&1-\rho_{11}^{S}(0)P(t)\end {array} \right),$$ where the coherence characteristic function $P(t)$ is $$P(t)=\text{e}^{-\lambda t}\left[\cos(\Gamma t/2)+(\lambda/\Gamma)\sin(\Gamma t/2)\right]^{2},$$ with $ \Gamma=\sqrt{2\gamma_{0}\lambda-\lambda^{2}} $. The rate $ \lambda $ denotes the spectral width for the qubit-reservoir coupling (photon decay rate) and is connected to the reservoir correlation time $ \tau_{c} $ by the relation $ \tau_{c} =1/\lambda$.
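The two coupling regimes of the coherence characteristic function can be probed numerically. The sketch below uses the standard Lorentzian-reservoir form of $P(t)$ with decay factor $\text{e}^{-\lambda t}$ (cf. [@breuer2002theory]); the complex square root lets one code path cover both the oscillatory strong-coupling case and the monotonic weak-coupling case (parameter values are illustrative).

```python
import numpy as np

def P(t, gamma0, lam):
    # Coherence characteristic function for a qubit in a lossy cavity,
    # standard Lorentzian-reservoir form with decay factor e^{-lam t};
    # complex arithmetic handles Gamma = sqrt(2*gamma0*lam - lam^2)
    # becoming imaginary in the weak-coupling regime.
    G = np.sqrt(complex(2 * gamma0 * lam - lam ** 2))
    g = np.cos(G * t / 2) + (lam / G) * np.sin(G * t / 2)
    return (np.exp(-lam * t) * g * g).real

t = np.linspace(0.0, 10.0, 4001)
strong = P(t, gamma0=1.0, lam=1.25)  # lam < 2 gamma0: revivals expected
weak = P(t, gamma0=1.0, lam=4.0)     # lam > 2 gamma0: monotonic decay
print(np.any(np.diff(strong) > 0))   # True: P(t) revives (memory effects)
print(np.all(np.diff(weak) <= 0))    # True: P(t) decays monotonically
```

The revivals of $P(t)$ in the strong coupling regime are precisely what drive the oscillations of both the trace distance and the HSS discussed below.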
The decay rate $\gamma_{0} $ is instead related to the system (qubit) relaxation time scale $\tau_{r} $ by $ \tau_{r} =1/\gamma_{0}$. In the strong coupling regime, occurring for $ \gamma_{0} > \lambda/2 $, the non-Markovian effects become relevant [@breuer2002theory]. The density matrix evolution of the two independent qubits can be then easily obtained knowing the evolved density matrix of a single qubit [@bellomo2007non]. The elements of the two-qubit evolved density matrix $ \rho(t) $ are presented in Appendix \[A\]. Using the definition $D(\rho_{1},\rho_{2})=\frac{1}{2}\text{Tr}|\rho_{1}-\rho_{2}|$ and the optimal pair of two-qubit quantum states $\rho_{1}(0)=\ket{++}\bra{++} $, $\rho_{2}(0)=\ket{--}\bra{--} $ with $ |\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle) $, one obtains the time-dependent trace distance [@wang2018probing] $$\label{TD2} D(\rho_{1}(t),\rho_{2}(t))=\sqrt{P(t)(2-2P(t)+P(t)^{2})}.$$ On the other hand, preparing the two-qubit system in the initial state $$\label{initial2} |\psi_{0}\rangle=\frac{1}{2}(\text{e}^{i\varphi}|11\rangle+|10\rangle+|01\rangle+|00\rangle),$$ we find that the HSS of Eq. (\[HSS\]) is given by $$\label{HSS2} HSS(\rho_{\varphi}(t))=\frac{1}{4}\sqrt {P(t) [ P(t) \left( 4\,P(t)-3 \right) +2 ] },$$ which is independent of the phase $\varphi$. \[t!\] ![Dynamics of Hilbert-Schmidt speed $HSS(\rho_{\varphi}(t)) $ (blue solid line), trace distance $D(\rho_{1}(t),\rho_{2}(t))$ (red dot-dashed line) and coherence characteristic function $ P(t)$ (amplified by $20$ times for comparison, green dashed line) as a function of the dimensionless time $\gamma_0 t$ for the two-qubit system in the strong coupling regime, with $ \lambda=1.25 \gamma_0$. 
[]{data-label="compare2"}](Figure1 "fig:"){width="52.00000%"} The numerical computation immediately shows that, in the weak coupling regime ($ \lambda>2\gamma_0 $), the behavior of $ D(\rho_{1}(t),\rho_{2}(t)) $, $HSS(\rho_{\varphi}(t)) $, and $ P(t) $ is essentially a Markovian exponential decay governed by $ \gamma_{0} $ (all of them are monotonically decreasing functions of time). By contrast, in the strong coupling regime ($ \lambda < 2\gamma_0 $), where memory effects appear, $ D(\rho_{1}(t),\rho_{2}(t)) $, $HSS(\rho_{\varphi}(t)) $, and $ P(t) $ simultaneously exhibit an oscillatory behavior such that their maximum and minimum points exactly coincide, as shown in Fig. \[compare2\]. In particular, one notices that $\chi(t)= \frac{\mathrm{d}}{\mathrm{d}t}HSS(\rho_{\varphi}(t))>0 $ in the very same periods when $\sigma(t)=\frac{\mathrm{d}}{\mathrm{d}t}D(\rho_{1}(t),\rho_{2}(t))>0$, which detects the system-environment information backflows. Hence, the HSS-based measure is in perfect agreement with the trace distance-based witness and can be used as an efficient tool for assessing non-Markovianity in two-qubit systems. One-qutrit systems ------------------ ### V-type three-level open quantum system In this section, we investigate the non-Markovian dynamics of a V-type three-level atom, playing the role of a qutrit, coupled to a dissipative environment [@scully_zubairy_1997; @gu2012non]. We recall that three-level quantum systems (qutrits) are promising alternative candidates to be used in quantum processors instead of the standard two-level systems (qubits) [@lanyon2008; @kumar]. For a V-type qutrit interacting with a dissipative reservoir, the two upper levels, i.e., $ |2\rangle $ and $ |1\rangle $, are coupled to the ground state $ |0\rangle $ with transition frequencies $ \omega_{2} $ and $ \omega_{1} $, respectively.
The Hamiltonian of the total system can be written as $$H =H_{0} + H_{I},$$ where ($\hbar\equiv 1$) $$H_{0} =\sum\limits_{j=1}^{2}\omega_{j}\sigma^{(j)}_{+}\sigma^{(j)}_{-}+\sum\limits_{k}^{} \omega_{k} b^{\dagger}_{k}b_{k},$$ represents the free Hamiltonian of the system plus the environment, while $$H_{I} =\sum\limits_{j=1}^{2}\sum\limits_{k}^{}(g_{jk}\sigma^{(j)}_{+}b_{k}+g^{*}_{jk}\sigma^{(j)}_{-}b^{\dagger}_{k}),$$ is the interaction Hamiltonian, in which $\sigma^{(j)}_{\pm}$ ($j=1,2 $) are the standard raising and lowering operators between each of the two upper levels and the ground one. The index $ k $ labels the different reservoir field modes with frequencies $ \omega_{k} $, creation and annihilation operators $b^{\dagger}_{k}$, $b_{k} $, and coupling constants $ g_{jk} $. We assume that the relaxation rates of the two upper levels are equal, the two upper atomic levels are degenerate, the atomic transitions are resonant with the central frequency of the reservoir, and the photonic bath initially contains no excitations.
Under these conditions and after applying the unitary transformation $$\varrho(t)=U\rho_{S}(t)U^{\dagger},$$ with $$U=\left( \begin {array}{ccc} \dfrac{1}{\sqrt{2}}&-\dfrac{1}{\sqrt{2}}&0\\ \dfrac{1}{\sqrt{2}}&\dfrac{1}{\sqrt{2}}&0\\ \noalign{\medskip} 0&0&1\end {array} \right),$$ on the evolved density matrix $ \rho_{S}(t) $, obtained in the interaction picture and written in the basis $ \{|2\rangle,~|1\rangle,~|0\rangle\} $, one obtains the evolved state of the V-type atom as [@behzadi2017effect; @gu2012non] $$\varrho(t)=\sum\limits_{i=1}^{3}\mathcal{K}_{i}\varrho(0)\mathcal{K}^{\dagger}_{i}.$$ In the above dynamical map, the Kraus operators are $$\begin{aligned} &&\mathcal{K}_{1}=\left( \begin {array}{ccc} G_{+}(t)&0&0\\ 0&G_{-}(t)&0\\ \noalign{\medskip} 0&0&1\end {array} \right),\nonumber\\ && \mathcal{K}_{2}=\left( \begin {array}{ccc} 0&0&0\\ 0&0&0\\ \noalign{\medskip} \sqrt{1-|G_{+}(t)|^{2}}&0&0\end {array} \right),\nonumber\\ &&\mathcal{K}_{3}=\left( \begin {array}{ccc} 0&0&0\\ 0&0&0\\ \noalign{\medskip} 0&\sqrt{1-|G_{-}(t)|^{2}}&0\end {array} \right),\end{aligned}$$ with $$G_{\pm}(t)=\text{e}^{-\lambda t/2}\bigg[\text{cosh}\bigg(\dfrac{d_{\pm} t}{2}\bigg)+\dfrac{\lambda}{d_{\pm}}\text{sinh}\bigg(\dfrac{d_{\pm} t}{2}\bigg)\bigg],$$ where $d_{\pm}=\sqrt{\lambda^{2}-2\lambda (\gamma \pm |\gamma\theta|)} $, $ \lambda $ is the spectral width of the reservoir, $ \gamma $ is the relaxation rate, and $ \theta $ depends on the relative angle between the two dipole moment elements associated with the transitions $ |2\rangle \rightarrow |0\rangle $ and $ |1\rangle \rightarrow |0\rangle $. For example, $ \theta=0 $ means that the dipole moments of the two transitions are perpendicular to each other and corresponds to the case where there is no spontaneously generated interference (SGI) between the two decay channels. Moreover, $ \theta = \pm 1 $ indicates that the two dipole moments are parallel or antiparallel, corresponding to the strongest SGI between the two decay channels.
\[t!\] ![Dynamics of Hilbert-Schmidt speed $HSS(\rho_{\varphi}(t)) $ (blue solid line) and trace distance $D(\rho_{1}(t),\rho_{2}(t))$ (red dashed line) as a function of the dimensionless time $\gamma t$ for the V-type three-level atom, with $\lambda=5\times10^{-3}\gamma $ and $ \theta=0.6 $.[]{data-label="VTYPEHSS"}](Figure2 "fig:"){width="52.00000%"} Using the trace distance-based measure for the non-Markovianity analysis of this system, one chooses the optimal pair of initial orthogonal pure states $ \rho_{1}(0)=|\psi_{+}\rangle \langle \psi_{+}|$ and $\rho_{2}(0)=|\psi_{-}\rangle \langle \psi_{-}| $, where $ |\psi_\pm\rangle=(|+\rangle\pm |0\rangle)/\sqrt{2}$ with $ |+\rangle=(|2\rangle+|1\rangle) /\sqrt{2}$, giving [@gu2012non] $$D(\rho_{1}(t),\rho_{2}(t))=|G_{+}(t)|.$$ On the other hand, to assess the memory effects by the HSS-based measure, the qutrit is initially taken in the state $$|\psi_{0}\rangle=\dfrac{1}{\sqrt{3}}(\text{e}^{i\varphi}|\tilde{2}\rangle+|\tilde{1}\rangle+|\tilde{0}\rangle),$$ where $|\tilde{i}\rangle =U|i\rangle$ ($i=0,1,2$). The HSS of Eq. (\[HSS\]) is then easily obtained as $$HSS(\varrho_{\varphi}(t))=\frac{1}{3}\sqrt{|G_{+}(t)|^{2}(|G_{-}(t)|^{2}+1)},$$ being independent of the initial phase $\varphi$. We can now compare the time behaviors of the two quantities above, as plotted in Fig. \[VTYPEHSS\]. We notice that the trace distance and the HSS are in perfect qualitative agreement, clearly showing that $\chi(t)=\frac{\mathrm{d}}{\mathrm{d}t} HSS(\rho_{\varphi}(t))$ is positive whenever $\sigma(t)\equiv \frac{\mathrm{d}}{\mathrm{d}t}D(\rho_{1}(t),\rho_{2}(t)) > 0$. Therefore, the HSS-based measure guarantees that the qutrit dynamics exhibits non-Markovian behavior exactly in correspondence with system-environment information backflows.
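The functions $G_{\pm}(t)$ and the resulting HSS can be evaluated directly; in the sketch below, a complex square root handles the negative radicands in $d_{\pm}$ occurring in the strong-coupling regime, and the default parameter values mirror those of Fig. \[VTYPEHSS\] (function names are our own).

```python
import numpy as np

def G_pm(t, sign, lam=5e-3, gamma=1.0, theta=0.6):
    # G_{+/-}(t) for the V-type atom; the complex sqrt covers the
    # strong-coupling case where the radicand of d_{+/-} is negative.
    d = np.sqrt(complex(lam ** 2 - 2 * lam * (gamma + sign * abs(gamma * theta))))
    return np.exp(-lam * t / 2) * (np.cosh(d * t / 2) + (lam / d) * np.sinh(d * t / 2))

def hss_v(t):
    # HSS(t) = (1/3) sqrt(|G_+|^2 (|G_-|^2 + 1)) for the balanced initial state
    gp, gm = abs(G_pm(t, +1)), abs(G_pm(t, -1))
    return np.sqrt(gp ** 2 * (gm ** 2 + 1)) / 3

print(abs(G_pm(0.0, +1)))  # 1.0 at t = 0
print(hss_v(0.0))          # sqrt(2)/3 ~ 0.4714
```

Sampling `hss_v` on a time grid and inspecting the sign of its increments reproduces the qualitative behavior of $\chi(t)$ shown in the figure.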
### $\Lambda $-type three-level open quantum system The last system considered in our case study analysis is the so-called $\Lambda $ model, consisting of a three-level atom (qutrit) with excited state $ |a\rangle $ and two ground states $ |b\rangle $ and $ |c\rangle $, which interacts off-resonantly with a cavity field [@scully_zubairy_1997]. The cavity modes are assumed to have a Lorentzian spectral density $$J(\omega)=\dfrac{\gamma_{0}}{2\pi}\dfrac{\lambda^{2}}{(\omega_\mathrm{cav}-\omega)^{2}+\lambda^{2}},$$ where $ \lambda $ is the cavity spectral width, $ \omega_\mathrm{cav} $ represents the resonance frequency of the cavity, and the rate $\gamma_{0} $ quantifies the strength of the system-environment coupling. Moreover, $ \Delta_{i}=\omega_{i}-\omega_\mathrm{cav} $ denotes the detuning of the $ i $-th transition frequency of the atom from the cavity resonance frequency, with $\omega_1\equiv\omega_{ab}$ and $\omega_2\equiv\omega_{ac}$. The master equation describing the reduced dynamics of the $ \Lambda $-type atom and its analytical solution are reported, for convenience, in Appendix \[B\]. The dynamics is characterized by two Lindblad operators $\ket{b}\bra{a}$ and $\ket{c}\bra{a}$ with time-dependent decay rates $\gamma_1(t)$ and $\gamma_2(t)$, respectively. To find the conditions for dynamical memory effects by means of the HSS-based measure, we prepare the $\Lambda $-type atom in the initial state $$|\psi_{0}\rangle=\frac{1}{\sqrt{3}}(\text{e}^{i\varphi}|a\rangle+|b\rangle+|c\rangle),$$ which gives, from Eq. (\[HSS\]), $HSS(\rho_{\varphi}(t))=\frac{\sqrt{2}}{3}\text{e}^{-[D_{1}(t)+D_{2}(t)]/2} $, where $D_i(t)=\int_{0}^{t}\mathrm{d}s\, \gamma_{i}(s)$. Therefore, the non-Markovianity witness $\chi(t)$ of Eq.
(\[chit\]) turns out to be $$\chi(t)=\dfrac{-(\gamma_{1}(t)+\gamma_{2}(t))}{3\sqrt{2}}\text{e}^{-[D_{1}(t)+D_{2}(t)]/2}.$$ This equation reveals that the non-Markovian character of the system dynamics is identified by the sum of the time-dependent decay rates $\gamma_{1}(t)+\gamma_{2}(t)$, which takes into account the competing processes of the two decay channels associated with $\gamma_{1}(t)$ and $\gamma_{2}(t)$, respectively. This is physically expected, also on the basis of previous analyses of such a $\Lambda$-type system in terms of non-Markovian quantum jumps [@piilo2009open]. Let us qualitatively discuss some particular conditions. When both $\gamma_{1}{(t)}$ and $\gamma_{2}{(t)}$ are nonnegative, $\chi(t)\leq 0$, so that the dynamics is Markovian: in this case, the rate of information flow may change but the direction of the flow remains constant, namely from the system to the environment. On the other hand, it is known that, when the detunings $ \Delta_{i}$ are large enough, the decay rates $\gamma_i$ temporarily assume negative values which produce information backflows from the cavity to the system [@piilo2009open; @laine2010measure]: hence, memory effects occur ($\chi(t)>0$) when $\gamma_{1}(t)+\gamma_{2}(t)<0$, with an overall backflow of information. For $ \Delta_{1}= \Delta_{2}$ the decay rates are simultaneously negative in the same time regions, while for $ \Delta_{1} \neq \Delta_{2}$ the decay rates can have opposite signs [@laine2010measure]. In the latter situation, the cooperative action of the two channels becomes relevant. When the channel corresponding to the decay rate $\gamma_i (t)$ ($i = 1,2$) produces more information flow from environment to system than the other channel associated with $\gamma_j (t)$ ($j \neq i$), then $|\gamma_i (t)| > |\gamma_j(t)|$.
This means that $\gamma_j (t) < -\gamma_i (t)$ during the time intervals when $\gamma_i(t)$ is negative and $\gamma_j(t)$ is positive: it is thus sufficient that only $\gamma_i(t)$ is negative to assure non-Markovianity ($\chi(t)>0$). These results are fully consistent with the previous findings obtained by the BLP (trace distance-based) witness [@laine2010measure]. Conclusions {#cunclusion} =========== We have established a relation between the non-Markovian dynamics of open quantum systems and the positive rate of change of the Hilbert-Schmidt speed (HSS), which is a special case of quantum statistical speed. The idea underlying this definition is grounded on the fact that the speedup of quantum evolutions is a signature of memory effects in the dynamics of the system interacting with the surrounding environment. From the introduced HSS-based witness, we have then defined a quantitative measure of dynamical memory effects. We have shown, in an extensive case study analysis, that the proposed witness is as efficient as the well-known trace distance-based (BLP) measure in detecting non-Markovianity. The models considered for our study encompass many of the most paradigmatic open quantum systems (single qubits, two qubits and single qutrits undergoing dissipative and nondissipative dynamics), and provide evidence for the sensitivity of our HSS-based witness. Besides its conceptual interest, we remark that the HSS-based quantifier does not require diagonalization of the reduced system density matrix, with consequent practical advantages in the analysis. In fact, a quantifier with this characteristic is particularly useful for assessing memory effects of high-dimensional and multipartite open quantum systems. The HSS is related to the Hilbert-Schmidt metric.
However, despite the non-contractivity of the Hilbert-Schmidt distance for quantum systems of dimension $d>2$, we have shown that the HSS-based witness is a faithful non-Markovianity measure (satisfying contractivity) for all the systems studied, including qutrits ($d=3$). As a prospect, these results motivate investigating systems of higher dimension to assess the extent of validity of this approach. Our study supplies an alternative useful tool to detect non-Markovianity, based on a concept of quantum statistical speed that is sensitive to system-environment information backflows. It thus motivates further analyses of the role of memory effects in composite open quantum systems and of their relation to quantum speedup. Acknowledgements {#acknowledgements .unnumbered} ================ H.R.J. thanks Henri Lyyra and Jose Teittinen for invaluable comments and constructive remarks, and is grateful to Sabrina Maniscalco for useful help. H.R.J. also wishes to acknowledge the financial support of the MSRT of Iran and Jahrom University. Two-qubit evolved density matrix {#A} ================================ Following the procedure described in Ref. [@bellomo2007non] to construct the reduced density matrix $\rho(t) $ for the two-qubit system discussed in Sec.
\[Two-qubit example\], one finds that the diagonal and nondiagonal elements of $\rho(t)$ in the computational basis $ \{|11\rangle,|10\rangle,|01\rangle,|00\rangle\}$ are given by $$\begin{aligned} &&\rho_{11}(t)=\rho_{11}(0)P(t)^{2},\nonumber\\ && \rho_{22}(t)=\rho_{22}(0)P(t)+\rho_{11}(0)P(t)(1-P(t)),\nonumber\\ && \rho_{33}(t)=\rho_{33}(0)P(t)+\rho_{11}(0)P(t)(1-P(t)),\nonumber\\ &&\rho_{44}(t)=1-[\rho_{11}(t)+\rho_{22}(t)+\rho_{33}(t)],\end{aligned}$$ and $$\begin{aligned} &&\rho_{12}(t)=\rho_{12}(0)P(t)^{3/2},~~\rho_{13}(t)=\rho_{13}(0)P(t)^{3/2},\nonumber\\ && \rho_{14}(t)=\rho_{14}(0)P(t),~~\rho_{23}(t)=\rho_{23}(0)P(t),\nonumber\\ && \rho_{24}(t)=\sqrt{P(t)}[\rho_{24}(0)+\rho_{13}(0)(1-P(t))],\nonumber\\ && \rho_{34}(t)=\sqrt{P(t)}[\rho_{34}(0)+\rho_{12}(0)(1-P(t))],\end{aligned}$$ with $\rho_{ji}(t) =\rho^{*}_{ij}(t) $. Solutions for $\Lambda $-type three-level system {#B} ================================================ This appendix presents the formal analytical solutions for the $\Lambda $-type three-level system [@piilo2009open; @laine2010measure].
The weak-coupling master equation for this model is written as follows $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\rho(t)&=-i\lambda_{1}(t)[|a\rangle\langle a|,\rho(t)]-i\lambda_{2}(t)[|a\rangle\langle a|,\rho(t)]\nonumber\\ &+\gamma_{1}(t)\bigg[|b\rangle\langle a| \rho(t) |a\rangle\langle b|-\frac{1}{2}\{\rho(t),|a\rangle\langle a|\}\bigg]\nonumber\\ &+\gamma_{2}(t)\bigg[|c\rangle\langle a| \rho(t) |a\rangle\langle c|-\frac{1}{2}\{\rho(t),|a\rangle\langle a|\}\bigg],\end{aligned}$$ where $$\begin{aligned} &&\lambda_{i}(t)=\int\limits_{0}^{t}\mathrm{d}s \int\limits_{0}^{\infty}\mathrm{d}\omega\, J(\omega) \sin[(\omega-\omega_{i})s],\nonumber\\ &&\gamma_{i}(t)=\int\limits_{0}^{t}\mathrm{d}s \int\limits_{0}^{\infty}\mathrm{d}\omega\, J(\omega) \cos[(\omega-\omega_{i})s].\end{aligned}$$ Introducing the short-hand notation $$\label{DLi} D_{i}(t)=\int_{0}^{t}\mathrm{d}s\, \gamma_{i}(s),\quad L_{i}(t)=\int_{0}^{t}\mathrm{d}s\,\lambda_{i}(s),$$ one finds that the solution of the master equation is given by [@piilo2009open; @laine2010measure] $$\begin{aligned} \label{gammadensity} &&\rho_{aa}(t)=\rho_{aa}(0)\text{e}^{-[D_{1}(t)+D_{2}(t)]},\nonumber\\ && \rho_{bb}(t)=\rho_{aa}(0)\int_{0}^{t}\mathrm{d}s\, \gamma_{1}(s)\text{e}^{-[D_{1}(s)+D_{2}(s)]}+\rho_{bb}(0),\nonumber\\ && \rho_{cc}(t)=\rho_{aa}(0)\int_{0}^{t}\mathrm{d}s\, \gamma_{2}(s)\text{e}^{-[D_{1}(s)+D_{2}(s)]}+\rho_{cc}(0),\\ && \rho_{ab}(t)=\rho_{ab}(0)\text{e}^{-[D_{1}(t)+D_{2}(t)]/2}\text{e}^{-i[L_{1}(t)+L_{2}(t)]},\nonumber\\ && \rho_{ac}(t)=\rho_{ac}(0)\text{e}^{-[D_{1}(t)+D_{2}(t)]/2}\text{e}^{-i[L_{1}(t)+L_{2}(t)]},\nonumber\\ && \rho_{bc}(t)=\rho_{bc}(0).\nonumber\end{aligned}$$
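As a numerical sanity check of these formal solutions, the following sketch (with purely illustrative, hypothetical decay rates $\gamma_1(t)=0.8$ and $\gamma_2(t)=\sin 3t$, the latter taking temporarily negative values as in the large-detuning regime) verifies that the populations remain normalized, $\rho_{aa}+\rho_{bb}+\rho_{cc}=1$, and evaluates the witness $\chi(t)$ derived in the main text, which is positive exactly where $\gamma_1(t)+\gamma_2(t)<0$:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting from 0."""
    dt = np.diff(t)
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)])

t = np.linspace(0.0, 5.0, 4001)

# Illustrative (hypothetical) decay rates: gamma_2 is temporarily negative.
g1 = 0.8 * np.ones_like(t)
g2 = np.sin(3.0 * t)

D1, D2 = cumtrapz(g1, t), cumtrapz(g2, t)
decay = np.exp(-(D1 + D2))

# Populations from the formal solution; coherences are not needed here.
paa0, pbb0, pcc0 = 0.6, 0.3, 0.1
paa = paa0 * decay
pbb = pbb0 + paa0 * cumtrapz(g1 * decay, t)
pcc = pcc0 + paa0 * cumtrapz(g2 * decay, t)
trace = paa + pbb + pcc  # should remain 1 for all t

# Witness chi(t): positive exactly where gamma_1(t) + gamma_2(t) < 0.
chi = -(g1 + g2) / (3.0 * np.sqrt(2.0)) * np.exp(-(D1 + D2) / 2.0)
```

The choice of rates and initial populations here is arbitrary; with physical $\gamma_i(t)$ computed from a given spectral density $J(\omega)$, the same consistency checks apply unchanged.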
--- abstract: 'In general relativity, the notion of mass and other conserved quantities at spatial infinity can be defined in a natural way via the Hamiltonian framework: Each conserved quantity is associated with an asymptotic symmetry and the value of the conserved quantity is defined to be the value of the Hamiltonian which generates the canonical transformation on phase space corresponding to this symmetry. However, such an approach cannot be employed to define “conserved quantities” in a situation where symplectic current can be radiated away (such as occurs at null infinity in general relativity) because there does not, in general, exist a Hamiltonian which generates the given asymptotic symmetry. (This fact is closely related to the fact that the desired “conserved quantities” are not, in general, conserved!) In this paper we give a prescription for defining “conserved quantities” by proposing a modification of the equation that must be satisfied by a Hamiltonian. Our prescription is a very general one, and is applicable to a very general class of asymptotic conditions in arbitrary diffeomorphism covariant theories of gravity derivable from a Lagrangian, although we have not investigated existence and uniqueness issues in the most general contexts. In the case of general relativity with the standard asymptotic conditions at null infinity, our prescription agrees with the one proposed by Dray and Streubel from entirely different considerations.' author: - | Robert M. Wald and Andreas Zoupas\ [*Enrico Fermi Institute and Department of Physics*]{}\ [*University of Chicago*]{}\ [*5640 S. Ellis Avenue*]{}\ [*Chicago, Illinois 60637-1433*]{} title: 'A General Definition of “Conserved Quantities” in General Relativity and Other Theories of Gravity' --- Introduction ============ Notions of energy and angular momentum have played a key role in analyzing the behavior of physical theories. 
For theories of fields in a fixed, background spacetime, a locally conserved stress-energy-momentum tensor, $T_{ab}$, normally can be defined. If the background spacetime has a Killing field $k^a$, then $J^a = {T^a}_b k^b$ is a locally conserved current. If $\Sigma$ is a Cauchy surface, then $q = \int_\Sigma J^a d \Sigma_a$ defines a conserved quantity associated with $k^a$; if $\Sigma$ is a timelike or null surface, then $\int_\Sigma J^a d \Sigma_a$ has the interpretation of the flux of this quantity through $\Sigma$. However, in diffeomorphism covariant theories such as general relativity, there is no notion of the local stress-energy tensor of the gravitational field, so conserved quantities (which clearly must include gravitational contributions) and their fluxes cannot be defined by the above procedures, even when Killing fields are present. Nevertheless, in general relativity, for asymptotically flat spacetimes, conserved quantities associated with asymptotic symmetries have been defined at spatial and null infinity. A definition of mass-energy and radiated energy at null infinity, ${\cal I}$, was first given about 40 years ago by Trautman [@t] and Bondi et al. [@b]. This definition was arrived at via a detailed study of the asymptotic behavior of the metric, and the main justification advanced for this definition has been its agreement with other notions of mass in some simple cases as well as the fact that the radiated energy is always positive (see, e.g., [@g], [@cjm] for further discussion of the justification for this definition). A number of inequivalent definitions of quantities associated with general (BMS) asymptotic symmetries at null infinity have been proposed over the years, but it was not until the mid-1980’s that Dray and Streubel [@ds] gave a general definition that appears to have fully satisfactory properties [@s]. This definition generalized a definition of angular momentum given by Penrose [@p] that was motivated by twistor theory.
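The local conservation of the current $J^a = {T^a}_b k^b$ introduced above follows in one line from stress-energy conservation, the symmetry of $T^{ab}$, and Killing's equation $\nabla_{(a} k_{b)} = 0$: $$\nabla_a J^a = (\nabla_a T^{ab}) k_b + T^{ab} \nabla_a k_b = T^{ab} \nabla_{(a} k_{b)} = 0 .$$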
In much of the body of work on defining “conserved quantities” at null infinity, little contact has been made with the Hamiltonian formulation of general relativity. An important exception is the work of Ashtekar and Streubel [@as] (see also [@abr]), who noted that BMS transformations correspond to canonical transformations on the radiative phase space at ${\cal I}$. They identified the Hamiltonian generating these canonical transformations as representing the net flux of the “conserved quantity” through ${\cal I}$. They then also obtained a local flux formula under some additional assumptions not related to the canonical framework (in particular, by their choice of topology they, in effect, imposed the condition that the local flux formula contain no “second derivative terms”). However, they did not attempt to derive a local expression for the “conserved quantity” itself within the Hamiltonian framework, and, indeed, until the work of [@ds] and [@s], it was far from clear that, for arbitrary BMS generators, their flux formula corresponds to a quantity that could be locally defined on cross-sections of ${\cal I}$. The status of the definition of “conserved quantities” at null infinity contrasts sharply with the situation at spatial infinity, where formulas for conserved quantities have been derived in a clear and straightforward manner from the Hamiltonian formulation of general relativity [@adm], [@rt]. As will be reviewed in sections 2 and 3 below, for a diffeomorphism covariant theory derived from a Lagrangian, if one is given a spacelike slice $\Sigma$ and a vector field $\xi^a$ representing “time evolution”, then the Hamiltonian generating this time evolution—if it exists—must be purely a “surface term” when evaluated on solutions (i.e., “on shell”). It can be shown that if $\Sigma$ extends to spatial infinity in a suitable manner and if $\xi^a$ is a suitable infinitesimal asymptotic symmetry, then a Hamiltonian does exist (see “case I” of section 4 below). 
The value of this Hamiltonian “on shell” then can be interpreted as being the conserved quantity conjugate to $\xi^a$. One thereby directly obtains formulas for the ADM mass, momentum, and angular momentum as limits as one approaches spatial infinity of surface integrals over two-spheres. It might seem natural to try a similar approach at null infinity: Let $\Sigma$ be a spacelike slice which is asymptotically null in the sense that in the unphysical spacetime its boundary is a cross-section, ${\cal C}$, of ${\cal I}$. Let the vector field $\xi^a$ be an infinitesimal BMS asymptotic symmetry. Then, when evaluated on solutions, the Hamiltonian generating this time evolution—if it exists—must again be purely a “surface term” on $\Sigma$, i.e., it must be expressible as an integral of a local expression over the cross-section ${\cal C}$. This expression would then provide a natural candidate for the value of the “conserved quantity” conjugate to $\xi^a$ at “time” ${\cal C}$. As we shall see in section 3 below, the above proposal works if $\xi^a$ is everywhere tangent to ${\cal C}$. However, if $\xi^a$ fails to be everywhere tangent to ${\cal C}$, then it is easy to show that no Hamiltonian generating the time evolution exists. The obstruction to defining a Hamiltonian arises directly from the possibility that symplectic current can escape through ${\cal C}$. The main purpose of this paper is to propose a general prescription for defining “conserved quantities” in situations where a Hamiltonian does not exist. This proposal consists of modifying the equation that a Hamiltonian must satisfy via the addition of a “correction term” involving a symplectic potential that is required to vanish whenever the background spacetime is stationary. 
If such a symplectic potential exists and is unique—and if a suitable “reference solution” can be chosen to fix the arbitrary constant in the definition of the “conserved quantity”—we obtain a unique prescription for defining a “conserved quantity” associated with any infinitesimal asymptotic symmetry. In the case of asymptotically flat spacetimes at null infinity in vacuum general relativity, we show in section 5 that existence and uniqueness does hold, and that this prescription yields the quantities previously obtained in [@ds]. In section 2, we review some preliminary material on the diffeomorphism covariant theories derived from a Lagrangian. In section 3, we investigate the conditions under which a Hamiltonian exists. In section 4, we present, in a very general setting, our general proposal for the definition of “conserved quantities” associated with infinitesimal asymptotic symmetries. This general proposal is then considered in the case of asymptotically flat spacetimes at null infinity in general relativity in section 5, where it is shown to yield the results of [@ds]. Some further applications are briefly discussed in section 6. Preliminaries ============= In this paper, we will follow closely both the conceptual framework and the notational conventions of [@lw] and [@iw]. Further details of most of what is discussed in this section can be found in those references. On an $n$-dimensional manifold, $M$, we consider a theory of dynamical fields, collectively denoted $\phi$, which consist of a Lorentzian metric, $g_{ab}$, together with other tensor fields, collectively denoted as $\psi$. To proceed, we must define a space, ${\cal F}$, of “kinematically allowed” field configurations, $\phi = (g_{ab}, \psi)$ on $M$. 
A precise definition of ${\cal F}$ would involve the specification of smoothness properties of $\phi$, as well as possible additional restrictions on $g_{ab}$ (such as global hyperbolicity or the requirement that a given foliation of $M$ by hypersurfaces be spacelike) and asymptotic conditions on $\phi$ (such as the usual asymptotic flatness conditions on fields at spatial and/or null infinity in general relativity). The precise choice of ${\cal F}$ that would be most suitable for one’s purposes would depend upon the specific theory and issues being considered. In this section and the next section, we will merely assume that a suitable ${\cal F}$ has been defined in such a way that the integrals occurring in the various formulas below converge. In section 4, we will impose a general set of conditions on $\cal F$ that will ensure convergence of all relevant integrals. In section 5, we will verify that asymptotically flat spacetimes at null infinity in vacuum general relativity satisfy these conditions. We assume that the equations of motion of the theory arise from a diffeomorphism covariant $n$-form Lagrangian density [@iw] $${\bf L} = {\bf L} \left( g_{ab}; R_{abcd}, \nabla_a R_{bcde}, ...;\psi, \nabla_a \psi, ...\right) \label{lag}$$ where $\nabla_a$ denotes the derivative operator associated with $g_{ab}$, $R_{abcd}$ denotes the Riemann curvature tensor of $g_{ab}$. (An arbitrary (but finite) number of derivatives of $R_{abcd}$ and $\psi$ are permitted to appear in ${\bf L}$.) Here and below we use boldface letters to denote differential forms on spacetime and, when we do so, we will suppress the spacetime indices of these forms. Variation of ${\bf L}$ yields $$\delta {\bf L} = {\bf E}(\phi) \delta \phi + d {\mbox{\boldmath $\theta$}}(\phi, \delta \phi) . \label{dL}$$ where no derivatives of $\delta \phi$ appear in the first term on the right side. The Euler-Lagrange equations of motion of the theory are then simply ${\bf E} = 0$. 
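For orientation, in the familiar case of vacuum general relativity in four dimensions (analyzed in detail in section 5), these structures take the explicit form $${\bf L} = \frac{1}{16 \pi} R \, {\mbox{\boldmath $\epsilon$}} , \qquad \theta_{abc} = \frac{1}{16 \pi} \epsilon_{dabc} \, g^{de} g^{fh} \left( \nabla_f \delta g_{eh} - \nabla_e \delta g_{fh} \right) ,$$ where ${\mbox{\boldmath $\epsilon$}}$ denotes the spacetime volume form, and ${\bf E} = 0$ is simply the vacuum Einstein equation.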
Note that—when the variation is performed under an integral sign—the term ${\mbox{\boldmath $\theta$}}$ corresponds to the boundary term that arises from the integrations by parts needed to remove derivatives from $\delta \phi$. We require that ${\mbox{\boldmath $\theta$}}$ be locally constructed out of $\phi$ and $\delta \phi$ in a covariant manner. This restricts the freedom in the choice of ${\mbox{\boldmath $\theta$}}$ to[^1] $${\mbox{\boldmath $\theta$}} \rightarrow {\mbox{\boldmath $\theta$}} + d {\bf Y} \label{Y1}$$ where ${\bf Y}$ is locally constructed out of $\phi$ and $\delta \phi$ in a covariant manner. The presymplectic current $(n-1)$-form, ${\mbox{\boldmath $\omega$}}$—which is a local function of a field configuration, $\phi$, and two linearized perturbations, $\delta_1 \phi$ and $\delta_2 \phi$ off of $\phi$—is obtained by taking an antisymmetrized variation of ${\mbox{\boldmath $\theta$}}$ $${\mbox{\boldmath $\omega$}} (\phi, \delta_1 \phi, \delta_2 \phi) = \delta_1 {\mbox{\boldmath $\theta$}} (\phi,\delta_2 \phi)-\delta_2{\mbox{\boldmath $\theta$}} (\phi,\delta_1 \phi) \label{omega}$$ On account of the ambiguity (\[Y1\]) in the choice of ${\mbox{\boldmath $\theta$}}$, we have the ambiguity $${\mbox{\boldmath $\omega$}} \rightarrow {\mbox{\boldmath $\omega$}} + d [\delta_1 {\bf Y}(\phi, \delta_2 \phi) - \delta_2 {\bf Y}(\phi, \delta_1 \phi)] \label{Y2}$$ in the choice of ${\mbox{\boldmath $\omega$}}$. Now let $\Sigma$ be a closed, embedded $(n-1)$-dimensional submanifold without boundary; we will refer to $\Sigma$ as a [*slice*]{}.
The presymplectic form, $\Omega_\Sigma$, associated with $\Sigma$ is a map taking field configurations, $\phi$, together with a pair of linearized perturbations off of $\phi$, into the real numbers—i.e., it is a two-form on ${\cal F}$—defined by integrating[^2] ${\mbox{\boldmath $\omega$}}$ over $\Sigma$, $$\Omega_\Sigma (\phi, \delta_1 \phi, \delta_2 \phi) = \int_\Sigma {\mbox{\boldmath $\omega$}} \label{Omega}$$ Although this definition depends, in general, upon the choice of $\Sigma$, if $\delta_1 \phi$ and $\delta_2 \phi$ satisfy the linearized field equations and $\Sigma$ is required to be a Cauchy surface, then $\Omega_\Sigma$ does not depend upon the choice of $\Sigma$, provided that $\Sigma$ is compact or suitable asymptotic conditions are imposed on the dynamical fields [@lw]. The ambiguity (\[Y2\]) in the choice of ${\mbox{\boldmath $\omega$}}$ gives rise to the ambiguity $$\Omega_\Sigma (\phi, \delta_1 \phi, \delta_2 \phi) \rightarrow \Omega_\Sigma (\phi, \delta_1 \phi, \delta_2 \phi) + \int_{\partial \Sigma} [\delta_1 {\bf Y}(\phi, \delta_2 \phi) - \delta_2 {\bf Y}(\phi, \delta_1 \phi)] \label{Y3}$$ in the presymplectic form $\Omega_\Sigma$. In this equation, by the integral over $\partial \Sigma$, we mean a limiting process in which the integral is first taken over the boundary, $\partial K$, of a compact region, $K$, of $\Sigma$ (so that Stokes’ theorem can be applied[^3]), and then $K$ approaches all of $\Sigma$ in a suitably specified manner. (Note that since $\Sigma$ is a slice, by definition it does not have an actual boundary in the spacetime.) Thus, for example, if $\Sigma$ is an asymptotically flat spacelike slice in an asymptotically flat spacetime, the integral on the right side of eq.(\[Y3\]) would correspond to the integral over a two-sphere on $\Sigma$ in the asymptotically flat region in the limit as the radius of the two-sphere approaches infinity.
Of course, the right side of eq.(\[Y3\]) will be well defined only if this limit exists and is independent of any of the unspecified details of how the compact region, $K$, approaches $\Sigma$. In section 4 below, we will make some additional assumptions that will ensure that integrals over “$\partial \Sigma$” of certain quantities that we will consider are well defined. Given the presymplectic form, $\Omega_\Sigma$, we can factor ${\cal F}$ by the orbits of the degeneracy subspaces of $\Omega_\Sigma$ to construct a phase space, $\Gamma$, in the manner described in [@lw]. This phase space acquires directly from the presymplectic form $\Omega_\Sigma$ on ${\cal F}$ a nondegenerate symplectic form, $\Omega$. One also obtains by this construction a natural projection from ${\cal F}$ to $\Gamma$. Now, a complete vector field $\xi^a$ on $M$ naturally induces the field variation ${\cal L}_\xi \phi$ on fields $\phi \in {\cal F}$. If $\xi^a$ is such that ${\cal L}_\xi \phi$ corresponds to a tangent field on ${\cal F}$ (i.e., if the diffeomorphisms generated by $\xi^a$ map $\cal F$ into itself), then we may view $\delta_\xi \phi = {\cal L}_\xi \phi$ as the dynamical evolution vector field corresponding to the notion of “time translations” defined by $\xi^a$. If, when restricted to the solution submanifold[^4], $\bar{{\cal F}}$, of ${\cal F}$, this time evolution vector field on ${\cal F}$ consistently projects to phase space, then one has a notion of time evolution associated with $\xi^a$ on the “constraint submanifold”, $\bar{\Gamma}$, of $\Gamma$, where $\bar{\Gamma}$ is defined to be the image of $\bar{{\cal F}}$ under the projection of ${\cal F}$ to $\Gamma$. If this time evolution vector field on $\bar{\Gamma}$ preserves the pullback to $\bar{\Gamma}$ of $\Omega$, it will be generated by a Hamiltonian, $H_\xi$ [@lw]. (As argued in the Appendix of [@lw], this will be the case when $\Sigma$ is compact; see section 3 below for some general results in the noncompact case.) 
Thus, this construction provides us with the notion of a Hamiltonian, $H_\xi$, conjugate to a vector field $\xi^a$ on $M$. However, a number of complications arise in the above construction. In particular, in order to obtain a consistent projection of ${\cal L}_\xi \phi$ from $\bar{{\cal F}}$ to $\bar{\Gamma}$, it is necessary to choose $\xi^a$ to be “field dependent”, i.e., to depend upon $\phi$. As explained in [@lw], this fact accounts for why, in a diffeomorphism covariant theory, the Poisson bracket algebra of constraints does not naturally correspond to the Lie algebra of infinitesimal diffeomorphisms. However, these complications are not relevant to our present concerns. To avoid dealing with them, we prefer to work on the original field configuration space ${\cal F}$ with its (degenerate) presymplectic form $\Omega_\Sigma$ rather than on the phase space $\Gamma$. The notion of a Hamiltonian, $H_\xi$, on ${\cal F}$ can be defined as follows: [*Definition*]{}: Consider a diffeomorphism covariant theory within the above framework, with field configuration space ${\cal F}$ and solution submanifold $\bar{\cal F}$. Let $\xi^a$ be a vector field on the spacetime manifold, $M$, let $\Sigma$ be a slice of $M$, and let $\Omega_\Sigma$ denote the presymplectic form (\[Omega\]). (If the ambiguity (\[Y2\]) in the choice of ${\mbox{\boldmath $\omega$}}$ gives rise to an ambiguity in $\Omega_\Sigma$ (see eq.(\[Y3\])), then we assume that a particular choice of $\Omega_\Sigma$ has been made.) Suppose that ${\cal F}$, $\xi^a$, and $\Sigma$ have been chosen so that the integral $\int_\Sigma {\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi)$ converges for all $\phi \in \bar{\cal F}$ and all tangent vectors $\delta \phi$ to $\bar{\cal F}$ at $\phi$. Then a function $H_\xi : {\cal F} \rightarrow {\rm I} \! 
{\rm R}$ is said to be a [*Hamiltonian conjugate to $\xi^a$*]{} on slice $\Sigma$ if for all $\phi \in \bar{\cal F}$ and all field variations $\delta \phi$ tangent to $\cal F$ (but not necessarily tangent to $\bar{\cal F}$) we have $$\delta H_\xi = \Omega_\Sigma(\phi, \delta \phi, {\cal L}_\xi \phi) = \int_\Sigma {\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) \label{H}$$ Note that if a Hamiltonian conjugate to $\xi^a$ on slice $\Sigma$ exists, then—assuming that $\bar{\cal F}$ is connected—its value on $\bar{\cal F}$ is uniquely determined by eq.(\[H\]) up to the addition of an arbitrary constant. In many situations, this constant can be fixed in a natural way by requiring $H_\xi$ to vanish for a natural reference solution, such as Minkowski spacetime. On the other hand, the value of $H_\xi$ off of $\bar{\cal F}$ is essentially arbitrary, since eq.(\[H\]) fixes only the “field space gradient” of $H_\xi$ in directions off of $\bar{\cal F}$ at points of $\bar{\cal F}$. If a Hamiltonian conjugate to $\xi^a$ on slice $\Sigma$ exists, then its value provides a natural definition of a conserved quantity associated with $\xi^a$ at “time” $\Sigma$. However, in many cases of interest—such as occurs in general relativity when, say, $\xi^a$ is an asymptotic time translation and the slice $\Sigma$ goes to null infinity—no Hamiltonian exists. In the next section, we shall analyze the conditions under which a Hamiltonian exists. In section 4, we shall propose a definition of the “conserved quantity” conjugate to $\xi^a$ on a slice $\Sigma$ when no Hamiltonian exists. Existence of a Hamiltonian ========================== When does a Hamiltonian conjugate to $\xi^a$ on slice $\Sigma$ exist? 
To analyze this issue, it is very useful to introduce the Noether current $(n-1)$-form associated with $\xi^a$, defined by $${\bf j} = {\mbox{\boldmath $\theta$}} (\phi, {\cal L}_\xi \phi) - \xi \cdot {\bf L} \label{j}$$ where the “$\cdot$” denotes the contraction of the vector field $\xi^a$ into the first index of the differential form ${\bf L}$. One can show (see the appendix of [@iw2]) that for a diffeomorphism covariant theory, ${\bf j}$ always can be written in the form $${\bf j} = d {\bf Q} + \xi^a {\bf {C}}_a , \label{Q}$$ where ${\bf {C}}_a = 0$ when the equations of motion hold, i.e., ${\bf {C}}_a$ corresponds to “constraints” of the theory. Equation (\[Q\]) defines the Noether charge $(n-2)$-form, $\bf Q$. It was shown in [@iw] that the Noether charge always takes the form $${\bf Q} = {\bf X}^{ab}(\phi) \nabla_{[a} \xi_{b]} + {\bf U}_a (\phi) \xi^a + {\bf V} (\phi, {\cal L}_\xi \phi) + d {\bf Z}(\phi, \xi) . \label{Qform}$$ From eqs.(\[dL\]), (\[omega\]), and (\[j\]), it follows immediately that for $\phi \in \bar{\cal F}$ but $\delta \phi$ arbitrary (i.e., $\delta \phi$ tangent to $\cal F$ but not necessarily tangent to $\bar{\cal F}$), the variation of ${\bf j}$ satisfies $$\delta {\bf j} = {\mbox{\boldmath $\omega$}} (\phi, \delta \phi, {\cal L}_\xi \phi) + d(\xi \cdot {\mbox{\boldmath $\theta$}}) . \label{dj}$$ Thus, we obtain $${\mbox{\boldmath $\omega$}} (\phi, \delta \phi, {\cal L}_\xi \phi) = \xi^a \delta {\bf {C}}_a + d (\delta {\bf Q}) - d(\xi \cdot {\mbox{\boldmath $\theta$}}) . \label{dj2}$$ Consequently, if there exists a Hamiltonian, $H_\xi$, conjugate to $\xi^a$ on $\Sigma$, then for all $\phi \in \bar{\cal F}$ and all $\delta \phi$ it must satisfy the equation $$\delta H_\xi = \int_\Sigma \xi^a \delta {\bf {C}}_a + \int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] \label{dh}$$ where the integral over $\partial \Sigma$ has the meaning explained below eq.(\[Y3\]). 
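For vacuum general relativity, the Noether charge $(n-2)$-form (\[Qform\]) reduces to the Komar-like expression $$Q_{ab} = - \frac{1}{16 \pi} \epsilon_{abcd} \nabla^c \xi^d ,$$ and one may check that, when $\xi^a$ is an asymptotic time translation and $\partial \Sigma$ is taken to be a two-sphere approaching spatial infinity, the surface integral $\int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}]$ reproduces the variation of the ADM mass [@iw].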
Note that for field variations which are “on shell”, i.e., such that $\delta \phi$ satisfies the linearized equations of motion, we have $$\delta H_\xi = \int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] . \label{hsurf}$$ Consequently, if $H_\xi$ exists, it is given purely as a “surface term” (i.e., an integral over $\partial \Sigma$) when evaluated on $\bar{\cal F}$. Equation (\[dh\]) gives rise to an obvious necessary condition for the existence of $H_\xi$: Let $\phi \in \bar{\cal F}$ (i.e., $\phi$ is a solution to the field equations) and let $\delta_1 \phi$ and $\delta_2 \phi$ be tangent to $\bar{\cal F}$ (i.e., $\delta_1 \phi$ and $\delta_2 \phi$ satisfy the linearized field equations). Let $\phi(\lambda_1, \lambda_2)$ be a two-parameter family with $\phi(0,0) = \phi$, $\partial \phi/\partial \lambda_1 (0,0) = \delta_1 \phi$, and $\partial \phi/\partial \lambda_2 (0,0) = \delta_2 \phi$. Then if eq.(\[dh\]) holds, by equality of mixed partial derivatives, we must have $$\begin{aligned} 0 & = & (\delta_1 \delta_2 - \delta_2 \delta_1) H_\xi \nonumber\\ & = & - \int_{\partial \Sigma} \xi \cdot [\delta_1 {\mbox{\boldmath $\theta$}}(\phi, \delta_2 \phi) - \delta_2 {\mbox{\boldmath $\theta$}}(\phi, \delta_1 \phi)] \nonumber\\ & = & - \int_{\partial \Sigma} \xi \cdot {\mbox{\boldmath $\omega$}}(\phi, \delta_1 \phi, \delta_2 \phi) \label{d2h}\end{aligned}$$ Conversely, if eq.(\[d2h\]) holds, then—assuming that $\bar{\cal F}$ is simply connected (and has suitable differentiable properties)—it will be possible to define $H_\xi$ on $\bar{\cal F}$ so that eq.(\[dh\]) holds whenever $\delta \phi$ is tangent to $\bar{\cal F}$. ([*Proof*]{}: On each connected component of $\bar{\cal F}$ choose a “reference solution” $\phi_0 \in \bar{\cal F}$ and define $H_\xi = 0$ at $\phi_0$. Let $\phi \in \bar{\cal F}$ and let $\phi(\lambda)$ for $\lambda \in [0,1]$ be a smooth, one-parameter family of solutions that connects $\phi_0$ to $\phi$. 
Define $$H_\xi [\phi] = \int_0^1 d \lambda \int_{\partial \Sigma} [\delta {\bf Q}(\lambda) - \xi \cdot {\mbox{\boldmath $\theta$}}(\lambda)] . \label{hdef}$$ This definition will be independent of the choice of path $\phi(\lambda)$ when eq.(\[d2h\]) holds since, by simple-connectedness, any other path $\phi'(\lambda)$ will be homotopic to $\phi(\lambda)$ and one can apply Stokes’ theorem to the two-dimensional submanifold spanned by this homotopy.) However, if $H_\xi$ is defined on $\bar{\cal F}$, there is no obstruction to extending $H_\xi$ to ${\cal F}$ so that eq.(\[dh\]) holds on $\bar{\cal F}$ for all $\delta \phi$ tangent to $\cal F$ (i.e., including $\delta \phi$ that are not tangent to $\bar{\cal F}$), since the additional content of that equation merely fixes the first derivative of $H_\xi$ in the “off shell” directions of field space. Therefore, the necessary and sufficient condition for the existence of a Hamiltonian conjugate to $\xi^a$ on $\Sigma$ is that for all solutions $\phi \in \bar{\cal F}$ and all pairs of linearized solutions $\delta_1 \phi, \delta_2 \phi$ tangent to $\bar{\cal F}$, we have $$\int_{\partial \Sigma} \xi \cdot {\mbox{\boldmath $\omega$}}(\phi, \delta_1 \phi, \delta_2 \phi) = 0 . \label{Hexist}$$ Note that since this condition refers only to the “covariant phase space” $\bar{\cal F}$, we shall in the following restrict attention entirely to $\bar{\cal F}$ and use eq.(\[hsurf\]) for $H_\xi$ (even though the “off shell” volume integral in eq.(\[dh\]) is crucial to justifying the interpretation of $H_\xi$ as the generator of dynamics conjugate to $\xi^a$).
Note that there are two situations where eq.(\[Hexist\]) will automatically hold: (i) if the asymptotic conditions on $\phi$ are such that ${\mbox{\boldmath $\omega$}}(\phi, \delta_1 \phi, \delta_2 \phi)$ goes to zero sufficiently rapidly that the integral of $\xi \cdot {\mbox{\boldmath $\omega$}}$ over $\partial K$ vanishes in the limit as $K$ approaches $\Sigma$; (ii) if $\xi^a$ is such that $K$ can always be chosen so that $\xi^a$ is tangent to $\partial K$, since then the pullback of $\xi \cdot {\mbox{\boldmath $\omega$}}$ to $\partial K$ vanishes. In these two cases, a Hamiltonian conjugate to $\xi^a$ will exist on $\Sigma$. However, if these conditions do not hold, then in general no Hamiltonian will exist. We turn, now, to giving a general prescription for defining “conserved quantities”, even when no Hamiltonian exists. General Definition of “Conserved Quantities” ============================================ In this section, we will propose a definition of conserved quantities under very general assumptions about asymptotic conditions “at infinity”. We begin by specifying these assumptions. We shall assume that the desired asymptotic conditions in the given diffeomorphism covariant theory under consideration are specified by attaching a boundary, ${\cal B}$, to the spacetime manifold, $M$, and requiring certain limiting behavior of the dynamical fields, $\phi$, as one approaches ${\cal B}$. We shall assume that ${\cal B}$ is an $(n-1)$-dimensional manifold, so that $M \cup {\cal B}$ is an $n$-dimensional manifold with boundary[^5]. In cases of interest, $M \cup {\cal B}$ will be equipped with additional nondynamical structure (such as a conformal factor on $M \cup {\cal B}$ or certain tensor fields on $\cal B$) that will enter into the specification of the limiting behavior of $\phi$ and thereby be part of the specification of the field configuration space, ${\cal F}$, and the covariant phase space, $\bar{\cal F}$.
We will refer to such fixed, non-dynamical structure as the “universal background structure” of $M \cup {\cal B}$. We now state our two main assumptions concerning the asymptotic conditions on the dynamical fields, $\phi$, and the asymptotic behavior of the allowed hypersurfaces, $\Sigma$: (1) We assume that ${\cal F}$ has been defined so that for all $\phi \in \bar{\cal F}$ and for all $\delta_1 \phi, \delta_2 \phi$ tangent to $\bar{\cal F}$, the $(n-1)$-form ${\mbox{\boldmath $\omega$}} (\phi, \delta_1 \phi, \delta_2 \phi)$ defined on $M$ extends continuously[^6] to ${\cal B}$. (2) We restrict consideration to slices, $\Sigma$, in the “physical spacetime”, $M$, that extend smoothly to ${\cal B}$ in the “unphysical spacetime”, $M \cup {\cal B}$, such that this extended hypersurface intersects ${\cal B}$ in a smooth $(n-2)$-dimensional submanifold, denoted $\partial \Sigma$. Following terminology commonly used for null infinity, we shall refer to $\partial \Sigma$ as a “cross-section” of ${\cal B}$. We also shall assume that $\Sigma \cup \partial \Sigma$ is compact—although it would be straightforward to weaken this assumption considerably, since only the behavior of $\Sigma$ near ${\cal B}$ is relevant to our considerations. An important immediate consequence of the above two assumptions is that the integral (\[Omega\]) defining $\Omega_\Sigma$ always converges, since it can be expressed as the integral of a continuous $(n-1)$-form over the compact $(n-1)$-dimensional hypersurface $\Sigma \cup \partial \Sigma$. We turn, now, to the definition of infinitesimal asymptotic symmetries. Let $\xi^a$ be a complete vector field on $M \cup {\cal B}$ (so that, in particular, $\xi^a$ is tangent to $\cal B$ on $\cal B$). 
We say that $\xi^a$ is a [*representative of an infinitesimal asymptotic symmetry*]{} if its associated one-parameter group of diffeomorphisms maps $\bar{\cal F}$ into $\bar{\cal F}$, i.e., if it preserves the asymptotic conditions specified in the definition of $\bar{\cal F}$. Equivalently, $\xi^a$ is a representative of an infinitesimal asymptotic symmetry if ${\cal L}_\xi \phi$ (which automatically satisfies the linearized field equations [@lw]) satisfies all of the asymptotic conditions on linearized solutions arising from the asymptotic conditions imposed upon $\phi \in \bar{\cal F}$, i.e., if ${\cal L}_\xi \phi$ corresponds to a vector tangent to $\bar{\cal F}$. If $\xi^a$ is a representative of an infinitesimal asymptotic symmetry, then the integral appearing on the right side of eq.(\[hsurf\]), namely $$I = \int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] \label{I}$$ always is well defined via the limiting procedure described below eq.(\[Y3\]), and, indeed, $I$ depends only on the cross-section $\partial \Sigma$ of ${\cal B}$, not on $\Sigma$. To see this[^7], let $K_i$ be a nested sequence of compact subsets of $\Sigma$ such that $\partial K_i$ approaches $\partial \Sigma$, and let $$I_i = \int_{\partial K_i} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] . \label{Ii}$$ Then, since “on shell” we have $${\mbox{\boldmath $\omega$}} (\phi, \delta \phi, {\cal L}_\xi \phi) = d [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] \label{omQ}$$ (see eq.(\[dj2\]) above) we have by Stokes’ theorem for $i \geq j$, $$I_i - I_j = \int_{\Sigma_{ij}}d [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] = \int_{\Sigma_{ij}} {\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) \label{Iij}$$ where $\Sigma_{ij}$ denotes $K_i \setminus K_j$, i.e., the portion of $\Sigma$ lying between $\partial K_i$ and $\partial K_j$. 
As a direct consequence of our assumptions that ${\mbox{\boldmath $\omega$}}$ extends continuously to $\cal B$ and that $\Sigma \cup \partial \Sigma$ is compact, it follows that $\{I_i\}$ is a Cauchy sequence, and hence it has a well defined limit, $I$, as $i \rightarrow \infty$. Note that this limit always exists despite the fact that there is no guarantee that the differential forms ${\bf Q}$ or ${\mbox{\boldmath $\theta$}}$ themselves extend continuously to ${\cal B}$. A similar argument establishes that this limit is independent of $\Sigma$, i.e., for a slice $\tilde{\Sigma}$ such that $\partial \tilde{\Sigma}$ = $\partial \Sigma$, a similarly defined sequence $\{\tilde{I}_i \}$ of integrals on $\tilde{\Sigma}$ will also converge to $I$. Let $\xi^a$ and $\xi'^a$ be representatives of infinitesimal asymptotic symmetries. We say that $\xi^a$ is [*equivalent*]{} to $\xi'^a$ if they coincide on ${\cal B}$ and if, for all $\phi \in \bar{\cal F}$, $\delta \phi$ tangent to $\bar{\cal F}$, and for all $\partial \Sigma$ on ${\cal B}$, we have $I = I'$, where $I$ is given by eq.(\[I\]) and $I'$ is given by the same expression with $\xi^a$ replaced by $\xi'^a$. The [*infinitesimal asymptotic symmetries*]{} of the theory are then comprised by the equivalence classes of the representatives of the infinitesimal asymptotic symmetries. Now consider an infinitesimal asymptotic symmetry, represented by the vector field $\xi^a$, and let $\Sigma$ be a slice in the spacetime with boundary $\partial \Sigma$ on ${\cal B}$. We would like to define a conserved quantity $H_\xi: \bar{\cal F} \rightarrow {\rm I} \! {\rm R}$ associated with $\xi^a$ at “time” $\Sigma$ via eq.(\[hsurf\]). As we have seen above, the right side of eq.(\[hsurf\]) is well defined under our asymptotic assumptions, but, as discussed in the previous section, in general, there does not exist an $H_\xi$ which satisfies this equation. 
The analysis naturally breaks up into the following two cases: [**Case I**]{}: Suppose that the continuous extension of ${\mbox{\boldmath $\omega$}}$ to ${\cal B}$ has vanishing pullback to ${\cal B}$. Then by eq.(\[Hexist\]), $H_\xi$ exists for all infinitesimal asymptotic symmetries (assuming that $\bar{\cal F}$ is simply connected and has suitable differentiable properties) and is independent of the choice of representative $\xi^a$. Furthermore, if $\partial \Sigma_1$ and $\partial \Sigma_2$ are cross-sections of ${\cal B}$ that bound a region ${\cal B}_{12} \subset {\cal B}$, we have[^8] by eqs.(\[hsurf\]) and (\[omQ\]) $$\delta H_\xi|_{\partial \Sigma_2} - \delta H_\xi|_{\partial \Sigma_1} = - \int_{{\cal B}_{12}} {\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) = 0 \label{Hcons}$$ Thus, $\delta H_\xi$ is independent of choice of cross-section within the same homology class. If the arbitrary constant (for each cross-section) in $H_\xi$ is fixed in such a way that there is a “reference solution” for which $H_\xi = 0$ on all cross-sections (see below), then on all solutions $H_\xi$ will be independent of choice of cross-section within the same homology class. Thus, in this case, not only does $H_\xi$ exist, but it truly corresponds to a conserved quantity, i.e., its value is independent of “time”, $\Sigma$. [**Case II**]{}: Suppose that the continuous extension of ${\mbox{\boldmath $\omega$}}$ to ${\cal B}$ does not, in general, have vanishing pullback to ${\cal B}$. Then, in general, there does not exist an $H_\xi$ satisfying eq.(\[hsurf\]). One exception is the case where $\xi^a$ and $\partial \Sigma$ are such that $\xi^a$ is everywhere tangent to $\partial \Sigma$. 
In this case, if $\xi^a$ is tangent to cross-sections $\partial \Sigma_1$ and $\partial \Sigma_2$, we have $$\delta H_\xi|_{\partial \Sigma_2} - \delta H_\xi|_{\partial \Sigma_1} = - \int_{{\cal B}_{12}} {\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) \label{Hnocons}$$ Since the right side of this equation is nonvanishing in general, we see that even when $\xi^a$ is tangent to cross-sections so that $H_\xi$ exists, $H_\xi$ will not be conserved. Case I arises in general relativity for spacetimes which are asymptotically flat at spatial infinity as defined in [@ar], and our prescription for defining $H_\xi$ corresponds to that given in [@adm] and [@rt]; see [@iw] for the explicit details of how eq.(\[hsurf\]) gives rise to the usual expression for ADM mass when $\xi^a$ is an asymptotic time translation. As we shall discuss in detail in the next section, case II arises in general relativity for spacetimes which are asymptotically flat at null infinity. The main purpose of this paper is to provide a general definition of a “conserved quantity” conjugate to an arbitrary infinitesimal asymptotic symmetry $\xi^a$ in case II. In the following, we will restrict attention to this case, and we will denote the quantity we seek as ${\cal H}_\xi$ to distinguish it from a true Hamiltonian $H_\xi$. As we have seen, in this case an attempt to define ${\cal H}_\xi$ by eq.(\[hsurf\]) fails the consistency check (\[d2h\]) and thus does not define any quantity. 
However, consider the following simple modification of eq.(\[hsurf\]): On ${\cal B}$, let ${\mbox{\boldmath $\Theta$}}$ be a symplectic potential for the pullback, $\bar{\mbox{\boldmath $\omega$}}$, of the (extension of the) symplectic current form ${\mbox{\boldmath $\omega$}}$ to ${\cal B}$, so that on ${\cal B}$ we have for all $\phi \in \bar{\cal F}$ and $\delta_1 \phi$, $\delta_2 \phi$ tangent to $\bar{\cal F}$ $$\bar{\mbox{\boldmath $\omega$}}(\phi, \delta_1 \phi, \delta_2 \phi) = \delta_1 {\mbox{\boldmath $\Theta$}}(\phi, \delta_2 \phi) - \delta_2 {\mbox{\boldmath $\Theta$}}(\phi, \delta_1 \phi) \label{Theta}$$ We require that ${\mbox{\boldmath $\Theta$}}$ be locally constructed[^9] out of the dynamical fields, $\phi$, and their derivatives (or limits of such quantities to ${\cal B}$) as well as any fields present in the “universal background structure”. In the case where $\bf L$ (and, hence ${\mbox{\boldmath $\omega$}}$) is an analytic function[^10] of its variables (see eq.(\[lag\])), we also require that ${\mbox{\boldmath $\Theta$}}$ depend analytically on the dynamical fields; more precisely, if $\phi(\lambda)$ is a one-parameter family of fields on $M$ that depends analytically on $\lambda$ and satisfies suitable uniformity conditions[^11] near $\cal B$, we require that the corresponding ${\mbox{\boldmath $\Theta$}}(\lambda)$ also depends analytically on $\lambda$. If any arbitrary choices are made in the specification of the background structure (such as a choice of conformal factor in the definition of null infinity in general relativity), then we demand that ${\mbox{\boldmath $\Theta$}}$ be independent of such choices (so, in particular, in the case of null infinity, ${\mbox{\boldmath $\Theta$}}$ is required to be conformally invariant). 
Our proposal is the following: Let ${\cal H}_\xi$ satisfy[^12] $$\delta {\cal H}_\xi = \int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] + \int_{\partial \Sigma} \xi \cdot {\mbox{\boldmath $\Theta$}} \label{hsurf2}$$ Then it is easily seen that this formula satisfies the consistency check (\[d2h\]) and, thus, defines a “conserved quantity” ${\cal H}_\xi$ up to an arbitrary constant. Finally, let this arbitrary constant be fixed by requiring that ${\cal H}_\xi$ vanish (for all infinitesimal asymptotic symmetries $\xi^a$ and all cross-sections $\partial \Sigma$) on a suitably chosen “reference solution” $\phi_0 \in \bar{\cal F}$. We will specify below the necessary conditions that must be satisfied by $\phi_0$. However, the above proposal fails to define a unique prescription because the choice of symplectic potential ${\mbox{\boldmath $\Theta$}}$ is ambiguous up to[^13] $${\mbox{\boldmath $\Theta$}}(\phi, \delta \phi) \rightarrow {\mbox{\boldmath $\Theta$}}(\phi, \delta \phi) + \delta {\bf W}(\phi) \label{W}$$ where ${\bf W}$ is an $(n-1)$-form on ${\cal B}$ locally constructed out of the dynamical fields $\phi$ as well as the universal background structure defined on ${\cal B}$, with ${\bf W}$ independent of any arbitrary choices made in the specification of the background structure. Thus, in order to obtain a prescription which defines ${\cal H}_\xi$, we must specify an additional condition or conditions which uniquely select ${\mbox{\boldmath $\Theta$}}$. An additional requirement on ${\mbox{\boldmath $\Theta$}}$ can be motivated as follows. We have already seen from eq.(\[Hnocons\]) above that ${\cal H}_\xi$ cannot, in general, be conserved, i.e., there must be a nonzero flux, ${\bf F}_\xi$, on ${\cal B}$ associated with this “conserved quantity”. This is to be expected on account of the possible presence of radiation at ${\cal B}$. 
However, it seems natural to demand that ${\bf F}_\xi$ vanish (and, thus, that ${\cal H}_\xi$ be conserved) in the case where no radiation is present at ${\cal B}$. Such a case should occur when $\phi$ is a stationary solution, i.e., when there exists a nonzero infinitesimal asymptotic symmetry represented by an exact symmetry $t^a$—so that ${\cal L}_t \phi = 0$ in $M$—and $t^a$ is timelike in $M$ in a neighborhood of ${\cal B}$. Hence, we wish to require that ${\bf F}_\xi$ vanish on ${\cal B}$ for all $\xi^a$ for stationary solutions. To see what condition on ${\mbox{\boldmath $\Theta$}}$ will ensure that this holds, we note that from eq.(\[hsurf2\]), it follows immediately that $$\delta {\cal H}_\xi|_{\partial \Sigma_2} - \delta {\cal H}_\xi|_{\partial \Sigma_1} = - \int_{{\cal B}_{12}} \delta {\bf F}_\xi \label{F1}$$ where the variation of the flux $(n-1)$-form, ${\bf F}_\xi$, on ${\cal B}$ is given by $$\delta {\bf F}_\xi = \bar{\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) + d [\xi \cdot {\mbox{\boldmath $\Theta$}}(\phi, \delta \phi)] \label{F2}$$ Here the first term in this equation arises from taking ‘$d$’ of the integrand of the first term in eq.(\[hsurf2\]) (using eq.(\[omQ\]) above), whereas the second term is just the ‘$d$’ of the integrand of the second term in eq.(\[hsurf2\]). However, we have $$\begin{aligned} d [\xi \cdot {\mbox{\boldmath $\Theta$}}(\phi, \delta \phi)] & = & {\cal L}_\xi {\mbox{\boldmath $\Theta$}}(\phi, \delta \phi) \nonumber \\ & = & - \bar{\mbox{\boldmath $\omega$}}(\phi, \delta \phi, {\cal L}_\xi \phi) + \delta {\mbox{\boldmath $\Theta$}}(\phi,{\cal L}_\xi \phi) \label{F3}\end{aligned}$$ (The first equality here follows from Cartan’s identity, ${\cal L}_\xi = \xi \cdot d + d \, (\xi \cdot \;)$, together with the fact that $d [{\mbox{\boldmath $\Theta$}}(\phi, \delta \phi)]$ vanishes identically, being an $n$-form on the $(n-1)$-dimensional manifold ${\cal B}$; the second equality is eq.(\[Theta\]) with $\delta_1 \phi = {\cal L}_\xi \phi$, $\delta_2 \phi = \delta \phi$.) Thus, we obtain $$\delta {\bf F}_\xi = \delta {\mbox{\boldmath $\Theta$}}(\phi,{\cal L}_\xi \phi) \label{F4}$$ We now impose the requirement that ${\mbox{\boldmath $\Theta$}}(\phi, \delta \phi)$ vanish whenever $\phi$ is stationary (even when $\delta \phi$ is non-stationary). 
We also explicitly assume that the reference solution, $\phi_0$, (on which ${\cal H}_\xi$ vanishes for all cross-sections and hence ${\bf F}_\xi = 0$) is stationary. Since both ${\mbox{\boldmath $\Theta$}}$ and ${\bf F}_\xi$ vanish on $\phi_0$, we obtain from eq.(\[F4\]) the remarkably simple formula $${\bf F}_\xi = {\mbox{\boldmath $\Theta$}}(\phi,{\cal L}_\xi \phi) \label{F5}$$ It then follows immediately (as a consequence of our choice of ${\mbox{\boldmath $\Theta$}}$) that ${\bf F}_\xi$ vanishes (for all $\xi$) on stationary solutions, as we desired. Equation (\[F5\]) also implies an additional desirable property of ${\bf F}_\xi$: We have ${\bf F}_\xi = 0$ whenever $\xi^a$ is an exact symmetry—i.e., whenever ${\cal L}_\xi \phi = 0$—regardless of whether radiation may be present. If a symplectic potential ${\mbox{\boldmath $\Theta$}}$ satisfying our above condition exists and is unique, then eq.(\[hsurf2\]) together with the requirement that ${\cal H}_\xi$ vanish (for all cross-sections and all $\xi^a$) on a particular, specified solution, $\phi_0$, uniquely determines ${\cal H}_\xi$. However, there remains a potential difficulty in specifying $\phi_0$: If $\phi_0 \in \bar{\cal F}$ then we also have $\psi_* \phi_0 \in \bar{\cal F}$, where $\psi: M \cup {\cal B} \rightarrow M \cup {\cal B}$ is any diffeomorphism generated by a representative of an infinitesimal asymptotic symmetry. Since we have no meaningful way of distinguishing between $\phi_0$ and $\psi_* \phi_0$, if we demand that ${\cal H}_\xi$ vanish on $\phi_0$ we must also demand that it vanish on $\psi_* \phi_0$. However, this overdetermines ${\cal H}_\xi$ (so that no solution exists) unless the following consistency condition holds: Let $\eta^a$ be a representative of an infinitesimal asymptotic symmetry and consider the field variation about $\phi_0$ given by $\delta \phi = {\cal L}_\eta \phi_0$. 
Since this corresponds to the action of an infinitesimal asymptotic symmetry on $\phi_0$, under this field variation we must have $\delta {\cal H}_\xi = 0$. On the other hand, $\delta {\cal H}_\xi$ is specified by eq.(\[hsurf2\]). Since under this field variation we have $$\delta {\bf Q}[\xi] = {\cal L}_\eta {\bf Q}[\xi] - {\bf Q}[{\cal L}_\eta \xi] \label{deltaQ}$$ and since, by assumption, ${\mbox{\boldmath $\Theta$}}$ vanishes at $\phi_0$, we find that the consistency requirement on $\phi_0$ is that for all representatives $\xi^a$ and $\eta^a$ of infinitesimal asymptotic symmetries and for all cross-sections $\partial \Sigma$, we must have $$0 = \int_{\partial \Sigma} \{{\cal L}_\eta {\bf Q}[\xi] - {\bf Q}[{\cal L}_\eta \xi] - \xi \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\eta \phi_0) \} \label{consis}$$ From eqs.(\[omQ\]) and (\[Theta\]) together with the vanishing of ${\mbox{\boldmath $\Theta$}}$ at $\phi_0$, it follows that the right side of eq.(\[consis\]) is independent of cross-section and thus need only be checked at one cross-section. In addition, eq.(\[consis\]) manifestly holds when $\eta^a$ is an exact symmetry of $\phi_0$—i.e., when ${\cal L}_\eta \phi_0 = 0$—since $\delta \phi = 0$ in that case. Using $${\cal L}_\eta {\bf Q} = d (\eta \cdot {\bf Q}) + \eta \cdot d {\bf Q} = d (\eta \cdot {\bf Q}) + \eta \cdot {\bf j} \label{lieid}$$ together with eq.(\[j\]), we may rewrite eq.(\[consis\]) in the form $$0 = \int_{\partial \Sigma} \{ \eta \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\xi \phi_0) - \xi \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\eta \phi_0) - \eta \cdot (\xi \cdot {\bf L}) - {\bf Q}[{\cal L}_\eta \xi] \} \label{consis2}$$ (Here, the integral over $\partial \Sigma$ is to be interpreted as an asymptotic limit, with the limit guaranteed to exist by the argument given above. 
If ${\bf L}$ extends continuously to ${\cal B}$ then the term $\eta \cdot (\xi \cdot {\bf L})$ makes no contribution to the integral since both $\eta^a$ and $\xi^a$ are tangent to $\cal B$.) Since eq.(\[consis2\]) is manifestly antisymmetric in $\eta^a$ and $\xi^a$, it follows that the consistency condition also is automatically satisfied whenever $\xi^a$ is an exact symmetry of $\phi_0$. However, if both $\eta^a$ and $\xi^a$ are asymptotic symmetries that are not exact symmetries of $\phi_0$, then eq.(\[consis\]) (or, equivalently, eq.(\[consis2\])) yields a nontrivial condition that must be satisfied by $\phi_0$. To summarize, we propose the following prescription for defining “conserved quantities” in case II: Let ${\mbox{\boldmath $\Theta$}}$ be a symplectic potential on ${\cal B}$ (see eq.(\[Theta\]) above) which is locally constructed out of the dynamical fields and background structure (and is an analytic function of the dynamical fields when $\bf L$ is analytic), is independent of any arbitrary choices made in specifying the background structure, and is such that ${\mbox{\boldmath $\Theta$}}(\phi, \delta \phi)$ vanishes for all $\delta \phi$ tangent to $\bar{\cal F}$ whenever $\phi \in \bar{\cal F}$ is stationary. (If it exists, such a ${\mbox{\boldmath $\Theta$}}$ is unique up to addition of a term $\delta {\bf W}$ where ${\bf W}$ is locally constructed out of the dynamical fields and background structure (and is analytic in the dynamical fields when $\bf L$ is analytic), is independent of any arbitrary choices made in specifying the background structure, and is such that $\delta {\bf W}$ vanishes for all $\delta \phi$ whenever $\phi$ is stationary.) Let $\phi_0$ be a stationary solution that satisfies eq.(\[consis\]) (or, equivalently, eq.(\[consis2\])) for all infinitesimal asymptotic symmetries $\eta^a$ and $\xi^a$. Then we define ${\cal H}_\xi$ by eq.(\[hsurf2\]) together with the requirement that ${\cal H}_\xi$ vanish on $\phi_0$. 
To the extent that a ${\mbox{\boldmath $\Theta$}}$ satisfying the above requirements exists and is unique, and to the extent that a stationary $\phi_0$ satisfying (\[consis\]) exists, this defines a prescription for defining “conserved quantities” associated with asymptotic symmetries. This prescription automatically gives rise to the flux formula (\[F5\]), so that the flux vanishes whenever $\phi$ is stationary or $\xi^a$ is an exact symmetry. In the next section, we analyze what this general prescription yields for the case of asymptotically flat spacetimes at null infinity in vacuum general relativity. “Conserved Quantities” at Null Infinity in General Relativity ============================================================= In vacuum general relativity, the manifold $M$ is taken to be $4$-dimensional and the only dynamical field, $\phi$, is the spacetime metric, $g_{ab}$. We shall write the varied field as $$\gamma_{ab} \equiv \delta g_{ab} \label{gamma}$$ The Einstein-Hilbert Lagrangian of general relativity is $${\bf L} = \frac{1}{16 \pi} R {\mbox{\boldmath $\epsilon$}} \label{Lgr}$$ where $R$ denotes the scalar curvature of $g_{ab}$ and ${\mbox{\boldmath $\epsilon$}}$ is the spacetime volume form associated with $g_{ab}$. The presymplectic potential $3$-form ${\mbox{\boldmath $\theta$}}$ is given by $$\theta_{abc} = \frac{1}{16 \pi} \epsilon_{dabc} v^d \label{thetagr}$$ where $$v^a = g^{ae} g^{fh} [\nabla_f \gamma_{eh} - \nabla_e \gamma_{fh}] \label{v}$$ and where $\nabla_a$ is the derivative operator associated with $g_{ab}$. 
The corresponding presymplectic current $3$-form is [@bw] $$\omega_{abc} = \frac{1}{16 \pi} \epsilon_{dabc} w^d \label{omegagr}$$ where $$w^a = P^{abcdef}[\gamma_{2bc} \nabla_d \gamma_{1ef} - \gamma_{1bc} \nabla_d \gamma_{2ef} ] \label{w}$$ with $$P^{abcdef} = g^{ae}g^{fb}g^{cd} - \frac{1}{2}g^{ad}g^{be}g^{fc} - \frac{1}{2}g^{ab}g^{cd}g^{ef} - \frac{1}{2}g^{bc}g^{ae}g^{fd} + \frac{1}{2}g^{bc}g^{ad}g^{ef} \label{P}$$ Finally, the Noether charge $2$-form associated with a vector field $\xi^a$ is given by [@iw] $$Q_{ab}[\xi] = - \frac{1}{16 \pi} \epsilon_{abcd} \nabla^c \xi^d \label{Qgr}$$ We wish to consider spacetimes that are asymptotically flat at future and/or past null infinity. For definiteness, we will consider future null infinity. (Sign changes would occur in several formulas when we consider past null infinity on account of our orientation convention on $\cal B$ (see footnote \[orient4\]).) We denote future null infinity by ${\cal I}$ and adopt the standard definition of asymptotic flatness there (see, e.g., [@w]). The key ingredient of this definition is that there exist a smooth[^14] metric, $\tilde{g}_{ab}$, on $M \cup {\cal I}$ and a smooth function, $\Omega$, on $M \cup {\cal I}$ such that $\Omega > 0$ on $M$, $\Omega = 0$ on ${\cal I}$, and $\tilde{\nabla}_a \Omega$ is null[^15] and nonvanishing everywhere on ${\cal I}$, and such that throughout $M$ we have $$\tilde{g}_{ab} = \Omega^2 g_{ab} \label{gtilde}$$ We also assume that $\cal I$ has topology ${\rm S}^2 \times {\rm I} \! {\rm R}$. In the following all indices will be raised and lowered using the “unphysical metric”, $\tilde{g}_{ab}$. We write $$n_a = \tilde{\nabla}_a \Omega \label{n}$$ (Here $\tilde{\nabla}_a$ denotes the derivative operator associated with $\tilde{g}_{ab}$, although, of course, since $\Omega$ is a scalar, $\nabla_a \Omega$ is independent of the choice of derivative operator.) 
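As an aside (a standard fact, recalled here only for orientation), if one evaluates the Noether charge $2$-form (\[Qgr\]) on the Schwarzschild solution, taking $\xi^a$ to be the static Killing field and integrating over a large coordinate sphere $S_r$, one finds $$\int_{S_r} {\bf Q}[\xi] \rightarrow \frac{M}{2} \qquad (r \rightarrow \infty) ,$$ i.e., one-half of the Komar mass. It is precisely the $- \xi \cdot {\mbox{\boldmath $\theta$}}$ contribution in eq.(\[hsurf\]) that accounts for this familiar factor-of-two discrepancy and yields the expected value $H_\xi = M$ at spatial infinity [@iw].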
We may use the freedom $\Omega \rightarrow \omega \Omega$ with $\omega$ a smooth, strictly positive function on $M \cup {\cal I}$ to assume, without loss of generality, that the Bondi condition $$\tilde{\nabla}_a n_b |_{\cal I} = 0 \label{bondi}$$ holds. An immediate consequence of eq.(\[bondi\]) is that on ${\cal I}$ we have $\tilde{\nabla}_a (n^b n_b) = 2 n^b \tilde{\nabla}_a n_b = 0$, so in the Bondi gauge $$n^a n_a = O(\Omega^2) \label{n2}$$ Without loss of generality (see, e.g., [@gw]), we also may assume that the conformal factor, $\Omega$, on $M \cup {\cal I}$ and the unphysical metric, $\tilde{g}_{ab}$, on ${\cal I}$ are universal quantities, i.e., they may be assumed to be independent of the physical metric, $g_{ab}$, on $M$. Without loss of generality, we may (by use of the freedom remaining in the choice of $\Omega$) take the universal unphysical metric, $\tilde{g}^0_{ab}$, on ${\cal I}$ to be such that the induced spatial metric on all cross-sections of ${\cal I}$ is that of a round two-sphere of scalar curvature $k$. In the following, we will fix an allowed choice of $\Omega$ on $M \cup {\cal I}$ and a choice of $k$. We will then take[^16] ${\cal F}$ to consist of metrics, $g_{ab}$, on $M$ such that $\Omega^2 g_{ab}$ extends smoothly to ${\cal I}$ and equals $\tilde{g}^0_{ab}$ there, and such that the Bondi condition (\[bondi\]) holds on ${\cal I}$. It may then be checked that the general notion of infinitesimal asymptotic symmetries given in the previous section corresponds to the usual notion of infinitesimal BMS symmetries; indeed, our general definition of infinitesimal asymptotic symmetries corresponds closely to the definition of infinitesimal BMS symmetries[^17] given in [@gw]. 
It follows immediately from our conditions on ${\cal F}$ that the unphysical perturbed metric $$\tilde{\gamma}_{ab} \equiv \Omega^2 \gamma_{ab} \label{gamtilde}$$ extends smoothly to ${\cal I}$ and vanishes there, so it can be written in the form $$\tilde{\gamma}_{ab} = \Omega \tau_{ab} \label{tau}$$ where $\tau_{ab}$ extends smoothly to ${\cal I}$ and, in general, is nonvanishing there. Furthermore, since $\delta n_a = 0$, we have $$\delta [\tilde{\nabla}_a n_b] = - \{ \tilde{\nabla}_{(a} \tilde{\gamma}_{b)c} - \frac{1}{2} \tilde{\nabla}_c \tilde{\gamma}_{ab} \} n^c \label{dbondi}$$ Substituting from eqs.(\[gamtilde\]), (\[tau\]), and (\[n\]) and setting the resulting expression to zero on ${\cal I}$ in accord with eq.(\[bondi\]), we obtain $$n_{(a} \tau_{b)c} n^c |_{\cal I} = 0 \label{taun}$$ This, in turn, implies that $\tau_{bc} n^c$ vanishes on ${\cal I}$, so we may write $$\tau_{bc} n^c = \Omega \tau_b \label{taub}$$ where $\tau_b$ is smooth (and, in general, nonvanishing) at ${\cal I}$. This implies that $$\delta n^a = \delta (\tilde{g}^{ab} n_b) = - \Omega \tau^{ab} n_b = - \Omega^2 \tau^a \label{dn}$$ The crucial issue with regard to the applicability of the ideas of the previous section is whether the presymplectic current $3$-form[^18] ${\mbox{\boldmath $\omega$}}$ extends continuously to ${\cal I}$. To investigate this, we express the quantities appearing in eq.(\[omegagr\]) in terms of $\Omega$ and variables that extend smoothly to ${\cal I}$. Clearly, the unphysical volume element $$\tilde {\mbox{\boldmath $\epsilon$}} = \Omega^4 {\mbox{\boldmath $\epsilon$}} \label{tildeeps}$$ and $$\tilde{P}^{abcdef} \equiv \Omega^{-6} P^{abcdef} \label{tildeP}$$ extend smoothly to ${\cal I}$ and are nonvanishing there. 
We eliminate the action of the physical derivative operator, $\nabla_a$, on $\gamma_{ab}$ in favor of the unphysical derivative operator, $\tilde{\nabla}_a$, via $$\nabla_a \gamma_{bc} = \tilde{\nabla}_a \gamma_{bc} + 2 {C^d}_{a(b} \gamma_{c)d} \label{der}$$ where (see, e.g., [@w]) $${C^c}_{ab} = 2 \Omega^{-1} {\delta^c}_{(a} n_{b)} - \Omega^{-1} n^c \tilde{g}_{ab} \label{C}$$ Finally, we substitute $$\gamma_{ab} = \Omega^{-1} \tau_{ab} . \label{gamtau}$$ The terms appearing in the resulting expression for ${\mbox{\boldmath $\omega$}}$ may now be classified as follows: (i) Terms in which $\tilde{\nabla}_a$ acts on $\tau_{1ab}$ or $\tau_{2ab}$. For these terms, the powers of $\Omega$ resulting from eqs.(\[tildeeps\]), (\[tildeP\]), and (\[gamtau\]) cancel, so these terms extend smoothly to ${\cal I}$ and are, in general, nonvanishing there. (ii) Terms in which $\tilde{\nabla}_a$ does not act on $\tau_{1ab}$ or $\tau_{2ab}$ and $w^a$ is proportional to $n^a$. These terms cancel due to the antisymmetry in $\tau_{1ab}$ and $\tau_{2ab}$. (iii) Terms in which $\tilde{\nabla}_a$ does not act on $\tau_{1ab}$ or $\tau_{2ab}$ but $w^a$ is not proportional to $n^a$. These terms necessarily contain a contraction of $n^a$ with $\tau_{1ab}$ or $\tau_{2ab}$, and eq.(\[taub\]) can then be used. The extra power of $\Omega$ picked up by the use of this equation ensures that these terms extend smoothly to ${\cal I}$, where they are, in general, nonvanishing. The upshot is that ${\mbox{\boldmath $\omega$}}$ extends smoothly to ${\cal I}$ and is, in general, nonvanishing there. Thus, with our definition of $\cal F$, asymptotically flat spacetimes at null infinity in general relativity do indeed fall into the category of “case II” of the previous section. 
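As a symbolic sanity check (not part of the argument above), eq.(\[C\]) can be verified in the special case of a flat physical metric, where $\nabla_a$ is the coordinate derivative and ${C^c}_{ab}$ must reduce to the Christoffel symbols of $\tilde{g}_{ab} = \Omega^2 \eta_{ab}$. The following sympy sketch assumes $g_{ab} = \eta_{ab}$ and a generic conformal factor $\Omega(t,x,y,z)$; both choices are illustrative assumptions made only for the check:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
Om = sp.Function('Omega')(t, x, y, z)   # generic conformal factor (illustrative choice)

eta = sp.diag(-1, 1, 1, 1)              # physical metric g_ab = Minkowski, so nabla_a = partial_a
gt = Om**2 * eta                        # unphysical metric gtilde_ab = Omega^2 g_ab
gtinv = gt.inv()

# Christoffel symbols of gtilde in these coordinates; since nabla_a here is the flat
# coordinate derivative, these are exactly the connection-difference coefficients C^c_ab.
Gamma = [[[sp.simplify(sum(gtinv[c, d]*(sp.diff(gt[d, a], coords[b])
                                        + sp.diff(gt[d, b], coords[a])
                                        - sp.diff(gt[a, b], coords[d]))/2
                           for d in range(4)))
           for b in range(4)] for a in range(4)] for c in range(4)]

# Right side of eq.(C): C^c_ab = 2 Omega^{-1} delta^c_(a n_b) - Omega^{-1} n^c gtilde_ab,
# with n_a = partial_a Omega and the index on n^c raised with gtilde.
n = [sp.diff(Om, cc) for cc in coords]
nup = [sum(gtinv[c, d]*n[d] for d in range(4)) for c in range(4)]
C = [[[sp.simplify((int(c == a)*n[b] + int(c == b)*n[a] - nup[c]*gt[a, b])/Om)
       for b in range(4)] for a in range(4)] for c in range(4)]

ok = all(sp.simplify(Gamma[c][a][b] - C[c][a][b]) == 0
         for c in range(4) for a in range(4) for b in range(4))
print("eq.(C) reproduces the Christoffel symbols of gtilde:", ok)
```

Since the physical connection is flat in this case, the check amounts to the statement that ${C^c}_{ab}$ is the difference tensor between the two derivative operators.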
To apply the proposed prescription of the previous section to define a “conserved quantity”, ${\cal H}_\xi$, for each BMS generator, $\xi^a$, and each cross-section, $\partial \Sigma$, of ${\cal I}$, we need an explicit formula for the pullback, $\bar{\mbox{\boldmath $\omega$}}$, of the extension of ${\mbox{\boldmath $\omega$}}$ to ${\cal I}$. To do so, we define ${}^{(3)} {\mbox{\boldmath $\epsilon$}}$ by $$\tilde{\epsilon}_{abcd} = 4 \, {}^{(3)} \epsilon_{[abc} n_{d]} \label{3eps}$$ so that the pullback, ${}^{(3)} \bar{\mbox{\boldmath $\epsilon$}}$, of ${}^{(3)} {\mbox{\boldmath $\epsilon$}}$ to ${\cal I}$ defines a positively oriented volume element[^19] on ${\cal I}$ (see footnote \[orient4\]). We have $$\bar{\mbox{\boldmath $\omega$}} = - \frac{1}{16 \pi} \Omega^{-4} n_a w^a \, {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{omegabar}$$ A lengthy but entirely straightforward calculation starting with eq.(\[w\]), making the substitutions (\[der\])-(\[gamtau\]), and making heavy use of eqs.(\[bondi\]), (\[n2\]), and (\[taub\]) yields (see also [@am], [@abr]) $$\Omega^{-4} n_a w^a|_{\cal I} = \frac{1}{2} \{ - \tau_{2}^{bc} n^a \tilde{\nabla}_a \tau_{1bc} + \tau_2 n^a \tilde{\nabla}_a \tau_1 + \tau_2 n^a \tau_{1a} \} - [1 \leftrightarrow 2] \label{omegascri}$$ where we have written $\tau = {\tau^a}_a$ and “$1 \leftrightarrow 2$” denotes the same terms as in the preceding expression with $1$ and $2$ interchanged. The above formula can be rewritten in a more useful form as follows. 
By a direct computation using eq.(7.5.14) of [@w], the variation of the unphysical Ricci tensor at ${\cal I}$ is given by $$\delta \tilde{R}_{ab} |_{\cal I} = - n_{(a} \tilde{\nabla}_{b)} \tau - n^c \tilde{\nabla}_c \tau_{ab} + n_{(b} \tilde{\nabla}^d \tau_{a)d} + n_{(a} \tau_{b)} \label{dR}$$ Hence, defining $S_{ab}$ by $$S_{ab} \equiv \tilde{R}_{ab} - \frac{1}{6} \tilde{R} \tilde{g}_{ab} \label{S}$$ we obtain $$\delta S_{ab} |_{\cal I} = - n_{(a} \tilde{\nabla}_{b)} \tau - n^c \tilde{\nabla}_c \tau_{ab} + n_{(b} \tilde{\nabla}^d \tau_{a)d} + n_{(a} \tau_{b)} - \frac{1}{3} (-n^c \tilde{\nabla}_c \tau + n^c \tau_c) \tilde{g}_{ab} \label{dS}$$ On the other hand, $\tilde{R}_{ab}$ is related to $R_{ab}$ by the usual conformal transformation formulae (see, e.g., Appendix D of [@w]). Setting $R_{ab} = 0$ by the vacuum field equations, it follows that (see eq.(6) of [@g]) $$S_{ab} = - 2 \Omega^{-1} \tilde{\nabla}_{(a} n_{b)} + \Omega^{-2} n^c n_c \tilde{g}_{ab} \label{Sconf}$$ Taking the variation of this equation and evaluating the resulting expression on $\cal I$ using eqs.(\[dbondi\]), (\[tau\]), (\[dn\]) and (\[taub\]), we obtain $$\delta S_{ab}|_{\cal I} = 4n_{(a} \tau_{b)} - n^c \tilde{\nabla}_c \tau_{ab} - n^c \tau_c \tilde{g}_{ab} \label{dSconf}$$ Comparing this formula with eq.(\[dS\]), we obtain $$[\tilde{\nabla}^b \tau_{ab} - \tilde{\nabla}_a \tau - 3 \tau_a] |_{\cal I} = 0 \label{ntau0}$$ as well as $$[n^b \tilde{\nabla}_b \tau + 2 n^b \tau_b] |_{\cal I} = 0 \label{ntau}$$ Using eq.(\[ntau\]) together with eq.(\[dS\]), we see that $$\Omega^{-4} n_a w^a|_{\cal I} = \frac{1}{2} [\tau_{2}^{ab} \delta_1 S_{ab} -\tau_{1}^{ab} \delta_2 S_{ab}] \label{omegascri2}$$ Now, the Bondi news tensor, $N_{ab}$, on ${\cal I}$ is defined by [@g] $$N_{ab} = \bar{S}_{ab} - \rho_{ab} \label{news}$$ where $\bar{S}_{ab}$ denotes the pullback to ${\cal I}$ of $S_{ab}$ and $\rho_{ab}$ is the tensor field on ${\cal I}$ defined in general by eq.(33) of [@g], which, in our gauge choice, 
is just $\frac{1}{2} k \bar{g}^0_{ab}$, where $\bar{g}^0_{ab}$ denotes the pullback to $\cal I$ of $\tilde{g}^0_{ab}$. Since $\delta \rho_{ab} = 0$ and since, by eq.(\[taub\]), $\tau^{ab}$ on ${\cal I}$ is tangent to ${\cal I}$, we may replace $\delta S_{ab}$ by $\delta N_{ab}$ in eq.(\[omegascri2\]). Thus, we obtain our desired final formula: $$\bar{\mbox{\boldmath $\omega$}} = - \frac{1}{32 \pi} [\tau_{2}^{ab} \delta_1 N_{ab} -\tau_{1}^{ab} \delta_2 N_{ab}] \, {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{omegabar2}$$ To apply our prescription, we must find a symplectic potential, ${\mbox{\boldmath $\Theta$}}$, for $\bar{\mbox{\boldmath $\omega$}}$ on ${\cal I}$ which is locally constructed[^20] out of the spacetime metric, $g_{ab}$, and background structure (and depends analytically on the metric), is independent of any arbitrary choices made in specifying the background structure, and is such that ${\mbox{\boldmath $\Theta$}}(g_{ab}, \gamma_{ab})$ vanishes for all $\gamma_{ab}$ whenever $g_{ab}$ is stationary. By inspection, a symplectic potential satisfying all of these properties is given by[^21] $${\mbox{\boldmath $\Theta$}} = - \frac{1}{32 \pi} N_{ab} \tau^{ab} \, {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{Thetascri}$$ As discussed in section 3, this choice of ${\mbox{\boldmath $\Theta$}}$ will be unique if and only if there does not exist a $3$-form $\bf W$ on ${\cal I}$ which is locally constructed (in the sense of footnote \[loc\]) out of the physical metric, $g_{ab}$, and background structure (and depends analytically on the physical metric), is independent of any arbitrary choices made in specifying the background structure, and is such that $\delta {\bf W}$ vanishes for all $\gamma_{ab}$ whenever $g_{ab}$ is stationary. 
In our case, the only relevant “background structure” present is the conformal factor, $\Omega$, since all other background quantities (such as $\tilde{g}^0_{ab}$ and $n^a$ on $\cal I$) can be reconstructed from $\Omega$ and the physical metric. Now, the physical metric, $g_{ab}$, its curvature, ${R_{abc}}^d$, and (physical) derivatives of the curvature all can be expressed in terms of the unphysical metric, $\tilde{g}_{ab}$, its curvature, $\tilde{R}_{abc}{}^d$, and unphysical derivatives of the unphysical curvature together with $\Omega$ and its unphysical derivatives. Therefore, we may view $\bf W$ as a function of $\tilde{g}_{ab}$, $\tilde{R}_{abc}{}^d$, and unphysical derivatives of $\tilde{R}_{abc}{}^d$, together with $\Omega$ and its unphysical derivatives. The requirement that $\bf W$ vary analytically with $g_{ab}$ (at fixed $\Omega$) then implies that it must depend analytically on $\tilde{g}_{ab}$ at fixed $\Omega$. In our specification of conditions on the background structure, we required that $\tilde{g}^0_{ab}$ induce a round two-sphere metric of scalar curvature $k$ on all cross-sections of ${\cal I}$. The choice of $k$ was arbitrary, and could have been fixed at any value. If we keep $\cal F$ fixed (i.e., consider the same class of physical metrics) but change $\Omega$ by $\Omega \rightarrow \lambda \Omega$ with $\lambda$ constant, then $\tilde{g}^0_{ab}$ will induce a round two-sphere metric of scalar curvature $\lambda^{-2} k$ rather than $k$ on all cross-sections. We require that under this scaling of $\Omega$ (corresponding to modifying an “arbitrary choice” in the specification of the background structure), we have ${\bf W} \rightarrow {\bf W}$. To analyze the implications of this requirement, it is useful to introduce the following notion of the [*scaling dimension*]{} [@g] of a tensor, ${T^{a_1 ... a_k}}_{b_1 ... 
b_l}$, of type $(k,l)$ which is locally constructed out of the unphysical metric and $\Omega$: If under the scaling $\Omega \rightarrow \lambda \Omega$ keeping the physical metric fixed we have ${T^{a_1 ... a_k}}_{b_1 ... b_l} \rightarrow \lambda^p {T^{a_1 ... a_k}}_{b_1 ... b_l}$, then we define the scaling dimension, $s$, of ${T^{a_1 ... a_k}}_{b_1 ... b_l}$ by $$s = p + k - l \label{scale}$$ It follows that the scaling dimension of a tensor does not change under the raising and lowering of indices using the unphysical metric. It is easily seen that the scaling dimension of $\Omega$ is $+1$, the scaling dimension of the unphysical metric is $0$, and the scaling dimension of the unphysical curvature tensor is $-2$. Each derivative decreases the scaling dimension by one, so, for example, the scaling dimension of $n_a = \tilde{\nabla}_a \Omega$ is $0$ and the scaling dimension of the $j$th derivative of the unphysical curvature is $-(j+2)$. Since the $3$-form $\bf W$ is required to be invariant under scaling of $\Omega$, it must have a scaling dimension of $-3$. Since ${}^{(3)} \epsilon_{abc}$ has scaling dimension $0$, if we define $w = W_{abc} \, {}^{(3)} \epsilon^{abc}$ we obtain a scalar with scaling dimension $-3$. By our assumptions, $w$ must be locally constructed out of $\Omega$ and $\tilde{g}_{ab}$ (in the sense of footnote \[loc\]) and must vary analytically with $\tilde{g}_{ab}$ at fixed $\Omega$. Presumably, this will imply that we can write $w$ as a convergent sum of terms (with coefficients depending on the conformal factor) of products (with all indices contracted) of the unphysical metric, the unphysical curvature, unphysical derivatives of the unphysical curvature, $n_a = \tilde{\nabla}_a \Omega$ and unphysical derivatives of $n^a$. (Negative powers of $\Omega$ can, of course, occur in the coefficients if they multiply a term which vanishes suitably rapidly at $\cal I$.) 
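The scaling dimensions quoted above follow directly from eq.(\[scale\]); as a quick check (using only the definitions already given, with $\lambda$ constant and the physical metric held fixed):

```latex
% Scaling behavior under \Omega \rightarrow \lambda\Omega at fixed g_{ab},
% with s = p + k - l for a type (k,l) tensor scaling as \lambda^{p}:
\begin{align*}
\tilde{g}_{ab} = \Omega^{2} g_{ab} &\;\rightarrow\; \lambda^{2}\, \tilde{g}_{ab} ,
   & s &= 2 + 0 - 2 = 0 , \\
\Omega &\;\rightarrow\; \lambda\, \Omega ,
   & s &= 1 + 0 - 0 = 1 , \\
n_{a} = \tilde{\nabla}_{a} \Omega &\;\rightarrow\; \lambda\, n_{a} ,
   & s &= 1 + 0 - 1 = 0 , \\
\tilde{R}_{abc}{}^{d} &\;\rightarrow\; \tilde{R}_{abc}{}^{d} ,
   & s &= 0 + 1 - 3 = -2 .
\end{align*}
```

In the last line we have used the fact that the Christoffel symbols of $\tilde{g}_{ab}$, and hence the type-$(1,3)$ curvature tensor, are unchanged by a constant rescaling of the metric.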
Now, the unphysical metric, the unphysical curvature, and $n_a$ all have a non-positive scaling dimension, and derivatives only further decrease the scaling dimension. Therefore, if any term were composed of more than two factors containing the unphysical curvature tensor, the only way of achieving a scaling dimension of $-3$ would be to multiply it by a positive power of $\Omega$, in which case it would vanish at $\cal I$. Similarly, if the term contained a single factor with two or more derivatives of curvature, it also would have to vanish at $\cal I$. Similar restrictions occur for terms containing derivatives of $n^a$. This reduces the possible terms that can occur in $w$ to a small handful, and it is then easily verified that there does not exist an allowed $w$ such that $\delta w$ is nonzero in general (so that it contributes nontrivially to ${\mbox{\boldmath $\Theta$}}$) but $\delta w$ vanishes whenever the physical metric, $g_{ab}$, is stationary. Therefore, we conclude that ${\mbox{\boldmath $\Theta$}}$ is unique. To complete the prescription, we need to specify a stationary “reference solution” $\phi_0$ satisfying eq.(\[consis2\]). A natural candidate for $\phi_0$ is Minkowski spacetime and, indeed, it should be possible to show that no other stationary solution[^22] can satisfy eq.(\[consis2\]). In Minkowski spacetime, an arbitrary infinitesimal asymptotic symmetry can be written as a sum of a Killing vector field plus a supertranslation. Since eq.(\[consis2\]) holds automatically whenever either $\eta^a$ or $\xi^a$ is a Killing vector field, it suffices to check eq.(\[consis2\]) for the case where both $\eta^a$ and $\xi^a$ are supertranslations, i.e., on $\cal I$ they are of the form $\xi^a = \alpha n^a$, $\eta^a = \beta n^a$ where $\alpha$ and $\beta$ are such that $n^a \tilde{\nabla}_a \alpha = n^a \tilde{\nabla}_a \beta = 0$.
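For orientation (a standard fact about the BMS group, recalled here and not needed for the argument): in the usual conformal completion of Minkowski spacetime, a supertranslation is labeled by a smooth function on the two-sphere of null generators of $\cal I$,

```latex
% Supertranslations on {\cal I} in Minkowski spacetime:
\xi^{a}\big|_{\cal I} = \alpha\, n^{a} , \qquad
n^{a} \tilde{\nabla}_{a}\, \alpha = 0 , \qquad
\alpha = \sum_{\ell, m} a_{\ell m}\, Y_{\ell m}(\theta, \varphi) .
```

The $\ell = 0, 1$ spherical-harmonic modes of $\alpha$ generate the ordinary time and space translations, while the $\ell \geq 2$ modes are “proper” supertranslations; the consistency condition eq.(\[consis2\]) must hold for all of these.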
Since the satisfaction of eq.(\[consis2\]) does not depend upon the choice of representative of the infinitesimal asymptotic symmetry, we may assume that $\eta^a$ and $\xi^a$ satisfy the Geroch-Winicour gauge condition [@gw] $\nabla_a \eta^a = \nabla_a \xi^a = 0$ (see below). In that case, $\int_{\partial \Sigma} {\bf Q}[{\cal L}_\eta \xi]$ will vanish and eq.(\[consis2\]) reduces to $$0 = \int_{\partial \Sigma} \{ \eta \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\xi \phi_0) - \xi \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\eta \phi_0) \} \label{consis3}$$ From eq.(\[thetagr\]) we obtain on $\cal I$ $$\eta^c \theta_{cab}(\phi, \delta \phi) = \frac{1}{16 \pi} \tilde{\epsilon}_{abcd} V^c \eta^d \label{thetab}$$ where $$V^a \equiv \Omega^{-1} [\tilde{\nabla}_b \tau^{ab} - \tilde{\nabla}^a \tau - 3 \tau^a] \label{V2}$$ and it should be noted that $V^a$ has a smooth limit to $\cal I$ on account of eq.(\[ntau0\]). The pullback of $\eta \cdot {\mbox{\boldmath $\theta$}}$ to $\cal I$ is thus $$\eta \cdot \bar{\mbox{\boldmath $\theta$}} = - \frac{1}{16 \pi} \beta n_a V^a n \cdot {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{ptheta}$$ In using this equation to evaluate the term $\eta \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\xi \phi_0)$ in eq.(\[consis3\]), we must substitute $\chi_{ab}$ for $\tau_{ab}$ where $$\begin{aligned} \chi_{ab} & \equiv & \Omega {\cal L}_\xi g_{ab} \nonumber \\ & = & \Omega^{-1} [{\cal L}_\xi \tilde{g}_{ab} - 2K \tilde{g}_{ab}] \label{chi}\end{aligned}$$ with $$K \equiv \Omega^{-1} \xi^a n_a \label{K}$$ (Thus, $\chi_{ab} = 2X_{ab}$ in the notation of [@gw]; it follows directly from the definition of infinitesimal asymptotic symmetries that $\chi_{ab}$ and $K$ extend smoothly to $\cal I$.) It may then be seen by inspection of eq.(19) of [@gw] that ${\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\xi \phi_0)$ is proportional to the “linkage flux” (see below) associated with $\xi^a$. 
However, from the formula for the linkage flux for supertranslations in Minkowski spacetime given in eq.(10) of [@aw], it may be verified that $\int_{\partial \Sigma} \eta \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\xi \phi_0)$ cancels $\int_{\partial \Sigma} \xi \cdot {\mbox{\boldmath $\theta$}}(\phi_0,{\cal L}_\eta \phi_0)$, so eq.(\[consis3\]) is indeed satisfied, as we desired to show. Thus, for the case of null infinity in general relativity, the general prescription proposed in section 4 instructs us to define a “conserved quantity”, ${\cal H}_\xi$, for each infinitesimal BMS symmetry $\xi^a$ and each cross-section, $\partial \Sigma$, of $\cal I$ by $$\delta {\cal H}_\xi = \int_{\partial \Sigma} [\delta {\bf Q} - \xi \cdot {\mbox{\boldmath $\theta$}}] - \frac{1}{32 \pi} \int_{\partial \Sigma} N_{ab} \tau^{ab} \xi \cdot {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{dhfinal}$$ together with the requirement that ${\cal H}_\xi = 0$ for all $\xi^a$ and all cross-sections in Minkowski spacetime. By our above arguments, there exists a unique ${\cal H}_\xi$ satisfying the above requirements. How does this prescription compare with the one previously given by Dray and Streubel [@ds]? From our general analysis of section 4, it follows that our prescription automatically yields the flux formula $${\bf F}_\xi = {\mbox{\boldmath $\Theta$}} (g_{ab}, {\cal L}_\xi g_{ab}) = - \frac{1}{32 \pi} N_{ab} \chi^{ab} \, {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{ffinal}$$ Equation (\[ffinal\]) agrees with the flux formula proposed by Ashtekar and Streubel [@as] (see eq.(19) of [@aw]). But it was shown by Shaw [@s] that the Dray-Streubel prescription also yields the Ashtekar-Streubel flux formula.
Therefore, the difference between our ${\cal H}_\xi$ and the “conserved quantity” proposed by Dray and Streubel must be a quantity that depends locally on the fields at the cross-section $\partial \Sigma$ and yet—since the flux associated with the difference of these quantities vanishes—for a given solution, is independent of the choice of cross-section (i.e., this difference, if nonzero, would be a truly conserved quantity). If we restrict attention to spacetimes that are asymptotically flat at both null and spatial infinity, the equivalence of our prescription to that of Dray and Streubel would follow from the fact that they both yield the ADM conserved quantities in the limit as the cross-section approaches spatial infinity. However, it is instructive to show the equivalence of the two prescriptions directly (without assuming asymptotic flatness at spatial infinity), and we now turn our attention to doing so. Let $\partial \Sigma$ be a cross-section of ${\cal I}$ and let $\xi^a$ be a representative of an infinitesimal asymptotic symmetry (i.e., an infinitesimal BMS representative). We may uniquely decompose $\xi^a$ into a part that is everywhere tangent to $\partial \Sigma$ on $\partial \Sigma$ plus a supertranslation. Since both our prescription and that of Dray and Streubel are linear in $\xi^a$, it suffices to consider the equivalence of the prescription for each piece separately, i.e., to consider separately the cases where (a) $\xi^a$ is everywhere tangent to $\partial \Sigma$ and (b) $\xi^a$ is a supertranslation. Consider, first, case (a), where as discussed in section 4, a true Hamiltonian exists. In case (a), eq.(\[dhfinal\]) is simply $$\delta {\cal H}_\xi = \int_{\partial \Sigma} \delta {\bf Q} \label{dhfinala}$$ One might think that the solution to this equation would be simply ${\cal H}_\xi = \int_{\partial \Sigma} {\bf Q}$, which corresponds to the Komar formula with the correct numerical factor for angular momentum (see eq.(\[Qgr\]) above). 
However, although $\int_{\partial \Sigma} \delta {\bf Q}$ is well defined and independent of choice of infinitesimal BMS representative $\xi^a$ (as it must be according to the general considerations of section 4), it was shown in [@gw] that the value of $\int_{\partial \Sigma} {\bf Q}$ depends upon the choice of infinitesimal BMS representative, and, in this sense, is ill defined unless a representative is specified. It was also shown in [@gw] that the Geroch-Winicour condition $\nabla_a \xi^a = 0$ in $M$ (where $\nabla_a$ is the [*physical*]{} derivative operator) picks out a class of representatives which makes $\int_{\partial \Sigma} {\bf Q}$ well defined. (By eq.(\[chi\]), the Geroch-Winicour condition is equivalent to $\chi = 0$, where $\chi = \tilde{g}^{ab} \chi_{ab}$.) We write ${\bf Q}_{GW}$ to denote ${\bf Q}$ when $\xi^a$ has been chosen so as to satisfy the Geroch-Winicour condition. It was shown in [@gw] that $\int_{\partial \Sigma} {\bf Q}_{GW}$ is equivalent to a previously proposed “linkage formula” [@tw] for defining “conserved quantities”. Furthermore, this linkage formula has the property that when $\xi^a$ is everywhere tangent to $\partial \Sigma$, it yields zero in Minkowski spacetime[^23] as desired. This suggests that the solution to eq.(\[dhfinala\]) together with the requirement that ${\cal H}_\xi$ vanish in Minkowski spacetime is ${\cal H}_\xi = \int_{\partial \Sigma} {\bf Q}_{GW}$. However, it is far from obvious that this formula satisfies eq.(\[dhfinala\]), since when we vary the metric, we also must, in general, vary $\xi^a$ in order to continue to satisfy the Geroch-Winicour gauge condition, $\chi = 0$. 
Indeed, under a variation of the metric, $\delta \tilde{g}_{ab} = \Omega \tau_{ab}$, keeping $\xi^a$ fixed, it follows from eq.(\[chi\]) that $$\begin{aligned} \delta \chi & = & \delta (\tilde{g}^{ab} \chi_{ab}) = \delta(\Omega^{-1} g^{ab} {\cal L}_\xi g_{ab}) \nonumber \\ & = & \Omega^{-1} {\cal L}_\xi \gamma = \Omega^{-1} {\cal L}_\xi (\Omega \tau) \nonumber \\ & = & {\cal L}_\xi \tau + K \tau \label{dchi}\end{aligned}$$ where, as previously defined above, $\tau = \tilde{g}^{ab} \tau_{ab}$. Consequently, in order to preserve the Geroch-Winicour condition, it will be necessary to vary the infinitesimal BMS representative by $\delta \xi^a = \Omega^2 u^a$ (see [@gw]) where $u^a$ satisfies $$2 \Omega^{-1} \nabla_a (\Omega^2 u^a) = - {\cal L}_\xi \tau - K \tau \label{u}$$ Since $\nabla_a u^a = \tilde{\nabla}_a u^a - 4 \Omega^{-1} u^a n_a$, this relation can be expressed in terms of unphysical variables as $$2 \Omega \tilde{\nabla}_a u^a - 4 u^a n_a = - {\cal L}_\xi \tau - K \tau \label{u2}$$ Clearly, we have $$\delta \int_{\partial \Sigma} {\bf Q}_{GW} = \int_{\partial \Sigma} \delta {\bf Q} - \frac{1}{16 \pi} \int_{\partial \Sigma} \epsilon_{abcd} \nabla^c (\Omega^2 u^d) \label{dQGW}$$ where $u^a$ satisfies eq.(\[u2\]). We wish to show that the second term on the right side of eq.(\[dQGW\]) vanishes. To do so, it is convenient to introduce a null vector field $l^a$ as follows. At points of $\partial \Sigma$ we take $l^a$ to be the unique (past-directed) null vector that is orthogonal to $\partial \Sigma$ and satisfies $l^a n_a = 1$. We extend $l^a$ to all of $\cal I$ by requiring that ${\cal L}_n l^a = 0$ on $\cal I$. Finally, we extend $l^a$ off of $\cal I$ via the geodesic equation $l^b \tilde{\nabla}_b l^a = 0$.
A calculation similar to that given in eq.(17) of [@gw] shows that the integrand of the second term in eq.(\[dQGW\]) can be written as $$\begin{aligned} I_{ab} & \equiv & \epsilon_{abcd} \nabla^c (\Omega^2 u^d) \nonumber \\ & = & [l^c \tilde{\nabla}_c Y + \frac{1}{2} Y \tilde{\nabla}_c l^c + \tilde{D}_c s^c] \,\, {}^{(2)} \tilde{\epsilon}_{ab} \label{Iab}\end{aligned}$$ where $s^a$ denotes the projection of $u^a$ to $\partial \Sigma$; $\tilde{D}_a$ and ${}^{(2)} \tilde{\epsilon}_{ab}$ are the derivative operator and volume element on $\partial \Sigma$ associated with the induced unphysical metric, $\tilde{q}_{ab}$, on $\partial \Sigma$; and we have written $$Y \equiv \frac{1}{2} [{\cal L}_\xi \tau + K \tau] \label{Y}$$ The term $\tilde{D}_c s^c$ is a total divergence and integrates to zero[^24]. After a significant amount of algebra, it can be shown that the remaining terms in eq.(\[Iab\]) can be expressed as $${\bf I}' = \frac{1}{2} {\cal L}_\xi [({\cal L}_l \tau +\frac{1}{2} \tau \tilde{\nabla}_a l^a) \,\, {}^{(2)} \tilde{\mbox{\boldmath $\epsilon$}}] \label{Iab2}$$ These remaining terms integrate to zero since $\xi^a$ is tangent to $\partial \Sigma$. This establishes that $$\delta \int_{\partial \Sigma} {\bf Q}_{GW} = \int_{\partial \Sigma} \delta {\bf Q} \label{QQGW}$$ and thus the unique solution to eq.(\[dhfinala\]) which vanishes in Minkowski spacetime is $${\cal H}_\xi = \int_{\partial \Sigma} {\bf Q}_{GW} \label{Ha}$$ which is equivalent to the linkage formula. This agrees with the Dray-Streubel expression in case (a). We turn our attention now to case (b) where $\xi^a$ is a supertranslation and thus takes the form [@gw] $$\xi^a = \alpha n^a - \Omega \tilde{\nabla}^a \alpha + O(\Omega^2) \label{st}$$ where $\alpha$ is such that on $\cal I$ we have $n^a \tilde{\nabla}_a \alpha = 0$. 
Direct substitution of (\[st\]) into the variation of eq.(\[Qgr\]) yields on $\cal I$ [@i] $$\delta Q_{ab} = - \frac{1}{16 \pi} \tilde{\epsilon}_{abcd} \tilde{\nabla}^c (\alpha \tau^d - \tau^{de} \tilde{\nabla}_e \alpha) \label{dQb}$$ from which it follows that the pullback, $\delta \bar{\bf Q}$, of $\delta {\bf Q}$ to $\cal I$ is given by $$\delta \bar{\bf Q} = - \frac{1}{16 \pi} U \cdot {}^{(3)} \bar{\mbox{\boldmath $\epsilon$}} \label{pdQb}$$ where $$U^a = \tilde{\nabla}^a (\alpha \tau^b n_b) - \alpha n^b \tilde{\nabla}_b \tau^a - n^a \tau^b \tilde{\nabla}_b \alpha + n_b \tilde{\nabla}_c \alpha \tilde{\nabla}^b \tau^{ac} \label{U}$$ The pullback of $\xi \cdot {\mbox{\boldmath $\theta$}}$ to $\cal I$ is given by eq.(\[ptheta\]) above (with the substitutions $\eta \rightarrow \xi$ and $\beta \rightarrow \alpha$). Thus, our general prescription instructs us to define ${\cal H}_\xi$ in case (b) by the requirement that ${\cal H}_\xi = 0$ in Minkowski spacetime together with the equation $$\delta {\cal H}_\xi = - \frac{1}{16 \pi} \int_{\partial \Sigma} [U^a l_a - \alpha V^a n_a + \frac{1}{2} \alpha N_{ab} \tau^{ab}] \,\, {}^{(2)} {\mbox{\boldmath $\epsilon$}} \label{dhfinalb}$$ where $l_a$ is any covector field on $\cal I$ satisfying $n^a l_a = 1$. 
A lengthy calculation [@i] shows that the solution to this equation is the expression given by Geroch [@g], namely $${\cal H}_\xi = \frac{1}{8 \pi} \int_{\partial \Sigma} P^a l_a \,\, {}^{(2)} {\mbox{\boldmath $\epsilon$}} \label{hfinalb}$$ where $$P^a = \frac{1}{4} \alpha K^{ab} l_b + (\alpha {D}_b l_c + l_b {D}_c \alpha) \bar{g}^{cd} N_{de} \bar{g}^{e[b} n^{a]} \label{Pg}$$ Here ${D}_a$ is the derivative operator on $\cal I$ defined on pp. 46-47 of [@g]; $\bar{g}^{ab}$ is the (non-unique) tensor field on $\cal I$ satisfying $\bar{g}_{ac} \bar{g}^{cd} \bar{g}_{db} = \bar{g}_{ab}$ where $\bar{g}_{ab}$ denotes the pullback to $\cal I$ of $\tilde{g}_{ab}$; and $K^{ab} = {}^{(3)} \bar{\epsilon}^{acd} \, {}^{(3)} \bar{\epsilon}^{bef} \Omega^{-1} \bar{C}_{cdef}$ where $\Omega^{-1} \bar{C}_{cdef}$ denotes the pullback to $\cal I$ of the limit to $\cal I$ of $\Omega^{-1} \tilde{C}_{cdef}$, where $\tilde{C}_{cdef}$ denotes the unphysical Weyl tensor. Equation (\[hfinalb\]) agrees with the Dray-Streubel prescription in case (b). Consequently, our prescription agrees with that given by Dray and Streubel for all infinitesimal BMS representatives $\xi^a$ and all cross-sections $\partial \Sigma$, as we desired to show.

Summary and Outlook
===================

In this paper, using ideas arising from the Hamiltonian formulation, we have proposed a general prescription for defining notions of “conserved quantities” at asymptotic boundaries in diffeomorphism covariant theories of gravity. The main requirement for the applicability of our ideas is that the symplectic current $(n-1)$-form ${\mbox{\boldmath $\omega$}}$ extend continuously to the boundary. If, in addition, the pullback of ${\mbox{\boldmath $\omega$}}$ vanishes at the boundary (“Case I”), then a Hamiltonian associated with each infinitesimal asymptotic symmetry exists, and the value of the Hamiltonian defines a truly conserved quantity.
On the other hand, if the pullback of ${\mbox{\boldmath $\omega$}}$ fails to vanish in general at the boundary (“case II”), our prescription requires us to find a symplectic potential on the boundary which vanishes for stationary solutions. When such a symplectic potential exists and is unique—and when a “reference solution” $\phi_0$ can be found satisfying the consistency condition (\[consis\])—we have provided a well defined prescription for defining a “conserved quantity”, ${\cal H}_\xi$, for each infinitesimal asymptotic symmetry, $\xi^a$ and cross-section $\partial \Sigma$. This “conserved quantity” is automatically local in the fields in an arbitrarily small neighborhood of the cross-section and has a locally defined flux given by the simple formula (\[F5\]). For the case of asymptotically flat spacetimes at null infinity in vacuum general relativity, our proposal was shown to yield a unique prescription which, furthermore, was shown to agree with the one previously given by Dray and Streubel [@ds] based upon entirely different considerations. In this way, we have provided a link between the Dray-Streubel formula and ideas arising from the Hamiltonian formulation of general relativity. Since our approach does not depend on the details of the field equations—other than that they be derivable from a diffeomorphism covariant Lagrangian—there are many possible generalizations of the results we obtained for vacuum general relativity. We now mention some of these generalizations, all of which are currently under investigation. Perhaps the most obvious generalization is to consider asymptotically flat spacetimes at null infinity in general relativity with matter fields, $\psi$, also present. 
If the asymptotic conditions on $\psi$ are such that ${\mbox{\boldmath $\omega$}}$ continues to extend continuously to $\cal I$ and are such that the physical stress-energy tensor, $T_{ab}$, satisfies the property that $\Omega^{-2} T_{ab}$ extends continuously to $\cal I$ (so that “$T_{ab}$ vanishes asymptotically to order $4$” in the terminology of [@g]), then an analysis can be carried out in close parallel with that given in section 5 for the vacuum case. For minimally coupled fields (i.e., fields such that the curvature does not explicitly enter the matter terms in the Lagrangian), it follows from the general analysis of [@iw] that there will be no matter contributions to $\bf Q$ from the term ${\bf X}^{ab} \nabla_{[a} \xi_{b]}$ (see eq.(\[Qform\]) above). (Even for non-minimally coupled fields such as the conformally invariant scalar field, the ${\bf X}^{ab} \nabla_{[a} \xi_{b]}$ term in $\bf Q$ will retain the vacuum form (\[Qgr\]) in the limit as one approaches $\cal I$.) However, in general the symplectic potential ${\mbox{\boldmath $\theta$}}$ and symplectic current ${\mbox{\boldmath $\omega$}}$ will pick up additional contributions due to the matter fields, and the other terms in $\bf Q$ in eq.(\[Qform\]) may also acquire matter contributions. For the massless Klein-Gordon scalar field, $\psi$, we require $\Omega^{-1} \psi$ to have a smooth limit to $\cal I$. In that case, ${\mbox{\boldmath $\omega$}}$ extends continuously to $\cal I$. Although $T_{ab}$ does not actually vanish asymptotically to order $4$ in this case (see the Appendix of [@qw]), it appears that all the essential features of the analysis of section 5 carry through nonetheless. In Einstein-Klein-Gordon theory no additional matter terms occur in $\bf Q$, so $\bf Q$ continues to be given by eq.(\[Qgr\]).
Furthermore, the extension to $\cal I$ of the pullback to surfaces of constant $\Omega$ of the matter field contribution to ${\mbox{\boldmath $\theta$}}$ satisfies the property that it vanishes for stationary solutions. Consequently, in this case we can define ${\mbox{\boldmath $\Theta$}}$ on $\cal I$ by simply adding this additional matter contribution to ${\mbox{\boldmath $\theta$}}$ to the right side of eq.(\[Thetascri\]). The upshot is that the explicit matter contributions to formula (\[dhfinal\]) cancel, so that ${\cal H}_\xi$ is again given by the linkage formula (\[Ha\]) when $\xi^a$ is tangent to $\partial \Sigma$ and is given by eq.(\[hfinalb\]) when $\xi^a$ is a supertranslation. However, the flux formula (\[ffinal\]) will pick up additional terms arising from the additional matter contributions to ${\mbox{\boldmath $\theta$}}$ and hence to ${\mbox{\boldmath $\Theta$}}$. Similar results hold for non-minimally-coupled scalar fields, such as the conformally coupled scalar field[^25]. The analysis is similar in the case of higher derivative gravity theories if we [*impose*]{}, in addition to the usual asymptotic conditions at null infinity, the requirement that $\Omega^{-2} R_{ab}$ extends continuously to $\cal I$. (Of course, there is no guarantee that the field equations will admit a reasonable number of solutions satisfying this property.) If we consider a Lagrangian which, in addition to the Einstein-Hilbert term (\[Lgr\]), contains terms which are quadratic and/or higher order in the curvature and its derivatives, then additional terms will appear in ${\bf Q}$ as well as ${\mbox{\boldmath $\theta$}}$ and ${\mbox{\boldmath $\omega$}}$ (see [@iw]). However, it appears that none of these additional terms will contribute to ${\cal H}_\xi$ or its flux when the limit to $\cal I$ is taken. 
Thus, it appears that the formulas for both the “conserved quantities” and their fluxes will be the same in higher derivative gravity theories as in vacuum general relativity[^26]. Our proposal can also be applied to situations where the asymptotic conditions considered are very different from those arising in vacuum general relativity. Thus, for example, it should be possible to use our approach to define notions of total energy and radiated energy in dilaton gravity theories in $2$-dimensional spacetimes. It should also be possible to use our approach for asymptotically anti-de Sitter spacetimes in general relativity with a negative cosmological constant. When suitable asymptotic conditions are imposed, the asymptotically anti-de Sitter spacetimes should lie within “Case I” of section 4, so it should be possible to define truly conserved quantities conjugate to all infinitesimal asymptotic symmetries. It would be of interest to compare the results that would be obtained by our approach with those of previous approaches [@ads]. Finally, we note that many of the ideas and constructions of section 4 would remain applicable if $\cal B$ were an ordinary timelike or null surface $\cal S$ in the spacetime, $M$, rather than an asymptotic boundary of $M$. Thus, one could attempt to use the ideas presented here to define notions of quasi-local energy contained within $\cal S$ and/or energy radiated through $\cal S$. However, it seems unlikely that a unique, natural choice of ${\mbox{\boldmath $\Theta$}}$ will exist in this context, so it seems unlikely that this approach would lead to a unique, natural notion of quasi-local energy. Nevertheless, by considering the case where $\cal S$ is the event horizon of a black hole, it is possible that the ideas presented in this paper may contain clues as to how to define the entropy of a nonstationary black hole in an arbitrary theory of gravity obtained from a diffeomorphism covariant Lagrangian.
Acknowledgements
================

This research was initiated several years ago by one of us (R.M.W.) and Vivek Iyer. Some unpublished notes provided to us by Vivek Iyer [@i] (dating from an early phase of this research) were extremely useful in our investigations. Some unpublished calculations by Marc Pelath for a scalar field in Minkowski spacetime were useful for refining our proposal. We have benefitted from numerous discussions and comments from colleagues, particularly Abhay Ashtekar and Robert Geroch. We wish to thank Abhay Ashtekar, Piotr Chrusciel, and Roh Tung for reading the manuscript. This research was supported in part by NSF grant PHY 95-14726 to the University of Chicago and by a NATO scholarship from the Greek Ministry of National Economy.

[99]{} A. Trautman, Bull. Acad. Pol. Sci., Serie sci. math., astr. et phys. [**VI**]{}, 407 (1958). H. Bondi, M.G.J. van der Burg, and A.W.K. Metzner, Proc. Roy. Soc. London [**A269**]{}, 21, (1962). R. Geroch, in [*Asymptotic Structure of Space-Time*]{}, ed. by F.P. Esposito and L. Witten, Plenum Press (New York), 1977. P.T. Chrusciel, J. Jezierski, and M.A.H. MacCallum, Phys. Rev. [**D58**]{}, 084001 (1998). T. Dray and M. Streubel, Class. Quant. Grav. [**1**]{}, 15 (1984). W. Shaw, Class. Quant. Grav. [**1**]{}, L33 (1984). R. Penrose, Proc. Roy. Soc. London [**A381**]{}, 53, (1982). A. Ashtekar and M. Streubel, Proc. Roy. Soc. London [**A376**]{}, 585, (1981). A. Ashtekar, L. Bombelli, and O. Reula, in [*Mechanics, Analysis, and Geometry: 200 Years After Lagrange*]{}, ed. by M. Francaviglia, vol. 376, Elsevier Science Publishers (Amsterdam), 1991. R. Arnowitt, S. Deser, and C.W. Misner, in [*Gravitation: An Introduction to Current Research*]{}, ed. by L. Witten, Wiley (New York), 1962. T. Regge and C. Teitelboim, Ann. Phys. [**88**]{}, 286 (1974). J. Lee and R.M. Wald, J. Math. Phys. [**31**]{}, 725 (1990). V. Iyer and R.M. Wald, Phys. Rev. [**D50**]{}, 846 (1994). V. Iyer and R.M. Wald, Phys. Rev.
[**D52**]{}, 4430 (1995). A. Ashtekar and R.O. Hansen, J. Math. Phys. [**19**]{}, 1542 (1978). L. Tamburino and J. Winicour, Phys. Rev. [**150**]{}, 1039 (1966); J. Winicour, J. Math. Phys. [**9**]{}, 861 (1968). R. Geroch and J. Winicour, J. Math. Phys. [**22**]{}, 803 (1981). A. Ashtekar and J. Romano, Class. Quant. Grav. [**9**]{}, 1069 (1992). G.A. Burnett and R.M. Wald, Proc. Roy. Soc. London [**A430**]{}, 57, (1990). V. Iyer, unpublished notes (January, 1997). R.M. Wald, [*General Relativity*]{}, University of Chicago Press (Chicago), 1984. A. Ashtekar and A. Magnon, Commun. Math. Phys. [**86**]{}, 55 (1982). A. Ashtekar and J. Winicour, J. Math. Phys. [**23**]{}, 2410 (1982). T.C. Quinn and R.M.Wald, Phys. Rev. [**D60**]{}, 064009 (1999). L.F. Abbott and S. Deser, Nucl. Phys. [**B195**]{}, 76 (1982); A. Ashtekar and A. Magnon, Class. Quant. Grav. [**1**]{}, L39 (1984); V. Balasubramanian and P. Kraus, hep-th/9902121; A.M. Awad and C.V. Johnson, hep-th/9910040. [^1]: If we change the Lagrangian by ${\bf L} \rightarrow {\bf L} + d {\bf K}$, the equations of motion are unaffected. Under such change in the Lagrangian, we have ${\mbox{\boldmath $\theta$}} \rightarrow {\mbox{\boldmath $\theta$}} + \delta {\bf K}$. Thus, if such changes in the Lagrangian are admitted, we will have this additional ambiguity in ${\mbox{\boldmath $\theta$}}$. However, this ambiguity does not affect the definition of the presymplectic current form (see eq.(\[omega\]) below) and will not affect our analysis. 
[^2]: The orientation of $\Sigma$ relative to the spacetime orientation $\epsilon_{a_1...a_n}$ is chosen to be $v^{a_1} \epsilon_{a_1...a_n}$ where $v^a$ is a future-directed timelike vector.\[orient1\] [^3]: We choose the orientation of $\partial K$ to be the one specified by Stokes’ theorem, i.e., we dot the first index of the orientation form on $K$ into an outward pointing vector.\[orient2\] [^4]: The solution submanifold, $\bar{{\cal F}}$, is sometimes referred to as the “covariant phase space” [@abr]. [^5]: The assumption that ${\cal B}$ has an $(n-1)$-dimensional manifold structure is not essential in cases where ${\mbox{\boldmath $\omega$}}$ vanishes at ${\cal B}$ (see “Case I” below). In particular, there should be no difficulty in extending our framework to definitions of asymptotic flatness at spatial infinity in which ${\cal B}$ consists of a single point [@ah]. [^6]: It should be emphasized that we require that the full ${\mbox{\boldmath $\omega$}}$ extend continuously to $\cal B$—not merely its pullback to hypersurfaces that approach $\cal B$. [^7]: A similar argument has previously been given to show that the “linkage formulas” are well defined (see [@tw], [@gw]). [^8]: We define the orientation of $\cal B$ to be that obtained by dotting the first index of the orientation of $M$ into an outward pointing vector. The orientation of $\partial \Sigma$ was previously specified in footnotes \[orient1\] and \[orient2\]. The signs in eq.(\[Hnocons\]) correspond to the case where $\partial \Sigma_2$ lies to the future of $\partial \Sigma_1$.\[orient4\] [^9]: More precisely, by “locally constructed” we mean the following: Suppose that $\chi : M \cup {\cal B} \rightarrow M \cup {\cal B}$ is a diffeomorphism which preserves the universal background structure.
Suppose $(\phi, \delta \phi)$ and $(\phi', \delta \phi')$ are such that there exists an open (in $M \cup {\cal B}$) neighborhood, $\cal O$, of $p \in {\cal B}$ such that for all $x \in M \cap \cal O$ we have $\phi = \chi_* \phi'$ and $\delta \phi = \chi_* \delta \phi'$, where $\chi_*$ denotes the pullback map on tensor fields associated with the diffeomorphism $\chi$. Then we require that at $p$ we have ${\mbox{\boldmath $\Theta$}} = \chi_* {\mbox{\boldmath $\Theta$}}'$.\[loc\] [^10]: The condition that $\bf L$ be an analytic function of its variables (as occurs in essentially all theories ever seriously considered) has nothing to do with any smoothness or analyticity conditions concerning the behavior of the dynamical fields themselves on $M$. We do not impose any analyticity conditions on the dynamical fields. [^11]: For the case of asymptotically flat spacetimes at null infinity, $\cal I$, a suitable uniformity condition would be to require the unphysical fields to vary analytically with $\lambda$ at $\cal I$. [^12]: Here it should be noted that the new term on the right side of this equation is an ordinary integral over the surface $\partial \Sigma$ of ${\cal B}$, whereas, as explained above, the first term in general is defined only as an asymptotic limit. [^13]: Note that the ambiguity in ${\mbox{\boldmath $\Theta$}}$ is of an entirely different nature than the ambiguity (\[Y1\]) in ${\mbox{\boldmath $\theta$}}$. The quantity ${\mbox{\boldmath $\theta$}}$ is defined from the Lagrangian ${\bf L}$ ([*before*]{} ${\mbox{\boldmath $\omega$}}$ has been defined) and its ambiguity arises from eq.(\[dL\]). The quantity ${\mbox{\boldmath $\Theta$}}$ is defined from ${\mbox{\boldmath $\omega$}}$ and its ambiguity arises from eq.(\[Theta\]). [^14]: The requirement of smoothness could be weakened considerably without affecting our analysis. 
[^15]: For solutions to the vacuum field equations, it follows from the fact that $\Omega = 0$ on ${\cal I}$ that $\tilde{\nabla}_a \Omega$ is null on ${\cal I}$ in the metric $\tilde{g}_{ab}$. [^16]: Note that our imposition of this rather rigid structure on $\cal F$ as a result of our gauge fixing is not done merely for convenience, but is necessary in order that ${\mbox{\boldmath $\omega$}}$ extend to $\cal I$. [^17]: The only difference between our definition and the definition given in [@gw] concerns the notion of the equivalence of two representatives, $\xi^a$ and ${\xi'}^a$. In addition to requiring agreement of $\xi^a$ and ${\xi'}^a$ at $\cal I$, we impose the extra requirement that they give rise to the same asymptotic integral (\[I\]). However, it is not difficult to show that if $\xi^a$ and ${\xi'}^a$ agree at $\cal I$ then they automatically give rise to the same asymptotic integral (\[I\]). [^18]: As noted in section 2, ${\mbox{\boldmath $\omega$}}$ has the ambiguity (\[Y2\]). However, Iyer [@i] has shown that if ${\bf Y}$ is such that ${\mbox{\boldmath $\theta$}}$ maintains the general form given by eq.(23) of [@iw] with the coefficients in that formula being regular, analytic functions of the fields, then ${\bf Y}$ must vanish on ${\cal I}$. Consequently, in vacuum general relativity, the limit to ${\cal I}$ of ${\mbox{\boldmath $\omega$}}$ is, in fact, unique. [^19]: For past null infinity, this volume element would be negatively oriented, resulting in sign changes in some of the formulas below. [^20]: A major subtlety would have arisen in the meaning of “locally constructed” if we had not imposed the rigid background structure given by the Bondi condition (\[bondi\]) together with our fixing of $\tilde{g}^0_{ab}$. 
If, say, the background structure was specified merely by the “asymptotic geometry” as defined on P.22 of [@g], then there would exist diffeomorphisms locally defined in the neighborhood of a point $p \in {\cal I}$ which preserve the background structure but cannot be extended to globally defined diffeomorphisms which preserve the background structure. Indeed, a necessary condition for a background-structure-preserving local diffeomorphism to be globally extendible is that it preserve the tensor field $\rho_{ab}$, defined by eq.(33) of [@g], since $\rho_{ab}$ can be constructed from a global specification of the background structure. Now, locally defined diffeomorphisms that are not globally extendible are not relevant to the definition of “locally constructed” given in footnote \[loc\], since that definition requires globally defined diffeomorphisms. Since the allowed (globally defined) diffeomorphisms must locally preserve $\rho_{ab}$, that quantity would, in effect, count as “local” with regard to the definition of “local construction” of ${\mbox{\boldmath $\Theta$}}$—even though the construction of $\rho_{ab}$ from the background structure given in [@g] involves the global solution to differential equations. Consequently, the Bondi news tensor (which is constructed out of manifestly local quantities and $\rho_{ab}$) would still be considered as “locally constructed” even if the background structure had been specified as in [@g]. This subtlety does not arise here, since with our gauge choice, $\rho_{ab}$ and the Bondi news tensor are manifestly local. [^21]: That $N_{ab}$ and hence ${\mbox{\boldmath $\Theta$}}$ vanish for all stationary solutions is proven, e.g., on P.53-54 of [@g]. [^22]: If $t^a$ denotes the timelike Killing vector field, then $\int_{\partial \Sigma} {\bf Q}[t]$ is proportional to the Komar formula for mass and is nonvanishing for all stationary solutions other than Minkowski spacetime. 
We expect that eq.(\[consis2\]) will fail when $\eta^a$ is an asymptotic boost and $\xi^a$ is an asymptotic spatial translation such that their commutator yields $t^a$. [^23]: This fact follows immediately from the equivalence of eqs.(21) and (22) of [@s]. [^24]: It is erroneously stated in [@gw] that $\tilde{q}_{ab} \tilde{\nabla}^a u^b$ is an intrinsic divergence. The dropping of that term does not affect any of the results in the body of that paper. However, the formula given in footnote 20 of [@gw] is valid only when $\chi$ ($=2X$ in the notation of [@gw]) vanishes on $\cal I$. [^25]: For Maxwell and Yang-Mills fields, a new issue of principle arises as a result of the additional gauge structure of these theories. If we merely require the vector potential $A_a$ to extend smoothly to $\cal I$, then ${\mbox{\boldmath $\omega$}}$ will extend continuously to $\cal I$ and, by the general analysis of section 4, the integral defining $\Omega_\Sigma$ will always exist. However, $\Omega_\Sigma$ will not be gauge invariant. (Thus, a Hamiltonian on $\Sigma$ conjugate to gauge transformations will fail to exist in general in much the same way as a Hamiltonian conjugate to infinitesimal asymptotic symmetries fails to exist in general.) Consequently, in these cases it appears that substantial gauge fixing at $\cal I$ would be needed in order to obtain gauge invariant expressions for “conserved quantities”. [^26]: The interpretation of this result would be that, although higher derivative gravity theories may have additional degrees of freedom, these extra degrees of freedom are massive and do not propagate to null infinity (and/or they give rise to instabilities and are excluded by our asymptotic assumptions).
--- abstract: 'We discuss work toward developing a 2.5D non-LTE radiative transfer code. Our code uses the short characteristic method with modifications to handle realistic 3D wind velocities. We also designed this code for parallel computing facilities by representing the characteristics with an impact parameter vector [**p**]{}. This makes the interpolation in the radiation angles unnecessary and allows for an independent calculation for characteristics represented by different [**p**]{} vectors. The effects of the velocity field are allowed for by increasing, as needed, the number of grid points along a short characteristic. This allows us to accurately map the variation of the opacities and emissivities as a function of frequency and spatial coordinates. In the future we plan to use this transfer code with a 2D non-LTE stellar atmosphere program [@geo04] to self-consistently solve for level populations, the radiation field and temperature structure for stars with winds and without spherical symmetry.' author: - 'J. Zsargó and D. J. Hillier' - 'L. N. Georgiev' title: 'A Short Characteristic Solution for the 2.5D Transfer Equation in the Envelopes of O and B Stars' --- Introduction {#section:intro} ============ Modeling the circumstellar envelopes of O and B stars is a complex nonlinear problem. The non-LTE level populations, the (magneto-) hydrodynamics, and the radiation field are strongly coupled. Thus, an iterative procedure is needed to achieve a consistent solution. An essential constituent of this procedure is the availability of an accurate and fast radiative transfer code. Progress in computer technology and the availability of fast numerical methods now allow the development of such codes for detailed study of 2D and 3D envelopes. There are several possible avenues to follow.
The most straightforward is to solve the general radiative transfer equation $$\label{eq:RT} \underline{\bf n} {\bf \nabla} I= - \chi \left[ I - S \right]$$ and calculate the radiative transition rates using the solution. In the above $\underline{\bf n}$ is the direction of the radiation, $I$ is the specific intensity, $\chi$ is the opacity, and $S$ is the source function (functional dependence is not indicated for clarity). Alternatively, one could also use the moment equations, derived from Eq. \[eq:RT\], and solve directly for the moments [@hil98] which set the transition rates; or use Monte-Carlo simulation to solve the radiative transfer equation and calculate estimators of the transition rates [@luc99]. We decided to use the first approach because of its simplicity and since it provides a reasonable compromise between numerical efficiency and flexibility. A simple iteration between the radiative transfer and the rate equations is not a wise choice for the iterative procedure. This is the so-called “$\Lambda$-iteration” which is notorious for convergence difficulties when the optical depth is large. Convergence is ensured, however, by using the Approximate Lambda Iteration [ALI, see e.g., @ryb91; @hub92] which takes some coupling of the radiation and populations into account by using an invertible Approximate Lambda Operator (ALO). In our 2D code we use the local contribution at every spatial point to construct the ALO because it is easy to calculate and has acceptable convergence characteristics. The actual implementation of the ALI procedure into our full non-LTE model atmosphere will be discussed in [@geo04]. Description of the Transfer Code {#section:code} ================================ The optical depth and the formal solution of Eq. 
\[eq:RT\] at any position, $s$, along a ray are $$\begin{aligned} \label{eq:FS} \tau_{\nu}= \int_{0}^{s} \chi ds' & ~~{\rm and}~~ & I(\tau_{\nu})= I_{BC} \, e^{- \tau_{\nu}} \; + \; \int_{0}^{\tau_{\nu}} S(\tau') \, e^{\tau' - \tau_{\nu}} \, d\tau' \; ,\end{aligned}$$ respectively. The intensity can be calculated by specifying I$_{BC}$ at the up-stream end of the ray (or characteristic) and by evaluating two integrals. For this purpose, we use the “Short Characteristic” (SC) method, first explored by [@mih78] and [@kun88]. This method requires the evaluation of the integrals only between the point of interest and the closest cell boundary and uses the calculated intensities at other grid points to interpolate $I_{BC}$. In the spherical coordinate system the directional variation of the intensity is normally described by the radiation coordinates $\theta$ and $\phi$, which are defined by $$\begin{aligned} \label{eq:radcoor} cos(\theta)= \underline{\bf n} \cdot \underline{\bf r} & ~~~{\rm and}~~~ & sin(\beta) \cdot sin(\theta) \cdot cos(\phi)= \left[ \underline{\bf n} \times \underline{\bf r} \right] \cdot \left[ \underline{\bf r} \times \underline{\bf z} \right] \; ,\end{aligned}$$ respectively (see Fig. \[fig1\] for definitions). Unfortunately, $\theta$ and $\phi$ vary along a characteristic, so using the same $\theta$ and $\phi$ grid for all spatial points would require interpolations in these angles. To avoid this additional interpolation we describe a characteristic with $${\bf p}= {\bf r} \times \underline{\bf n} \; ,$$ which we call the “impact parameter vector” (see Fig. \[fig1\]). This vector describes all essential features of a characteristic and can be considered as an analog of the orbital angular momentum vector in two-body problems. Its absolute value p= $|$[**p**]{}$|$ is the traditional impact parameter while its orientation defines the “orbital plane” of the radiation (the plane that contains the characteristic and the origin).
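As a minimal sketch of how one short-characteristic update of Eq. \[eq:FS\] can be organized (our illustration, assuming piecewise-linear opacity and source function between sub-points; this is not the actual implementation described here):

```python
import numpy as np

def short_char_step(I_bc, s, chi, S):
    """Integrate I(tau) = I_bc e^{-tau} + int_0^tau S(t') e^{t'-tau} dt'
    along one (short) characteristic sampled at positions s, with the
    opacity chi and source function S taken linear on each sub-interval."""
    dtau = 0.5 * (chi[:-1] + chi[1:]) * np.diff(s)   # trapezoidal optical depth
    I = I_bc
    for k, t in enumerate(dtau):
        e = np.exp(-t)
        if t > 1e-8:
            w1 = (t - 1.0 + e) / t        # exact weight of the downwind S value
            w0 = (1.0 - e) - w1           # exact weight of the upwind S value
        else:
            w0 = w1 = 0.5 * t             # series limit for tiny optical depths
        I = I * e + w0 * S[k] + w1 * S[k + 1]
    return I

# check against the constant-chi, constant-S analytic solution
# I = I_bc e^{-tau} + S (1 - e^{-tau})
s = np.linspace(0.0, 3.0, 301)
I = short_char_step(0.0, s, np.ones_like(s), 2.0 * np.ones_like(s))
print(np.isclose(I, 2.0 * (1.0 - np.exp(-3.0))))   # True
```

The linear-source weights `w0`, `w1` follow from integrating $S(\tau')e^{\tau'-\tau}$ exactly over each sub-interval; higher-order (e.g. parabolic) weights are a common refinement.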
Following the analogy one can define an “inclination” angle for this plane by $$\label{eq:i} p \cdot cos(i)= {\bf p} \cdot \underline{\bf z} \; .$$ In our code we set up a universal grid in impact parameters and in inclination angles and calculate the radiation coordinates by $$\begin{aligned} \label{eq:CT} sin( \theta ) = \frac{p}{r} & ~~~{\rm and}~~~ & sin( \phi )= \frac{cos(i)}{sin( \beta )}\end{aligned}$$ for each grid point (see Fig. \[fig1\] for definitions). We evaluate Eq. \[eq:FS\] in the comoving frame of the point of interest, which is the proper frame for solving the rate equations. To ensure that the spatial and frequency variations of the opacity and source function are mapped properly in the integrations, we add extra integration points to the characteristics. The number of extra points (at least one) depends on the ratio of the line-of-sight velocity difference between the endpoints and a “maximum allowed velocity difference” which is a free parameter in the code. The opacities and source terms at every comoving frequency are then interpolated onto the integration points by bi-linear approximations using the four closest spatial grid points. It is relatively easy to construct a diagonal ALO in this evaluation scheme. With the exception of the intensity we interpolate all quantities in first order. However, the accuracy of this approximation is insufficient in many cases; therefore, we introduced a rudimentary multi-grid approach. Before evaluating Eq. \[eq:FS\], we calculate opacity and source function on a dense grid by using a sophisticated 3$^{rd}$ order approximation. Then, the transfer equation is solved on the original grid using the dense grid for opacity and source term interpolations. Test Results {#section:tests} ============ We tested our code by reproducing spherically symmetric CMFGEN models [[@hil98]]{}, as well as the results of an accurate 2D long characteristic program [[see @bus00]]{}.
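As a complementary sanity check of the impact-parameter geometry of Section \[section:code\] (our small numpy illustration, not one of the paper's tests): ${\bf p} = {\bf r} \times \underline{\bf n}$ is invariant along a ray, and $\sin\theta = p/r$ follows directly from $\cos\theta = \underline{\bf n}\cdot{\bf r}/r$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hat = rng.normal(size=3)
n_hat /= np.linalg.norm(n_hat)        # unit ray direction
r_vec = 5.0 * rng.normal(size=3)      # a point on the characteristic

p_vec = np.cross(r_vec, n_hat)        # impact parameter vector
r = np.linalg.norm(r_vec)
cos_theta = np.dot(n_hat, r_vec) / r  # radiation angle theta at this point

# sin(theta) = p/r, and p is unchanged along the ray r -> r + s*n
ok_angle = np.isclose(np.sqrt(1.0 - cos_theta**2), np.linalg.norm(p_vec) / r)
ok_const = np.allclose(np.cross(r_vec + 2.7 * n_hat, n_hat), p_vec)
print(ok_angle, ok_const)             # True True
```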
The 2D models were static with Schuster-type inner boundary conditions and included electron scattering iterations. We were able to reproduce all test cases within $\sim$5% accuracy. In Figs. \[fig2\] and \[fig3\] we demonstrate the capabilities of our code by showing the results for a rotating stellar envelope. The model was produced by using the opacities and emissivities from the results of a realistic CMFGEN simulation and by introducing a rotational velocity field. As expected, the spectral lines show rotational broadening. Busche, J. R. & Hillier, D. J. 2000, , 531, 1071 Georgiev, L. N., Hillier, D. J., & Zsargó, J. 2004, in preparation Hillier, D. J. & Miller, D. L. 1998, , 496, 407 Hubeny, I. 1992, in The Atmospheres of Early-Type Stars, ed. U. Heber & C. J. Jeffery (Berlin:Springer), 377 Kunasz, P. B., & Auer, L. H. 1988, J. Quant. Spectrosc. Radiat. Transfer, 39, 67 Lucy, L. B. 1999, , 344, 282 Mihalas, D., Auer, L. H., & Mihalas, B. W. 1978, , 220, 1001 Rybicki, G. B. & Hummer, D. G. 1991, , 245, 171
--- abstract: 'The creation, coherent manipulation, and measurement of spins in nanostructures open up completely new possibilities for electronics and information processing, among them quantum computing and quantum communication. We review our theoretical proposal for using electron spins in quantum dots as quantum bits. We present single- and two qubit gate mechanisms in laterally as well as vertically coupled quantum dots and discuss the possibility to couple spins in quantum dots via superexchange. We further present the recently proposed schemes for using a single quantum dot as spin-filter and spin read-out/memory device.' author: - Patrik Recher and Daniel Loss - Jeremy Levy title: | Spintronics and Quantum Computing\ with Quantum Dots --- Introduction ============ Theoretical research on electronic properties in mesoscopic condensed matter systems has focussed primarily on the charge degrees of freedom of the electron, while its spin degrees of freedom have not yet received the same attention. But an increasing number of spin-related experiments[@Prinz; @Kikkawa; @Fiederling; @Ohno; @Roukes; @Ensslin] show that the spin of the electron offers unique possibilities for finding novel mechanisms for information processing and information transmission—most notably in quantum-confined nanostructures with unusually long spin dephasing times[@Kikkawa; @Fiederling; @Ohno] approaching microseconds, as well as long distances of up to $100\:\mu{\rm m}$ [@Kikkawa] over which spins can be transported phase-coherently. Besides the intrinsic interest in spin-related phenomena, there are two main areas which hold promises for future applications: Spin-based devices in conventional[@Prinz] as well as in quantum computer hardware[@Loss97]. 
In conventional computers, the electron spin can be expected to enhance the performance of quantum electronic devices, examples being spin-transistors (based on spin-currents and spin injection), non-volatile memories, the single spin as the ultimate limit of information storage, etc.[@Prinz]. On the one hand, none of these devices exist yet, and experimental progress as well as theoretical investigations are needed to provide guidance and support in the search for realizable implementations. On the other hand, the emerging field of quantum computing[@Steane; @MMM2000] and quantum communication[@MMM2000; @Bennett00] requires a radically new approach to the design of the necessary hardware. As first pointed out in Ref.[@Loss97], the spin of the electron is a most natural candidate for the qubit—the fundamental unit of quantum information. We have shown[@Loss97] that these spin qubits, when located in quantum-confined structures such as semiconductor quantum dots, atoms or molecules, satisfy all requirements needed for a scalable quantum computer. Moreover, such spin-qubits—being attached to an electron with orbital degrees of freedom—can be transported along conducting wires between different subunits in a quantum network[@MMM2000]. In particular, spin-entangled electrons can be created in coupled quantum dots and—as mobile Einstein-Podolsky-Rosen (EPR) pairs[@MMM2000]—then provide the necessary resources for quantum communication. A short introduction to quantum computing and quantum communication follows; we then present our current theoretical efforts towards a realization of quantum computing. We thereby focus on the implementation of the necessary gate and read-out operation schemes with quantum dots. Quantum Computing and Quantum Communication {#ssecQC} ------------------------------------------- We give a brief description of the emerging research field of quantum computation.
It has attracted much interest recently as it opens up the possibility of outperforming classical computation through new and more powerful quantum algorithms such as the ones discovered by Shor[@Shor94] and by Grover[@Grover]. There is now a growing list of quantum tasks[@MMM2000; @Bennett00] such as cryptography, error-correcting schemes, quantum teleportation, etc. that have further underscored the desirability of experimental implementations of quantum computing. In a quantum computer each quantum bit (qubit) is allowed to be in any state of a quantum two-level system. All quantum algorithms can be implemented by concatenating one- and two-qubit gates. There is a growing number of proposed physical implementations of qubits and quantum gates. A few examples are: Trapped ions[@traps], cavity QED[@cavity], nuclear spins[@nmr; @Kane], superconducting devices[@Schon; @Averin; @Ioffe; @Mooij], and our qubit proposal[@Loss97] based on the spin of the electron in quantum-confined nanostructures. Quantum Dots {#ssecQD} ------------ Since quantum dots are the central objects of this work, we shall make some general remarks about these systems here. Semiconductor quantum dots are structures where charge carriers are confined in all three spatial dimensions, the dot size being of the order of the Fermi wavelength in the host material, typically between $10\:{\rm nm}$ and $1\:{\rm \mu m}$ [@kouwenhoven]. The confinement is usually achieved by electrical gating of a two-dimensional electron gas (2DEG), possibly combined with etching techniques. Precise control of the number of electrons in the conduction band of a quantum dot (starting from zero) has been achieved in GaAs heterostructures[@tarucha]. The electronic spectrum of typical quantum dots can vary strongly when an external magnetic field is applied[@kouwenhoven; @tarucha], since the magnetic length corresponding to typical laboratory fields $B\approx 1\,{\rm T}$ is comparable to typical dot sizes.
In coupled quantum dots Coulomb blockade effects[@waugh], tunneling between neighboring dots[@kouwenhoven; @waugh], and magnetization[@oosterkamp] have been observed as well as the formation of a delocalized single-particle state[@blick]. Quantum Gate Operations with Coupled Quantum Dots {#coupling} ================================================= One and two qubit gates are known to be sufficient to carry out any quantum algorithm. For electron spins in nearby coupled quantum dots the desired two qubit coupling is provided by a combination of Coulomb interaction and the Pauli exclusion principle. At zero magnetic field, the ground state of two coupled electrons is a spin singlet, whereas the first excited state in the presence of strong Coulomb repulsion is usually a triplet. The remaining spectrum is separated from these two states by a gap which is either given by the Coulomb repulsion or the single particle confinement. The low-energy physics of such a system can then be described by the Heisenberg spin Hamiltonian $$\label{Heisenberg} H_{\rm s}(t)=J(t)\,\,{\bf S}_1\cdot{\bf S}_2,$$ where $J(t)$ is the exchange coupling between the two spins ${\bf S}_{1}$ and ${\bf S}_{2}$, and is given by the energy difference between the singlet and triplet states. If we pulse the exchange coupling such that $\int dtJ(t)/\hbar = J_0\tau_s/\hbar = \pi$ (mod $2\pi$), the associated unitary time evolution $U(t) = T\exp(i\int_0^t H_{\rm s}(\tau)d\tau/\hbar)$ corresponds to the “swap” operator $U_{\rm sw}$ which exchanges the quantum states of qubit 1 and 2 [@Loss97]. Having an array of dots it is therefore possible to couple any two qubits. Furthermore, the quantum XOR gate can be constructed by applying the sequence[@Loss97] $$U_{\rm XOR} = e^{i(\pi/2)S_1^z}e^{-i(\pi/2)S_2^z}U_{\rm sw}^{1/2}e^{i\pi S_1^z}U_{\rm sw}^{1/2},$$ i.e. a combination of “square-root of swap” $U_{\rm sw}^{1/2}$ and single-qubit rotations $\exp(i\pi S_i^z)$. 
Since $U_{\rm XOR}$ (combined with single-qubit rotations) is proven to be a universal quantum gate[@Barenco], it can be used to assemble any quantum algorithm. The study of universal quantum computation in coupled quantum dots is thus essentially reduced to the study of single qubit rotations and the [*exchange mechanism*]{}, in particular how the exchange coupling $J(t)$ can be controlled experimentally. Note that the switchable coupling mechanism described below need not be restricted to quantum dots: the same principle can be used in other systems, e.g. coupled atoms in a Bravais lattice, supramolecular structures, or overlapping shallow donors in semiconductors. Laterally coupled quantum dots ------------------------------ We first discuss a system of two laterally coupled quantum dots defined by depleted regions in a 2DEG containing one (excess) electron each[@Burkard]. The electrons are allowed to tunnel between the dots (if the tunnel barrier is low) leading to spin correlations via their charge (orbital) degrees of freedom. We model the coupled system with the Hamiltonian $H = H_{\rm orb} + H_{\rm Z}$, where $H_{\rm orb} = \sum_{i=1,2} h_i+C$ with $$\begin{aligned} h_i = \frac{1}{2m}\left({\bf p}_i-\frac{e}{c}{\bf A}({\bf r}_i) \right)^2+V({\bf r}_i),\,\,\,\,\, C={{e^2}\over{\kappa\left| {\bf r}_1-{\bf r}_2\right|}}\,\,\,\,. \label{hamiltonian}\end{aligned}$$ Here, $h_i$ describes the single-electron dynamics in the 2DEG confined to the $xy$-plane, with $m$ being the effective electron mass. We allow for a magnetic field ${\bf B}= (0,0,B)$ applied along the $z$-axis that couples to the electron charge via the vector potential ${\bf A}({\bf r}) = \frac{B}{2}(-y,x,0)$, and to the spin via a Zeeman coupling term $H_{\rm Z}$. 
The single dot confinement as well as the tunnel-coupling is modeled by a quartic potential, $V(x,y)=\frac{m\omega_0^2}{2}\left(\frac{1}{4 a^2}\left(x^2-a^2 \right)^2+y^2\right)$, which, in the limit $a\gg a_{\rm B}$, separates into two harmonic wells of frequency $\omega_0$ where $2a$ is the interdot distance and $a_{\rm B}=\sqrt{\hbar/m\omega_0}$ is the effective Bohr radius of a dot. This choice for the potential is motivated by the experimental observation[@tarucha] that the low-energy spectrum of single dots is well described by a parabolic confinement potential. The (bare) Coulomb interaction between the two electrons is described by $C$ where $\kappa$ denotes the dielectric constant of the semiconductor. The screening length $\lambda$ in almost depleted regions like few-electron quantum dots can be expected to be much larger than the bulk 2DEG screening length (about $40\,{\rm nm}$ for GaAs). Therefore, $\lambda$ is large compared to the size of the coupled system, $\lambda\gg 2a\approx 40\,{\rm nm}$ for small dots, and we will consider the limit of unscreened Coulomb interaction ($\lambda/a\gg 1$). At low temperatures $k_{B}T\ll \hbar\omega_0$ we are allowed to restrict our analysis to the two lowest orbital eigenstates of $H_{\rm orb}$, leaving us with a symmetric (spin-singlet) and an antisymmetric (three triplets $T_{0}$, $T_{\pm}$) orbital state. In this reduced (four-dimensional) Hilbert space, $H_{\rm orb}$ can be replaced by the effective Heisenberg spin Hamiltonian Eq. (\[Heisenberg\]) where the exchange coupling $J=\epsilon_{\rm t}-\epsilon_{\rm s}$ is given by the difference between the triplet and singlet energy. We make use of the analogy between atoms and quantum dots (artificial atoms) and calculate $\epsilon_{\rm t}$ and $\epsilon_{\rm s}$ with variational methods similar to the ones used in molecular physics.
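A quick numerical check (ours) that the quartic potential above indeed reduces, near $x=\pm a$, to an isotropic harmonic well of frequency $\omega_0$: the curvatures at the minimum should both equal $m\omega_0^2$.

```python
import numpy as np

m, w0, a = 1.0, 1.0, 10.0     # units with a_B = 1, so a >> a_B

def V(x, y):
    """Quartic double-well confinement of the two laterally coupled dots."""
    return 0.5 * m * w0**2 * ((x**2 - a**2)**2 / (4.0 * a**2) + y**2)

# central-difference curvatures at the right minimum (x, y) = (a, 0)
h = 1e-4
d2x = (V(a + h, 0.0) - 2.0 * V(a, 0.0) + V(a - h, 0.0)) / h**2
d2y = (V(a, h) - 2.0 * V(a, 0.0) + V(a, -h)) / h**2
print(d2x, d2y)               # both ~ 1.0 = m*w0**2
```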
With the Heitler-London approximation using single-dot ground-state orbitals we find[@Burkard], $$\begin{aligned} \label{J} J &=& \frac{\hbar\omega_0}{\sinh\left(2d^2\left(2b-\frac{1}{b}\right)\right)} \Bigg\{ \frac{3}{4b}\left(1+bd^2\right)\\ \nonumber && + c\sqrt{b} \left[e^{-bd^2} \, I_0\left(bd^2\right) - e^{d^2 (b-1/b)}\, I_0\left(d^2\left(b-\frac{1}{b}\right)\right)\right] \Bigg\},\end{aligned}$$ where we introduce the dimensionless distance $d=a/a_{\rm B}$ and the magnetic compression factor $b=B/B_0=\sqrt{1+\omega_L^2/\omega_0^2}$, where $\omega_L=eB/2mc$ is the Larmor frequency. ${\rm I_0}$ denotes the zeroth-order modified Bessel function. The first term in Eq. (\[J\]) comes from the confinement potential. The terms proportional to $c=\sqrt{\pi/2}(e^2/\kappa a_{\rm B})/\hbar\omega_0$ are due to the Coulomb interaction $C$, where the exchange term appears with a minus sign. Note that typically $|J/\hbar\omega_0|\ll 1$, which makes the exclusive use of ground-state single-dot orbitals in the Heitler-London ansatz a self-consistent approach. The exchange $J$ is given as a function of $B$ and $d$ in Fig. \[Jplots\]. We observe that $J>0$ for $B=0$, which is generally true for a two-particle system with time reversal invariance. The most remarkable feature of $J(B)$, however, is the change of sign from positive (ferromagnetic) to negative (antiferromagnetic), which occurs at some finite $B$ over a wide range of parameters $c$ and $a$. This singlet-triplet crossing is caused by the long-range Coulomb interaction and is therefore absent in the standard Hubbard model that takes into account only short-range interactions and, in the limit $t/U\ll 1$, is given by $J=4t^2/U>0$ (see Fig. \[Jplots\]). Large magnetic fields ($b\gg 1$) and/or large interdot distances ($d\gg 1$) reduce the overlap between the dot orbitals leading to an exponential decay of $J$ contained in the $1/\sinh$ prefactor in Eq. (\[J\]).
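The Heitler-London exchange is easy to evaluate numerically. In the sketch below (our illustration; the parameter values $d=0.7$, $c=2.4$ are merely of the order used in [@Burkard], and the field dependence is written through the combinations $bd^2$ and $d^2(b-1/b)$ that control the asymptotics discussed in the text), scipy's `i0` is the modified Bessel function ${\rm I_0}$; the sign change of $J$ with increasing field marks the singlet-triplet crossing:

```python
import numpy as np
from scipy.special import i0    # zeroth-order modified Bessel function I_0

def J_over_hw0(d, b, c):
    """Heitler-London exchange J in units of hbar*omega0:
    d = a/a_B, b = magnetic compression factor, c = Coulomb strength."""
    pref = 1.0 / np.sinh(2.0 * d**2 * (2.0 * b - 1.0 / b))
    conf = 3.0 / (4.0 * b) * (1.0 + b * d**2)
    coul = c * np.sqrt(b) * (np.exp(-b * d**2) * i0(b * d**2)
                             - np.exp(d**2 * (b - 1.0 / b)) * i0(d**2 * (b - 1.0 / b)))
    return pref * (conf + coul)

# singlet-triplet crossing: ferromagnetic (J > 0) at B = 0 (b = 1),
# antiferromagnetic (J < 0) at large field
print(J_over_hw0(0.7, 1.0, 2.4) > 0, J_over_hw0(0.7, 3.0, 2.4) < 0)   # True True
```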
This exponential suppression is partly compensated by the exponentially growing exchange term $\propto \exp(2d^2(b-1/b))$. As a consequence, $J$ decays exponentially as $\exp(-2d^2b)$ for large $b$ or $d$. Thus, $J$ can be tuned through zero and then exponentially suppressed to zero by a magnetic field in a very efficient way (exponential switching is highly desirable to minimize gate errors). Further, working around the singlet-triplet crossing provides a smooth exchange switching, requiring only small local magnetic fields. Qualitatively similar results are obtained[@Burkard] when we extend the Heitler-London result by taking into account higher levels and double occupancy of the dots (using a Hund-Mulliken approach). In the absence of tunneling ($J=0$) direct Coulomb interaction between the electron charges can still be present. However, the spins (qubits) remain unaffected provided the spin-orbit coupling is sufficiently small, which is the case for s-wave electrons in GaAs structures with unbroken inversion symmetry. Finally, we note that a spin coupling can also be achieved on a long distance scale by using a cavity-QED scheme[@Imamoglu] or superconducting leads to which the quantum dots are attached[@CBL]. Vertically coupled quantum dots ------------------------------- We also investigated vertically coupled quantum dots[@vertical]. This kind of coupling can be implemented with multilayer self-assembled quantum dots[@luyken] as well as with etched mesa heterostructures[@austing]. We model the vertically coupled dot system by a potential $V=V_l+V_v$ where $V_{l}$ describes the parabolic lateral confinement and $V_{v}$ models the vertical dot coupling assumed to be a quartic potential similar to the one introduced above for the lateral coupling. We allow for different dot sizes $a_{{\rm B}\pm}=\sqrt{\hbar/m\alpha_{0\pm}\omega_z}$ with $\omega_{z}$ being the vertical confinement (see Fig.
\[vdots\]), implying an effective Bohr radius $a_{\rm B}=\sqrt{\hbar/m\omega_z}$ and a dimensionless interdot distance $2d = 2a/a_{\rm B}$. By applying an in-plane electric field $E_\parallel$ (see Fig. \[vdots\]), an interesting new switching mechanism arises. The dots are shifted parallel to the field by $\Delta x_\pm =E_\parallel/E_0\alpha_{0\pm}^2$, where $E_0=\hbar\omega_z/ea_B$. Thus, the larger dot is shifted a greater distance $\Delta x_{-}>\Delta x_{+}$, and so the mean distance between the electrons grows as $d'=\sqrt{d^2+A^2(E_\parallel/E_0)^2}>d$, taking $A=(\alpha_{0+}^2-\alpha_{0-}^2)/2\alpha_{0+}^2\alpha_{0-}^2$. Since the exchange coupling $J$ is exponentially sensitive to the interdot distance $d'$ (see Eq. (\[J\])), we have another exponential switching mechanism for quantum gate operations at hand. Coupling two spins by superexchange ----------------------------------- There is a principal problem if one wants to couple two “extended" dots whose energy levels are closely spaced (i.e. smaller than $k_BT$), as would be the case if there is a sizable distance between the two confined qubits before the barrier is lowered. In this case, the singlet-triplet splitting becomes vanishingly small, and it would not take much excitation energy to reach states which are not entangled at all. In other words, the adiabatic switching time[@Burkard], which is proportional to the inverse level spacing, becomes arbitrarily large. A better scenario for coupling the two spin-qubits is to make use of a superexchange mechanism to obtain a Heisenberg interaction[@Loss97]. Consider three aligned quantum dots where the middle dot is empty and so small that only its lowest levels will be occupied by 1 or 2 electrons in a virtual hopping process (see Fig. 3). The left and right dots can be much larger but still small enough such that the Coulomb charging energies $U_{L}\approx U_{R}$ are high enough (compared to $k_BT$) to suppress any double occupancy.
Let us assume now that the middle dot has energy levels higher than the ground states of the right and left dots, assumed to be approximately the same. These levels include the single-particle energy (set to zero) and the Coulomb charging energy $N^2e^2/2C$, with $N$ the number of electrons and $C$ the capacitance of the middle dot; thus the ground state energy of the middle dot is $0$ when empty, $\epsilon=e^2/2C$ for one electron, and $4\epsilon$ for 2 electrons. The tunnel coupling between the dots is denoted by $t_{0}$. Now, there are two types of virtual processes possible which couple the spins, but only one is dominant. First, the electron of the left (right) dot hops on the middle dot, and then the electron from the right (left) dot hops on the [*same*]{} level on the middle dot; thus, due to the Pauli principle, the two electrons on the middle dot form a singlet, giving the desired entanglement. Then they hop off again into the left and right dots, respectively. (Note that $U$ must be larger than $k_{B}T$, otherwise real processes involving 2 electrons in the left or right dot will be allowed.) It is not difficult to see that this virtual process leads to an effective Heisenberg exchange interaction with exchange constant $J=4t_{0}^4/4\epsilon^3$, where the virtual energy denominators follow the sequence $1/\epsilon\rightarrow 1/4\epsilon\rightarrow 1/\epsilon$. In the second type of virtual process the left (right) electron hops via the middle dot into the right (left) dot and forms a singlet there, giving $J=4t_{0}^4/U_{R}\epsilon^2$. However, this process has vanishing weight because there are also many nearby states available in the outer dots for which no spin correlation is required by the Pauli principle. Thus, most of the virtual processes, for which we have 2 electrons in the left (right) dot, do not produce spin correlations, and we can therefore neglect these virtual processes of the second type altogether.
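The fourth-order estimates above are simple arithmetic; a small sketch (our illustration, with purely illustrative values of $t_0$, $\epsilon$, and $U_R$) of the two exchange constants:

```python
def J_dominant(t0, eps):
    """Dominant superexchange process, energy denominators
    eps -> 4*eps -> eps, giving J = 4 t0^4 / (4 eps^3)."""
    return 4.0 * t0**4 / (4.0 * eps**3)

def J_second(t0, eps, U_R):
    """Second type of virtual process, J = 4 t0^4 / (U_R eps^2); its
    weight vanishes because no spin correlation is enforced by the
    Pauli principle in the outer dots."""
    return 4.0 * t0**4 / (U_R * eps**2)
```

For $t_0\ll\epsilon$ both constants are fourth order in the tunnel coupling and small compared to the charging scales, consistent with the perturbative virtual-hopping picture.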
It should be possible to create ferroelectrically defined nanostructures for which superexchange is the dominant mechanism for coupling neighboring electrons. The geometry will resemble closely that of Fig. 3, except that the central barrier becomes a narrow well. Single-Spin Rotations ===================== In order to perform one-qubit gates, single-spin rotations are required. This is done by exposing a single spin to a time-varying Zeeman coupling $(g\mu_B {\bf S}\cdot {\bf B})(t)$ [@Burkard], which can be controlled through both the magnetic field ${\bf B}$ and/or the g-factor $g$. We have proposed a number of possible implementations[@Loss97; @Burkard; @MMM2000; @BEL] for spin rotations: Since only relative phases between qubits are relevant, we can apply a homogeneous ${\bf B}$-field rotating all spins at once. A local change of the Zeeman coupling is then possible by changing the Larmor frequency $\omega_{L}=g\mu_{B}B/\hbar$. The equilibrium position of an electron can be changed through electrical gating; therefore, if the electron wavefunction is pushed into a region with different magnetic field strength or different (effective) g-factor, the relative phase of such an electron becomes $\phi = (g'B'-gB)\mu_B\tau/2\hbar$. Regions with an increased magnetic field can be provided by a magnetic (dot) material, while an effective magnetic field can be produced e.g. with dynamically polarized nuclear spins (Overhauser effect)[@Burkard]. We shall now explain a concept for using g-factor-modulated materials[@MMM2000; @BEL]. In bulk semiconductors the free-electron value of the Landé g-factor $g_0=2.0023$ is modified by spin-orbit coupling. Similarly, the g-factor can be drastically enhanced by doping the semiconductor with magnetic impurities[@Ohno; @Fiederling]. In confined structures such as quantum wells, wires, and dots, the g-factor is further modified and becomes sensitive to an external bias voltage[@Ivchenko].
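The gating time implied by the relative phase $\phi = (g'B'-gB)\mu_B\tau/2\hbar$ can be estimated numerically; the following sketch is our rough estimate (the value $g'B'-gB = 1\,$T is an assumption for illustration, not a figure from the text):

```python
import math

MU_B = 5.78838e-5   # Bohr magneton, eV/T
HBAR = 6.58212e-16  # reduced Planck constant, eV*s

def relative_phase(delta_gB, tau):
    """phi = (g'B' - gB) mu_B tau / (2 hbar); delta_gB is the difference
    of Zeeman couplings in tesla (g-factor absorbed), tau in seconds."""
    return delta_gB * MU_B * tau / (2.0 * HBAR)

def phase_pi_time(delta_gB):
    """Gating time tau that accumulates a relative phase of pi
    (a spin flip in the rotating frame)."""
    return 2.0 * math.pi * HBAR / (delta_gB * MU_B)
```

For $g'B'-gB\approx 1\,$T this gives $\tau\sim 70\,$ps, far shorter than the expected spin relaxation times quoted later.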
We have numerically analyzed a system with a layered structure (AlGaAs-GaAs-InAlGaAs-AlGaAs), in which the effective g-factor of electrons is varied by shifting their equilibrium position from one layer to another by electrical gating[@Ensslin2]. We have found that in this structure the effective g-factor can be changed by about $\Delta g_{\rm eff}\approx 1$ [@BEL]. Alternatively, one can use electron-spin-resonance (ESR) techniques [@Burkard] to perform single-spin rotations: e.g. if we want to flip a certain qubit (say from $|\uparrow\rangle$ to $|\downarrow\rangle$), we apply an ac magnetic field perpendicular to the $\uparrow$-axis that matches the Larmor frequency of that particular electron. Due to paramagnetic resonance[@Shankar] the spin can flip. Furthermore, localized magnetic fields can be generated with the magnetic tip of a scanning force microscope, a magnetic disk writing head, by placing the dots above a grid of current-carrying wires, or by placing a small wire coil above the dot, etc. Read-Out of a Single Spin ========================= The final step of each (quantum) computation consists in reading out the state of each qubit, i.e. whether the electron spin is in the $|\uparrow\rangle$ or $|\downarrow\rangle$ state. It is very hard to detect an electron spin directly via its tiny magnetic moment (of the order of $\mu_{B}$). We proposed several devices for read-out, such as tunneling of the electron into a supercooled paramagnetic dot[@Loss97; @MMM2000], thereby inducing a magnetization nucleation from the metastable phase into a ferromagnetic domain. The domain’s magnetization direction is along the measured spin and can be detected by conventional methods; this provides a 75%-reliable result for the read-out of the electron spin. Another possibility is to use a spin-selective tunnel barrier (conventional spin filter) that lets only one spin direction pass.
If an electron passes the barrier to enter another dot, an electrometer can detect the charge[@Loss97]. Quantum Dot As Spin Filter and\ Read-Out/Memory Device ------------------------------- We recently proposed[@Recher] another setup, a quantum dot attached to incoming and outgoing current leads $l=1,2$, that can work either as a spin filter, as a read-out device, or as a spin memory where the spin stores the information. A new feature of this proposal is that we lift the spin degeneracy with [*different*]{} Zeeman splittings for the dot and the leads, e.g. by using materials with different effective g-factors for leads and dot[@Recher]. This results in Coulomb blockade oscillation peaks and spin-polarized currents which are uniquely associated with the spin state on the dot. The setup is described by a standard tunneling Hamiltonian $H_0+H_T$ [@Mahan], where $H_0=H_L+H_D$ describes the leads and the dot. $H_D$ includes the charging and interaction energies of the electrons in the dot as well as their Zeeman energy $\pm g\mu_B B/2$ in an external magnetic field ${\bf B}$. Tunneling between leads and the dot is described by $H_T=\sum_{l,k,p,\sigma}t_{lp}c_{lk\sigma}^{\dag}d_{p\sigma}+{\rm h.c.}$, where $c_{lk\sigma}$ annihilates electrons with spin $\sigma$ and momentum $k$ in lead $l$ and $d_{p\sigma}$ annihilates electrons in the dot. We work in the Coulomb blockade regime[@kouwenhoven] where the charge on the dot is quantized. We use a stationary master equation approach[@kouwenhoven; @Recher] for the reduced density matrix of the dot and calculate the transition rates in a “golden-rule” approach up to 2nd order in $H_T$. The first-order contribution to the current is the sequential tunneling (ST) current $I_s$[@kouwenhoven], where the number of electrons on the dot fluctuates and thus the processes of an electron tunneling from the lead onto the dot and vice versa are allowed by energy conservation.
The second-order contribution is the cotunneling (CT) current $I_c$[@averinnazarov], where charge is transported over intermediate virtual states of the dot. We now consider a system where the Zeeman splitting in the leads is negligible (i.e. much smaller than the Fermi energy) while on the dot it is given as $\Delta_z = \mu_B |gB|$. We assume a small bias $\Delta\mu = \mu_1-\mu_2 >0$ between the leads at chemical potential $\mu_{1,\,2}$ and low temperatures so that $\Delta\mu,\, k_{B}T < \delta$, where $\delta$ is the characteristic energy-level distance on the dot. First we tune the system to the ST resonance $\mu_{1}>\Delta E>\mu_{2}$, where the number of electrons can fluctuate between $N$ and $N+1$. $\Delta E$ is the energy difference between the $N+1$- and $N$-particle ground states (GS) of the dot. We first consider a quantum dot with $N$ odd and total spin $s=1/2$, with the $N$-particle GS being $|\uparrow\rangle$ with energy $E_{\uparrow}=0$. In this state the dot can receive an electron from the leads and, depending on the spin of the incoming electron, form a singlet $|S\rangle$ with energy $E_{S}$ (for spin down) or a triplet $|T_{+}\rangle$ with energy $E_{T_{+}}$ (for spin up). The singlet is (usually) the GS for $N$ even, whereas the three triplets $|T_{\pm}\rangle$ and $|T_{0}\rangle$ are excited states. In the regime $E_{T_{+}}-E_{S},\,\Delta_{z}>\Delta\mu,\,k_{B}T$, energy conservation only allows ground state transitions. Thus, spin-up electrons are not allowed to tunnel from lead $1$ via the dot into lead $2$, since this would involve virtual states $|T_{+}\rangle$ and $|\downarrow\rangle$, and so we have $I_s(\uparrow)=0$ for ST.
However, spin-down electrons may pass through the dot in a two-step process: an incoming spin-down electron from lead 1 joins the spin-up electron on the dot to form the singlet, $\downarrow_1|\!\uparrow\rangle\to|\!\uparrow\downarrow\rangle$, and the spin-down electron then tunnels off the dot into lead 2, $|\!\uparrow\downarrow\rangle\to|\!\uparrow\rangle\downarrow_2$, restoring the dot ground state. (In the original figures the state of the quantum dot is drawn inside a circle, with the lead states to the left and right of it.) This leads to a [*spin-polarized*]{} ST current $I_s = I_s(\downarrow)$, which we have calculated as[@Recher] $$\begin{aligned} && I_s(\downarrow)/I_0=\theta(\mu_1-E_S)-\theta(\mu_2-E_S), \quad k_B T<\Delta\mu , \label{eqnSmallT} \\ && I_s(\downarrow)/I_0= \frac{\Delta\mu}{4k_BT}\cosh^{-2}\left[\frac{E_S-\mu}{2k_BT}\right], \quad k_BT >\Delta\mu, \label{eqnLargeT}\end{aligned}$$ where $\mu = (\mu_1+\mu_2)/2$ and $I_0=e\gamma_1\gamma_2/(\gamma_1+\gamma_2)$. Here $\gamma_l=2\pi\nu|A_{ln'n}|^2$ is the tunneling rate between lead $l$ and the dot, where $n$ and $n'$ denote the $N$- and $N+1$-particle eigenstates of $H_{D}$ involved in the tunnel process. The dependence of $A_{ln'n}=\sum_{p\sigma}t_{lp}\langle n'|d_{p\sigma}| n\rangle$ on $n$ and $n'$ is weak compared to the resonant character of the tunneling current considered here[@Recher]. Similarly, for $N$ even we find $I_s(\downarrow)=0$, while for $I_s(\uparrow)$ a result similar to Eqs. (\[eqnSmallT\]), (\[eqnLargeT\]) holds[@Recher].
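Eqs. (\[eqnSmallT\]) and (\[eqnLargeT\]) are straightforward to evaluate; a small sketch (our notation, illustrative units) of $I_s(\downarrow)/I_0$ in the two temperature regimes:

```python
import math

def ist_low_T(mu1, mu2, E_S):
    """Eq. (eqnSmallT), k_B T < Delta mu: current flows iff the singlet
    level E_S lies inside the bias window [mu2, mu1]."""
    theta = lambda x: 1.0 if x > 0 else 0.0
    return theta(mu1 - E_S) - theta(mu2 - E_S)

def ist_high_T(mu1, mu2, E_S, kT):
    """Eq. (eqnLargeT), k_B T > Delta mu: thermally broadened resonance
    centred at E_S = mu = (mu1 + mu2)/2, of height Delta mu / (4 k_B T)."""
    mu = 0.5 * (mu1 + mu2)
    return (mu1 - mu2) / (4.0 * kT) / math.cosh((E_S - mu) / (2.0 * kT)) ** 2
```

The low-temperature form is a window function of width $\Delta\mu$, while the high-temperature form is a resonance of height $\Delta\mu/4k_BT$ and thermal width.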
Even though $I_s$ is completely spin-polarized, a leakage current of opposite polarization arises through cotunneling processes[@Recher]; still, the leakage is small, and the efficiency of spin filtering in the sequential regime, for $\Delta_z<|E_{T_+}-E_S|$, becomes[@Recher] $$\label{efficiencyST} I_s(\downarrow)/I_c(\uparrow)\sim \frac{\Delta_z^2}{(\gamma_1+\gamma_2)\max\{k_BT,\Delta\mu\}},$$ and equivalently for $I_s(\uparrow)/I_c(\downarrow)$ at the even-to-odd transition. In the ST regime we have $\gamma_i< k_{B}T,\Delta\mu$; thus, for $k_{B}T,\Delta\mu<\Delta_z$, we see that the spin filtering is very efficient. Above or below an ST resonance the system is in the CT regime, where the current is solely due to CT processes. Again, in the regime $E_{T_{+}}-E_{S},\,\Delta_{z}>\Delta\mu,\,k_{B}T$ the current is [*spin-polarized*]{} and the spin filter also works in the CT regime[@Recher]. We discuss now the opposite case, where the leads are fully spin-polarized with a much smaller Zeeman splitting on the dot[@Recher]. Such a situation can be realized with magnetic semiconductors (with effective g-factors reaching 100 [@Fiederling]), where spin injection into GaAs has recently been demonstrated for the first time[@Fiederling; @Ohno]. Another possibility would be to work in the quantum Hall regime, where spin-polarized edge states are coupled to a quantum dot[@Sachrajda]. In this setup the device can be used as a read-out for the spin state on the dot. Assume now that the spin polarization in both leads is up, and the ground state of the dot contains an odd number of electrons with total spin $1/2$. Now the leads can [*provide*]{} and [*take up*]{} only spin-up electrons. As a consequence, an ST current will only be possible if the dot state is $|\downarrow\rangle$ (to form a singlet with the incoming electron, whereas the triplet is excluded by energy conservation).
Hence, the current is much larger for the spin on the dot being in $|\downarrow\rangle$ than it is for $|\uparrow\rangle$. Again, there is a small CT leakage current for the dot state $|\uparrow\rangle$, with a ratio of the two currents given by Eq. (\[efficiencyST\]) (assuming $E_{S}>\Delta_{z}$). Thus, we can probe (read out) the spin state on the quantum dot by measuring the current which passes through the dot. Given that the ST current is typically on the order of $0.1-1$ nA [@kouwenhoven], we can estimate the read-out frequency $I/2\pi e$ to be on the order of $0.1-1$ GHz. Combining this with the initialization and read-in techniques, i.e. ESR pulses to switch the spin state, we have a [*spin memory*]{} at the ultimate single-spin limit, whose relaxation time is just the spin relaxation time. This relaxation time can be expected to be on the order of $100$’s of nanoseconds[@Kikkawa], and can be directly measured via the currents when they switch from high to low due to a spin flip on the dot[@Recher]. Conclusions =========== We have described a scalable scenario for the implementation of a solid state quantum computer based on the electron spin in quantum dots as the qubit. We have shown how electron spins can be manipulated through their charge (orbital) degrees of freedom to implement single- and two-qubit gates, as well as the possibility of reading in/out a single qubit (spin). G. Prinz, Phys. Today [**45**]{}(4), 58 (1995); G. A. Prinz, Science [**282**]{}, 1660 (1998). J.M. Kikkawa, I.P. Smorchkova, N. Samarth, and D.D. Awschalom, Science [**277**]{}, 1284 (1997); J.M. Kikkawa and D.D. Awschalom, Phys. Rev. Lett. [**80**]{}, 4313 (1998); D.D. Awschalom and J.M. Kikkawa, Phys. Today [**52**]{}(6), 33 (1999). R. Fiederling [*et al.*]{}, Nature [**402**]{}, 787 (1999). Y. Ohno [*et al.*]{}, Nature [**402**]{}, 790 (1999). F.G. Monzon and M.L. Roukes, J. Magn. Magn. Mater. [**198**]{}, 632 (1999). S. Lüscher [*et al.*]{}, cond-mat/0002226. D. Loss and D.P.
DiVincenzo, Phys. Rev. A [**57**]{}, 120 (1998); cond-mat/9701055. A. Steane, Rep. Prog. Phys. [**61**]{}, 117 (1998). D.P. DiVincenzo and D. Loss, J. Magn. Magn. Mater. [**200**]{}, 202 (1999); cond-mat/9901137. C. H. Bennett and D. P. DiVincenzo, Nature [**404**]{}, 247 (2000). P.W. Shor, in [*Proc. 35th Symposium on the Foundations of Computer Science*]{}, (IEEE Computer Society Press), 124 (1994). L.K. Grover, Phys. Rev. Lett. [**79**]{}, 325 (1997). J.I. Cirac and P. Zoller, Phys. Rev. Lett. [**74**]{}, 4091 (1995); C. Monroe *et al.*, *ibid.* [**75**]{}, 4714 (1995). Q.A. Turchette [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 4710 (1995). D. Cory, A. Fahmy, and T. Havel, Proc. Nat. Acad. Sci. U.S.A. [**94**]{}, 1634 (1997); N. A. Gershenfeld and I. L. Chuang, Science [**275**]{}, 350 (1997). B. Kane, Nature [**393**]{}, 133 (1998). A. Shnirman, G. Schön, and Z. Hermon, Phys. Rev. Lett. [**79**]{}, 2371 (1997). D.V. Averin, Solid State Commun. [**105**]{}, 659 (1998). L.B. Ioffe [*et al.*]{}, Nature [**398**]{}, 679 (1999). T.P. Orlando [*et al.*]{}, Phys. Rev. B [**60**]{}, 15398 (1999). L. P. Kouwenhoven [*et al.*]{}, in Proceedings of the ASI on *Mesoscopic Electron Transport*, eds. L.L. Sohn, L.P. Kouwenhoven, and G. Schön (Kluwer, 1997). S. Tarucha [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 3613 (1996). F.R. Waugh [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 705 (1995); C. Livermore [*et al.*]{}, Science [**274**]{}, 1332 (1996). T. H. Oosterkamp [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 4951 (1998). R.H. Blick [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 4032 (1998); [*ibid.*]{} [**81**]{}, 689 (1998). T.H. Oosterkamp [*et al.*]{}, Nature [**395**]{}, 873 (1998); I.J. Maasilta and V.J. Goldman, Phys. Rev. Lett. [**84**]{}, 1776 (2000). A. Barenco [*et al.*]{}, Phys. Rev. A [**52**]{}, 3457 (1995). G. Burkard, D. Loss, and D. P. DiVincenzo, Phys. Rev. B [**59**]{}, 2070 (1999). A. Imamoglu, D.D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M.
Sherwin, and A. Small, Phys. Rev. Lett. [**83**]{}, 4204 (1999). M.-S. Choi, C. Bruder, and D. Loss, cond-mat/0001011. G. Burkard, G. Seelig, and D. Loss, Phys. Rev. B [**62**]{}, 2581 (2000). R. J. Luyken [*et al.*]{}, preprint. D. G. Austing [*et al.*]{}, Physica B [**249-251**]{}, 206 (1998). G. Burkard, H.-A. Engel, and D. Loss, to appear in Fortschritte der Physik, special issue on *Experimental Proposals for Quantum Computation*, eds. S. Braunstein and K.L. Ho; cond-mat/0004182. E.L. Ivchenko, A.A. Kiselev, and M. Willander, Solid State Comm. [**102**]{}, 375 (1997). K. Ensslin, private communication. R. Shankar, [*Principles of Quantum Mechanics*]{}, Ch. 14, Plenum Press, New York, 1994. P. Recher, E.V. Sukhorukov, and D. Loss, cond-mat/0003089, to appear in Phys. Rev. Lett. G. D. Mahan, [*Many Particle Physics*]{}, 2nd Ed. (Plenum, New York, 1993). D. V. Averin and Yu. V. Nazarov, in [*Single Charge Tunneling*]{}, eds. H. Grabert and M. H. Devoret, NATO ASI Series B: Physics Vol. 294, Plenum Press, New York, 1992. M. Ciorga [*et al.*]{}, cond-mat/9912446.
--- abstract: 'We introduce the notion of a *weight-almost greedy* basis and show that a basis for a real Banach space is $w$-almost greedy if and only if it is both quasi-greedy and $w$-democratic. We also introduce the notion of *weight-semi-greedy* basis and show that a $w$-almost greedy basis is $w$-semi-greedy and that the converse holds if the Banach space has finite cotype.' address: - 'Department of Mathematics, University of South Carolina, Columbia, SC, USA' - 'Department of Mathematics, University of Illinois at Urbana–Champaign, Urbana, IL, USA; Institute of Mathematics and Informatics, Bulgarian Academy of Sciences' - 'Department of Mathematics, University of South Carolina, Columbia, SC, USA; Steklov Institute of Mathematics, Moscow, Russia; Lomonosov Moscow State University, Moscow, Russia' - 'Department of Mathematical Sciences, Northern Illinois University, Dekalb, IL, USA' author: - 'S. J. Dilworth' - Denka Kutzarova - Vladimir Temlyakov - Ben Wallis title: 'Weight-almost greedy bases' --- [^1] Introduction ============ Let $(e_n)_{n=1}^\infty$ be a normalized basis for a Banach space $X$ with biorthogonal functionals $(e_n^*)_{n=1}^\infty$. Konyagin and Temlyakov defined [@KT] the [**thresholding greedy algorithm (TGA)**]{} for $(e_n)_{n=1}^\infty$ as the sequence $(G_m)_{m=1}^\infty$ of functions $G_m:X\to X$ by letting $G_m(x)$ denote the vector obtained by taking the $m$ largest coefficients of $x=\sum e_n^*(x)e_n\in X$. The TGA provides a theoretical model for the thresholding procedure that is used in image compression. 
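For a finite coefficient sequence, the TGA is a one-line thresholding operation; the following minimal sketch (our illustration, not from [@KT]) computes the coefficients of $G_m(x)$:

```python
def greedy_sum(coeffs, m):
    """G_m(x): keep the m largest coefficients of x = sum_n e_n^*(x) e_n
    (in modulus) and zero out the rest; `coeffs` lists e_n^*(x)."""
    # indices sorted by decreasing modulus of the coefficient
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    keep = set(order[:m])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

For example, with coefficients $(0.1,-3,2,0.5)$ and $m=2$ the algorithm retains exactly the two largest entries $-3$ and $2$.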
They defined the basis $(e_n)$ to be [*greedy*]{} if the TGA is optimal in the sense that $G_m(x)$ is essentially the best $m$-term approximation to $x$ using the basis vectors, i.e. if there exists a constant $K$ such that for all $x \in X$ and $m \in \mathbb{N}$, we have $$\|x-G_m(x)\|{\ensuremath{\leqslant}}K\inf\{\|x-\sum_{n\in A}\alpha_ne_n\|:\ |A|=m,\ \alpha_n\in\mathbb R,\ n\in A\}.$$ They then showed that greedy bases can be simply characterized as unconditional bases with the additional property of being [*democratic*]{}, i.e. for some $\Delta>0$, we have $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}\Delta \|\sum_{n\in B}e_n\|\quad \text{whenever $|A|{\ensuremath{\leqslant}}|B|$}.$$ They also defined a basis to be *quasi-greedy* if there exists a constant $K$ such that $\|G_m(x)\| {\ensuremath{\leqslant}}K\|x\|$ for all $x \in X$ and $m \in \mathbb{N}$. Subsequently, Wojtaszczyk [@W] proved that these are precisely the bases for which the TGA merely converges, i.e. $\lim_{m\to\infty}G_m(x)=x$ for all $x\in X$. The class of *almost greedy* bases was introduced in [@DKKT]. Recall that the biorthogonal sequence is denoted by $(e_n^*)$. Then $(e_n)$ is almost greedy if there is a constant $K$ such that $$\|x-G_m(x)\|{\ensuremath{\leqslant}}K\inf\{\|x-\sum_{n\in A}e_n^*(x)e_n\|:\ |A|=m \}\qquad x\in X, \ m\in\mathbb N.$$ It was proved in [@DKKT] that $(e_n)$ is almost greedy if and only if $(e_n)$ is quasi-greedy and democratic. The class of *semi-greedy* bases was introduced in [@DKK]. A basis is semi-greedy if there is a constant $C$ such that for all $x \in X$ and $m \in \mathbb{N}$ $$\|x-\overline{G}_m(x)\|{\ensuremath{\leqslant}}C\inf\{\|x-\sum_{n\in A}\alpha_ne_n\|:\ |A|=m,\ \alpha_n\in\mathbb R,\ n\in A\},$$ where $\overline{G}_m(x)$ is the best $m$-term approximation to $x$ with the same (or possibly smaller) support as $G_m(x)$ (i.e., $\overline{G}_m(x)$ is supported on the basis vectors corresponding to the $m$ largest coefficients of $x$).
It was proved in [@DKK] that an almost greedy basis is semi-greedy and that the converse holds if $X$ has finite cotype. Later, Kerkyacharian, Picard and Temlyakov [@KPT] (see also [@Tbook], Section 1.3) defined the notion of a greedy basis with respect to a weight function, a *weight-greedy* basis, and proved a criterion similar to the one for greedy bases. Their generalization was inspired by research of Cohen, DeVore and Hochmuth [@CDH]. Recently, Berna and Blasco [@BB] showed that greedy bases can be characterized as those for which the error term of the $m$-greedy approximant is uniformly bounded by the best $m$-term approximation with respect to polynomials with constant coefficients, in the context of the weak greedy algorithm with weights. In this paper we introduce the notion of a *weight-almost greedy* basis and show that a basis for a real Banach space is $w$-almost greedy if and only if it is both quasi-greedy and $w$-democratic. From here onward, a [**weight**]{} $w=(w_i)_{i=1}^\infty$ will always be a sequence of positive real numbers. If $A\subset\mathbb{N}$ then we let $w(A)=\sum_{i\in A}w_i$ denote the [**$\boldsymbol{w}$-measure**]{} of $A$. Let us give a more precise definition of each $G_m(x)$. If $(a_i)_{i=1}^\infty\in c_0$ then we denote by $(a_i^*)_{i=1}^\infty$ the nonincreasing rearrangement of $(|a_i|)_{i=1}^\infty$. Given $x\in X$, set $a_i=e_i^*(x)$ and let $\tau$ be any permutation of $\mathbb{N}$ satisfying $|a_{\tau(i)}|=a_i^*$ for all $i\in\mathbb{N}$. Then $G_{m,\tau}(x)=\sum_{i=1}^me^*_{\tau(i)}(x)e_{\tau(i)}$. We next set $$\Lambda_{m,\tau,x}:=\{\tau(1),\cdots,\tau(m)\},$$ the set of indices of the coefficients associated with $G_{m,\tau}(x)$, called the [**support**]{} of $G_{m,\tau}(x)$. When $\tau$ is understood (or irrelevant) we write $G_m=G_{m,\tau}$ and $\Lambda_m=\Lambda_{m,\tau,x}$, and when $m$ and $x$ are also understood we write $\Lambda=\Lambda_{m,\tau,x}$.
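In computational terms, $\Lambda_{m,\tau,x}$ and its $w$-measure are obtained by sorting the coefficients; a short sketch (our notation, with 0-based indices):

```python
def greedy_support(coeffs, m):
    """Lambda_{m,tau,x}: indices of the m largest coefficients in modulus,
    for one fixed tie-breaking permutation tau (here: stable sort order)."""
    tau = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    return set(tau[:m])

def w_measure(A, w):
    """w(A) = sum over i in A of w_i, for a finite index set A."""
    return sum(w[i] for i in A)
```

The $w$-almost greedy condition then compares the error of the greedy approximant against all finite $A$ with $w(A)$ at most `w_measure(greedy_support(coeffs, m), w)`.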
As is usual, we write $$\mathbb{N}^m=\{A\subset\mathbb{N}: |A|=m\}\;\;\;\text{ and }\;\;\;\mathbb{N}^{<\infty}=\bigcup_{m=0}^\infty\mathbb{N}^m.$$ Throughout, $C$ and $D$ will denote constants in $[1,\infty)$. We say that a normalized basis $(e_n)_{n=1}^\infty$ for a Banach space $X$ is [**$\boldsymbol{w}$-almost greedy with constant $\boldsymbol{K}$**]{} whenever $$\|x-G_m(x)\|{\ensuremath{\leqslant}}K\inf\left\{\|x-\sum_{n\in A}e_n^*(x)e_n\|:A\in\mathbb{N}^{<\infty},\;w(A){\ensuremath{\leqslant}}w(\Lambda)\right\}$$ for all $m\in\mathbb{N}$ and $x\in X$, independent of the choice of $\tau$. \[x-bound\]In the above definition we can always take $A=\emptyset$ to obtain $$\inf\left\{\|x-\sum_{n\in A}e_n^*(x)e_n\|:A\in\mathbb{N}^{<\infty},\;w(A){\ensuremath{\leqslant}}w(\Lambda)\right\}{\ensuremath{\leqslant}}\|x\|.$$ A normalized basis $(e_n)_{n=1}^\infty$ is [**$\boldsymbol{w}$-democratic with constant $\boldsymbol{D}$**]{} whenever $w(A){\ensuremath{\leqslant}}w(B)$ for $A,B\in\mathbb{N}^{<\infty}$ implies $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}D\|\sum_{n\in B}e_n\|.$$ \[subsequence\]Observe that a subsequence $(e_{n_k})_{k=1}^\infty$ of a $D$-$w$-democratic sequence $(e_n)_{n=1}^\infty$ is $D$-$w'$-democratic, where $w'=(w_{n_k})_{k=1}^\infty$ is the corresponding subsequence of $w$. Indeed, if $A\in\mathbb{N}^{<\infty}$ then we write $A'=\{n_a:a\in A\}$ so that $w'(A)=w(A')$. When $w=(1,1,1,\cdots)$, we have $w(A)=|A|$ for each $A\in\mathbb{N}^{<\infty}$. In this case, a $w$-almost greedy (resp., $w$-democratic) basis is called, simply, [**almost greedy**]{} (resp., [**democratic**]{}). Finally, let us remark that we will often be forced to work with real Banach spaces, as most of the proofs are invalid in the complex setting. $w$-almost greedy bases are quasi-greedy and $w$-democratic =========================================================== \[quasi-greedy\]Every $K$-$w$-almost greedy basis is $(K+1)$-quasi-greedy. Fix $m$ and $x$.
By Remark \[x-bound\] together with $K$-$w$-almost-greediness, $$\|x-G_m(x)\|{\ensuremath{\leqslant}}K\|x\|$$ and hence $$\|G_m(x)\|{\ensuremath{\leqslant}}\|x-G_m(x)\|+\|x\|{\ensuremath{\leqslant}}(K+1)\|x\|.$$ Let us give two facts from previous literature, which will be required momentarily. \[constant-unconditional\] If $(e_n)_{n=1}^\infty$ is a $K$-quasi-greedy basis for a real Banach space then $$\frac{1}{2K}\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}\|\sum_{n\in A}{\ensuremath{\varepsilon}}_ne_n\|{\ensuremath{\leqslant}}2K\|\sum_{n\in A}e_n\|$$ for every choice of signs $({\ensuremath{\varepsilon}}_n)_{n=1}^\infty\in\{\pm 1\}^\mathbb{N}$ and every $A\in\mathbb{N}^{<\infty}$. In this case we have $$\|\sum_{n\in A}a_ne_n\|{\ensuremath{\leqslant}}2K\|\sum_{n\in A}e_n\|\cdot\|(a_n)_{n\in A}\|_\infty$$ for every $(a_n)_{n=1}^\infty\in\mathbb{R}^\mathbb{N}$ and $A\in\mathbb{N}^{<\infty}$. \[Lemma-2.2\] If $(e_n)_{n=1}^\infty$ is a $K$-quasi-greedy basis for a real Banach space then $$\|\sum_{n\in A}e_n\|\cdot\min_{n\in A}|a_n|{\ensuremath{\leqslant}}4K^2\|\sum_{n\in A}a_ne_n\|$$ for every $(a_n)_{n=1}^\infty\in\mathbb{R}^\mathbb{N}$ and $A\in\mathbb{N}^{<\infty}$. The reader can also find the above Propositions in [@Tsparse], p.65. \[w-democratic\]Every normalized $K$-$w$-almost greedy basis for a real Banach space is $w$-democratic with constant ${\ensuremath{\leqslant}}K$. Let $A,B\subset\mathbb{N}^{<\infty}$ satisfy $w(A){\ensuremath{\leqslant}}w(B)$. Note that this also means $w(A\setminus B){\ensuremath{\leqslant}}w(B\setminus A)$. 
For $\delta>0$ we set $$y:=\sum_{n\in A}e_n+(1+\delta)\sum_{n\in B\setminus A}e_n\;\;\;\text{ and }\;\;\;m:=|B\setminus A|.$$ By $K$-$w$-almost-greediness, we now have the estimates $$\begin{aligned} \|\sum_{n\in A}e_n\| &=\|y-(1+\delta)\sum_{n\in B\setminus A}e_n\| \\&=\|y-G_m(y)\| \\&{\ensuremath{\leqslant}}K\|y-\sum_{n\in A\setminus B}e_n\| \\&=K\|\sum_{n\in B}e_n+\delta\sum_{n\in B\setminus A}e_n\| \\&{\ensuremath{\leqslant}}K\|\sum_{n\in B}e_n\|+K\delta\|\sum_{n\in B\setminus A}e_n\|.\end{aligned}$$ As $\delta>0$ was arbitrary, we have $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}K\|\sum_{n\in B}e_n\|.$$ \[T2.5\]If a basis $(e_n)_{n=1}^\infty$ for a real Banach space is both $K$-quasi-greedy and $D$-$w$-democratic, then it is $w$-almost greedy with constant ${\ensuremath{\leqslant}}8K^4D+K+1$. Fix $m$ and $x$, and suppose $A\in\mathbb{N}^{<\infty}$ satisfies $w(A){\ensuremath{\leqslant}}w(\Lambda)$. We claim that $$\|x-G_m(x)\|{\ensuremath{\leqslant}}(8K^4D+K+1)\cdot\|x-\sum_{n\in A}e_n^*(x)e_n\|,$$ so that, taking the infimum over all such $A$, the proof will be complete. We begin with the estimate $$\begin{aligned} \|x-G_m(x)\| &=\|x-\sum_{n\in\Lambda}e_n^*(x)e_n\| \\&{\ensuremath{\leqslant}}\|x-\sum_{n\in A}e_n^*(x)e_n\|+\|\sum_{n\in A}e_n^*(x)e_n-\sum_{n\in\Lambda}e_n^*(x)e_n\| \\&=\|x-\sum_{n\in A}e_n^*(x)e_n\|+\|\sum_{n\in A\setminus\Lambda}e_n^*(x)e_n-\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n\| \\&{\ensuremath{\leqslant}}\|x-\sum_{n\in A}e_n^*(x)e_n\|+\|\sum_{n\in A\setminus\Lambda}e_n^*(x)e_n\|+\|\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n\|.\end{aligned}$$ Observe that $w(A\setminus\Lambda){\ensuremath{\leqslant}}w(\Lambda\setminus A)$.
Thus, $$\begin{aligned} \|\sum_{n\in A\setminus\Lambda}e_n^*(x)e_n\| &{\ensuremath{\leqslant}}2K\|\sum_{n\in A\setminus\Lambda}e_n\|\cdot\|(e_n^*(x))_{n\in A\setminus\Lambda}\|_\infty &\text{(Proposition \ref{constant-unconditional})} \\&{\ensuremath{\leqslant}}2KD\|\sum_{n\in\Lambda\setminus A}e_n\|\cdot\|(e_n^*(x))_{n\in A\setminus\Lambda}\|_\infty &\text{(}D\text{-}w\text{-democracy)} \\&{\ensuremath{\leqslant}}2KD\|\sum_{n\in\Lambda\setminus A}e_n\|\cdot\min_{n\in\Lambda\setminus A}|e_n^*(x)| &\text{(definition of }\Lambda\text{)} \\&{\ensuremath{\leqslant}}8K^3D\|\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n\| &\text{(Proposition \ref{Lemma-2.2}).}\end{aligned}$$ Combining this with the last inequality gives $$\label{3}\|x-G_m(x)\|{\ensuremath{\leqslant}}\|x-\sum_{n\in A}e_n^*(x)e_n\|+(8K^3D+1)\|\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n\|.$$ Finally, we have $$\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n=G_s\left(x-\sum_{n\in A}e_n^*(x)e_n\right),\;\;\;s:=|\Lambda\setminus A|,$$ and therefore, by $K$-quasi-greediness, $$\|\sum_{n\in\Lambda\setminus A}e_n^*(x)e_n\|{\ensuremath{\leqslant}}K\|x-\sum_{n\in A}e_n^*(x)e_n\|.$$ We combine this with (\[3\]) to obtain $$\|x-G_m(x)\|{\ensuremath{\leqslant}}(8K^4D+K+1)\|x-\sum_{n\in A}e_n^*(x)e_n\|.$$ \[mainequivalence\] A basis for a real Banach space is $w$-almost greedy if and only if it is both quasi-greedy and $w$-democratic. Our proofs use only the following property of the $w$-measure: $$w(\emptyset) = 0\quad \text{and}\quad w(A) {\ensuremath{\leqslant}}w(B) \Rightarrow w(A \setminus B) {\ensuremath{\leqslant}}w(B \setminus A).$$ This property also holds for set functions more general than those given by a $w$-measure.
Indeed, here is an example of a strictly monotone set function $\nu$ defined on all finite subsets of $\mathbb{N}$ satisfying **Property (\*)**: $$\nu(\emptyset) = 0\quad \text{and}\quad \nu(A) {\ensuremath{\leqslant}}\nu(B) \Rightarrow \nu(A \setminus B) {\ensuremath{\leqslant}}\nu(B \setminus A)$$ for which there does not exist a weight $w$ such that $$\label{eq: equiv} \nu(A) {\ensuremath{\leqslant}}\nu(B) \Leftrightarrow w(A) {\ensuremath{\leqslant}}w(B).$$ First we define $\nu$ on the power set of $\{1,2,3\}$ by $\nu(\{1\}) =\nu(\{2\})=\nu(\{3\})=1/4$, $\nu(\{1,2\})=5/16$, $\nu(\{2,3\})=3/8$, $\nu(\{1,3\})=7/16$, $\nu(\{1,2,3\}) = 1/2$. It is easily seen that $\nu$ is strictly monotone and has Property (\*) (restricted to subsets of $\{1,2,3\}$). Note that $\nu$ is not equivalent to a weight $w$ because $\nu(\{1\}) =\nu(\{2\})=\nu(\{3\})$ would imply by (\[eq: equiv\]) that $w_1=w_2=w_3$, which would imply that $w(\{1,2\})=w(\{1,3\})=w(\{2,3\})$, which in turn would imply by (\[eq: equiv\]) that $\nu(\{1,2\})=\nu(\{1,3\})=\nu(\{2,3\}).$ Now we extend $\nu$ to all finite subsets $A$ of $\mathbb{N}$. Write $A = A_1 \cup A_2$ where $A_1= A \cap \{1,2,3\}$, $A_2=A\setminus A_1$. Define $\nu(A) = \nu(A_1) + |A_2|$. Clearly $\nu$ is strictly monotone. Let us check Property (\*). Suppose $\nu(A) {\ensuremath{\leqslant}}\nu(B)$, where $A = A_1 \cup A_2$ and $B = B_1 \cup B_2$. Then $\nu(A_1) + |A_2| {\ensuremath{\leqslant}}\nu(B_1) + |B_2|$. Now $|\nu(A_1) - \nu(B_1)| {\ensuremath{\leqslant}}1/2$ and hence $|A_2| {\ensuremath{\leqslant}}|B_2|$. Also $\nu(A \setminus B) = \nu(A_1 \setminus B_1) + |A_2 \setminus B_2|$ and $\nu(B \setminus A) = \nu(B_1 \setminus A_1) + |B_2 \setminus A_2|$. We consider two cases. First, if $|A_2| = |B_2|$, then $\nu(A_1) {\ensuremath{\leqslant}}\nu(B_1)$ and hence $\nu(A_1 \setminus B_1) {\ensuremath{\leqslant}}\nu(B_1 \setminus A_1)$ since $A_1,B_1 \subseteq \{1,2,3\}$.
But also $|A_2 \setminus B_2| = |B_2 \setminus A_2|$ and hence $\nu(A \setminus B) {\ensuremath{\leqslant}}\nu(B \setminus A)$. Secondly, if $|A_2| {\ensuremath{\leqslant}}|B_2| - 1$, then $|A_2 \setminus B_2| {\ensuremath{\leqslant}}|B_2 \setminus A_2|-1$ and hence $$\nu(A_1 \setminus B_1) + |A_2 \setminus B_2| {\ensuremath{\leqslant}}1/2 + |B_2 \setminus A_2|-1 < \nu(B_1 \setminus A_1) + |B_2 \setminus A_2|,$$ and hence $\nu(A \setminus B) {\ensuremath{\leqslant}}\nu(B \setminus A)$. Lebesgue-type inequalities {#LI} ========================== For $x\in X$ and $m\in \mathbb N$, let $$\tilde \sigma_m(x) := \inf\{\|x-\sum_{n\in A} e_n^*(x)e_n\|\,:\, |A|=m\}$$ denote the error of the expansional best $m$-term approximation of $x$ with respect to $E:=\{e_n\}_{n=1}^\infty$. The following Lebesgue-type inequality was obtained in [@DKKT]: if $E$ is $K$-quasi-greedy and $D$-democratic, then $$\|x-G_m(x)\| {\ensuremath{\leqslant}}C(K)D \tilde \sigma_m(x).$$ This inequality was slightly generalized in [@Tsparse], p.91, Theorem 3.5.2. Suppose that for a basis $E$ there exists an increasing sequence $v(m):=v(m,E)$ such that, for any two sets of indices $A$ and $B$ with $|A|{\ensuremath{\leqslant}}|B|{\ensuremath{\leqslant}}m$, we have $$\label{2.3} \|\sum_{n\in A} e_n\| {\ensuremath{\leqslant}}v(m) \|\sum_{n\in B} e_n\|.$$ The following statement is from [@Tsparse], p.91, Theorem 3.5.2. \[P2.4\] Let $E$ be a $K$-quasi-greedy basis of $X$ satisfying (\[2.3\]). Then, for each $x\in X$, $$\|x-G_m(x)\| {\ensuremath{\leqslant}}C(K)v(m) \tilde \sigma_m(x).$$ The above proof of Theorem \[T2.5\] gives us the following version of Proposition \[P2.4\] in the weighted case.
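As a sanity check, the set function $\nu$ constructed above is small enough to verify by exhaustive search; the sketch below restates its values on subsets of $\{1,2,3\}$ and confirms strict monotonicity and Property (\*) on that base case:

```python
from itertools import chain, combinations

# The set function nu from the example above, on subsets of {1, 2, 3}.
nu = {frozenset(): 0, frozenset({1}): 1/4, frozenset({2}): 1/4,
      frozenset({3}): 1/4, frozenset({1, 2}): 5/16, frozenset({2, 3}): 3/8,
      frozenset({1, 3}): 7/16, frozenset({1, 2, 3}): 1/2}

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations({1, 2, 3}, k) for k in range(4))]

# Strict monotonicity: nu(A) < nu(B) whenever A is a proper subset of B.
for A in subsets:
    for B in subsets:
        if A < B:
            assert nu[A] < nu[B]

# Property (*): nu(A) <= nu(B) implies nu(A \ B) <= nu(B \ A).
for A in subsets:
    for B in subsets:
        if nu[A] <= nu[B]:
            assert nu[A - B] <= nu[B - A]

print("nu is strictly monotone and satisfies Property (*) on {1, 2, 3}")
```

The non-equivalence to a weight is exactly the three-way tie argument in the text: any weight reproducing $\nu(\{1\})=\nu(\{2\})=\nu(\{3\})$ would force the three doubleton weights to agree, while the $\nu$-values $5/16$, $3/8$, $7/16$ do not.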
Suppose that for a given weight sequence $w=\{w_n\}_{n=1}^\infty$ and basis $E$ there exists an increasing function $v(u,w):=v(u,E,w)$ such that, for any two sets of indices $A$ and $B$ with $w(A){\ensuremath{\leqslant}}w(B){\ensuremath{\leqslant}}u$, we have $$\label{2.4} \|\sum_{n\in A} e_n\| {\ensuremath{\leqslant}}v(u,w) \|\sum_{n\in B} e_n\|.$$ For $x\in X$, $u>0$, and a given weight sequence $w$, denote $$\tilde \sigma^w_u(x) := \inf\{\|x-\sum_{n\in A} e_n^*(x)e_n\|\,:\, w(A){\ensuremath{\leqslant}}u\}.$$ \[T2.6\] Let $E$, a $K$-quasi-greedy basis of $X$, and a weight sequence $w$ satisfy (\[2.4\]). Then, for each $x\in X$, $$\label{2.5} \|x-G_m(x)\| {\ensuremath{\leqslant}}C(K)v(w(\Lambda_m),w) \tilde \sigma^w_{w(\Lambda_m)}(x),$$ where $\Lambda_m$ is from $$G_m(x) = \sum_{n\in \Lambda_m} e_n^*(x)e_n.$$ We note that the left hand side of (\[2.5\]) does not depend on the weight sequence $w$, but the right hand side of (\[2.5\]) does depend on $w$. Therefore, our setting with weights allows us to estimate $\|x-G_m(x)\|$ by choosing different weight sequences $w$ and optimizing over them. Theorem \[T2.6\] gives a Lebesgue-type inequality in terms of the expansional best $m$-term approximation. We now consider the question of replacing the expansional best $m$-term approximation by the best $m$-term approximation $$\sigma^w_u(x) := \inf\{\|x-\sum_{n\in A} c_ne_n\|\,:\, c_n, n\in A, \, w(A){\ensuremath{\leqslant}}u\}$$ where the infimum is taken over all sets of indices $A$ with $w$-measure $w(A){\ensuremath{\leqslant}}u$ and all coefficients $c_n$, $n\in A$. In the case $w_i=1$, $i\in \mathbb N$, we drop $w$ from the notation. It is clear that $$\sigma^w_u(x,E) {\ensuremath{\leqslant}}\tilde\sigma^w_u(x,E).$$ It is also clear that for an unconditional basis $E$ we have $$\tilde\sigma^w_u(x,E) {\ensuremath{\leqslant}}C(X,E)\sigma^w_u(x,E).$$ We recall some known useful properties of quasi-greedy bases.
For a given element $x \in X$ we consider the expansion $$x=\sum_{k=1}^{\infty}e^*_k(x)e_k.$$ Let a sequence $k_j,j=1,2,...,$ of positive integers be such that $$|e^*_{k_1}(x)|{\ensuremath{\geqslant}}|e^*_{k_2}(x)|{\ensuremath{\geqslant}}... \,\,.$$ We use the notation $$a_j(x):=|e^*_{k_j}(x)|$$ for the decreasing rearrangement of the coefficients of $x$. It will be convenient in this section to redefine the quasi-greedy constant $K$ to be the least constant such that $$\|G_m(f)\|{\ensuremath{\leqslant}}K\|f\|\quad\text{and}\quad \|f-G_m(f)\|{\ensuremath{\leqslant}}K\|f\|, \quad f\in X.$$ The following Lemma \[L2.2n\] is from [@DKKT] (see also [@Tsparse], p.66). \[L2.2n\] Suppose $E = \{e_n\}_{n\in\mathbb N}$ has quasi-greedy constant $K$. For any $x\in X$ and any $n\in\mathbb N$ we have $$\label{2.7n} a_n(x)\|\sum_{j=1}^ne_{k_j}\| {\ensuremath{\leqslant}}4K^2\|x\|.$$ For a set of indices $\Lambda$ define $$S_\Lambda(x,E) := \sum_{k\in \Lambda} e^*_k(x)e_k.$$ The following Lemma \[L2.3t\] is from [@DKK] (see also [@Tsparse], p.67). \[L2.3t\] Let $E$ be a quasi-greedy basis of $X$. Then for any finite set of indices $\Lambda$, $|\Lambda|=m$, we have for all $x\in X$ $$\|S_\Lambda(x,E)\| {\ensuremath{\leqslant}}C \ln(m+1)\|x\|.$$ Lemma \[L2.3t\] was used in [@GHO] and [@DS-BT] to prove the following inequality. \[L2.4t\] Let $E$ be a quasi-greedy basis of $X$. Then for all $x\in X$ $$\tilde\sigma_m(x) {\ensuremath{\leqslant}}C\ln(m+1) \sigma_m(x).$$ We prove an estimate for $\tilde\sigma^w_n(x,E)$ in terms of $\sigma^w_m(x,E)$ for a quasi-greedy basis $E$. For a basis $E$ we define the fundamental function $$\varphi^w (u) := \sup _{A:w(A){\ensuremath{\leqslant}}u} \|\sum_{k\in A} e_k\|.$$ We also need the following function $$\phi^w(u):= \inf _{A:w(A) > u} \|\sum_{k\in A} e_k\|.$$ In the case $w_i=1$, $i\in \mathbb N$, we drop $w$ from the notation. The following inequality was obtained in [@DKKT]. \[L2.4kt\] Let $E$ be a quasi-greedy basis. 
Then for any $m$ and $r$ there exists a set $F$, $|F|{\ensuremath{\leqslant}}m+r$, such that $$\|x-S_F(x)\| {\ensuremath{\leqslant}}C\left(1+\frac{\varphi(m)}{\phi(r)}\right)\sigma_m(x)$$ and, therefore, for any $x\in X$ $$\tilde \sigma_{m+r}(x) {\ensuremath{\leqslant}}C\left(1+\frac{\varphi(m)}{\phi(r)}\right)\sigma_m(x).$$ We now prove the following weighted version of Lemma \[L2.4kt\]. \[L2.4w\] Let $E$ be a quasi-greedy basis. Then for any $u$ and $v$ we have for each $x\in X$ $$\tilde \sigma^w_{u+v}(x) {\ensuremath{\leqslant}}C\left(1+\frac{\varphi^w(u)}{\phi^w(v)}\right)\sigma^w_u(x).$$ For an arbitrary ${\ensuremath{\varepsilon}}>0$, let $A$ be a set, $w(A){\ensuremath{\leqslant}}u$, and $p_u(x)$ be a polynomial such that $$\label{1.8kt} \|x-p_u(x)\|{\ensuremath{\leqslant}}\sigma^w_u(x)+{\ensuremath{\varepsilon}}, \quad p_u(x) = \sum_{k\in A}b_ke_k.$$ Denote $g:= x-p_u(x)$. Let $m$ be such that for the set $B:=\Lambda_m(g)$ we have $w(B){\ensuremath{\leqslant}}v$ and $w(\Lambda_{m+1}(g))>v$. Consider $$G_m(g) = \sum_{k\in B}e^*_k(g)e_k.$$ We have $$\label{1.9kt} x-S_{A\cup B}(x) = g-S_{A\cup B}(g) = g-S_B(g) -S_{A\setminus B}(g).$$ By the assumption that $E$ is quasi-greedy and by the definition of $B$ we get $$\label{1.10kt} \|g-S_B(g)\| {\ensuremath{\leqslant}}C_1\|g\| {\ensuremath{\leqslant}}C_1(\sigma^w_u(x)+{\ensuremath{\varepsilon}}).$$ Let us estimate $\|S_{A\setminus B}(g)\|$. By Lemma \[L2.2n\] we get $$\max_{k\in A\setminus B}|e^*_k(g)|{\ensuremath{\leqslant}}a_{m+1}(g) {\ensuremath{\leqslant}}4K^2(\phi^w(v))^{-1}\|g\|.$$ Next, by Proposition \[constant-unconditional\] we obtain $$\label{1.11kt} \|S_{A\setminus B}(g)\| {\ensuremath{\leqslant}}(2K)^3 \varphi^w(u)\phi^w(v)^{-1}\|g\|.$$ Combining (\[1.10kt\]) and (\[1.11kt\]) we derive from (\[1.9kt\]) for $F:=A\cup B$ $$\|x-S_F(x)\| {\ensuremath{\leqslant}}C(K)\left(1+\frac{\varphi^w(u)}{\phi^w(v)}\right)(\sigma^w_u(x)+{\ensuremath{\varepsilon}}).$$ Lemma \[L2.4w\] is proved.
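To see the weighted quantities at work, here is a small brute-force sketch (an illustration only) in $\ell^2$ with its canonical orthonormal basis, an unconditional setting where $\tilde\sigma^w_u$ and $\sigma^w_u$ coincide; the coefficients and weights below are arbitrary sample values:

```python
import itertools
import math

# Toy vector with finite support in l^2 (canonical, orthonormal basis), so
# ||x - sum_{n in A} x_n e_n|| = sqrt(sum_{n not in A} x_n^2) and the
# expansional and general best approximation errors agree.
x = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.6, 4: 0.1}   # sample coefficients
w = {0: 1.0, 1: 0.3, 2: 0.3, 3: 0.3, 4: 0.5}   # sample weight sequence

def err(A):
    """Norm of x minus its expansion over the index set A."""
    return math.sqrt(sum(c * c for n, c in x.items() if n not in A))

def greedy_set(m):
    """The greedy set Lambda_m: indices of the m largest coefficients."""
    return set(sorted(x, key=lambda n: -abs(x[n]))[:m])

def sigma_w(u):
    """sigma^w_u(x): minimize err(A) over all A with w(A) <= u (brute force)."""
    best = err(set())
    for k in range(1, len(x) + 1):
        for A in itertools.combinations(x, k):
            if sum(w[n] for n in A) <= u:
                best = min(best, err(set(A)))
    return best

m = 2
Lam = greedy_set(m)                      # {0, 1}
u = sum(w[n] for n in Lam)               # w(Lambda_m)
# The TGA is optimal among sets of cardinality m, but the weight budget
# w(A) <= w(Lambda_m) here admits the cheaper set {1, 2, 3}, which does better.
print(round(err(Lam), 4), round(sigma_w(u), 4))
```

This mirrors the remark after Theorem \[T2.6\]: the left-hand side $\|x-G_m(x)\|$ is weight-independent, while the benchmark on the right-hand side genuinely depends on $w$.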
\[T3.1\] Let $E$ be a $K$-quasi-greedy and $D$-$w$-democratic basis of $X$. Then, for any $x\in X$, we have $$\|x-G_m(x)\| {\ensuremath{\leqslant}}C(K,D)\sigma^w_{w(\Lambda_m)/2}(x).$$ By Theorem \[T2.5\] we have $$\label{3.9} \|x-G_m(x)\| {\ensuremath{\leqslant}}C(K)D\tilde \sigma^w_{w(\Lambda_m)}(x).$$ Using the inequality $\varphi^w(u){\ensuremath{\leqslant}}D\phi^w(u)$, by Lemma \[L2.4w\] with $u=v=w(\Lambda_m)/2$ we obtain $$\label{3.10} \tilde \sigma^w_{w(\Lambda_m)}(x) {\ensuremath{\leqslant}}C(K)(1+D)\sigma^w_{w(\Lambda_m)/2}(x).$$ Combining (\[3.9\]) and (\[3.10\]) we complete the proof of Theorem \[T3.1\]. $w$-Semi-greedy bases {#sec: semigreedy} ===================== In this section we consider an obvious enhancement of the TGA which improves the rate of convergence. Suppose that $x \in X$ and let $\rho$ be the greedy ordering for $x$. Let $\overline{G}_m(x) \in \operatorname{span}\{e_{\rho(n)} \colon 1 {\ensuremath{\leqslant}}n {\ensuremath{\leqslant}}m\}$ be a Chebyshev approximation to $x$. Thus, $$\|x-\overline{G}_m(x)\| = \min\{\|x - \sum_{n=1}^m a_n e_{\rho(n)}\| \colon (a_n)_{n=1}^m \in \mathbb{R}^m\}.$$ For $y>0$, let $$\sigma^w_y(x) := \inf \{ \|x - \sum_{n \in A} a_n e_n\| \colon w(A) {\ensuremath{\leqslant}}y, a_n\in \mathbb{R}\}$$ denote the error in the best approximation to $x$ by vectors of support of weight at most $y$. We say that a basis $(e_n)_{n=1}^\infty$ for a Banach space $X$ is [**$\boldsymbol{w}$-semi-greedy with constant $\boldsymbol{\overline{K}}$**]{} if $$\|x-\overline{G}_m(x)\|{\ensuremath{\leqslant}}\overline{K}\sigma^w_{w(\Lambda_m)}(x)$$ for all $m\in\mathbb{N}$ and $x\in X$. Our first goal is to prove that every $w$-almost greedy basis is $w$-semi-greedy, which generalizes [@DKK Theorem 3.2]. To that end, we recall an important property of the ‘truncation function’ proved in [@DKK]. Fix $M>0$.
Define the truncation function $f_M\colon \mathbb{R} \rightarrow [-M,M]$ thus: $$f_M(x) = \begin{cases} M &\text{for $x>M$};\\ x &\text{for $x \in [-M,M]$};\\ -M &\text{for $x<-M$}.\end{cases}$$ \[prop: cutoff\] Suppose that $(e_n)$ is $K$-quasi-greedy. Then, for every $M>0$ and for all real scalars $(a_n)$, we have $$\|\sum_{n=1}^\infty f_M(a_n) e_n\| {\ensuremath{\leqslant}}(1+3K)\|\sum_{n=1}^\infty a_n e_n\|.$$ \[thm: almostgreedysemigreedy\] Every $w$-almost greedy basis of a real Banach space $X$ is $w$-semi-greedy. By Lemma \[quasi-greedy\] and Theorem \[w-democratic\], $(e_n)$ is quasi-greedy and $w$-democratic. Let $K$ and $D$ be the quasi-greedy and $w$-democratic constants of $(e_n)$, respectively. Fix $m{\ensuremath{\geqslant}}1$ and $x = \sum_{n=1}^\infty a_n e_n$ in $X$. Let $\Lambda := \Lambda_m$ be the greedy set of $x$, so that $G_m(x)=\sum_{n\in\Lambda}e^*_n(x)e_n$. Let $z := \sum_{n \in B} b_n e_n$, where $w(B) {\ensuremath{\leqslant}}w(\Lambda)$, satisfy $$\|x-z\| {\ensuremath{\leqslant}}2\sigma^w_{w(\Lambda)}(x).$$ If $B=\Lambda$ then there is nothing to prove. So we may assume (since $w(B) {\ensuremath{\leqslant}}w(\Lambda)$) that $\Lambda \setminus B$ is nonempty. Let $k = |\Lambda \setminus B|$, so that $1{\ensuremath{\leqslant}}k{\ensuremath{\leqslant}}m$, and let $M = |a_{\rho(m)}|$. Then, using both parts of Proposition \[constant-unconditional\], $$\label{eq: x-z1} M\| \sum_{n \in \Lambda\setminus B} e_n\|{\ensuremath{\leqslant}}2KM\|\sum_{n \in \Lambda} e_n\| {\ensuremath{\leqslant}}4K^2 \|x-z\|,$$ since $|e^*_n(x-z)| {\ensuremath{\geqslant}}M$ for all $n \in \Lambda \setminus B$.
Let $$x-z:=\sum_{n=1}^\infty y_ne_n.$$ By Proposition \[prop: cutoff\], we have $$\label{eq: x-z2} \|\sum_{n=1}^\infty f_M(y_n)e_n\| {\ensuremath{\leqslant}}(1+3K)\|x-z\|.$$ Note that $$\begin{aligned} v:&= \sum_{n \in \Lambda}f_M(y_n)e_n + \sum_{n \in \mathbb{N}\setminus \Lambda}a_ne_n\\ &= \sum_{n=1}^\infty f_M(y_n) e_n + \sum_{n \in B\setminus \Lambda}(a_n - f_M(y_n))e_n\end{aligned}$$ since $a_n = y_n = f_M(y_n)$ for all $n \in \mathbb{N} \setminus (\Lambda \cup B)$. Hence, $$\begin{aligned} \|v\| &{\ensuremath{\leqslant}}\|\sum_{n=1}^\infty f_M(y_n) e_n\| + \|\sum_{n \in B\setminus \Lambda}(a_n - f_M(y_n))e_n\|\\ &{\ensuremath{\leqslant}}(1+3K)\|x-z\| + 4KM \|\sum_{n \in B \setminus \Lambda} e_n\| \intertext{(by \eqref{eq: x-z2} and by Proposition~\ref{constant-unconditional}, since $|a_n - f_M(y_n)| {\ensuremath{\leqslant}}2M$ for all $n\in B\setminus \Lambda$)} &{\ensuremath{\leqslant}}(1+3K)\|x-z\| + 4KDM \|\sum_{n \in \Lambda \setminus B} e_n\|\\ \intertext{(since $w(B\setminus \Lambda){\ensuremath{\leqslant}}w(\Lambda \setminus B)$ and $(e_n)$ is $w$-democratic)} &{\ensuremath{\leqslant}}(1 + 3K + 16K^3D)\|x-z\|\\ \end{aligned}$$ by (\[eq: x-z1\]). Taking the infimum over all $z$ gives $$\|v\| {\ensuremath{\leqslant}}(1 + 3K + 16K^3D) \sigma^w_{w(\Lambda)}(x).$$ Since $v=x-\sum_{n\in \Lambda} (a_n-f_M(y_n))e_n$, we conclude that $(e_n)$ is $w$-semi-greedy with constant $1 + 3K + 16K^3D$. Suppose $(e_n)$ is $w$-almost greedy. Then, for all $x \in X$ and $m {\ensuremath{\geqslant}}1$, $$\| x - G_m(x)\| {\ensuremath{\leqslant}}(1 + 3K + 16K^3D) \sigma^w_{w(\Lambda)}(x) + 2K |e^*_{\rho(m)}(x)| \|\sum_{n \in \Lambda} e_n\|.$$ Using the notation of the last result, note that $x - G_m(x) = v - \sum_{n \in \Lambda} f_M(y_n) e_n$.
Hence $$\begin{aligned} \|x - G_m(x)\| &{\ensuremath{\leqslant}}\|v\| + \|\sum_{n \in \Lambda} f_M(y_n) e_n\|\\ &{\ensuremath{\leqslant}}(1 + 3K + 16K^3D)\|x-z\| + 2K|e^*_{\rho(m)}(x)| \|\sum_{n \in \Lambda} e_n\|, \end{aligned}$$ by Proposition \[constant-unconditional\] recalling that $M = |e^*_{\rho(m)}(x)|$. Now take the infimum of $\|x-z\|$ over $z$ to get the result. The remainder of this section investigates the converse of Theorem \[thm: almostgreedysemigreedy\]. To that end we first show that certain properties of the weight sequence $(w_n)$ imply the existence of subsequences of $(e_n)$ that are equivalent to the unit vector basis of $c_0$. \[prop: weightproperties\] Let $(e_n)$ be a $w$-semi-greedy basis with constant $\overline{K}$ and let $\beta$ be the basis constant. Suppose $A \in \mathbb{N}^{<\infty}$. - If $w(A) {\ensuremath{\leqslant}}\limsup_{n \rightarrow \infty} w_n$ then $\max_\pm\|\sum_{n \in A} \pm e_n\| {\ensuremath{\leqslant}}2\beta \overline{K}$. - If $\sum_{n=1}^\infty w_n <\infty$ then $(e_n)$ is equivalent to the unit vector basis of $c_0$. - If $\sup w_n = \infty$ then $(e_n)$ is equivalent to the unit vector basis of $c_0$. - If $\inf w_n =0$ then $(e_n)$ contains a subsequence equivalent to the unit vector basis of $c_0$. \(i) We may select $n_1>n_0 > \max A$ such that $w(A) < \delta := w_{n_0} + w_{n_1}$. Let $\varepsilon>0$. 
The $w$-semi-greedy condition applied to $ x := \sum_{n \in A} \pm e_n + (1+ \varepsilon)(e_{n_0}+ e_{n_1})$ implies the existence of $\lambda, \mu\in \mathbb{R}$ such that $$\|\sum_{n \in A} \pm e_n + \lambda e_{n_0}+\mu e_{n_1}\| {\ensuremath{\leqslant}}\overline{K}\sigma^w_{\delta}(x).$$ Hence $$\begin{aligned} \|\sum_{n \in A} \pm e_n \| &{\ensuremath{\leqslant}}\beta \|\sum_{n \in A} \pm e_n + \lambda e_{n_0}+ \mu e_{n_1}\|\\ &{\ensuremath{\leqslant}}\beta \overline{K} \sigma^w_{\delta}(x)\\ &{\ensuremath{\leqslant}}\beta \overline{K} (1+ \varepsilon) \| e_{n_0}+ e_{n_1}\|\\ \intertext{(since $w(A) {\ensuremath{\leqslant}}\delta$)}& {\ensuremath{\leqslant}}2\beta \overline{K} (1+ \varepsilon). \end{aligned}$$ Let $\varepsilon \rightarrow 0$ to conclude. \(ii) Choose $N \in \mathbb{N}$ such that $\sum_{N+1}^\infty w_n < w_1$. Suppose $\min A {\ensuremath{\geqslant}}N+1$. The $w$-semi-greedy condition applied to $ x := (1+ \varepsilon) e_1 + \sum_{n \in A} \pm e_n$ implies the existence of $\lambda\in \mathbb{R}$ such that $$\|\lambda e_1 + \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}\overline{K}\sigma^w_{w_1}(x).$$ Hence $$\begin{aligned} \|\sum_{n \in A} \pm e_n \| &{\ensuremath{\leqslant}}2\beta \|\lambda e_1 + \sum_{n \in A} \pm e_n\|\\ &{\ensuremath{\leqslant}}2\beta \overline{K} \sigma^w_{w_{1}}(x)\\ &{\ensuremath{\leqslant}}2\beta \overline{K} (1+ \varepsilon) \| e_{1}\|\\ \intertext{(since $w(A) {\ensuremath{\leqslant}}w_1$)}& = 2\beta \overline{K} (1+ \varepsilon). \end{aligned}$$ Hence $(e_n)$ is equivalent to the unit vector basis of $c_0$. \(iii) By (i), $\| \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}2\beta \overline{K}$ for all $A$ and choices of signs. Hence $(e_n)$ is equivalent to the unit vector basis of $c_0$. \(iv) Choose $(n_k)$ such that $\sum_{k=1}^\infty w_{n_k} < \infty$. By (ii), $(e_{n_k})$ is equivalent to the unit vector basis of $c_0$. 
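For orientation, the Chebyshev approximation $\overline{G}_m(x)$ from the definition of $w$-semi-greediness can be computed explicitly in a Euclidean toy model (an illustrative assumption, not the general Banach-space setting): re-optimizing the coefficients over the greedy span is then a least-squares problem, and it can only improve on the plain TGA.

```python
import numpy as np

# Basis vectors are the columns of E (upper triangular, hence invertible);
# the coordinates e_n^*(x) of x in this basis come from solving E c = x.
E = np.array([[1.0, 0.3, 0.0, 0.0],
              [0.0, 1.0, 0.3, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
x = np.array([0.9, -0.7, 0.5, 0.1])

coeffs = np.linalg.solve(E, x)          # e_n^*(x)
rho = np.argsort(-np.abs(coeffs))       # greedy ordering

def chebyshev_greedy(m):
    """Best (least-squares) approximation to x from span{e_rho(1..m)}."""
    cols = E[:, rho[:m]]
    a, *_ = np.linalg.lstsq(cols, x, rcond=None)
    return cols @ a

def tga(m):
    """Plain TGA G_m(x): keep the m largest coefficients unchanged."""
    kept = np.zeros(4)
    kept[rho[:m]] = coeffs[rho[:m]]
    return E @ kept

for m in range(1, 5):
    # Minimizing over all coefficients can only decrease the error.
    assert (np.linalg.norm(x - chebyshev_greedy(m))
            <= np.linalg.norm(x - tga(m)) + 1e-12)
print("Chebyshev greedy error never exceeds the TGA error")
```

The $w$-semi-greedy condition asks for more: the Chebyshev error must be comparable to the best error over *all* supports of weight at most $w(\Lambda_m)$, not just the greedy one.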
The following definition generalizes the notion of superdemocracy to weights, with the constant weight corresponding to the usual definition of superdemocracy (see [@KT]). A basis $(e_n)_{n=1}^\infty$ is [**$\boldsymbol{w}$-superdemocratic with constant $\boldsymbol{\overline{D}}$**]{} whenever $w(A){\ensuremath{\leqslant}}w(B)$ for $A,B\in\mathbb{N}^{<\infty}$ implies $$\max_\pm\|\sum_{n\in A} \pm e_n\|{\ensuremath{\leqslant}}\overline{D} \min_\pm\|\sum_{n\in B} \pm e_n\|.$$ Combining the fact that quasi-greedy sequences are ‘unconditional for constant coefficients’ (Proposition \[constant-unconditional\]) with Theorem \[mainequivalence\] immediately yields the following result. \[prop: almostgreedyissuperdemocratic\] Every $w$-almost greedy basis of a real Banach space is $w$-superdemocratic. We show next that $w$-semi-greedy bases are also $w$-superdemocratic, which is the weighted version of [@DKK Proposition 3.3]. Every $w$-semi-greedy basis of a real Banach space is $w$-superdemocratic. We may assume that $\sum w_n = \infty$ and $\sup w_n < \infty$, for otherwise by Proposition \[prop: weightproperties\] (ii) or (iii) $(e_n)$ is equivalent to the unit vector basis of $c_0$, for which the result is obvious. Suppose $w(A) {\ensuremath{\leqslant}}w(B)$ and that $B$ is nonempty. If $w(B) {\ensuremath{\leqslant}}\limsup w_n$ then by Proposition \[prop: weightproperties\] (i), $$\max_\pm \| \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}2\beta \overline{K},$$ and hence $$\max_\pm \| \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}2\beta \overline{K}\min_\pm \| \sum_{n \in B} \pm e_n \|$$ as desired. Now suppose that $w(B) > \limsup w_n$. Since $\sum w_n = \infty$ we can choose $E \in \mathbb{N}^{<\infty}$ and $n_0 \in \mathbb{N}$ with $\min E > \max(A \cup B)$ and $n_0 > \max E$ such that $$w(E) {\ensuremath{\leqslant}}w(B) < w(E) + w_{n_0} < 2 w(B).$$ Set $F := E \cup \{n_0\}$. Then $w(E) {\ensuremath{\leqslant}}w(B) < w(F) < 2w(B)$. Let $\varepsilon>0$.
Applying the $w$-semi-greedy condition to $$x = (1+\varepsilon) \sum_{n \in B} \pm e_n + \sum_{n \in E} e_n$$ yields scalars $(a_n)_{n \in B}$ such that $$\begin{aligned} \| \sum_{n \in B} a_n e_n + \sum_{n \in E} e_n\| &{\ensuremath{\leqslant}}\overline{K} \sigma^w_{w(B)}(x)\\ &{\ensuremath{\leqslant}}\overline{K}(1+\varepsilon) \|\sum_{n \in B} \pm e_n\|\end{aligned}$$ since $w(E) {\ensuremath{\leqslant}}w(B)$. Hence $$\|\sum_{n \in E} e_n \| {\ensuremath{\leqslant}}2\beta \| \sum_{n \in B} a_n e_n + \sum_{n \in E} e_n\| {\ensuremath{\leqslant}}2\beta \overline{K}(1+\varepsilon) \|\sum_{n \in B} \pm e_n\|.$$ Since $\varepsilon>0$ is arbitrary, $$\|\sum_{n \in E} e_n \| {\ensuremath{\leqslant}}2\beta \overline{K}\|\sum_{n \in B} \pm e_n\|.$$ Similarly, using the fact that $w(A) {\ensuremath{\leqslant}}w(B) < w(F)$, the $w$-semi-greedy condition applied to $y = \sum_{n \in A} \pm e_n + (1 + \varepsilon) \sum_{n \in F} e_n $ yields $$\|\sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}\beta \overline{K}\|\sum_{n \in F} e_n\|.$$ Finally, $$\begin{aligned} \|\sum_{n \in A} \pm e_n \| &{\ensuremath{\leqslant}}\beta \overline{K}\|\sum_{n \in F} e_n\|\\ &{\ensuremath{\leqslant}}\beta \overline{K}(\|\sum_{n \in E} e_n\| +1)\\ &{\ensuremath{\leqslant}}\beta \overline{K}(2\beta \overline{K}\|\sum_{n \in B} \pm e_n\| +1)\\ &{\ensuremath{\leqslant}}(2\beta^2\overline{K}^2 + \beta \overline{K})\|\sum_{n \in B} \pm e_n\|. \end{aligned}$$ \[prop: superdemocraticequiv\] Suppose $0 < \inf w_n {\ensuremath{\leqslant}}\sup w_n < \infty$. Then $(e_n)$ is $w$-superdemocratic $\Leftrightarrow$ $(e_n)$ is superdemocratic for the constant weight sequence. $\Rightarrow$: Let $\overline{D}$ be the $w$-superdemocracy constant of $(e_n)$. We may assume that $0 < \alpha := \inf w_n {\ensuremath{\leqslant}}1 = \sup w_n$. Suppose $|A| = |B|$ and, without loss of generality, $w(A) {\ensuremath{\leqslant}}w(B)$. 
Then $$\max_\pm \| \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}\overline{D} \min_\pm \| \sum_{n \in B} \pm e_n \|.$$ So to prove superdemocracy it suffices to show that $$\max_\pm \| \sum_{n \in B} \pm e_n \| {\ensuremath{\leqslant}}L \min_\pm \| \sum_{n \in A} \pm e_n \|$$ for some constant $L$. If $w(B) {\ensuremath{\leqslant}}2/\alpha$ then $|B| {\ensuremath{\leqslant}}2/\alpha^2$ and so we can take $L = 2/\alpha^2$. Suppose $w(B) > 2/\alpha$. Note that $w(A) {\ensuremath{\geqslant}}\alpha w(B){\ensuremath{\geqslant}}2$. Hence we may partition $B$ into $N$ sets $B_1,\dots, B_N$ satisfying $w(B_j) {\ensuremath{\leqslant}}w(A) {\ensuremath{\leqslant}}w(B_j)+1$, and hence $w(B_j) {\ensuremath{\geqslant}}w(A)/2$, with $$N {\ensuremath{\leqslant}}\frac{w(B)}{w(A)/2} {\ensuremath{\leqslant}}\frac{2}{\alpha}.$$ Since $(e_n)$ is $w$-superdemocratic and $w(B_j) {\ensuremath{\leqslant}}w(A)$ ($1 {\ensuremath{\leqslant}}j {\ensuremath{\leqslant}}N$), we have $$\begin{aligned} \max_{\pm}\| \sum_{n \in B} \pm e_n \| &{\ensuremath{\leqslant}}\sum_{j=1}^N \max_{\pm}\| \sum_{n \in B_j} \pm e_n \|\\ &{\ensuremath{\leqslant}}N\overline{D} \min_{\pm}\| \sum_{n \in A} \pm e_n \|\\ &{\ensuremath{\leqslant}}\frac{2\overline{D}}{\alpha} \min_{\pm}\| \sum_{n \in A} \pm e_n \|.\end{aligned}$$ Hence we can take $L = 2\overline{D}/\alpha$. $\Leftarrow$: Let $C$ be the superdemocracy constant (for the constant weight). Suppose $w(A) {\ensuremath{\leqslant}}w(B)$. Then $|A| {\ensuremath{\leqslant}}|B|/\alpha$. We can partition $A$ into fewer than $1 + 1/\alpha$ sets of size at most $|B|$. Hence by the triangle inequality $$\max_{\pm}\| \sum_{n \in A} \pm e_n \| {\ensuremath{\leqslant}}\frac{C(1+\alpha)}{\alpha}\min_{\pm}\| \sum_{n \in B} \pm e_n \|.$$ So $(e_n)$ is $w$-superdemocratic. The previous result is sharp in the following sense. Suppose $w$ is a weight sequence satisfying $\inf w_n = 0$, $\sup w_n =1$, and $\sum w_n = \infty$.
Consider the following norm: $$\| \sum_{n=1}^\infty a_n e_n \| = \sup |a_n| \vee (\sum_{n=1}^\infty a_n^2 w_n)^{1/2}.$$ Then $(e_n)$ is a normalized basis which is $w$-superdemocratic but not superdemocratic (and hence $(e_n)$ is $w$-greedy but not greedy). \[cor: almostgreedyequivwalmostgreedy\] Suppose that $(e_n)$ has no subsequence equivalent to the unit vector basis of $c_0$. Then $(e_n)$ is $w$-almost greedy for some weight $w$ $\Leftrightarrow$ $(e_n)$ is almost greedy. By Theorem \[mainequivalence\] and Proposition \[prop: almostgreedyissuperdemocratic\], $(e_n)$ is $w$-almost greedy $\Leftrightarrow$ $(e_n)$ is quasi-greedy and $w$-superdemocratic. Since $(e_n)$ has no subsequence equivalent to the unit vector basis of $c_0$, by Proposition \[prop: weightproperties\] (parts $(iii)$ and $(iv)$), $0 < \inf w_n {\ensuremath{\leqslant}}\limsup w_n < \infty$. Hence, by Proposition \[prop: superdemocraticequiv\], $(e_n)$ is $w$-almost greedy $\Leftrightarrow$ $(e_n)$ is quasi-greedy and superdemocratic $\Leftrightarrow$ $(e_n)$ is almost greedy. \[question: semigreedy\] Does the analogous result hold for semi-greediness? A simple characterization of $w$-semi-greediness analogous to Theorem \[mainequivalence\], which could be useful for answering this question, seems to be lacking. A partial answer is given in Corollary \[cor: semigreedywsemigreedyequiv\] below. We turn now to a converse to Theorem \[thm: almostgreedysemigreedy\] for spaces of finite cotype. The following lemma is needed. \[lem: lowerest\] Suppose $(e_n)$ is $w$-semi-greedy and $0 < \inf w_n {\ensuremath{\leqslant}}\sup w_n < \infty$. There exists $M<\infty$ such that for all $A \in \mathbb{N}^{<\infty}$ and for all scalars $(a_n)_{n \in A}$, we have $$\min_{n \in A} |a_n| \| \sum_{n \in A} e_n \| {\ensuremath{\leqslant}}M \| \sum_{n \in A} a_n e_n \|.$$ We may assume $\sup w_n = 1$ and $\inf w_n = \alpha > 0$. By Proposition \[prop: superdemocraticequiv\], $(e_n)$ is superdemocratic.
We may assume $w(A) > 2$, for otherwise $|A| {\ensuremath{\leqslant}}2/\alpha$ and the result is clear. Choose $F$ with $\min F > \max A$ such that $w(F) {\ensuremath{\leqslant}}w(A) {\ensuremath{\leqslant}}w(F) + 1$. Note that $$|F| {\ensuremath{\geqslant}}w(A) - 1 {\ensuremath{\geqslant}}\frac{w(A)}{2} {\ensuremath{\geqslant}}\frac{\alpha |A|}{2}.$$ Hence $$\| \sum_{n \in F} e_n \| {\ensuremath{\geqslant}}\frac{\alpha}{(2 +\alpha)D}\| \sum_{n \in A} e_n \|,$$ where $D$ is the democracy constant. Applying the $w$-semi-greedy condition to $x = \sum_{n \in A}a_n e_n + (\min |a_n|) \sum_{n \in F} e_n$, there exist scalars $(c_n)_{n \in A}$ such that $$\begin{aligned} \|\sum_{n \in A}c_n e_n + (\min |a_n|) \sum_{n \in F} e_n\| &{\ensuremath{\leqslant}}\overline{K} \sigma^w_{w(A)}(x)\\ &{\ensuremath{\leqslant}}\overline{K} \| \sum_{n \in A}a_n e_n\|, \end{aligned}$$ since $w(F) {\ensuremath{\leqslant}}w(A)$. Let $\beta$ be the basis constant of $(e_n)$. Hence $$\begin{aligned} (\min |a_n|) \|\sum_{n \in A} e_n\| &{\ensuremath{\leqslant}}(\frac{2}{\alpha} +1)D (\min |a_n|) \|\sum_{n \in F} e_n\| \\ &{\ensuremath{\leqslant}}2\beta(\frac{2}{\alpha} +1)D \|\sum_{n \in A}c_n e_n + (\min |a_n|) \sum_{n \in F} e_n\|\\ &{\ensuremath{\leqslant}}2\beta(\frac{2}{\alpha} +1)D\overline{K}\| \sum_{n \in A}a_n e_n\|.\end{aligned}$$ Let us recall that a Banach space $X$ has cotype $q$, where $2{\ensuremath{\leqslant}}q <\infty$, if there exists a constant $C$ such that $$\label{eq: cotypeq} (\sum_{j=1}^n\|x_j\|^q)^{\frac{1}{q}} {\ensuremath{\leqslant}}C (\Ave_{\varepsilon_j=\pm1} \|\sum_{j=1}^n\varepsilon_jx_j\|^q)^{\frac{1}{q}}$$ for all $x_1,\ldots,x_n\in X$ and $n\in\mathbb{N}$. The least such constant $C$ is called the cotype $q$-constant $C_q(X)$. We say that $X$ has *finite cotype* if $X$ has cotype $q$ for some $q<\infty$. The following result and its proof can be extracted from [@DKK p. 76]. It is used below in the proof of Theorem \[thm: finitecotypeequiv\].
\[prop:largekquasigreedy\] Suppose that $X$ has finite cotype and that $(e_n)$ is superdemocratic. Then, for all $0<\theta <1$, there exists $L(\theta)< \infty$ such that for all $m {\ensuremath{\geqslant}}1$ and for all $x = \sum_{n \in F} a_n e_n$, where $|F| = m$, and for all $n {\ensuremath{\geqslant}}\theta m$, we have $\|G_n(x)\| {\ensuremath{\leqslant}}L(\theta) \|x\|$. ($L(\theta)$ also depends on $X$ and on $(e_n)$ but we suppress this dependence.) Now we can prove the weighted version of [@DKK Theorem 3.6], which provides a partial converse to Theorem \[thm: almostgreedysemigreedy\]. \[thm: finitecotypeequiv\] Suppose that $X$ has finite cotype and that $(e_n)$ is $w$-semi-greedy. Then $(e_n)$ is $w$-almost greedy. Since $X$ has finite cotype, no subsequence of $(e_n)$ is equivalent to the unit vector basis of $c_0$. Hence we may assume that $\sup w_n = 1$ and $\inf w_n = \alpha>0$. By Proposition \[prop: superdemocraticequiv\], $(e_n)$ is both $w$-superdemocratic and superdemocratic. Hence, by Theorem \[mainequivalence\], it suffices to show that $(e_n)$ is quasi-greedy, i.e., that there exists $K < \infty$ such that for all $k {\ensuremath{\geqslant}}1$ and $x \in X$, $\|G_k(x)\| {\ensuremath{\leqslant}}K \|x\|$. Fix $m {\ensuremath{\geqslant}}1$. Suppose that $x = \sum_{n \in F} a_n e_n$ with $\|x\| =1$, where $|F| = m$ and $a_n \ne 0$ for $n \in F$. Fix $k {\ensuremath{\geqslant}}1$ and let $A := \{\rho(1),\dots, \rho(k)\}$. Choose $E \in \mathbb{N}^{<\infty}$ such that $\min E > \max F$ and $w(A) {\ensuremath{\leqslant}}w(E) {\ensuremath{\leqslant}}w(A) +1$.
This implies that $$\label{eq: cardE} \alpha k = \alpha |A| {\ensuremath{\leqslant}}|E|{\ensuremath{\leqslant}}\frac{|A| + 1}{\alpha} = \frac{k+1}{\alpha} {\ensuremath{\leqslant}}\frac{2k}{\alpha}.$$ Now let $B:= \{\rho(k+1),\dots, \rho(k+l)\}$, where $w(A) {\ensuremath{\leqslant}}w(B) {\ensuremath{\leqslant}}w(A) + 1$, so that $$\alpha k = \alpha |A| {\ensuremath{\leqslant}}|B|= l{\ensuremath{\leqslant}}\frac{|A| + 1}{\alpha} = \frac{k+1}{\alpha} {\ensuremath{\leqslant}}\frac{2k}{\alpha}.$$ If $k+l > m$, then $k > \alpha m /(2 + \alpha)$, and hence by Proposition \[prop:largekquasigreedy\] $$\|G_k(x)\| {\ensuremath{\leqslant}}L(\alpha/(2+\alpha)) \|x\|.$$ So we may assume $k + l {\ensuremath{\leqslant}}m$. Fix $\varepsilon>0$ and consider $$y := \sum_{n \in F\setminus A} a_ne_n + (|a_{\rho(k)}|+\varepsilon)(\sum_{n \in E} e_n).$$ Since $w(A) {\ensuremath{\leqslant}}w(E)$, we have $$\label{eq: sigmaky} \sigma^w_{w(E)}(y) {\ensuremath{\leqslant}}\|x+(|a_{\rho(k)}|+\varepsilon)(\sum_{n \in E} e_n)\| {\ensuremath{\leqslant}}1 +(|a_{\rho(k)}|+\varepsilon)\|\sum_{n \in E} e_n\|.$$ Since $(e_n)$ is $w$-semi-greedy there exist scalars $(c_n)$ ($n \in E$) such that $$\label{eq: sigmaky2} \|\sum_{n \in F\setminus A} a_ne_n + \sum_{n \in E} c_ne_n\| {\ensuremath{\leqslant}}\overline{K}\sigma^w_{w(E)}(y).$$ Since $\max F<\min E$ and $\varepsilon>0$ is arbitrary, (\[eq: sigmaky\]) and (\[eq: sigmaky2\]) yield $$\|\sum_{n \in F\setminus A} a_ne_n\| {\ensuremath{\leqslant}}\beta \overline{K}(1+ |a_{\rho(k)}|\|\sum_{n \in E} e_n\|)$$ where $\beta$ is the basis constant. Hence $$\label{eq: sumoverA} \|G_k(x)\| = \| x - \sum_{n \in F\setminus A} a_ne_n \| {\ensuremath{\leqslant}}1+ \beta \overline{K}(1+ |a_{\rho(k)}|\|\sum_{n \in E} e_n\|).$$ Now consider $$z:= x - \sum_{n \in A} a_n e_n.$$ Then $\sigma^w_{w(B)}(z) {\ensuremath{\leqslant}}\|x\|=1$, since $w(A) {\ensuremath{\leqslant}}w(B)$.
Since $(e_n)$ is $w$-semi-greedy there exist scalars $(\overline{c}_n)$ ($n \in B$) such that $$\|z - \sum_{n \in B} \overline{c}_n e_n \| {\ensuremath{\leqslant}}\overline{K} \sigma^w_{w(B)}(z) {\ensuremath{\leqslant}}\overline{K}.$$ Hence $$\label{eq: AandB} \|\sum_{n \in A} a_ne_n + \sum_{n \in B} \overline{c}_ne_n\| = \|x - (z - \sum_{n \in B}\overline{c}_ne_n)\| {\ensuremath{\leqslant}}\|x\| + \overline{K} = 1+ \overline{K}.$$ Let $\overline{B} := \{n \in B \colon |\overline{c}_n|{\ensuremath{\geqslant}}|a_{\rho(k)}|\}$. Then $$\sum_{n \in A} a_n e_n + \sum_{n \in \overline{B}} \overline{c}_ne_n = G_s(\sum_{n \in A} a_n e_n + \sum_{n \in B} \overline{c}_ne_n)$$ for some $k {\ensuremath{\leqslant}}s {\ensuremath{\leqslant}}k+l {\ensuremath{\leqslant}}(1 + 2/\alpha)k$. Hence Proposition \[prop:largekquasigreedy\] and (\[eq: AandB\]) yield $$\label{eq: AandEestimate} \|\sum_{n \in A} a_n e_n + \sum_{n \in \overline{B}} \overline{c}_ne_n\| {\ensuremath{\leqslant}}L(\alpha/(\alpha+2))(1+\overline{K}).$$ On the other hand, by Lemma \[lem: lowerest\] there exists $M < \infty$, depending only on $(e_n)$, such that $$\label{eq: fromremark} |a_{\rho(k)}| \| \sum_{n \in A \cup \overline{B}} e_n \| {\ensuremath{\leqslant}}M\|\sum_{n \in A} a_n e_n + \sum_{n \in \overline{B}} \overline{c}_n e_n\|.$$ From (\[eq: cardE\]), we obtain $$|E| {\ensuremath{\leqslant}}\frac{2}{\alpha} k {\ensuremath{\leqslant}}\frac{2}{\alpha} |A \cup \overline{B}|,$$ and hence, since $(e_n)$ is superdemocratic (with constant $D$, say), $$\label{eq: superdemestimate}\| \sum_{n \in E}e_n \| {\ensuremath{\leqslant}}D(\frac{2}{\alpha} +1)\| \sum_{n \in A \cup \overline{B}} e_n\|.$$ Finally, combining (\[eq: sumoverA\]), (\[eq: superdemestimate\]), (\[eq: fromremark\]), and (\[eq: AandEestimate\]) we get $$\|G_k(x)\| {\ensuremath{\leqslant}}1+\beta \overline{K}+ \beta \overline{K}D(\frac{2}{\alpha}+1)M L(\frac{\alpha}{\alpha+2})(1+\overline{K}).$$ Combining Corollary \[cor: almostgreedyequivwalmostgreedy\] and Theorem \[thm: finitecotypeequiv\] we get a partial answer to Question \[question: semigreedy\].
\[cor: semigreedywsemigreedyequiv\] Suppose that $X$ has finite cotype. Then $(e_n)$ is $w$-semi-greedy for some weight $w$ $\Leftrightarrow$ $(e_n)$ is semi-greedy. $w$-almost greedy bases when $w\in c_0$ ======================================= \[bounded\]Assume $w\in c_0$ is nonincreasing. Let $(e_n)_{n=1}^\infty$ be a normalized $D$-$w$-democratic basis for a (real or complex) Banach space $X$. Then for every $m\in\mathbb{N}$ there exists $N\in\mathbb{N}$ so that if $A\in\mathbb{N}^m$ with $N{\ensuremath{\leqslant}}\min A$ then $$\label{2}\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}D.$$ If furthermore $w\in\ell_1$, we can choose $M\in\mathbb{N}$ such that (\[2\]) holds for all $A\in\mathbb{N}^{<\infty}$ with $M{\ensuremath{\leqslant}}\min A$. Observe that $$\lim_{k\to\infty}\sum_{i=k}^{k+m-1}w_i=0,$$ so that we can find $N\in\mathbb{N}$ satisfying $$w(A)=\sum_{i\in A}w_i{\ensuremath{\leqslant}}\sum_{i=N}^{N+m-1}w_i<w_1=w(\{1\})$$ for all $A\in\mathbb{N}^m$ satisfying $N{\ensuremath{\leqslant}}\min A$. By $D$-$w$-democracy this means $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}D\|e_1\|=D.$$ This proves the first part of the proposition. Next, assume $w\in\ell_1$, and let $M\in\mathbb{N}$ be such that $$\sum_{i=M}^\infty w_i<w_1$$ and hence $w(A)<w(\{1\})$ for all $A\in\mathbb{N}^{<\infty}$ with $M{\ensuremath{\leqslant}}\min A$. Then we have (\[2\]) as before. \[nonincreasing\]It is easy to see from the proof of Proposition \[bounded\] that if $w\in c_0$ is not assumed to be nonincreasing, we can still find a constant $D$ such that for any $m\in\mathbb{N}$ there is a positive integer $N\in\mathbb{N}$ satisfying the property that if $k{\ensuremath{\geqslant}}N$ then there exists $A\in\mathbb{N}^m$ with $k{\ensuremath{\leqslant}}A$ and $\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}D$. Let $w\in c_0$. If $(e_n)_{n=1}^\infty$ is a basis for a (real or complex) Banach space which is both $w$-democratic and spreading then it is equivalent to the canonical basis for $c_0$.
Assume $(e_n)_{n=1}^\infty$ is both $w$-democratic and spreading. By Proposition \[bounded\] and Remark \[nonincreasing\] there is a constant $C$ such that $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}C$$ for sets $A\in\mathbb{N}^{<\infty}$ of arbitrarily large cardinality. We claim that $(e_n)_{n=1}^\infty$ is unconditional. Indeed, every conditional spreading basis dominates the summing basis of $c_0$ by [@FOSZ16 p4]. Let $C'$ be the domination constant. Then we would have $$|A|{\ensuremath{\leqslant}}C'\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}CC',$$ which is impossible for sufficiently large $|A|$. This proves the claim. Let $U$ be the unconditional constant for $(e_n)_{n=1}^\infty$, and select any finitely supported $(a_n)_{n=1}^\infty$ with support in $A$. Then $$\frac{1}{2\beta M}\sup_{n\in A}|a_n|{\ensuremath{\leqslant}}\|\sum_{n\in A}a_ne_n\|{\ensuremath{\leqslant}}UC\sup_{n\in A}|a_n|,$$ where $M=\sup_{n\in\mathbb{N}}\|e_n\|$. By the spreading property, this means $(e_n)_{n=1}^\infty$ is equivalent to the canonical basis for $c_0$. A basis $(e_n)_{n=1}^\infty$ is [**conservative with constant $\boldsymbol{C}$**]{} whenever $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}C\|\sum_{n\in B}e_n\|$$ for all $A,B\in\mathbb{N}^{<\infty}$ satisfying $|A|{\ensuremath{\leqslant}}|B|$ and $A<B$. It is known (see [@DKKT Theorem 3.4]) that a basis is partially-greedy if and only if it is both quasi-greedy and conservative. \[conservative-to-democratic\]Assume $w\in c_0$. If a normalized basis $(e_n)_{n=1}^\infty$ is both $C$-conservative and $D$-$w$-democratic then it is $2CD\beta$-democratic, where $\beta$ is the basis constant for $(e_n)_{n=1}^\infty$. Furthermore, $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}CD\;\;\;\text{ for all }A\in\mathbb{N}^{<\infty}.$$ Let $A,B\in\mathbb{N}^{<\infty}$ with $|A|{\ensuremath{\leqslant}}|B|$.
By Proposition \[bounded\] and Remark \[nonincreasing\] we can find $\Gamma\in\mathbb{N}^{<\infty}$ with $|B|{\ensuremath{\leqslant}}|\Gamma|$ and $A\cup B<\Gamma$, and satisfying $$\|\sum_{n\in\Gamma}e_n\|{\ensuremath{\leqslant}}D.$$ Note that $|A|{\ensuremath{\leqslant}}|\Gamma|$ and $A<\Gamma$ so that by $C$-conservativeness we now have $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}C\|\sum_{n\in\Gamma}e_n\|{\ensuremath{\leqslant}}CD.$$ By taking the difference of partial sum projections we also have $$CD{\ensuremath{\leqslant}}2CD\beta\|\sum_{n\in B}e_n\|.$$ \[partially-greedy-and-w-democratic\]Assume $w\in c_0$. A quasi-greedy basis $(e_n)_{n=1}^\infty$ for a real Banach space is both conservative and $w$-democratic if and only if it is equivalent to the canonical basis for $c_0$. It is clear that greediness implies neither $w$-almost-greediness nor $w$-democracy when $w\in c_0$: for instance, the canonical basis of $\ell_p$ is greedy but not $w$-democratic in this case. Theorem \[partially-greedy-and-w-democratic\] shows that $w$-almost-greediness implies partial greediness (or almost greediness, or greediness) if and only if we have $$(e_n)_{n=1}^\infty\text{ is }w\text{-almost greedy}\;\;\;\Leftrightarrow\;\;\;(e_n)_{n=1}^\infty\approx(f_n)_{n=1}^\infty,$$ where $(f_n)_{n=1}^\infty$ denotes the canonical basis of $c_0$. The “if” part is trivial, so we need only prove the “only if” part. We now assume $(e_n)_{n=1}^\infty$ is quasi-greedy, conservative, and $w$-democratic. Then by Proposition \[conservative-to-democratic\], we can find a constant $C$ with $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}C\;\;\;\text{ for all }A\in\mathbb{N}^{<\infty}.$$ Meanwhile, by quasi-greediness together with Proposition \[constant-unconditional\], and making $C$ larger if necessary, we have $$\|\sum_{n\in A}a_ne_n\|{\ensuremath{\leqslant}}2C\|\sum_{n\in A}e_n\|\cdot\|(a_n)_{n\in A}\|_\infty$$ for any $A\in\mathbb{N}^{<\infty}$.
Also, if $\beta$ is the basis constant for $(e_n)_{n=1}^\infty$, then by looking at the difference of partial sum projections we obtain $$\|(a_n)_{n\in A}\|_\infty{\ensuremath{\leqslant}}2\beta\|\sum_{n\in A}a_ne_n\|.$$ Putting all these inequalities together, it follows that $(e_n)_{n=1}^\infty$ is equivalent to the canonical basis of $c_0$. \[uniformly-complemented\]Assume $w\in c_0$. If $(e_n)_{n=1}^\infty$ is a normalized $w$-almost greedy basis for a real Banach space $X$, then for some $1{\ensuremath{\leqslant}}C<\infty$ and every $m\in\mathbb{N}$ there exists $N\in\mathbb{N}$ so that $(e_n)_{n=N+1}^{N+m}\approx_C(f_n)_{n=1}^m$, where $(f_n)_{n=1}^\infty$ is the canonical basis for $c_0$. Furthermore, we can choose $C$ such that every subsequence of $(e_n)_{n=1}^\infty$ admits a further subsequence which is $C$-equivalent to the canonical basis for $c_0$. Let $\beta$ be the basis constant for $(e_n)_{n=1}^\infty$. We showed in the previous section that every $w$-almost greedy basis is both quasi-greedy and $w$-democratic. Fix $m\in\mathbb{N}$, and let $N\in\mathbb{N}$ be as in Proposition \[bounded\], which is possible by $w$-democracy. We claim that there exists a constant $C$ such that $$\frac{1}{2\beta}\sup_{n=N+1,\cdots,N+m}|a_n|{\ensuremath{\leqslant}}\|\sum_{n=N+1}^{N+m}a_ne_n\|{\ensuremath{\leqslant}}C\sup_{n=N+1,\cdots,N+m}|a_n|$$ for all $(a_n)_{n=1}^\infty\in c_0$. In the above, the first inequality follows from taking the difference of basis projections, and the second inequality for a constant $C$ follows from combining Propositions \[constant-unconditional\] and \[bounded\]. Without loss of generality we may assume $C{\ensuremath{\geqslant}}2\beta$, completing the proof of the first part of the theorem. Now let us prove the “furthermore” part. Note that by Remark \[subsequence\], every subsequence of $(e_n)_{n=1}^\infty$ admits a further subsequence which is $C$-$w'$-democratic with $w'\in\ell_1$.
Thus, it is enough to show that $(e_n)_{n=M}^\infty\approx_C c_0$ for some $M\in\mathbb{N}$ whenever $w\in\ell_1$. By Proposition \[bounded\] we can find $M\in\mathbb{N}$ with $$\|\sum_{n\in A}e_n\|{\ensuremath{\leqslant}}C$$ for all $A\in\mathbb{N}^{<\infty}$ with $M{\ensuremath{\leqslant}}\min A$. Thus, as before we can find $C{\ensuremath{\geqslant}}2\beta$ with $$\frac{1}{2\beta}\|(a_n)_{n\in A}\|_\infty{\ensuremath{\leqslant}}\|\sum_{n\in A}a_ne_n\|{\ensuremath{\leqslant}}C\|(a_n)_{n\in A}\|_\infty.$$ \[weakly-null\]If $w\in c_0$ then every $w$-almost greedy basis of a real Banach space is weakly null. Suppose $(e_n)_{n=1}^\infty$ is not weakly null. Then there exists $f\in X^*$ such that $f(e_n)\not\to 0$. Now find a subsequence such that $|f(e_{n_k})|\to\delta>0$. Then $(e_{n_k})_{k=1}^\infty$ contains no weakly null subsequence. In particular, it contains no subsequence equivalent to the $c_0$ basis. By Theorem \[uniformly-complemented\] it is not $w$-almost greedy. [99]{} P. Berna and O. Blasco, [*The best $m$-term approximation with respect to polynomials with constant coefficients*]{}, preprint. A. Cohen, R. DeVore, R. Hochmuth, [*Restricted nonlinear approximation*]{}, Constr. Approx. [**16**]{} (2000), no. 1, 85–113. S.J. Dilworth, N.J. Kalton, Denka Kutzarova, [*On the existence of almost greedy bases in Banach spaces*]{}, Studia Math. [**159**]{} (2003), 67–101. S.J. Dilworth, N.J. Kalton, Denka Kutzarova, V.N. Temlyakov, [*The Thresholding Greedy Algorithm, Greedy Bases, and Duality*]{}, Constructive Approximation [**19**]{} (2003), no. 4, 575–597. S.J. Dilworth, M. Soto-Bajo, and V.N. Temlyakov, [*Quasi-greedy bases and Lebesgue-type inequalities*]{}, IMI Preprints, [**02**]{} (2012), 1–44. D. Freeman, E. Odell, B. Sari, B. Zheng, [*On spreading sequences and asymptotic structures*]{}, (preprint, 2016). `arXiv:1607.03587v1 [math.FA]` G. Garrigos, E. Hernandez, and T. Oikhberg, [*Lebesgue-type inequalities for quasi-greedy bases*]{}, Constr. Approx.
[**38**]{} (2013), 447–470. G. Kerkyacharian, D. Picard and V.N. Temlyakov, [*Some inequalities for the tensor product of greedy bases and weight-greedy bases*]{}, East J. Approx. [**12**]{} (2006), 103–118. S. V. Konyagin and V. N. Temlyakov, [*A remark on greedy approximation in Banach spaces*]{}, East J. Approx. [**5**]{} (1999), no. 3, 365–379. V. N. Temlyakov, [*Greedy Approximation*]{}, Cambridge University Press, 2011. V. N. Temlyakov, [*Sparse Approximation with Bases*]{}, Advanced Courses in Mathematics CRM Barcelona, Birkhäuser, 2015. P. Wojtaszczyk, [*Greedy algorithm for general biorthogonal systems*]{}, J. Approx. Theory [**107**]{} (2000), 293–314. [^1]: The first author was supported by the National Science Foundation under Grant Number DMS–1361461; the third author was supported by the Russian Federation Government Grant No. 14.W03.31.0031. The first and second authors were supported by the Workshop in Analysis and Probability at Texas A&M University in 2017.
--- abstract: 'The nuclear-polarization (NP) energies with the collective model commonly employed in the NP calculations for hydrogenlike heavy ions are found to have serious gauge violations when only the ladder and cross diagrams are taken into account. Using the equivalence of the charge-current density with that of a schematic microscopic model, the NP energy shifts with the collective model are gauge invariantly evaluated for the $1s_{1/2}$ states in $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$.' author: - Yataro Horikawa - Akihiro Haga title: Gauge Invariant Evaluation of Nuclear Polarization with Collective Model --- High-precision Lamb-shift measurements on high-Z hydrogenlike atoms [@BE95] have spurred renewed interest in quantum electrodynamical (QED) calculations for electronic atoms. Comparison of theoretical results with experimental data allows sensitive tests of QED in strong electromagnetic fields [@SA90; @MO98]. In this context, the study of the nuclear-polarization (NP) effect becomes important because the NP effect, as a non-QED effect which depends on the model used to describe the nuclear dynamics, sets a limit to any high-precision test of QED. A relativistic field-theoretical treatment of NP calculation was presented by Plunien et al. [@PL89; @PL95] utilizing the concept of effective photon propagators with nuclear-polarization insertions. In these studies, only the Coulomb interaction was considered based on the argument that the relative magnitude of the transverse interaction is of the order of $ (v/c)^2$ and the velocity $v$ associated with nuclear dynamics is mainly nonrelativistic. The effect of the transverse interaction was studied in the Feynman gauge by Yamanaka et al. [@YA01] with the same collective model used in [@PL89; @PL95; @NE96] for nuclear excitations.
They found that the transverse contribution is several times larger than the Coulomb contribution in heavy electronic atoms before the contributions of the positive and negative energy states cancel. However, due to the nearly complete cancellation between them, the transverse effects become small and the net effect is destructive to the Coulomb contribution in both $1s_{1/2}$ states of $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$. As a result, the total NP energy almost vanishes in $^{208}_{~82}$Pb$^{81+}$. Recently, the NP effects for hydrogenlike and muonic $^{208}_{~82}$Pb$^{81+}$ were calculated in both the Feynman and Coulomb gauges, using a microscopic random phase approximation (RPA) to describe nuclear excitations [@HHT02; @HHT02a]. It was found that, in the hydrogenlike atom, the NP effects due to the ladder and cross diagrams have serious gauge dependence and inclusion of the seagull diagram is indispensable to restore the gauge invariance [@HHT02]. In contrast, the magnitude of the seagull correction is a few-percent effect in the muonic atom, although it improves the gauge invariance [@HHT02a]. In the present paper, we report that the nuclear collective model employed for hydrogenlike ions in [@NE96; @YA01; @PL89; @PL95] also leads to a large violation of gauge invariance as long as only the ladder and cross diagrams are considered. Then it is shown, based on the equivalence of the transition density of the collective model with that of a microscopic nuclear model with a schematic interaction between nucleons, that the seagull corrections should also be calculated with the collective model in order to obtain gauge invariant NP results. The resulting gauge invariant NP energy shifts are given for the $1s_{1/2}$ states in $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$.
For spherical nuclei, the Hamiltonian of the small amplitude vibration with multipolarity $L$ is written as $$\begin{aligned} H_L = \frac{1}{2} (\frac{1}{D_L} \sum_{M}\hat{\pi}_{LM}^\dagger \hat{\pi}_{LM} + C_L \sum_{M}\hat{\alpha}_{LM}^\dagger \hat{\alpha}_{LM}), \label{harmonic}\end{aligned}$$ where $\hat{\pi}_{LM}$ are the canonically conjugate momenta to the collective coordinates $\hat{\alpha}_{LM}$. The lowest vibrational modes are expected to have density variations with no radial nodes, which may be referred to as shape oscillations. The corresponding charge density operator with the multipolarity $L$ is written as $$\begin{aligned} \hat{\rho}_L(t,\boldsymbol{r}) &= \rho_L(r) \sum_M Y_{LM}^* \hat{\alpha}_{LM}(t) \label{rhoL}\end{aligned}$$ to the lowest order of $\hat{\alpha}^\dagger _{LM}(t)$. The liquid drop model of Bohr (BM)[@Bohr52] is a simple model of such a shape oscillation obtained by considering deformation of the nuclear radius parameter while leaving the surface diffuseness independent of angle: $$\begin{aligned} R(\Omega) = R_0 \left[1 + \sum_{LM} \alpha_{L M} Y^*_{LM}(\Omega)\right], \end{aligned}$$ where $R_0$ is the nuclear radius parameter of the ground state. The radial charge density of BM is given by $$\begin{aligned} \rho_L(r) = - R_0 \frac{d }{dr}\varrho_0(\boldsymbol{r}), \label{BMrho}\end{aligned}$$ where $\varrho_0(\boldsymbol{r})$ is a charge distribution with spherical symmetry. 
If we assume that under distortion, an element of mass moves from $\boldsymbol{r}_0$ to $\boldsymbol{r}$ without alteration of the volume it occupies, i.e., the nucleus is composed of an inhomogeneous incompressible fluid, a harmonic vibration of an originally spherical surface $r = r_0$ in the nucleus is given by $$\begin{aligned} r(\Omega) = r_0 \left[1 + \sum_{LM} \left(\frac{r_0}{R_0}\right)^{L-2} \alpha_{L M} Y^*_{LM}(\Omega) \right].\end{aligned}$$ For this model we obtain $$\begin{aligned} \rho_L(r) = - \frac{1}{R_0^{L-2}}\ r^{L-1}\ \frac{d}{dr} \varrho_0(\boldsymbol{r}). \label{tassie}\end{aligned}$$ This version will be hereafter referred to as the Tassie Model (TM) [@Tassie]. In Eqs.(\[BMrho\]) and (\[tassie\]), $\varrho_0(\boldsymbol{r})$ is usually taken to be equal to the ground-state charge distribution. In either case, the motion of nuclear matter is assumed to be incompressible and irrotational, hence the velocity field $\boldsymbol{v}(t,\boldsymbol{r})$ is given by a velocity potential as $\boldsymbol{v}(t, \boldsymbol{r}) = \boldsymbol{\nabla}\Phi(t,\boldsymbol{r})$. This implies the nuclear current defined by $ \boldsymbol{J}(\boldsymbol{r})= \varrho_0(r) \boldsymbol{v}(\boldsymbol{r}) $ yields the transition multipole density of current operator $$\begin{aligned} \hat{\boldsymbol{J}}_{L}(t,\boldsymbol{r}) &= J_{LL-1}(r) \sum_M \boldsymbol{Y}^*_{LL-1M} \hat{\alpha}_{LM}(t). \label{JL}\end{aligned}$$ Note that the $J_{LL+1}(r)$ part does not appear in the transition density of current operator given by (\[JL\]). Therefore, in this kind of collective model, the continuity equation of charge gives $$\begin{aligned} [i \Delta E_L \rho_L(r) + \sqrt{\frac{L}{2L+1}}(\frac{d}{dr}-\frac{L-1}{r})J_{LL-1}(r)] = 0, \label{continuity}\end{aligned}$$ where $\Delta E_L $ is the excitation energy of the surface oscillation. 
Hence the transition density of current is given by $$\begin{aligned} J_{LL-1}(r) = i \Delta E_L \sqrt{\frac{2L+1}{L}} r^{L-1} \int_r^\infty x^{1-L} \rho_L(x) dx \label{current}\end{aligned}$$ in terms of the transition density of charge. If we assume the uniform charge distribution $\varrho_0(r) = \varrho_0\Theta(R_0 -r)$, we obtain, for both BM and TM, $$\begin{aligned} \rho_L(r) &= \langle J_f\|r^LY_L\|J_i\rangle \ \frac{1}{R_0^{L+2}}\delta(R_0-r), \label{unitran} \\ J_{LL-1}(r) &= \langle J_f\|r^LY_L\|J_i\rangle \ i \Delta E_L \sqrt{\frac{2L+1}{L}} \frac{r^{L-1}}{R_0^{2L+1}} \Theta(R_0-r). \label{unicurrent}\end{aligned}$$ The transition densities given by (\[unitran\]) and (\[unicurrent\]) have been employed in the previous NP calculations for $L \ge 1$ [@PL89; @PL95; @NE96; @YA01]. It should be mentioned that, although the surface oscillation applies to the case of the multipolarity $L \ge 2$, Eqs. (\[unitran\]) and (\[unicurrent\]) with $L = 1$ give the transition densities of the giant dipole resonance given by the Goldhaber-Teller model describing the relative motion of neutrons and protons [@DW66]. For the monopole vibration, it is also possible to construct corresponding charge and current densities [@YA01; @PL89]. In general, the charge conservation relation between the charge and current densities is necessary but not sufficient for the gauge invariance of the NP calculation. Unfortunately, it is practically impossible to construct a model that incorporates gauge invariance explicitly in terms of the collective variables of the model. However, it is possible to evaluate the NP energy gauge invariantly with the above collective model as is shown below. The NP calculations with the collective model assume that a single giant resonance with spin multipolarity $L$ saturates the energy-weighted $B(EL)$ strength for each isospin. 
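As an independent numerical cross-check (ours, not from the original analysis), one can verify that a current density built from the transition charge density via (\[current\]) indeed satisfies the continuity equation (\[continuity\]) for a smooth density. The Woods-Saxon-type parameters below are illustrative only, and the overall factor $i$ is dropped so that $J$ stands for the real magnitude:

```python
import numpy as np

# Check that J_{L,L-1}(r) constructed from rho_L(r) via Eq. (current)
# satisfies the continuity equation, Eq. (continuity).  Illustrative
# Woods-Saxon parameters, not fitted to Pb or U; the factor i is dropped.
def continuity_residual(L=2, dE=1.0, R0=6.7, a=0.5, npts=4000):
    r = np.linspace(1e-3, 15.0, npts)
    ex = np.exp((r - R0) / a)
    rhoL = (R0 / a) * ex / (1.0 + ex)**2       # -R0 * d(rho0)/dr, Eq. (BMrho)
    # I(r) = int_r^inf x^(1-L) rho_L(x) dx  (reverse cumulative trapezoid)
    f = r**(1 - L) * rhoL
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(r)
    I = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    J = dE * np.sqrt((2 * L + 1) / L) * r**(L - 1) * I        # Eq. (current)
    res = dE * rhoL + np.sqrt(L / (2 * L + 1)) * (
        np.gradient(J, r) - (L - 1) / r * J)                  # Eq. (continuity)
    return np.max(np.abs(res)) / np.max(np.abs(dE * rhoL))

print(continuity_residual())   # residual is small, set by the grid spacing
```

The relative residual vanishes up to discretization error for any multipolarity, reflecting that (\[current\]) is exactly the inversion of (\[continuity\]).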
In this respect, let us recall the fact that the transition densities of charge to the sum-rule saturated levels are given in terms of the ground-state charge density [@DF73]. This can be seen as follows. For a pair of single-particle operators $g(\boldsymbol{r}) = g(r)Y_{LM}(\Omega)$ and $f(\boldsymbol{r}) = f(r)Y_{LM}(\Omega)$, the energy-weighted sum rule can be generalized to $$\begin{aligned} &\frac{1}{2J_i+1} \sum_n (E_n - E_i)[\langle J_n\|g(r)Y_L\|J_i\rangle ^*\langle J_n\|f(r)Y_L\|J_i\rangle ] \nonumber \\ &= \frac{2L+1}{4\pi}\frac{\hbar^2}{2M} \int r^2 dr \varrho_0(r)[g'(r)f'(r) + \frac{L(L+1)}{r^2}g(r)f(r)], \label{ewsumrule}\end{aligned}$$ where $\varrho_0(r)$ is the charge distribution of the ground state normalized as $\int r^2 dr \varrho_0(r) = Z$ [@BM75]. When a single excited state $ |J_fM_f \rangle$ saturates the $B(EL)$ strength, $ |J_fM_f \rangle \propto r^L Y_{LM} |J_iM_i \rangle $, the transition density of charge to this state is derived from the sum-rule relation (\[ewsumrule\]) model independently and is given by $$\begin{aligned} \varrho_{fi}(r) = - \frac{1}{2L+1 }\ \frac{\langle J_f\|r^LY_L\|J_i\rangle } {\langle J_i|r^{2L-2}|J_i \rangle } \ r^{L-1}\ \frac{d}{dr} \varrho_0(r). \label{collectrho}\end{aligned}$$ If the charge distribution of the ground state is assumed to be a uniform distribution with a radius $R_0$, this becomes $$\begin{aligned} \varrho_{fi}(r) = \langle J_f\|r^LY_L\|J_i\rangle \ \frac{1}{R_0^{L+2}} \ \delta(r - R_0),\end{aligned}$$ which is equal to the matrix element of the charge density operator of the collective model given by (\[unitran\]).
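The reduction of (\[collectrho\]) to the delta form can also be checked symbolically. The following sketch (our own, shown for one multipolarity value; the algebra is identical for any $L \ge 1$) computes the ground-state moment for a uniform density and the resulting prefactor of $\delta(r-R_0)$:

```python
import sympy as sp

# Symbolic check that Eq. (collectrho) reduces to the collective-model
# delta form, Eq. (unitran), for rho_0(r) = rho0 * Theta(R0 - r):
# then d(rho_0)/dr = -rho0 * delta(r - R0), and the prefactor of
# delta(r - R0) per unit reduced matrix element should be 1/R0^(L+2).
r, R0, rho0 = sp.symbols('r R_0 rho_0', positive=True)
L = sp.Integer(2)
# <J_i|r^(2L-2)|J_i> = int_0^R0 r^2 rho_0 r^(2L-2) dr for the uniform density
moment = sp.integrate(r**2 * rho0 * r**(2 * L - 2), (r, 0, R0))
# prefactor of delta(r - R0) in Eq. (collectrho), evaluated at r = R0
prefactor = sp.simplify(R0**(L - 1) * rho0 / ((2 * L + 1) * moment))
print(prefactor)   # 1/R_0**(L+2), i.e. R_0**(-4) for L = 2
```

The charge-density normalization $\varrho_0 = Z\,(2L+1)/... $ drops out of the ratio, which is why the prefactor depends on $R_0$ alone, in agreement with (\[unitran\]).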
On the other hand, it is well known that the schematic RPA with a separable interaction $$\begin{aligned} V_S(\boldsymbol{r}_i,\boldsymbol{r}_j) = \kappa_L \sum_M r_i^L Y_{LM}(\Omega_i)\ r_j^L Y^*_{LM}(\Omega_j)\end{aligned}$$ for particle-hole excitations $|m i^{-1}\rangle$ with a degenerate particle-hole excitation energy $\epsilon$ gives a collective state $|LM \rangle$, which exhausts the energy-weighted sum rule for the single particle operator $r^L Y_{LM}$: $$\begin{aligned} \Delta E_L\ |\langle LM|r^L Y_{LM}|0\rangle |^2 = \epsilon \sum_{mi} |\langle m|r^L Y_{LM}|i\rangle |^2, \end{aligned}$$ where $\Delta E_L$ is the excitation energy of $|LM \rangle$ [@RS80]. If the ground state is assumed to be a filled major shell of the harmonic oscillator potential: $$\begin{aligned} H_{HO} = \frac{1}{2M_N}\boldsymbol{p}^2 + \frac{M_N\omega^2}{2} \boldsymbol{r}^2,\end{aligned}$$ the particle-hole excitation energy $\epsilon$ is taken to be $1 {\hbar} \omega$ for $1^-$ and $2 \hbar \omega$ for $0^+$ and $2^+$. The corresponding collective states exhaust the energy-weighted sum rules, because the transition strengths vanish outside these p-h excitation spaces. Therefore, the transition densities of charge to the collective states of this fictitious nucleus are given by (\[collectrho\]). When the ground-state charge density is approximated by a uniform charge density, the transition density of charge becomes identical with that of the collective model employed in NP calculations for hydrogenlike atoms. However, the gauge invariant electromagnetic interaction of this schematic microscopic model is given by the minimal substitution $\boldsymbol{p}_i \rightarrow \boldsymbol{p}_i - e_i \boldsymbol{A}$ to the Hamiltonian $ H = H_{HO} + V_S $. Hence the lowest-order contributions to NP with this model are given by the three Feynman diagrams in Fig. 1, where two photons are exchanged between a bound electron and a nucleus.
The nuclear vertices are understood to have no diagonal matrix elements for the ladder and cross diagrams, and no nuclear intermediate states for the seagull diagram. It is well known that the NP results with this model are gauge invariant provided these three diagrams are taken into account. Although a $J_{LL+1}(r)$ current density appears in this model, $J_{LL-1}(r)$ dominates in the transition to the collective state. Thus we can conclude that the gauge invariance of the collective model is also guaranteed with the charge-current density satisfying the continuity equation (\[continuity\]), provided the contributions from the three diagrams are taken into account. It should be noted that the seagull contribution is given in terms of the ground-state charge distribution and does not depend on the details of the model for nuclear excitations. In the actual NP calculations [@NE96; @PL89; @PL95; @YA01] with the collective model, the assumption that each nuclear intermediate state saturates the sum-rule is not strictly obeyed, because the observed nuclear data are used for the low-lying states. However, since the gauge violation is serious only in the dipole giant resonance, this does not invalidate our arguments, as is confirmed by the numerical results in the following. ![Diagrams contributing to nuclear polarization in lowest order; (a) ladder, (b) cross and (c) seagull diagrams.[]{data-label="fig;npdiagram"}](figure1.eps){width="7cm"} The formulas to calculate the NP energy shifts due to the three diagrams of Fig. 1 were given in [@HHT02] for arbitrary nuclear models. In the present NP calculations of the $1s_{1/2}$ states in hydrogenlike $^{208}_{~82}$Pb and $^{238}_{~92}$U, the parameters of the collective model are the same as those given in Refs. [@NE96; @YA01]. The same low-lying states and giant resonances are taken into account.
In addition, the contributions from the $4^-$ and $5^-$ giant resonances are also calculated in order to see the effects of higher multipoles neglected previously. The $B(EL)$ values are adjusted to the observed values for low-lying states and the $B(EL)$ are estimated through the energy-weighted sum rule for giant resonances. Tables I and II show the results for the $1s_{1/2}$ states in $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$, where the sum of the contributions from the three diagrams of Fig.1 is given for each multipole. The second and the third columns are the results including the transverse effects in the Feynman and Coulomb gauges, respectively. The values in the parentheses are the contributions from the seagull diagram. The NP energy shifts due to the ladder and crossed diagrams only are obtained by subtraction of the seagull contributions given in the parentheses. The fourth column gives the results of the present Coulomb nuclear polarization (CNP). The last two columns are the results of the previous calculations. ----------- ------- ----------- ------- ---------- ------- ------- ------- $0^+$ -3.3 (-0.2)   -3.3 (+0.0)   -3.3 -6.6 -3.3 $1^-$ -22.1 (-42.3)   -21.5 (-7.3)   -17.0 +16.3 -17.6 $2^+$ -5.8 (+0.3)   -5.8 (+0.6)   -5.8 -7.0 -5.8 $3^-$ -2.7 (+0.2)   -2.8 (+0.2)   -2.9 -2.9 -2.6 $4^+$ -1.0 (+0.1)   -1.0 (+0.1)   -1.1 $5^-$ -0.5 (+0.1)   -0.6 (+0.0)   -0.6 [total]{} -35.4 (-41.8)   -35.0 (-6.4)   -30.7 -0.2 -29.3 ----------- ------- ----------- ------- ---------- ------- ------- ------- : Nuclear-polarization correction (meV) to the $1s_{1/2}$ state of $^{208}_{~82}$Pb$^{81+}$. NP denotes the correction due to the whole of the Coulomb and transverse interactions; CNP the correction only due to the Coulomb interaction. Energy shifts in the parentheses are due to seagull contribution. 
----------- -------- ----------- -------- ---------- -------- -------- -------- $0^+$ -9.3 (-0.4)   -9.3 (+0.0)   -9.3 -21.5 -9.5 $1^-$ -54.3 (-65.7)   -52.5 (-3.9)   -41.6 -3.8 -42.4 $2^+$ -131.6 (+0.0)   -131.7 (+1.6)   -131.6 -148.2 -138.9 $3^-$ -6.5 (+0.3)   -6.5 (+0.4)   -6.7 -7.3 -6.8 $4^+$ -2.0 (+0.2)   -2.0 (+0.2)   -2.1 $5^-$ -1.0 (+0.1)   -1.0 (+0.1)   -1.1 [total]{} -204.7 (-65.5)   -203.0 (-1.6)   -192.4 -180.8 -197.6 ----------- -------- ----------- -------- ---------- -------- -------- -------- : Nuclear-polarization correction (meV) to the $1s_{1/2}$ state of $^{238}_{~92}$U$^{91+}$. The notations are the same as in Table I. The results with the collective model, as with the microscopic RPA model [@HHT02], also lead to large violations of gauge invariance if only the ladder and crossed diagram contributions are considered. The seagull corrections are considerable in the $1^-$ contributions for both $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$. Note that, in the point-nucleus limit, which is not unrealistic even for heavy hydrogenlike ions, the seagull correction occurs only in the dipole mode, which involves the current density $J_{10}(r)$. In $^{208}_{~82}$Pb$^{81+}$, the contributions from low-lying states are about 10% of the total results and the NP energy shift is mainly determined by the giant resonance contributions. The most dominant contribution comes from the giant dipole resonance, where a large violation of gauge invariance occurs if the seagull contributions in the parentheses are neglected: $- 22$ meV becomes $+ 20$ meV and $- 14$ meV in the Feynman and Coulomb gauges, respectively. Column 5 gives the previous results in the Feynman gauge without seagull contributions.
The differences between the two results in the Feynman gauge without seagull contribution come from the accuracy of numerical integration over the continuum threshold region of electron intermediate states and from the differences of the electron wave functions: here we have used wave functions in a finite charge distribution, while [@YA01] employs point Coulomb solutions. In $^{238}_{~92}$U$^{91+}$, the dominant contribution comes from the lowest excited state $2^+$ with a large $B(E2)$ value. Since the transition density of current in the present model given by (\[unicurrent\]) is proportional to the excitation energy, the transverse contribution of the lowest $2^+$ is negligible due to its exceptionally small excitation energy $\Delta E_2 = 44.9$ keV. Apart from this large Coulomb contribution, the contributions from other states show tendencies similar to those in $^{208}_{~82}$Pb$^{81+}$. Namely, the contributions from other low-lying states are small compared with the giant resonance contributions, and a large gauge violation occurs in the giant dipole resonance when the seagull contribution is omitted. To summarize, the transverse effects with the collective model are estimated gauge invariantly by inclusion of the seagull contribution. The gauge invariance is satisfied at the level of a few percent in both $^{208}_{~82}$Pb$^{81+}$ and $^{238}_{~92}$U$^{91+}$ for each of the multipoles separately. Without the seagull correction, the Feynman gauge in particular does not give reliable predictions of NP, although numerical calculation in this gauge is easier than in the Coulomb gauge. Hence the conclusion of [@YA01] on the transverse effects is no longer tenable. The NP energy shifts are $-35.0 (-35.4)$ meV in $^{208}_{~82}$Pb$^{81+}$ and $-203 (-205)$ meV in $^{238}_{~92}$U$^{91+}$ for the Coulomb (Feynman) gauge. The net transverse effect is about $14$–$15\%$ of the Coulomb energy shift of $-30.7$ meV in $^{208}_{~82}$Pb$^{81+}$.
This is similar to the conclusion of the microscopic model [@HHT02], and should be compared with the transverse effect of the $1s_{1/2}$ state in muonic $^{208}_{~82}$Pb, which is about 6% of the Coulomb contribution [@HHT02a]. The agreement between the two models demonstrates the stability of the predicted NP effects with respect to the choice of the nuclear model. The percentage of the transverse effect in the total shift in $^{238}_{~92}$U$^{91+}$ is reduced to about $6$% of the Coulomb effect due to the dominant Coulomb contribution from the lowest $2^+$ state. The authors wish to acknowledge Prof. Y. Tanaka for generous support and useful discussions during our research on NP effects. They are grateful to Drs. N. Yamanaka and A. Ichimura for collaboration on the NP effects with the collective model, which motivated the present work. [30]{} H. F. Beyer, G. Menzel, D. Liesen, A. Gallus, F. Bosch, R. Deslattes, P. Indelicato, Th. Stöhlker, O. Klepper, R. Moshammer, F. Nolden, H. Eickhoff, B. Franzke, and M. Steck, Z. Phys. D: At., Mol. Clusters [**35**]{}, 169 (1995). J. R. Sapirstein and D. R. Yennie, in [*Quantum Electrodynamics*]{}, edited by T. Kinoshita (World Scientific, Singapore 1990), p. 560. P. J. Mohr, G. Plunien and G. Soff, Phys. Rep. [**293**]{}, 227 (1998). G. Plunien, B. Müller, W. Greiner, and G. Soff, Phys. Rev. A [**39**]{}, 5428 (1989); [**43**]{}, 5853 (1991). G. Plunien and G. Soff, Phys. Rev. A [**51**]{}, 1119 (1995); [**53**]{}, 4614(E) (1996). A. V. Nefiodov, L. N. Labzowsky, G. Plunien, and G. Soff, Phys. Lett. A [**222**]{}, 227 (1996). N. Yamanaka, A. Haga, Y. Horikawa, and A. Ichimura, Phys. Rev. A [**63**]{}, 062502 (2001). A. Haga, Y. Horikawa, and Y. Tanaka, Phys. Rev. A [**65**]{}, 052509 (2002). A. Haga, Y. Horikawa, and Y. Tanaka, Phys. Rev. A [**66**]{}, 034501 (2002). A. Bohr, Kgl. Dan. Vidensk. Selsk. Mat. Fys. Medd. [**26**]{}, No. 14 (1952). L. J. Tassie, Austr. J. Phys. [**9**]{}, 407 (1956); H.
Überall, [*Electron Scattering from Complex Nuclei*]{} (Academic Press, New York and London, 1971), Part B, p. 573. T. deForest, Jr. and J. D. Walecka, Adv. Phys. [**15**]{}, 1 (1966). T. J. Deal and S. Fallieros, Phys. Rev. C [**7**]{}, 1709 (1973). A. Bohr and B. R. Mottelson, [*Nuclear Structure*]{} (Benjamin, New York, 1975), Vol. 2, p. 399. P. Ring and P. Schuck, [*The Nuclear Many-body Problem*]{} (Springer-Verlag, Berlin Heidelberg New York, 1980), p. 319.
--- abstract: 'A method based on crossed polarizers to observe high-contrast coherent population trapping (CPT) resonance has been developed. Since crossed polarizers form a simple optical system, our method is suitable for chip-scale atomic clocks (CSACs). The Faraday rotation in CPT under a linearly polarized (lin$\parallel$lin) light field was calculated using a model of two pairs of $\Lambda$ systems, and the spectrum of the Faraday rotation was estimated. On measuring the contrast and linewidth with the crossed-polarizer method, a comparison of the theoretical model and the experimental data showed that they were in good agreement. Moreover, the experimental results showed that a high-contrast (88.4%) and narrow-linewidth (1.15 kHz) resonance could be observed using a Cs gas cell and a D$_1$-line vertical-cavity surface-emitting laser (VCSEL).' author: - 'Yuichiro Yano and Shigeyoshi Goka [^1]' title: | High-contrast Coherent Population Trapping\ based on Crossed Polarizers Method --- coherent population trapping, chip-scale atomic clocks, Faraday effect, polarization selective method Introduction ============ Atomic clocks based on coherent population trapping (CPT) resonance with a vertical-cavity surface-emitting laser (VCSEL) have attracted attention as a means of fabricating very small atomic references, such as chip-scale atomic clocks (CSACs)[@Knappe]. CPT atomic clocks are in great demand for many applications, such as telecommunications, navigation systems, and synchronization of networks [@Vig]. Such atomic clocks are required to have high frequency stability, small volume, and low power consumption. In particular, short-term frequency stability is an important parameter.
Short-term frequency stability, described as the Allan standard deviation $\sigma_y(\tau)$, is estimated as $$\sigma _y (\tau) \propto \frac{\Delta f}{f_0}\frac{1}{\mbox{\it SNR}} \tau^{-1/2} \label{eq:short}$$ where $\Delta f$ is the resonance linewidth, $f_0$ is the resonance frequency, and $\mbox{\it SNR}$ is the signal-to-noise ratio. Contrast, which is used as a measure of $\mbox{\it SNR}$, is defined as the amplitude of the CPT resonance over the background signal level [@Lutwak]. Therefore, short-term stability is determined by the contrast and the linewidth. A high-contrast CPT resonance can easily be obtained in a high-intensity laser field; however, the resonance linewidth broadens as a result of the power broadening effect [[@Powerbroadening]]{}. It is difficult to enhance the contrast and sharpen the linewidth simultaneously. To resolve this issue, a number of methods have been developed (e.g. double-lambda CPT [[@Pulsed; @CPT]]{} and push-pull pumping [[@Push-pull]]{}). These methods increase the population of the clock transition by using a particular polarization of the incident laser beam to enhance the amplitude of the CPT resonance signal. For example, double-lambda CPT is generated by excitation with two lin$\perp$lin polarized lasers, and push-pull pumping excites atoms with laser light of orthogonal linear polarizations whose optical components are time-delayed. These schemes require several optical elements (e.g. beam splitters, mirrors, retarders) to generate the required polarization. Because the optical system is therefore complex and occupies a large volume, it is difficult to use these methods for CPT atomic clocks requiring small volumes, such as CSACs. In contrast, dispersion detection methods have been reported to decrease the background signal level[[@Magneto][@POP]]{}. These methods reduce the background signal by admitting to the photodetector only the light contributing to the resonance.
Recently, this approach was applied in the field of highly sensitive magnetometry[[@Magneto]]{}. It was also reported that decreasing the background signal level improved the Allan deviation by one order of magnitude in comparison with absorption detection in pulsed optically pumped atomic clocks[[@POP]]{}. The dispersion signal can be obtained by simply placing a polarizer or retarder in front of the photodetector along the optical axis. As dispersion detection requires only a simple optical system, it enables enhancements in the frequency stability of small-volume CPT atomic clocks. In this paper, we focus on the Faraday effect in CPT and propose a method based on crossed polarizers for observing high-contrast CPT. Firstly, we calculate the Faraday rotation in CPT under a linearly polarized (lin$\parallel$lin) light field using a model of two pairs of $\Lambda$ systems. The CPT lineshape can be estimated by calculating the Faraday rotation angle, and the behavior of both the resonance amplitude and the linewidth as functions of the magnetic field is estimated. Secondly, we show the results of an experiment using a $^{133}$Cs gas cell and a D$_1$-line VCSEL. The contrast and linewidth were measured by varying the magnetic field and the relative transmission angle of the polarizers. A comparison of the experimental and calculated results showed that they had the same tendency, and the difference between experiment and calculation was no more than 20%. Finally, on the basis of the measurement and calculation data, we determine the optimal conditions for enhancing the short-term stability of CPT atomic clocks.
![(a) Excitation scheme with a lin$\parallel$lin field on the $\rm D_1$-line of Cs: the solid line is labeled $\Lambda _a$, and the dotted line is labeled $\Lambda_b$. (b) Closed $\Lambda$-type three-level model used to calculate the CPT phenomenon: $\delta_p$ and $\delta_c$ are detunings from the ground states, $\Omega_p$ and $\Omega_c$ are Rabi frequencies, $\Gamma_{31}$ and $\Gamma_{32}$ are relaxations between the excited state and the two ground states, $\gamma_s$ is the relaxation term of the ground states, and $\gamma_f$ is the decoherence rate.[]{data-label="fig:scheme"}](fig4_1.eps) Theory ====== Magneto optical effect under CPT resonance ------------------------------------------ ![Absorption and refractive indices of the four light waves as functions of the normalized detuning; the two resonances are separated by $2g_I\mu_B B/h = 2\gamma$.[]{data-label="fig:spectrum_abs_ref"}](fig4_2.eps) Faraday rotation is a magneto-optical phenomenon, that is, an interaction between light and a magnetic field in a medium. Resonant Faraday rotation is known as the Macaluso-Corbino effect [@Budker]. Though the Faraday rotation in CPT is classified as such an effect, a way to calculate it in CPT has not been reported yet. In this section, we describe the Faraday rotation in CPT in a lin$\parallel$lin light field. Figure \[fig:scheme\] (a) shows the excitation scheme using a lin$\parallel$lin light field on the $^{133}$Cs-D$_1$ line. In the CPT phenomenon under a lin$\parallel$lin light field, two schemes can be formed simultaneously from two pairs of ground-state hyperfine sublevels: $|F=3,m_F=1\rangle$, $|F=4,m_F=-1\rangle$, labeled $\Lambda_a$, and $|F=3,m_F=-1\rangle$, $|F=4,m_F=1\rangle$, labeled $\Lambda_b$, each coupled with the common excited states $|F'=3,m_F=0\rangle$ or $|F'=4,m_F=0\rangle$ [@Zibrov]. Therefore, four light waves, resulting from the combinations of the two $\Lambda$ schemes and the two circular polarizations, contribute to the CPT phenomenon under a lin$\parallel$lin light field. Figure \[fig:scheme\] (b) shows a simple $\Lambda$-type three-level system of the $\Lambda_a$ scheme.
Here, the energy eigenstates $|1\rangle$ and $|2\rangle$ correspond to the two ground states $|F=3,m=-1\rangle $ and $|F=4,m=1\rangle$, and the excited state $|3\rangle$ corresponds to $|F'=3,m=0\rangle$ or $|F'=4,m=0\rangle$. Assuming equal Rabi frequencies $\Omega = \Omega_p = \Omega_c$ and decay rates $\Gamma_{31} =\Gamma_{32}$, the line shape of the CPT resonance is Lorentzian. In the presence of a magnetic field (C-axis direction), the two resonances shift in opposite frequency directions because of the Zeeman effect. The Zeeman shift of the two pairs in a weak magnetic field ($<$ 50 mT) is as follows: $$f_{a,b} = f_0 \pm \frac{2g_{I}\mu_{B}}{h}B+\frac{15g_{J}^{2}\mu_{B}^{2}}{32f_{0}h^2}B^2 \label{eq:zeeman-shift}$$ where $B$ is the magnetic field, $f_0$ is the hyperfine splitting frequency of the ground states in the absence of the magnetic field, $g_I$ is the nuclear $g$-factor, $g_J$ is the Landé $g$-factor, $\mu_B$ is the Bohr magneton, and $h$ is Planck’s constant.

  $\Lambda$     $\sigma$     Absorption index $\alpha$                                                              Refractive index $n$
  ------------- ------------ -------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------
  $\Lambda_a$   $\sigma^+$   $\chi_0\Bigl(1-\cfrac{\gamma^2}{(\delta-\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}\Bigr)$    $-\cfrac{\chi_0\gamma(\delta-\frac{2g_I\mu_B}{\hbar}B)}{(\delta-\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}$
  $\Lambda_a$   $\sigma^-$   $\chi_0\Bigl(1-\cfrac{\gamma^2}{(\delta-\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}\Bigr)$    $~~\cfrac{\chi_0\gamma(\delta-\frac{2g_I\mu_B}{\hbar}B)}{(\delta-\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}$
  $\Lambda_b$   $\sigma^+$   $\chi_0\Bigl(1-\cfrac{\gamma^2}{(\delta+\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}\Bigr)$    $~~\cfrac{\chi_0\gamma(\delta+\frac{2g_I\mu_B}{\hbar}B)}{(\delta+\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}$
  $\Lambda_b$   $\sigma^-$   $\chi_0\Bigl(1-\cfrac{\gamma^2}{(\delta+\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}\Bigr)$    $-\cfrac{\chi_0\gamma(\delta+\frac{2g_I\mu_B}{\hbar}B)}{(\delta+\frac{2g_I\mu_B}{\hbar}B)^2+\gamma^2}$

  : Absorption and refractive indices of the four light waves[]{data-label="tab:abs_and_ref"}

The absorption index $\alpha$ and refractive indices $n_+$, $n_-$ of the four light waves as functions of the normalized frequency detuning $d$ (= $\delta/\gamma$) are shown in Fig. [\[fig:spectrum\_abs\_ref\]]{}. All the absorption and refractive index functions are listed in Table [\[tab:abs\_and\_ref\]]{}, where $\chi_0$ represents the amplitude of the linear susceptibility, $\delta$ the frequency detuning of the ground state, and $\gamma$ the resonance half width at half maximum. The absorption index is zero at the center of the CPT resonance because the atoms do not interact with the light when they fall into the dark state. The absorptions of $\sigma^-$ and $\sigma^+$ vanish simultaneously under CPT resonance; therefore, the absorption of left-circularly polarized light $\alpha_+$ equals that of right-circularly polarized light $\alpha_-$. From the Kramers-Kronig relations, the refractive index is an odd function of the frequency detuning because the absorption index is an even function. If $\delta_p$ and $\delta_c$ are both much less than $\gamma_f$, we obtain $\delta_p=-\delta_c =\delta/2$. Because $\delta_p$ and $\delta_c$ have opposite signs and the dispersion spectra are odd functions of the detuning $\delta$, the refractive indices $n_+$ and $n_-$ have opposite signs ($n_+=-n_-$). The refractive indices for the two schemes, $n_{\Lambda_a}$ and $n_{\Lambda_b}$, also have opposite signs ($n_{\Lambda_a}=-n_{\Lambda_b}$), because the roles of the circularly polarized components $\sigma^+$ and $\sigma^-$ are exchanged between the $\Lambda_a$ and $\Lambda_b$ schemes.
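As a quick numerical check of the entries in Table \[tab:abs\_and\_ref\] (a sketch of our own in normalized units, $\chi_0 = \gamma = 1$; the function names are not from the paper), the following Python snippet verifies that within one $\Lambda$ scheme the two circular components are absorbed equally (no circular dichroism) while their refractive indices are exactly opposite, so only a pure polarization rotation survives:

```python
def alpha(x):
    """Absorption index of one circular component (dark-state dip), chi0 = gamma = 1."""
    return 1.0 - 1.0 / (x * x + 1.0)

def n_pm(x, sgn):
    """Refractive index; sgn = -1 for sigma+ and +1 for sigma- in the Lambda_a scheme."""
    return sgn * x / (x * x + 1.0)

b = 0.5                                  # normalized Zeeman shift (assumed value)
for d in (-1.0, 0.0, 0.3, 2.0):          # normalized detunings d = delta/gamma
    x = d - b                            # Lambda_a: both components see delta minus the Zeeman shift
    # per the table, sigma+ and sigma- share the same absorption profile (no dichroism),
    # while their dispersion profiles are opposite in sign (pure Faraday rotation)
    assert abs(n_pm(x, -1) + n_pm(x, +1)) < 1e-12
assert alpha(0.0) == 0.0                 # full transparency at the dark-state center
```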
Crossed Polarizers method ------------------------- ![Schematic layout of crossed polarizers method.[]{data-label="fig:schematic-cp"}](fig4_16.eps){width="3in"} The short-term stability of laser-pumped vapor-cell atomic clocks is often limited by a combination of light-source AM noise and FM-AM conversion noise on the atomic absorption [@Kitching]. The AM noise is caused by the power fluctuation of the light source, and the FM-AM conversion noise is caused by the effect that laser frequency fluctuations have on the absorption [@Kitching]. These noises are known to be proportional to the background signal level (DC level) at the photodetector [@Lutwak]. Therefore, to enhance the short-term stability of CPT atomic clocks, it is necessary to reduce the DC level. The crossed polarizers method is a way of measuring the birefringence of an optical medium. It has high sensitivity for birefringence detection because it suppresses the background signal level [@Dick]. The schematic optical layout is shown in Fig. \[fig:schematic-cp\]. Two linear polarizers are placed on either side of the gas cell, and their transmission axes are set nearly orthogonal to each other. The first polarizer and the second polarizer are called the polarizer and analyzer, respectively. The angle between the transmission axes of the polarizer and analyzer is defined as the relative angle $\theta$; in this paper, $\theta$ is defined as zero when the polarizers are orthogonal to each other. Let the Faraday rotation angle induced by the gas cell be denoted $\phi$; the transmitted light $I_s$ can then be written as $$\begin{aligned} \frac{I_s}{I_0}=&\frac{1}{4}(e^{-2\pi\alpha_+ l/\lambda}-e^{-2\pi\alpha_- l/\lambda})^2\\ &+e^{-2\pi(\alpha_+ +\alpha_-) l /\lambda}\sin^2 (\theta + \phi ), \qquad \phi = \pi(n_+ -n_-)\frac{l}{\lambda}, \end{aligned} \label{eq:dichroism_birefringence}$$ where $I_0$ is the incident light intensity, $l$ the sample length, and $\lambda$ the wavelength of the light[[@Budker2]]{}.
In general, the two terms in Eq. ([\[eq:dichroism\_birefringence\]]{}) give comparable contributions to the forward-scattering signal. The first term refers to the differential absorption of the $\sigma^+$ and $\sigma^-$ components (circular dichroism) and the second term to the differential dispersion. Because $\alpha_+$ equals $\alpha_-$ at CPT resonance under a lin$\parallel$lin field, there is no difference in absorption between $\sigma^+$ and $\sigma^-$. Therefore, the dichroism term vanishes at CPT resonance. Moreover, because the absorption coefficient $\chi_0 l / \lambda$ is small, taking into account that the conventional resonance contrast is no more than 10% [[@contrast]]{}, we can assume that $e^{-2\pi(\alpha_+ +\alpha_-) l /\lambda}\approx 1$. Eq. ([\[eq:dichroism\_birefringence\]]{}) can then be rewritten as $$\begin{aligned} I_s=I_0\sin^2 (\theta + \phi ). \end{aligned}$$ The refractive index under CPT resonance behaves as in the resonant Faraday effect[@Budker]. The Faraday rotation angle $\phi$ is written as $$\begin{aligned} &\phi=\pi(n_+ - n_-)\frac{l}{\lambda}\\ &= \frac{\pi\chi_0l}{\lambda}\Bigl(\frac{(\delta-\frac{2g_I\mu_B B}{\hbar})\gamma}{(\delta-\frac{2g_I\mu_B B}{\hbar})^2+\gamma^2}-\frac{(\delta+\frac{2g_I\mu_B B}{\hbar})\gamma}{(\delta+\frac{2g_I\mu_B B}{\hbar})^2+\gamma^2}\Bigr), \end{aligned} \label{eq:faraday_rotation}$$ and by using the normalized detuning $d \equiv \delta/\gamma$ and normalized Zeeman shift $b \equiv (2g_I\mu_B B/\hbar)/\gamma$, Eq. (\[eq:faraday\_rotation\]) can be simplified to $$\phi = \frac{\pi\chi_0 l}{\lambda}\frac{2 b(1+b^2-d^2)}{(1+b^2+d^2)^2-4b^2d^2}. \label{eq:n-faraday-cpt}$$ CPT resonance characteristics with crossed polarizers method ------------------------------------------------------------ The polarization of the wavelength components contributing to CPT is rotated while passing through the cell, whereas the polarization of the wavelength components not contributing to CPT is not rotated.
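The closed form Eq. (\[eq:n-faraday-cpt\]) can be checked numerically. The sketch below is our own illustration (the prefactor $\pi\chi_0 l/\lambda$ is set to 1, and $b = 0.48$ is the normalized field used later in the experiment); it confirms that the rotation angle is even in the detuning and changes sign only at $d^2 = 1 + b^2$:

```python
import math

def phi(d, b):
    """Normalized rotation angle, Eq. (n-faraday-cpt), with pi*chi0*l/lambda = 1."""
    num = 2.0 * b * (1.0 + b * b - d * d)
    den = (1.0 + b * b + d * d) ** 2 - 4.0 * b * b * d * d
    return num / den

b = 0.48                                # normalized Zeeman shift of the later experiment
assert phi(0.3, b) == phi(-0.3, b)      # even function of the detuning d
d0 = math.sqrt(1.0 + b * b)
assert abs(phi(d0, b)) < 1e-12          # sign change only at d^2 = 1 + b^2
assert phi(0.0, b) > 0.0                # nonzero rotation at line center
```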
Therefore, the total transmitted light intensity $I$ can be expressed as $$\begin{aligned} I&=I_s + I_{nc} \sin^2(\theta)\\ &=I_c\sin^2 (\theta + \phi)+I_{nc} \sin^2(\theta) \end{aligned} \label{eq:i_trans}$$ where $I_c$ is the sum of the light intensities contributing to CPT, and $I_{nc}$ is the sum of the light intensities not contributing to CPT. Taking into account that the conventional resonance contrast is no more than 10%, the relation between $I_c$ and $I_{nc}$ is $I_c < I_{nc}$. Therefore, the DC level is dominated by the relative angle $\theta$. When the relative angle $\theta$ is set larger than the phase shift $\phi$, which enables us to ignore the phase shift $\phi$, the DC level $I_{dc}$ is given as $$I_\mathrm{\it dc} \approx I_c \sin^2(\theta) + I_{nc} \sin^2(\theta), \label{eq:i_dc}$$ and the resonance amplitude $I_s$ is $$I_s \approx \frac{\partial I}{\partial \phi}\Big|_{\phi=0} \phi = I_c \sin(2\theta)\phi. \label{eq:i_s}$$ From Eq. (\[eq:i\_s\]), the resonance amplitude is maximized by setting the relative angle $\theta$ to $\pi/4$. However, since the DC level increases with $\theta$ (from Eq. (\[eq:i\_dc\])), the highest-contrast resonance is obtained around $\theta \approx 0$. When the relative angle $\theta$ is close to zero, the approximation that leads to Eq. (\[eq:i\_s\]) cannot be made because it relies on $\theta$ being much larger than $\phi$. Instead, from Eq. (\[eq:i\_trans\]) and using a small-rotation approximation, the resonance amplitude $I_s$ is given as $$I_s \simeq I_c \phi^2. \label{eq:i_s2}$$ ![Line shape of resonance with crossed polarizers calculated using Eq. (\[eq:n-faraday-cpt\]) and Eq. (\[eq:i\_s2\]).[]{data-label="fig:lineshape-crossed polarizer"}](fig4_3.eps) Next, we calculate the spectral characteristics of the transmitted light. Figure \[fig:lineshape-crossed polarizer\] shows the resonance lineshape calculated from Eq. (\[eq:n-faraday-cpt\]) and Eq. (\[eq:i\_s2\]) when $\theta$ is set so as to minimize the DC level.
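The trade-off between Eq. (\[eq:i\_dc\]) and Eq. (\[eq:i\_s\]) can be made concrete with a small numerical sketch (toy values of our own: $I_c = 1$, $I_{nc} = 9$, $\phi = 0.02$ rad). Using the exact transmitted intensity of Eq. (\[eq:i\_trans\]), the resonance signal peaks at $\theta = \pi/4 - \phi/2$, while the contrast keeps growing as $\theta \to 0$:

```python
import math

I_c, I_nc, phi = 1.0, 9.0, 0.02    # assumed toy intensities and rotation angle

def signal(theta):
    """On-resonance minus off-resonance (phi = 0) transmitted intensity, Eq. (i_trans)."""
    return I_c * (math.sin(theta + phi) ** 2 - math.sin(theta) ** 2)

def dc(theta):
    """Background (DC) level, Eq. (i_dc)."""
    return (I_c + I_nc) * math.sin(theta) ** 2

def contrast(theta):
    s = signal(theta)
    return s / (s + dc(theta))

thetas = [i * 1e-3 for i in range(1, 1571)]   # theta = 0.001 .. 1.570 rad
best_s = max(thetas, key=signal)
# identity: sin^2(t+phi) - sin^2(t) = sin(phi) sin(2t+phi), maximal at t = pi/4 - phi/2
assert abs(best_s - (math.pi / 4 - phi / 2)) < 1e-3
# the contrast decreases monotonically with theta, so it is highest near theta = 0
assert contrast(0.001) > contrast(0.1) > contrast(0.5)
```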
It is clear that both the resonance amplitude and the linewidth increase with increasing normalized magnetic field $b$. The resonance amplitude $I_{pp}$, which is defined as the maximum change in $I_s$ as a function of detuning, can be obtained from Eq. (\[eq:n-faraday-cpt\]) and Eq. (\[eq:i\_s2\]): $$I_{pp} = I_c\frac{\pi^2 l^2\chi_0^2}{\lambda^2}\Big(\frac{(b^2+\sqrt{b^2+1}-1)(\sqrt{b^2+1}+2)}{2b(b^2+1)}\Big)^2 \label{eq:i_pp}$$ ![Resonance amplitude as a function of normalized magnetic field $b$ calculated using Eq. (\[eq:i\_pp\]).[]{data-label="fig:signal-magnetic"}](fig4_4.eps) Figure \[fig:signal-magnetic\] plots the resonance amplitude as a function of the magnetic field using Eq. (\[eq:i\_pp\]). In a weak magnetic field ($b \leq 0.2$), the resonance amplitude is small because the Faraday rotation is small. In the range $0.2< b \leq 1.16$, the resonance amplitude increases significantly with increasing magnetic field. The maximum resonance amplitude is obtained at $b= 1.16$. For magnetic fields $b$ above 1.16, the resonance amplitude tends to decrease because the two resonances separate from each other. ![Linewidth as a function of normalized magnetic field $b$ calculated using Eq. (\[eq:gamma\_cp\]).[]{data-label="fig:linewidth-magnetic"}](fig4_5.eps)

  $a_n$   $a_0$                   $a_1$                   $a_2$                   $a_3$
  ------- ----------------------- ----------------------- ----------------------- -----------------------
  Value   3.681$\cdot$10$^{-1}$   3.988$\cdot$10$^{-1}$   1.644$\cdot$10$^{-2}$   3.706$\cdot$10$^{-4}$

  : $a_n$ values[]{data-label="tab:an"}

Figure \[fig:linewidth-magnetic\] shows the linewidth with the crossed polarizers method, $\gamma_{cp}$, normalized by the conventional linewidth $\gamma$, as a function of the normalized magnetic field strength.
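The stated peak position can be checked by a direct grid search of Eq. (\[eq:i\_pp\]) (a sketch of our own, with the prefactor $I_c \pi^2 l^2 \chi_0^2/\lambda^2$ set to 1):

```python
import math

def i_pp(b):
    """Relative resonance amplitude, Eq. (i_pp), prefactor set to 1."""
    s = math.sqrt(b * b + 1.0)
    return ((b * b + s - 1.0) * (s + 2.0) / (2.0 * b * (b * b + 1.0))) ** 2

bs = [i * 1e-3 for i in range(1, 2001)]   # normalized field b = 0.001 .. 2.000
b_amp = max(bs, key=i_pp)
assert abs(b_amp - 1.16) < 0.02           # maximum amplitude near b = 1.16
assert i_pp(2.0) < i_pp(b_amp)            # amplitude decreases again at large b
```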
The polynomial approximation of the normalized linewidth $\gamma_{cp}/\gamma $ is $$\gamma_{cp}/\gamma \approx \sum_{n=0}^{3}a_n b^{2n} \label{eq:gamma_cp}$$ where the $a_n$ are the constant values shown in Table \[tab:an\]. The linewidth with the proposed method increases with increasing magnetic field. Moreover, since the linewidth is a sum of even functions and has a positive second derivative, it has a minimum value in the absence of the magnetic field. The minimum linewidth is 36.8% of the conventional one. The linewidth of the proposed method is equal to that of the conventional method at $b = 1.27$. From Fig. \[fig:signal-magnetic\] and Fig. \[fig:linewidth-magnetic\], we can obtain the relation between $b|_{I_{pp}=\max}$ and $b|_{\gamma_{cp}/\gamma=1}$: $$b|_{I_{pp}=\max}<b|_{\gamma_{cp}/\gamma=1} \label{eq:b-inequality}$$ Thus, the linewidth of the proposed method is narrower than that of the conventional method in the magnetic field range from 0 to $b|_{I_{pp}=\max}$. Experimental setup ================== ![Schematic diagram of experimental setup.[]{data-label="fig:setup"}](fig4_6.eps) ![Absorption profile of Cs-D$_1$ line using VCSEL modulated at 4.6 GHz: center frequency 335.1 THz (= 894.6 nm).[]{data-label="fig:absorption"}](fig4_7.eps) Figure \[fig:setup\] shows the experimental setup of the proposed observation method. The two polarizers were near-infrared sheet polarizers. A parallel linear beam (lin$\parallel$lin light field) was incident on the gas cell. The analyzer selected the optical polarization of the wavelength components incident on the photodiode. The photodiode signal was connected to the oscilloscope. A single-mode VCSEL fabricated by Ricoh Company, Ltd. was used as the laser source. The 894.6 nm wavelength of the VCSEL excites $^{133}$Cs at the D$_1$-line.
The VCSEL was driven by a DC injection current using a bias T and was modulated at 4.6 GHz using an analog signal generator to generate the first-order sidebands around the laser carrier. The absorption profile of the Cs-D$_1$ line using the VCSEL modulated at 4.6 GHz is shown in Fig. \[fig:absorption\]. Since the incident light contains first-order and higher-order sidebands, the plot shows several absorption lines. The two center absorption lines are excited by the first-order sidebands. Moreover, the minimum and second-minimum peaks correspond to the two excited levels $|F'=4\rangle$ and $|F'=3\rangle$. The frequency difference between the two excited states is wide enough that we can select either $|F'=3\rangle$ or $|F'=4\rangle$ as the absorption line. In this experiment, we selected $|F'=3\rangle$ as the excited state to stabilize the wavelength of the VCSEL. A Pyrex gas cell containing a mixture of $^{133}$Cs atoms and Ne buffer gas at a pressure of 4.0 kPa was used. The gas cell was cylindrical, with a diameter of 20 mm and an optical length of 22.5 mm. The gas cell temperature was maintained at 42.0 $^\circ$C. The gas cell and Helmholtz coil were covered with a magnetic shield to prevent an external magnetic field from affecting the magnetic field inside the cell. The internal magnetic field of the gas cell was created by the Helmholtz coil and was calibrated using the frequency difference between the magnetic-field-sensitive and -insensitive transitions under $\rm \sigma$-$\rm \sigma$ excitation. The axis of the magnetic field was set parallel to the direction of the laser light (C-axis direction). Experimental Results and Discussion =================================== Line shape of CPT resonance --------------------------- ![Line shape of CPT resonance with crossed polarizers method. The line shape was obtained from one averaged scan. The applied static magnetic field was 93 $\mu$T (0.48 in normalized magnetic field).
The incident light intensity was 1.1 mW/cm$^2$. The conventional resonance contrast and linewidth, obtained by using a neutral density filter in place of the analyzer, were respectively 3.3% and 2.15 kHz under a static magnetic field of 5.0 $\mu$T.[]{data-label="fig:lineshape"}](fig4_8.eps) Figure \[fig:lineshape\] shows the observed CPT resonance with the crossed polarizers method. A good reduction in the DC level was achieved because the transmission axis of the analyzer was optimized. Considering the transmitted light intensity, we estimate that the peak rotation angle is a few tens of milliradians. Since the signal was greater than the DC level, the conventional contrast, which is simply defined as the signal over the DC level, exceeded 100%. In this paper, contrast is therefore defined so as not to exceed 100%, as follows: $$\mbox{Contrast} := \frac{\mbox{Signal}}{\mbox{Signal} + \mbox{DC level}} \label{eq:fhfs}$$ Although the DC level was suppressed with the crossed polarizers method, weak light leakage was picked up by the photodetector. Since the leakage could not be reduced below the DC level by varying the relative angle $\theta$, the leakage depended on the extinction ratios of the polarizers. Owing to the DC level reduction, the proposed method yielded a contrast of 88.4% under a static magnetic field of 93 $\mu$T. Since the conventional contrast was 3.3% under a static magnetic field of 5.0 $\mu$T when using the neutral density filter instead of the analyzer, the crossed polarizers method obtained about 25 times better contrast than the conventional method did. In addition, the linewidth obtained with the crossed polarizers method was 1.15 kHz, which is about half the conventional method’s value of 2.15 kHz. This result means that the resonance with the crossed polarizers method has not only a higher contrast but also a narrower linewidth.
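Plugging the measured numbers into the scaling of Eq. (\[eq:short\]) gives a rough estimate of the stability gain. This back-of-the-envelope sketch is our own and assumes that the SNR is proportional to the contrast:

```python
f0 = 9.192631770e9                      # Cs clock transition frequency (Hz)

def quality(contrast, linewidth_hz):
    """(f0 / delta_f) * contrast; sigma_y(tau) of Eq. (short) scales as its inverse."""
    return f0 / linewidth_hz * contrast

conventional = quality(0.033, 2150.0)   # 3.3% contrast, 2.15 kHz linewidth
crossed      = quality(0.884, 1150.0)   # 88.4% contrast, 1.15 kHz linewidth
gain = crossed / conventional
assert 45.0 < gain < 55.0               # roughly a fifty-fold improvement at this operating point
```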
Contrast as a function of the relative angle $\theta$ ----------------------------------------------------- ![DC level as a function of the relative angle $\theta$: the square dots are the measured data, and the solid line is the fitting curve calculated from Eq. (\[eq:i\_dc\]). The magnetic field was set to 93 $\mu$T.[]{data-label="fig:DClevel trans angle"}](fig4_9.eps) ![Signal as a function of the relative angle $\theta$: the square dots are the measured data, and the solid line is the fitting curve calculated from Eq. (\[eq:i\_s\]). The magnetic field was set to 93 $\mu$T.[]{data-label="fig:Signal trans angle"}](fig4_10.eps) ![Contrast as a function of the relative angle $\theta$.[]{data-label="fig:contrast trans angle"}](fig4_11.eps) Figure \[fig:DClevel trans angle\] shows the DC level as a function of the relative angle $\theta$. The square dots are the measured data, and the solid line is the fitting curve calculated from Eq. (\[eq:i\_dc\]). The relative angle $\theta$ giving the minimum DC level is shifted from 0$^\circ$. This shift is caused by misalignment between the scale of the polarizer’s mount and the transmission axes of the polarizers. The extinction ratio of the polarizers was estimated to be about 40 dB. Figure \[fig:Signal trans angle\] shows the signal of the resonance as a function of the relative angle $\theta$. The square dots are the measured data, and the solid line is the fitting curve calculated from Eq. (\[eq:i\_s\]). The calculated curve is in good agreement with the experimental data. By comparing the signal and the DC level, it can be seen that the behavior of the signal differs from that of the DC level. This confirms that the polarization of the wavelength components contributing to CPT is rotated. Figure \[fig:contrast trans angle\] shows the measured contrast as a function of the relative angle $\theta$.
Since the signal remains high despite the DC level reduction, the contrast significantly increased near 0$^\circ$. A resonance contrast over 10% was obtained in the range from -15 to 5$^\circ$. Characteristics as a function of magnetic field ----------------------------------------------- ![Signal and DC level as a function of magnetic field: the squares and triangles are the measured signal and DC level, respectively. The solid line is a fit of Eq. (\[eq:i\_pp\]): resonance linewidth $\gamma$ = 1.08 kHz.[]{data-label="fig:SD magnetic field"}](fig4_12.eps) ![Contrast as a function of magnetic field estimated from Fig. \[fig:SD magnetic field\].[]{data-label="fig:contrast magnetic field"}](fig4_13.eps) ![Linewidth as a function of magnetic field: the solid line is calculated from Eq. (\[eq:gamma\_cp\]): resonance linewidth $\gamma$ = 1.08 kHz.[]{data-label="fig:linewidth magnetic field"}](fig4_14.eps) ![Figure of merit as a function of magnetic field. The figure of merit is normalized by that of the conventional resonance.[]{data-label="fig:figure of merit magnetic field"}](fig4_15.eps) Figure \[fig:SD magnetic field\] shows the signal and DC level as a function of magnetic field. The relative angle $\theta$ was optimized in order to maximize the contrast. In weak magnetic fields ($<$ 15 $\mu$T), the signal was so small that we could not observe the CPT resonance, and the maximum magnetic field (93 $\mu$T) was limited by the output of the current source supplying the Helmholtz coil. The signal increased linearly with increasing magnetic field. On the other hand, the DC level was constant regardless of the change in the magnetic field. These results are evidence that the Faraday rotation affected only the wavelength components contributing to CPT. The solid line is a fitting curve of Eq. (\[eq:i\_pp\]). The experimental signal has the same tendency as the theoretical curve. Figure \[fig:contrast magnetic field\] shows the contrast estimated from the measurement results in Fig.
\[fig:SD magnetic field\]. The contrast increased with increasing magnetic field. We expect that nearly 100% contrast can be obtained in a larger magnetic field: because the DC level is independent of the magnetic field, the resonance contrast has not yet reached its peak value in this range. From Eq. (\[eq:i\_pp\]), it is estimated that the maximum contrast of 94.0% can be obtained at 224 $\mu$T. The linewidth as a function of the magnetic field is shown in Fig. \[fig:linewidth magnetic field\]. The linewidth broadened with increasing magnetic field and was approximately proportional to it. The linewidth obtained with the crossed polarizers method is less than the conventional excitation method's linewidth of 2.15 kHz over this range of magnetic field. At a magnetic field of 15 $\mu$T, the narrowest linewidth obtained was 760 Hz, which is about one-third of the conventional value. The solid line in Fig. \[fig:linewidth magnetic field\] is calculated from Eq. (\[eq:gamma\_cp\]). The difference between the measurement and the calculation is no more than 20%. Figure \[fig:figure of merit magnetic field\] shows the figure of merit (FoM) as a function of magnetic field. Since the short-term stability is determined by the contrast and linewidth from Eq. (\[eq:short\]), the FoM is defined as follows: $$\mbox{FoM} := \frac{f_0}{\Delta f}\cdot \mbox{Contrast} \label{eq:fom}$$ In small magnetic fields ($<$ 40 $\mu$T), the FoM increased because the increase in contrast was dominant. However, in large magnetic fields ($>$ 60 $\mu$T), the FoM decreased with the broadening linewidth. This shows that the FoM of the CPT resonance has a peak value with respect to the magnetic field. In this experiment, the maximum value of the FoM was obtained at 55 $\mu$T, and this value is 58 times better than the conventional one.
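The existence of an interior optimum can also be seen in the theoretical model. The toy sketch below is our own: it assumes the contrast is proportional to the amplitude of Eq. (\[eq:i\_pp\]) and the linewidth follows the fit Eq. (\[eq:gamma\_cp\]), and it neglects the contrast saturation seen in the experiment, so its optimum need not coincide with the measured 55 $\mu$T. It nonetheless shows that the ratio of amplitude to linewidth peaks at an intermediate normalized field:

```python
import math

A = [0.3681, 0.3988, 0.01644, 0.0003706]   # a_n from Table tab:an

def i_pp(b):
    """Relative resonance amplitude, Eq. (i_pp), prefactor set to 1."""
    s = math.sqrt(b * b + 1.0)
    return ((b * b + s - 1.0) * (s + 2.0) / (2.0 * b * (b * b + 1.0))) ** 2

def gamma_ratio(b):
    """Normalized linewidth gamma_cp / gamma, polynomial fit Eq. (gamma_cp)."""
    return sum(a * b ** (2 * n) for n, a in enumerate(A))

def fom(b):
    """Toy figure of merit: amplitude over linewidth, normalized units."""
    return i_pp(b) / gamma_ratio(b)

bs = [i * 1e-3 for i in range(1, 2001)]
assert abs(gamma_ratio(0.0) - 0.3681) < 1e-12   # minimum linewidth: 36.8% of conventional
b_opt = max(bs, key=fom)
assert 0.6 < b_opt < 0.9                        # interior optimum of the toy model
assert fom(b_opt) > fom(0.1) and fom(b_opt) > fom(2.0)
```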
Firstly, we calculated the Faraday rotation in CPT under a lin$\parallel$lin light field by using a model of two pairs of $\Lambda$ systems. The calculated results indicated that the resonance amplitude has a peak value with respect to the magnetic field, and the resonance linewidth increases with increasing magnetic field. The minimum linewidth obtained with the crossed polarizers method, in the absence of the magnetic field, is 36.8% that of the conventional method. Secondly, we measured the resonance characteristics with the crossed polarizers method using a $^{133}$Cs gas cell and a D$_1$-line VCSEL. It was shown that the background signal level at the photodetector is suppressed by the crossed polarizers method and that a high-contrast resonance can be obtained. With the relative angle $\theta$ and the magnetic field optimized, the observed resonance had a contrast of 88.4% and a linewidth of 1.15 kHz. The measurement data were in good agreement with the theoretical data, and the difference between experiment and theory was no more than 20%. Finally, we investigated the optimal conditions for enhancing the short-term stability. The figure of merit has a peak value with respect to the magnetic field. By optimizing the relative angle $\theta$ of the analyzer and the magnetic field, the figure of merit was made 58 times better than the conventional one. Since a high-contrast and narrow-linewidth resonance can be obtained with such a simple optical system, the crossed polarizers method is an attractive means of enhancing the frequency stability of CPT atomic clocks. Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to Ricoh Company, Ltd. for providing us with the Cs-D$_1$ VCSEL. [1]{} S. Knappe, V. Shah, P. D. Schwindt, L. Hollberg, J. Kitching, L. Liew, and J. Moreland, “A microfabricated atomic clock”, [*Appl. Phys. Lett.*]{}, vol. 85, no. 9, pp. 1460-1462, Aug. 2004. J.
Vig, “Military Applications of High Accuracy Frequency Standards and Clocks”, [*IEEE Trans. Ultrason. Ferroelectr. Freq. Control*]{}, vol. 40, no. 5, pp.522-527, Sep. 1993. R. Lutwak, D. Emmons, T. English and W. Riley, “The chip-scale atomic clock-recent development progress”, in [*Proc. 35th Annu. Precise Time and Time Interval Systems and Applications Meeting*]{}, pp.467-478, 2004. S. Knappe, R. Wynands, J. Kitching, H. G. Robinson, and L. Hollberg, “Characterization of coherent population-trapping resonances as atomic frequency references”, [*J. Opt. Soc. Am. B*]{}, vol. 18, no. 11, pp.1545-1553, Nov. 2001. T. Zanon, S. Guerandel, E. de Clercq, D. Holleville, N. Dimarcq, and A. Clairon, “High Contrast Ramsey Fringes with Coherent-Population-Trapping Pulses in a Double Lambda Atomic System”, [*Phys. Rev. Lett.*]{}, vol. 94, art. no. 193002, May 2005. Y.-Y. Jau, E. Miron, A. B. Post, N. N. Kuzma, and W. Happer, “Push-Pull Optical Pumping of Pure Superposition States”, [*Phys. Rev. Lett.*]{}, vol. 93, art. no. 160802, Oct. 2004. S. Pradhan, R. Behera and A. K. Das, “Polarization rotation under two-photon Raman resonance for magnetometry”, [*Appl. Phys. Lett.*]{}, vol. 100, art. no. 1783502, 2012. J. Lin, J. Deng, Y. Ma, H. He and Y. Wang, “Detection of ultrahigh resonance contrast in vapor-cell atomic clocks”, [*Opt. Lett.*]{}, vol. 37, pp. 5036-5038, 2012. D. Budker, D. F. Kimball and D. P. DeMille, “Atomic physics: an exploration through problems and solutions”, Oxford University Press, 2004. S. A. Zibrov, I. Novikova, D. F. Phillips, R. L. Walsworth, A. S. Zibrov, V. L. Velichansky, A. V. Taichenachev and V. I. Yudin, “Coherent-population-trapping resonances with linearly polarized light for all-optical miniature atomic clocks”, [*Phys. Rev. A*]{}, vol. 81, art. no. 013833, 2010. J. Kitching, H. G. Robinson, L. Hollberg, S. Knappe and R. Wynands, “Optical-pumping noise in laser-pumped, all-optical microwave frequency references”, [*J. Opt. Soc. Am. B*]{}, vol.
18, no. 11, pp.1676-1683, Nov. 2001. B. Dick, “High-contrast polarization spectroscopy of photochemically burned spectral holes in amorphous solids: Potential for fast optical storage”, [*Chem. Phys. Lett.*]{}, vol. 143, no. 2, pp.186-192, Jan. 1988. D. Budker, W. Gawlik, D. F. Kimball, S. M. Rochester, V. V. Yashchuk, and A. Weis, “Resonant nonlinear magneto-optical effects in atoms”, [*Rev. Mod. Phys.*]{}, vol. 74, pp. 1153-1201, 2002. K. Watabe, T. Ikegami, A. Takamizawa, S. Yanagimachi, S. Ohshima, and S. Knappe, “High-contrast dark resonances with linearly polarized light on the D$_1$ line of alkali atoms with large nuclear spin”, [*Appl. Opt.*]{}, vol. 48, no. 6, pp.1098-1103, Feb. 2009. [Yuichiro Yano]{} was born in Tokyo, Japan, in 1987. He received the M. E. degree in electronics from Tokyo Metropolitan University, Hachioji, Japan, in 2012. [Shigeyoshi Goka]{} received the M. E. degree from Tokyo Metropolitan University, Tokyo, Japan in 1997, and the Ph. D. degree in engineering from the same university in 2005. He is an associate professor in the Department of Electrical & Electronic Engineering, Graduate School of Tokyo Metropolitan University, where he is engaged in research on miniature atomic clocks, magnetometers, and the design of mesa-shaped quartz resonators. Dr. Goka is a member of the Institute of Electronics, Information, and Communication Engineers (IEICE) of Japan, and the Institute of Electrical Engineers of Japan (IEEJ). He is a manager of the Technical Committee of Precise Frequency, the Technical Committee of Electro-Mechanical Devices, and the Technical Committee of Modeling and Simulations of the IEEJ. [^1]: Yuichiro Yano is with the Department of Electrical and Electronic Engineering, Tokyo Metropolitan University, 1-1 Minamiosawa, Hachioji, Tokyo, Japan (e-mail: yano-yuichiro@ed.tmu.ac.jp).
--- abstract: 'In the context of the quantum-mechanical description of single-molecule surface-enhanced Raman scattering, intensity-field correlation measurements of photons emitted from a plasmonic cavity are explored, theoretically, using the technique of conditional homodyne detection. The inelastic interplay between plasmons and vibrations of a diatomic molecule placed inside the cavity can be manifested in phase-dependent third-order fluctuations of the light recorded by the aforesaid technique, allowing us to reveal signatures of non-classicality (indicative of squeezing) of the outgoing Raman photons.' author: - 'O. de los Santos-Sánchez' title: 'Probing intensity-field correlations of single-molecule surface-enhanced Raman-scattered light' --- Introduction {#sec:1} ============ The development of ultrasensitive instrumentation based upon plasmonic devices on the nanoscale has revived, in recent years, interest in enhancing or modifying the interaction between matter and light in the quantum regime. Investigations in the area of molecular spectroscopy, at the single-molecule level, have particularly benefited from it, wherein the spectroscopic technique known as surface-enhanced Raman scattering (SERS) has played a preponderant role [@ru; @benz; @lombardi]. It is well-known that Raman scattering of molecular species is an inelastic process in which the photon scattered by a given molecule is at a different frequency from that of the incident photon, which, in turn, provides us with information about the vibrational energy level structure of the molecule itself. In a generic SERS configuration, the inelastic scattering from a molecule turns out to be enhanced by placing it within a highly confined field at the interface of a plasmonic cavity, which stems from the photoillumination of the metallic nanostructure giving rise to a localized surface plasmon resonance [@kern; @nabika].
For this plasmonic light enhancement to be performed, a variety of nanostructures can be used, such as (among others) plasmonic resonators [@zhang; @alonso; @yampo; @baumberg], nanowire waveguides [@huang; @kress; @lee], nanoantennas [@peyskens], nanorods [@basske] and plasmonic-photonic hybrid cavities [@barth]; the employment of metallic plasmonic nanogaps in the vicinity of particle dimers [@xu; @talley; @zhu] is also an archetypal example. Indeed, surface-enhanced molecular spectroscopy has paved the way for exploring novel nonlinear effects and tailoring light-matter interaction scenarios [@takase].\ On the other hand, since the publication of the influential work of Hanbury-Brown and Twiss [@hanbury], experimental and theoretical studies of quantum fluctuations of light have mostly been focused on measuring correlations between pairs of photodetections, i.e., the particle aspect of light. Yet another important aspect of it is also its wave character; accordingly, studies of quantum fluctuations of light should not only admit the possibility of scrutinizing these facets independently of each other, as usual in the study of squeezing of the variance of the electromagnetic field amplitude [@walls; @loudon], but should also treat both wave-particle facets jointly. Carmichael and colleagues [@carmichael; @foster1] recently addressed this issue by proposing an extension of the standard Hanbury-Brown and Twiss intensity interferometer, which consists, essentially, in recording the evolution of the amplitude fluctuations of an electromagnetic field on the condition that a photon is detected, also referred to as conditional homodyne detection (CHD). Spectra, non-Gaussian fluctuations and non-classicality are relevant features of light that can be assessed by the intensity-field (wave-particle-type) correlation measurements thus recorded. We briefly delineate this framework further on and a more detailed discussion about it can also be found in Refs.
[@foster2; @denisov; @carmichaelreview].\ Herein, we shall utilize the aforementioned CHD technique with a view to exploring theoretically the emergent nonclassical properties of light emitted from a plasmonic cavity in a single-molecule SERS configuration. The light to be probed is considered to be scattered, inelastically, by a single diatomic molecule placed in the gap of a plasmonic dimer. Today’s achievements of this spectroscopic technique on the subnanoscale (enabling, for instance, the identification of specific vibrational modes of the molecule [@zhang], together with the ultra-tight confinement of light within the cavity) have made it necessary to resort to a theoretical modeling of these systems from an entirely quantum-mechanical perspective [@shalabney]. To this end, we shall make use of the optomechanical description of SERS introduced in Ref. [@javier1] so as to represent the interplay between the molecule and the quantized plasmons in the cavity. We are particularly interested in addressing the effect of quenching or amplifying the molecular vibrations (an achievable effect when the cavity is illuminated by a laser properly detuned from its resonance [@vahala]) upon the non-classicality of the Raman emission captured by intensity-field correlation measurements. To our knowledge, theoretical studies of joint particle-wave correlations of light in the context of SERS spectroscopy have not been previously reported. So, the present work is an attempt to offer a complementary perspective to analyze phase-dependent correlations of light in the domain of Raman emission where the Stokes and anti-Stokes signals take place, information that, in the light of the current experimental capabilities of SERS, some of them quoted at the outset, might possibly be tested and/or exploited.\ The content of this paper is structured as follows.
In section \[sec:2\], we outline the generic molecule-plasmon system we shall be working with and the quantum-mechanical approach to its description is succinctly reviewed [@javier1; @javier2]. Section \[sec:3\] is devoted to a brief description of the optical scheme intended for probing the intensity-field correlation features of our light source, which is obtained by correlating a photon detection with the field quadrature based upon the CHD framework [@carmichael; @foster1]. On the basis of this measurement method, signatures of time-asymmetry of quantum fluctuations revealed by the aforesaid correlations and squeezing in the frequency domain are numerically explored and discussed in section \[sec:4\]. Finally, some concluding remarks are given in Section \[sec:5\]. Quantum-mechanical approach to a generic SERS configuration: A brief review {#sec:2} =========================================================================== Our physical system, intended for generating Raman-scattered photons, is composed of a plasmonic cavity (say, a plasmonic dimer resonator) and a single diatomic molecule placed near the surface thereof within the gap (see Fig. \[figure1\]). ![Sketch of a SERS setup for generating Raman-scattered light emitted from a single diatomic molecule placed in a plasmonic dimer. The open dynamics of the composite system, governed by Eq. (\[eq:mastereq1\]), is such that the molecular vibrations are considered to undergo decay and thermal pumping processes described, respectively, by the Lindbladians $\mathcal{L}_{\hat{b}}$ and $\mathcal{L}_{\hat{b}^{\dagger}}$. The decay of the plasmonic dimer, viewed as a lossy cavity with decay rate $\kappa$, is in turn described by the Lindbladian $\mathcal{L}_{\hat{a}}$.[]{data-label="figure1"}](figure_1.pdf){width="6.5cm" height="6cm"} According to the quantum-mechanical description of SERS proposed by Schmidt and collaborators [@javier1] (see also Ref.
[@javier2] for further details), the interplay between the localized plasmons in the gap and the molecular vibrations is such that the molecule, considered to be in its electronic ground state, dipolarly couples to the quantized field of the cavity via the interaction Hamiltonian $H_{I}=-\frac{1}{2} \hat{p}\cdot \hat{E}$, where $\hat{p}$ and $\hat{E}$ are, respectively, the quantized molecular polarization and electric field, the latter being evaluated at the molecule’s position and explicitly given, in terms of the plasmon annihilation ($\hat{a}$) and creation ($\hat{a}^{\dagger}$) operators, by $\hat{E}=i \sqrt{\frac{\hbar \omega_{c}}{2\epsilon V_{\textrm{eff}}}}(\hat{a}-\hat{a}^{\dagger})$, where $\omega_{c}$ is the resonant frequency of the cavity, $V_{\textrm{eff}}$ its effective volume and $\epsilon$ the permittivity of the medium. In this context, the molecular polarization is also considered to be induced by the electric field (Raman-induced polarization) according to the relation $\hat{p}=\hat{\alpha}_{\nu} \hat{E}$, where $\hat{\alpha}_{\nu}$ is, in turn, the linear polarizability of the molecule. A fitting, albeit approximate, model entails regarding the molecule as a one-dimensional harmonic oscillator whose polarizability can be expressed as $\hat{\alpha}_{\nu}=R_{\nu} Q_{\nu}^{0}(\hat{b}+\hat{b}^{\dagger})$; here, $\hat{b}$ and $\hat{b}^{\dagger}$ are, respectively, the standard annihilation and creation operators satisfying the commutation relation $[\hat{b},\hat{b}^{\dagger}]=1$, $R_{\nu}$ is the Raman tensor element and $Q_{\nu}^{0}$ is the zero-point amplitude of the vibrations.
So, it transpires that making use of this quantized representation for the molecular polarizability, assuming that the direction of the vibrations is optimally aligned with the field, and neglecting inessential rapidly oscillating terms at $\pm 2\omega_{c}$ frequencies, enables one to explicitly recast the interaction Hamiltonian as $H_{I}=-g\hat{a}^{\dagger}\hat{a}(\hat{b}+\hat{b}^{\dagger})$, with the real-valued coupling parameter $g=R_{\nu}Q_{\nu}^{0}\omega_{c}/(\epsilon V_{\textrm{eff}})$. This Hamiltonian was also introduced independently by Kippenberg and colleagues [@kippenberg1], who suggested the analogy between the optomechanical back-action via radiation pressure force in optical cavities [@kippenberg2] and the modification of the energy of the plasmonic cavity by molecular vibrations in the context of molecular-plasmonic systems; this is why it is generally referred to as the optomechanical description of SERS. The whole quantum mechanical Hamiltonian governing the coherent evolution of the isolated system is then put, after applying the rotating-wave approximation, into the form [@javier1; @javier2] $$H = \omega_{m}\hat{b}^{\dagger}\hat{b}+\omega_{c}\hat{a}^{\dagger}\hat{a}-g \hat{a}^{\dagger}\hat{a} (\hat{b}^{\dagger}+\hat{b} ) +i\Omega(\hat{a}^{\dagger}e^{-i\omega_{l}t}-\hat{a}e^{i\omega_{l}t}), \label{eq:ham1}$$ where $\omega_{m}$ is the phonon frequency of the vibrational mode and where a driving laser, at frequency $\omega_{l}$, with pumping rate $\Omega$, is also added in so as to reinforce the interplay between the cavity field and the molecule. In the frame rotating at $\omega_{l}$, the above Hamiltonian can be rewritten as follows $$\tilde{H} = \omega_{m} \hat{b}^{\dagger}\hat{b}+\Delta \hat{a}^{\dagger}\hat{a}-g\hat{a}^{\dagger}\hat{a}(\hat{b}^{\dagger}+\hat{b}) +i\Omega(\hat{a}^{\dagger}-\hat{a}), \label{eq:ham2}$$ where $\Delta=\omega_{c}-\omega_{l}$ denotes the detuning from the cavity frequency.
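To see the structure of Eq. (\[eq:ham2\]) at a glance, a minimal numerical sketch (illustrative only: the Fock-space truncations and parameter values below are assumptions, chosen to be of the order of the scales used in this paper) can build $\tilde{H}$ on truncated Fock spaces and verify that it is Hermitian:

```python
import numpy as np

def annihilation(n):
    """Bosonic annihilation operator truncated to an n-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

Na, Nb = 6, 4                                # plasmon/vibration truncations (illustrative)
wm, Delta, g, Omega = 0.1, 0.0, 0.005, 0.1   # eV; order-of-magnitude values (assumed)

a = np.kron(annihilation(Na), np.eye(Nb))    # plasmon mode on the joint space
b = np.kron(np.eye(Na), annihilation(Nb))    # vibrational mode on the joint space

# H~ = wm b†b + Δ a†a − g a†a (b† + b) + iΩ (a† − a)
H = (wm * b.conj().T @ b
     + Delta * a.conj().T @ a
     - g * a.conj().T @ a @ (b.conj().T + b)
     + 1j * Omega * (a.conj().T - a))

assert np.allclose(H, H.conj().T)            # Hermitian, as a Hamiltonian must be
```

Note that the optomechanical term $-g\hat{a}^{\dagger}\hat{a}(\hat{b}^{\dagger}+\hat{b})$ is diagonal in the plasmon number, which is what allows the polaron-type treatments commonly applied to this Hamiltonian.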
It is worth emphasizing that this algebraic model is restricted to the off-resonant Raman scattering regime, in which the virtual state mediating the Raman transitions lies far in energy from the excited electronic state.\ To fully describe the evolution of the system, it is essential to take into account the influence of its surroundings. The incoherent effects that arise from this scenario are encapsulated in the following Markovian master equation for the system density operator [@yampo]: $$\dot{\hat{\rho}} = i[\hat{\rho}, \tilde{H}]+\frac{\kappa}{2}\mathcal{L}_{\hat{a}}[\hat{\rho}]+\frac{(n_{b}^{\textrm{th}}+1)\gamma_{m}}{2}\mathcal{L}_{\hat{b}}[\hat{\rho}] +\frac{n_{b}^{\textrm{th}}\gamma_{m}}{2}\mathcal{L}_{\hat{b}^{\dagger}}[\hat{\rho}]. \label{eq:mastereq1}$$ Here, the last three constituent terms are identified as the action of the Lindblad generator $\mathcal{L}$ upon the corresponding subsystem variables, that is, $\mathcal{L}_{\hat{O}}[\hat{\rho}]=2\hat{O} \hat{\rho} \hat{O}^{\dagger}-\hat{O}^{\dagger} \hat{O} \hat{\rho}-\hat{\rho} \hat{O}^{\dagger}\hat{O}$, with $\hat{O}=\hat{a}$, $\hat{b}$ or $\hat{b}^{\dagger}$, each giving rise, correspondingly, to clear-cut processes: first, $\mathcal{L}_{\hat{a}}[\hat{\rho}]$ gives an account of the decay of photons of the cavity at the rate $\kappa$, which, in turn, is determined by the cavity quality factor $Q=\omega_{c}/\kappa$ (the plasmonic system that takes part in our analysis is a realistic plasmonic cavity characterized by a very low quality factor: for present-day realizations, $Q \lesssim 10$); then, the terms proportional to $\mathcal{L}_{\hat{b}}[\hat{\rho}]$ and $\mathcal{L}_{\hat{b}^{\dagger}}[\hat{\rho}]$ describe, respectively, the decay and incoherent pumping of phonons of the molecular vibrations thermally excited by the environment at temperature $T$, with $\gamma_{m}$ being the mechanical decay rate and $n_{b}^{\textrm{th}}=(e^{\hbar
\omega_{m}/k_{B}T}-1)^{-1}$ the effective thermal population of vibrations at frequency $\omega_{m}$.\ ![Upper panel: Simplified representations of the Stokes (left) and anti-Stokes (right) Raman processes, standardly understood in terms of the mediation of a virtual state $|v\rangle$. Lower panel: Emission spectra, $S(\omega)$, as functions of the scaled frequency $\omega/\omega_{m}$, of the molecule-plasmon system for weak pumping, $\Omega^{2}=10^{-2}\ \textrm{eV}^{2}$, with the driving laser being tuned to the cavity ($\Delta=0$), at room temperature ($T=300$ K) and for different values of the coupling parameter as multiples of $g_{0}=1\ \textrm{meV}$: $g=g_{0}\ \textrm{(red line)}, 3g_{0}\ \textrm{(blue line)}$ and $5g_{0}$ (black line). The inset zooms in on the anti-Stokes spectral constituents for $g=g_{0}$ and $3g_{0}$. The ordinate is given in arbitrary units. []{data-label="figure2"}](figure_2a.pdf "fig:"){width="8.1cm" height="4.2cm"} ![Upper panel: Simplified representations of the Stokes (left) and anti-Stokes (right) Raman processes, standardly understood in terms of the mediation of a virtual state $|v\rangle$. Lower panel: Emission spectra, $S(\omega)$, as functions of the scaled frequency $\omega/\omega_{m}$, of the molecule-plasmon system for weak pumping, $\Omega^{2}=10^{-2}\ \textrm{eV}^{2}$, with the driving laser being tuned to the cavity ($\Delta=0$), at room temperature ($T=300$ K) and for different values of the coupling parameter as multiples of $g_{0}=1\ \textrm{meV}$: $g=g_{0}\ \textrm{(red line)}, 3g_{0}\ \textrm{(blue line)}$ and $5g_{0}$ (black line). The inset zooms in on the anti-Stokes spectral constituents for $g=g_{0}$ and $3g_{0}$. The ordinate is given in arbitrary units. []{data-label="figure2"}](figure_2b.pdf "fig:"){width="9.3cm" height="5.7cm"} We restrict our analysis to the spectroscopic domain where the inelastic scattering of light takes place and exhibits two archetypal components (see the upper panel of Fig.
\[figure2\]): i) the Stokes (S) spectral constituent which arises from the event in which an incident photon is converted into a phonon and a redshifted scattered photon of frequency $\omega_{S}=\omega_{c}-\omega_{m}$ (top left) and ii) the anti-Stokes (aS) constituent corresponding to the scenario where one incident photon and one existing phonon are simultaneously annihilated, thereby giving rise to a blueshifted scattered photon of frequency $\omega_{aS}=\omega_{c}+\omega_{m}$ (top right). For some increasing values of the coupling parameter $g$, the spectral fingerprint of such processes is depicted in Fig. \[figure2\] by computing, on the basis of solving the master equation (\[eq:mastereq1\]), the stationary inelastic spectrum of emission $S(\omega) \propto \textrm{Re} \int_{0}^{\infty}d\tau e^{-i\omega \tau} \langle \delta \hat{a}^{\dagger}(\tau) \delta \hat{a}(0) \rangle_{ss}$, where the cavity dynamics has been split into a mean plus fluctuations in a way such that $\hat{a}(t)=\langle a \rangle_{ss}+\delta \hat{a}(t)$ (the subindex $ss$ stands for the steady-state value) and the two-time correlation function is evaluated by using the quantum regression formula [@carmichaelbook]. The set of parameters involved in the calculation (to be used hereafter) are within the scope of typical SERS setups [@yampo; @javier1]: $\omega_{m}=0.1$ eV, $\gamma_{m}=1$ meV, $Q=10$, $\omega_{c}=2.5$ eV, $\kappa=0.25$ eV, $T=300$ K, $\Omega^{2}=10^{-2}$ $\textrm{eV}^{2}$ and some multiples of $g=1$ meV; estimations of the molecular parameters were duly reported for the Raman activity of the rhodamine 6G molecule in the supplemental material of Ref. [@javier1], and BPE molecules of the type [*trans*]{}-1,2-bis-(4-pyridyl) ethylene, showing strong C=C stretching modes of the same order of magnitude as the previous one, also fall into the category of tractable molecular systems [@yampo].
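For the parameters just quoted, the thermal phonon occupation $n_{b}^{\textrm{th}}=(e^{\hbar\omega_{m}/k_{B}T}-1)^{-1}$ entering the master equation is readily evaluated; the short sketch below (the only input beyond the text's $\hbar\omega_{m}$ and $T$ is the Boltzmann constant in eV/K) makes explicit why thermally seeded anti-Stokes events are rare at room temperature:

```python
import numpy as np

kB = 8.617333262e-5   # Boltzmann constant, eV/K
hbar_wm = 0.1         # vibrational quantum hbar*omega_m, eV (from the text)
T = 300.0             # K

n_th = 1.0 / (np.exp(hbar_wm / (kB * T)) - 1.0)
print(f"n_th = {n_th:.4f}")   # ≈ 0.0213
```

With $n_{b}^{\textrm{th}}\approx 0.02$, the incoherent phonon pumping term in Eq. (\[eq:mastereq1\]) is some fifty times weaker than the phonon decay term, consistent with the much weaker anti-Stokes line in Fig. \[figure2\].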
So, as we can observe, both Stokes and anti-Stokes emission lines, for weak coherent pumping, reveal a conspicuous dependence on the strength of the molecule-plasmon coupling, although the anti-Stokes events are significantly less likely to occur than the Stokes ones (as observed in the lower panel of figure \[figure2\]). The coherent and incoherent pumping dependence of the maxima of Stokes and anti-Stokes emissions has already been addressed (see, for instance, Refs. [@javier1; @kippenberg1; @dezfouli1]) as well as intensity fluctuations of scattered photons via a frequency-filtered intensity-intensity correlation function, at zero delay, $g^{(2)}(0)$, observing strong photon bunching in both the weak pumping and coupling regimes [@javier1]. Parenthetically, the relevance of nonclassical Stokes/anti-Stokes correlation measurements in different materials has also been remarked and examined from the viewpoint of establishing an alternative effective Hamiltonian model that incorporates explicitly such Stokes and anti-Stokes constituent fields [@parra]. So then, motivated by the aforementioned findings and investigations, and borrowing from the optomechanical description of SERS, we are in a position to undertake the task of analyzing the quantum fluctuations of light from a different standpoint based upon the so-called conditional homodyne detection technique [@carmichael; @foster1; @foster2]. This theoretical background will permit us to investigate signatures concerning phase-dependent fluctuations of scattered photons through carrying out intensity-field correlation measurements. Conditional homodyne detection {#sec:3} ============================== Originally conceived for the robust measurement of weak squeezed light in the context of a cavity QED system, the employment of conditional homodyne detection (CHD) [@carmichael; @foster1] entails some practical advantages over standard homodyne detection techniques ascribable to its conditioning character.
Pictorially sketched in the top right-hand side of Fig. \[figure3\], together with a basic energy configuration of the molecule-plasmon system regarded as our light source, bottom left-hand side, CHD is an optical measurement scheme in which the delayed evolution of the quadrature of the field to be probed is measured, at time $\tau$, in balanced homodyne detection, standardly composed of two detectors and a local oscillator (LO), at the exact moment of photon detection (intensity), at $\tau=0$, registered by the detector $D_{I}$. More concretely, the quantity we seek to assess, in the steady state, is algebraically embodied by the intensity-field (wave-particle-type) correlation $\propto \langle I(0) E(\tau) \rangle_{ss} $. In accord with the theoretical framework formulated by Carmichael and coworkers [@carmichael], one can scrutinize this magnitude through the normalized third-order (in the field quadrature) correlation function: $$h_{\phi}(\tau) = \frac{\langle: \hat{a}^{\dagger}(0) \hat{a}(0) \hat{a}_{\phi}(\tau) :\rangle_{ss}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss} \langle \hat{a}_{\phi}(0) \rangle_{ss}}, \label{eq:htau}$$ where the dots “$: \ :$" denote time and normal operator ordering and $\hat{a}_{\phi} = \frac{1}{2} (\hat{a}e^{-i\phi}+\hat{a}^{\dagger}e^{i\phi})$ is the quadrature of the quantized transmitted field, with $\phi$ being the phase of the local oscillator with respect to the averaged signal field. Incidentally, there can be circumstances in which the field amplitude of the source is such that $\langle \hat{a} \rangle_{ss} =0$, whereupon the above correlation is clearly not well defined. This difficulty can nonetheless be surmounted by merging an offset component, $E_{\textrm{off}}$, in phase with the local oscillator, with the transmitted field via an additional beam splitter [@carmichael].
It will not be necessary for us to resort to this adaptation in the present work.\ ![Top-right sketch: Optical scheme for conditional homodyne detection (CHD) as if looking at the light source. The quadrature $\phi$ of the emitted field, $E_{\phi}(t+\tau)$, is measured, in the steady state, by balanced homodyne detection (BHD) and conditioned on the photon detection, $I(t)$, at the detector $D_{I}$; LO is the local oscillator. Bottom-left sketch: A harmonic oscillator potential is used to describe the molecular vibrations leading to scattered Stokes photons.[]{data-label="figure3"}](figure_3.pdf){width="8.5cm" height="6.5cm"} The usefulness of CHD may be perceived as threefold. Firstly, degradation of the signal, because of counting noise, in the measurement process is significantly reduced by the conditioning on photon detections, which makes the technique independent of finite detector efficiencies and suitable to detect very weak squeezing properties of the light source [@castro1]. Secondly, it allows us to find out the non-classicality of light from a different perspective, namely, the light registered through CHD is said to be non-classical if the following classical inequalities are violated [@carmichael; @foster1]: $$\begin{aligned} 0 & \le & h_{\phi}(\tau) -1 \le 1, \label{eq:ineq1}\\ |h_{\phi}(\tau)-1| & \le & |h_{\phi}(0)-1| \le 1. \label{eq:ineq2}\end{aligned}$$ Thus, observation of non-classicality of a quadrature is not exclusively dependent upon the conventional criterion for squeezing, which states that fluctuations of such a variable must be below those of a coherent state.
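The classical bounds (\[eq:ineq1\]) and (\[eq:ineq2\]) can be checked mechanically on a sampled correlation trace. The helper below is an illustrative sketch (the function name and the synthetic traces are not from this paper): a trace is flagged as non-classical if $h_{\phi}-1$ leaves the interval $[0,1]$ or if $|h_{\phi}(\tau)-1|$ ever exceeds $|h_{\phi}(0)-1|$:

```python
import numpy as np

def violates_classical_bounds(h, tol=1e-12):
    """True if a sampled h_phi(tau) trace (h[0] taken at tau = 0) breaks
    either 0 <= h - 1 <= 1 or |h(tau) - 1| <= |h(0) - 1| <= 1."""
    d = h - 1.0
    if np.any(d < -tol) or np.any(d > 1.0 + tol):
        return True                                          # Eq. (ineq1) broken
    d0 = abs(d[0])
    return d0 > 1.0 + tol or bool(np.any(np.abs(d) > d0 + tol))  # Eq. (ineq2)

tau = np.linspace(0.0, 10.0, 201)
classical = 1.0 + 0.5 * np.exp(-tau)                # monotone decay from h(0) = 1.5
squeezed = 1.0 - 0.3 * np.exp(-tau) * np.cos(tau)   # dips below 1

print(violates_classical_bounds(classical))   # False
print(violates_classical_bounds(squeezed))    # True
```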
And thirdly, CHD is also able to reveal non-Gaussian fluctuations (nonzero odd-order correlations) of the field that can be manifested in the asymmetry displayed by the intensity-field correlation function, i.e., $h_{\phi}(-\tau)\neq h_{\phi}(\tau)$; this is also due in part to the fact that the noise properties of the field’s intensity and quadrature are of a different nature [@carmichaelreview]. This issue has been highlighted both theoretically [@carmichael; @denisov] and experimentally [@foster1] in the context of cavity QED systems and subsequently, also theoretically, in resonance fluorescence of a three-level atom [@marquina]. Still more recently, Castro-Beltrán [*et al.*]{} [@castro1; @castro2; @castro3; @castro4] undertook a thorough analytical treatment of non-Gaussian fluctuations encountered in resonance fluorescence of two- and three-level atomic systems via intensity-field correlation measurements. Further context-dependent studies on the subject have also been reported, such as the light emitted from a two-level atom in an optical cavity [@reiner; @rice], superconducting artificial atoms [@wang], fluorescence from optical transitions in $\Xi$- and V-type three-level atoms [@molmer1; @molmer2], and the experimental realization of the resonance fluorescence of a single trapped ${}^{138}\textrm{Ba}^{+}$ ion [@blatt].\ With the help of the quantum regression formula [@carmichaelbook], let Eq.
(\[eq:htau\]) be split into positive and negative time intervals to calculate the directly measurable correlations: $$h_{\phi}(\tau \ge 0) = \frac{1}{2} \frac{\textrm{Tr} \left \{ \hat{a}(0) e^{-i\phi}e^{\mathcal{L}|\tau|} \left [\hat{a}(0)\hat{\rho}_{ss}\hat{a}^{\dagger}(0) \right ] \right \}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss} \langle \hat{a}_{\phi}(0)\rangle_{ss}}+\textrm{c.c.}, \label{eq:htaup}$$ $$h_{\phi}(\tau \le 0) = \frac{1}{2} \frac{\textrm{Tr} \left \{ \hat{a}^{\dagger}(0)\hat{a}(0) e^{\mathcal{L}|\tau|} \left [\hat{a}(0)e^{-i\phi}\hat{\rho}_{ss} \right ] \right \}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss} \langle \hat{a}_{\phi}(0)\rangle_{ss}} +\textrm{c.c.}, \label{eq:htaum}$$ where the superoperator $\mathcal{L}$ embraces, in toto, both the coherent and incoherent evolution of the composite system, so that Eq. (\[eq:mastereq1\]) is encapsulated in $\dot{\hat{\rho}}=\mathcal{L}\hat{\rho}$; thus, the steady-state solution, $\hat{\rho}_{ss}$, to this equation is obtained from solving the equation $\mathcal{L} \hat{\rho}_{ss}=0$. We also stress that, for negative time intervals, the correlation given by Eq. (\[eq:htaum\]) takes on the converse interpretation according to which the intensity measurement is conditioned on the field amplitude at $\tau=0$ [@castro1].\ The spectral fingerprint of the CHD correlations, which can give us additional information about fluctuations and squeezing in the measurement process, can be assessed by application of the Fourier cosine transform upon them [@castro1; @carmichaelreview]: $$\begin{aligned} S_{\phi}^{(\tau \ge 0)}(\omega) & = & 4F \int_{0}^{\infty} [h_{\phi}(\tau \ge 0)-1]\cos (\omega \tau)d\tau, \label{eq:spectaup} \\ S_{\phi}^{(\tau \le 0)}(\omega) & = & 4F \int_{0}^{\infty} [h_{\phi}(|\tau|)-1] \cos(\omega \tau)d\tau, \label{eq:spectaum}\end{aligned}$$ where $F=2\kappa \langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss}$ is the steady-state photon flux into the correlator.
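As a numerical sanity check of the cosine transform in Eq. (\[eq:spectaup\]), one can feed it an assumed, analytically tractable correlation, $h_{\phi}(\tau)-1 = C e^{-\gamma\tau}$, whose exact transform is the Lorentzian $4FC\gamma/(\gamma^{2}+\omega^{2})$ (all values below are illustrative, not the parameters of this paper):

```python
import numpy as np

# Toy correlation h_phi(tau) - 1 = C * exp(-gamma * tau) on a fine grid.
F, C, gamma = 1.0, 0.4, 0.5
tau = np.linspace(0.0, 80.0, 40001)
dt = tau[1] - tau[0]
h_minus_1 = C * np.exp(-gamma * tau)

def S(w):
    """Eq. (spectaup): 4F * integral of (h - 1) cos(w tau), trapezoidal rule."""
    y = h_minus_1 * np.cos(w * tau)
    return 4 * F * (np.sum(y) - 0.5 * (y[0] + y[-1])) * dt

for w in (0.0, 0.5, 2.0):
    exact = 4 * F * C * gamma / (gamma**2 + w**2)
    assert abs(S(w) - exact) < 1e-5   # numerical transform matches the Lorentzian
```

The same quadrature, applied to a correlation computed from Eqs. (\[eq:htaup\]) and (\[eq:htaum\]), yields the spectra discussed below.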
In this representation, nonclassical features of light are revealed by negative values of these spectral functions. One can properly identify the origin of such features by splitting the field operator dynamics into its mean and fluctuations, i.e., $\hat{a}=\langle \hat{a} \rangle_{ss}+\delta \hat{a}$, with $\langle \delta \hat{a} \rangle_{ss}$=0. In doing so, for positive $\tau$ intervals, the intensity-field correlation (\[eq:htau\]) is decomposed into its second- and third-order fluctuation-operator constituents, that is, $h_{\phi}(\tau) = 1+h_{\phi}^{(2)}(\tau)+h_{\phi}^{(3)}(\tau)$, where $$\begin{aligned} h_{\phi}^{(2)}(\tau \ge 0) & = & \frac{2 \textrm{Re} \{ \langle \hat{a}(0)\rangle_{ss} \langle \delta a^{\dagger}(0)\delta \hat{a}_{\phi}(\tau) \rangle_{ss} \}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss}\langle \hat{a}_{\phi}(0)\rangle_{ss}}, \label{eq:fluq1}\\ h_{\phi}^{(3)}(\tau \ge 0) & = & \frac{\langle \delta \hat{a}^{\dagger}(0) \delta \hat{a}_{\phi}(\tau) \delta \hat{a}(0) \rangle_{ss}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss}\langle \hat{a}_{\phi}(0)\rangle_{ss}}, \label{eq:fluq2}\end{aligned}$$ and $\delta \hat{a}_{\phi} = (\delta \hat{a}e^{-i\phi}+\delta \hat{a}^{\dagger}e^{i\phi})/2$. For negative intervals, one gets the correlation in the field fluctuation operator $$\begin{aligned} h_{\phi} (\tau \le 0) & \equiv & 1+ h_{\phi}^{(n)}(\tau), \nonumber \\ & = & 1+\frac{ \textrm{Re}\{e^{-i \phi} \langle \delta \hat{n}(\tau) \delta \hat{a}(0) \rangle_{ss}\}}{\langle \hat{a}^{\dagger}(0) \hat{a}(0) \rangle_{ss}\langle \hat{a}_{\phi}(0)\rangle_{ss}}, \label{eq:fluqn}\end{aligned}$$ where the superscript $(n)$ labels negative $\tau$ intervals; this quantity provides us with the same outcome as that of Eq. (\[eq:htaum\]). Thus, Eqs. (\[eq:fluq1\]) and (\[eq:fluq2\]) also enable us to split the spectral representation of fluctuations in the CHD framework, Eq. 
(\[eq:spectaup\]), into $S_{\phi}^{(\tau \ge 0)}(\omega)=S_{\phi}^{(2)}(\omega)+S_{\phi}^{(3)}(\omega)$, with $$\begin{aligned} S^{(2)}_{\phi}(\omega) & = & 4F \int_{0}^{\infty} h^{(2)}_{\phi}(\tau \ge 0) \cos(\omega \tau) d\tau, \label{eq:s2} \\ S^{(3)}_{\phi}(\omega) & = & 4F \int_{0}^{\infty} h^{(3)}_{\phi}(\tau \ge 0) \cos(\omega \tau) d\tau. \label{eq:s3}\end{aligned}$$ Information about squeezed light, for instance, may be extracted from the second-order spectrum, $S_{\phi}^{(2)}(\omega)$, by virtue of the formerly demonstrated relationship $S_{\phi}^{\textrm{sq}} (\omega)=\eta S_{\phi}^{(2)}(\omega)$ [@carmichael; @foster1], where $S_{\phi}^{\textrm{sq}} (\omega)$ is the so-called [*spectrum of squeezing*]{} [@collet; @rice2] and $\eta$ represents the combined collection and detection efficiency; we emphasize that the conditioning character of CHD and the normalization of the corresponding correlation make the assessment of squeezing, represented by negative values of $S_{\phi}^{(2)}(\omega)$, independent of the $\eta$ factor. Beyond squeezing, the third-order fluctuations, reflected in the spectrum $S_{\phi}^{(3)}(\omega)$, would permit us to measure the extent to which non-Gaussian fluctuations, brought about by nonlinearities of the light-matter interaction process, come into play.\ So, with the foregoing theoretical tools in hand, we now proceed to the discussion of some results concerning the outcome of what we separately refer to as color-blind and frequency-filtered intensity-field correlation measurements; the latter being succinctly put forward and carried out by duly adapting the original algebraic model of the system for the CHD technique to be able to discern the frequency of the outgoing Stokes photons. Results and discussion {#sec:4} ====================== ![Intensity-field correlations, Eqs. (\[eq:htaup\]) and
(\[eq:htaum\]), versus the scaled time $\omega_{m} \tau$, for the $\phi=0$ (insets) and $\phi=\pi/2$ (main panels) quadratures, calculated for the moderate pumping $\Omega=1.5 \omega_{m}$. In the upper panel the laser is tuned to the cavity ($\Delta=0$), whereas the lower panel displays representative correlations in the heating ($\Delta < 0$) and cooling ($\Delta >0$) regimes. In all cases $g=5\ \textrm{meV}$ and $T=300$ K.[]{data-label="figure4"}](figure_4a.pdf "fig:"){width="9.5cm" height="6.3cm"} ![Intensity-field correlations, Eqs. (\[eq:htaup\]) and (\[eq:htaum\]), versus the scaled time $\omega_{m} \tau$, for the $\phi=0$ (insets) and $\phi=\pi/2$ (main panels) quadratures, calculated for the moderate pumping $\Omega=1.5 \omega_{m}$. In the upper panel the laser is tuned to the cavity ($\Delta=0$), whereas the lower panel displays representative correlations in the heating ($\Delta < 0$) and cooling ($\Delta >0$) regimes. In all cases $g=5\ \textrm{meV}$ and $T=300$ K.[]{data-label="figure4"}](figure_4b.pdf "fig:"){width="9.5cm" height="6.3cm"} In our numerical calculations we make use of the Python toolbox QuTiP [@nori] to solve the master equation (\[eq:mastereq1\]) and its extended version to be outlined in this section. We also restrict our calculations to weak and moderate pumping rates, $\Omega=\{0.3, 1.5\}\omega_{m}$, detuning $\Delta=\{0,\pm 0.5\} \omega_{m}$, with $\omega_{m}=0.1$ eV, and letting the coupling strength be $g =5$ meV; the remaining parameters are held fixed: $\gamma_{m}=1$ meV, $\kappa=0.25$ eV; these values are experimentally realizable and of the same order as those quoted for the emission spectra in section \[sec:2\].
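To make the kind of computation involved concrete, the sketch below builds the steady state that enters all the correlations of this section. It is not the authors' code: the Hamiltonian is the standard linearized molecular-optomechanics model of SERS, $\hat{H}=-\Delta \hat{a}^{\dagger}\hat{a}+\omega_{m}\hat{b}^{\dagger}\hat{b}-g\hat{a}^{\dagger}\hat{a}(\hat{b}+\hat{b}^{\dagger})+i\Omega(\hat{a}^{\dagger}-\hat{a})$, assumed here because Eqs. (\[eq:ham1\])/(\[eq:ham2\]) are defined earlier in the paper, and the Liouvillian is assembled by hand in plain NumPy rather than with QuTiP's `steadystate`, so the sketch is dependency-free.

```python
import numpy as np

# Minimal sketch (not the authors' code): steady state of the assumed
# linearized SERS optomechanical master equation, with the Liouvillian
# written as an explicit matrix in the column-stacking convention.
wm, g, kappa, gamma_m = 0.1, 0.005, 0.25, 0.001   # eV, values from the text
Delta, Omega = 0.0, 0.3 * wm                      # weak-pumping case
nbar = 1.0 / (np.exp(wm / 0.02585) - 1.0)         # thermal phonons, T = 300 K

Nc, Nm = 5, 4                                     # truncated Fock spaces
ac = np.diag(np.sqrt(np.arange(1, Nc)), 1)        # cavity lowering operator
bm = np.diag(np.sqrt(np.arange(1, Nm)), 1)        # vibrational lowering op.
a = np.kron(ac, np.eye(Nm))
b = np.kron(np.eye(Nc), bm)
I = np.eye(Nc * Nm)

H = (-Delta * a.conj().T @ a + wm * b.conj().T @ b
     - g * a.conj().T @ a @ (b + b.conj().T)
     + 1j * Omega * (a.conj().T - a))

def dissipator(c):
    """Lindblad superoperator D[c] acting on vec(rho), Fortran ordering."""
    cd = c.conj().T
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(I, cd @ c)
            - 0.5 * np.kron((cd @ c).T, I))

L = -1j * (np.kron(I, H) - np.kron(H.T, I))
L += dissipator(np.sqrt(kappa) * a)
L += dissipator(np.sqrt(gamma_m * (nbar + 1)) * b)   # thermal damping
L += dissipator(np.sqrt(gamma_m * nbar) * b.conj().T)

# Solve L vec(rho) = 0, replacing one row by the trace-one constraint.
M = L.copy()
M[0, :] = I.flatten(order='F')
rhs = np.zeros(len(M), dtype=complex); rhs[0] = 1.0
rho = np.linalg.solve(M, rhs).reshape(I.shape, order='F')

n_cav = np.trace(a.conj().T @ a @ rho).real   # steady-state photon number
```

For the weak pumping used here the cavity amplitude is close to the classical estimate $2\Omega/\kappa \approx 0.24$, so small Fock truncations already suffice.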
At this point, as far as the pumping coefficient $\Omega$ is concerned, it is important to underline that although we are approaching the regime where the aforementioned values of $\Omega$ and the cavity frequency may become comparable, such values are expected to be sufficiently moderate that the counter-rotating terms $i\Omega(\hat{a}e^{-i\omega_{l}t}+\hat{a}^{\dagger}e^{i\omega_{l}t})$, ruled out in the original Hamiltonian (\[eq:ham1\]), do not have a significant influence upon the dynamics of the system; this issue has also been highlighted within the supporting information of Ref. [@javier1]. Color-blind intensity-field correlation measurements ---------------------------------------------------- The time-dependent measurable formulae outlined above, specifically, Eqs. (\[eq:htaup\]) and (\[eq:htaum\]), will permit us to explore the outcome of color-blind photon correlations, so called because the present treatment assumes broadband detectors that do not differentiate between photons of specific Stokes or anti-Stokes frequencies in the correlation process; i.e., the formulae are not selective in frequency, although such information is encapsulated in their Fourier transform, as stated in the previous section. Even so, with these tools, non-classical properties of the emitted field can be picked up by CHD within the whole Raman emission domain of interest to us.\ Figure \[figure4\] shows the outcome of intensity-field correlations, as functions of the scaled time $\omega_{m}\tau$, within a short $\tau$ interval, regarding the in-phase ($\phi=0$, insets) and out-of-phase ($\phi=\pi/2$, main panels) quadratures of the field and moderate pumping, $\Omega=1.5 \omega_{m}$.
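For positive delays, the numerator of Eq. (\[eq:htaup\]) can be evaluated via the quantum regression theorem: one propagates the collapsed state $\hat{a}\hat{\rho}_{ss}\hat{a}^{\dagger}$ under the same Liouvillian and reads off $\langle \hat{a}_{\phi}\rangle$ along the way. The sketch below is a deliberate simplification rather than the calculation behind Fig. \[figure4\]: the molecular vibration is dropped, so the linearly driven, damped cavity relaxes to a coherent state, for which $h_{\phi}(\tau)=1$ identically. That trivial result makes it a convenient benchmark of the machinery before the optomechanical coupling is switched on.

```python
import numpy as np
from scipy.linalg import expm

# Benchmark sketch of h_phi(tau >= 0) via quantum regression, for an empty
# driven-damped cavity (an assumed simplification; the vibration is omitted).
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # truncated lowering operator
I = np.eye(N)
kappa, Omega, Delta, phi = 0.25, 0.03, 0.0, 0.0   # eV, weak pumping

H = -Delta * a.conj().T @ a + 1j * Omega * (a.conj().T - a)
nda = a.conj().T @ a
L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
     + kappa * np.kron(a.conj(), a)
     - 0.5 * kappa * np.kron(I, nda) - 0.5 * kappa * np.kron(nda.T, I))

# steady state (trace-one row replacement, column-stacking convention)
M = L.copy(); M[0, :] = I.flatten(order='F')
rhs = np.zeros(N * N, dtype=complex); rhs[0] = 1.0
rho = np.linalg.solve(M, rhs).reshape((N, N), order='F')

a_phi = (a * np.exp(-1j * phi) + a.conj().T * np.exp(1j * phi)) / 2
n_ss = np.trace(nda @ rho).real
aphi_ss = np.trace(a_phi @ rho).real

# propagate the collapsed state a rho a^dagger and normalize as in Eq. (htau)
P = expm(L * 0.5)                              # one-step propagator, dt = 0.5
v = (a @ rho @ a.conj().T).flatten(order='F')
h = []
for _ in range(40):
    cond = v.reshape((N, N), order='F')
    h.append((np.trace(a_phi @ cond) / (n_ss * aphi_ss)).real)
    v = P @ v
h = np.array(h)   # stays at 1 for a coherent steady state
```

Replacing the Hamiltonian and collapse operators with the full optomechanical ones reproduces, in principle, the phase-dependent structure discussed here.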
The upper panel of the figure displays, in the first instance, the case in which the laser is tuned to the cavity, $\Delta =0$, where we see that the correlation exhibits a significant time asymmetry and that both quadratures violate the lower bound of the first classical inequality, Eq. (\[eq:ineq1\]), as well as Eq. (\[eq:ineq2\]), at certain time intervals, thus revealing some degree of non-classicality of the field. The non-classical character of the out-of-phase quadrature, for this particular case, turns out to be more conspicuous than that of the in-phase component by several orders of magnitude (see the inset). A similar behavior was found for increasing values of the coupling parameter $g$, ranging from $1$ to $5$ meV; for brevity, we do not show any plot in this respect. In the case of non-zero detuning, it is a well-known fact that the strength of the vibrations of the molecule can be augmented or diminished depending on whether the plasmonic cavity is illuminated with a blue ($\Delta < 0$) or red ($\Delta > 0$) detuned laser, respectively, thus leading, correspondingly, to the heating or cooling of the molecule. So, it is pertinent to scrutinize the imprint of these two states of the molecule on the outcome of the correlations, as exemplified in the lower panel of Fig. \[figure4\] for the particular values of $\Delta=\pm 0.5 \omega_{m}$. Firstly, one can see in the inset that the detuning itself increases the amplitude of the correlations associated with the in-phase quadrature by two orders of magnitude in comparison with the case of the laser tuned to the cavity frequency (inset of the upper panel), whereas the correlation concerning the out-of-phase quadrature (main panel) is found to diminish in amplitude, in marked contrast to the zero-detuning case shown in the main upper panel.
Secondly, the act of heating the molecule, $\Delta =-0.5 \omega_{m}$, say, also reinforces the correlation function associated with both quadratures as we can observe by making the comparison with the cooling case, $\Delta =+0.5\omega_{m}$.\ ![Spectra of the intensity-field correlation, Eqs. (\[eq:spectaup\]), continuous line, and (\[eq:spectaum\]), dashed line, for the $\phi=\pi/2$ quadrature and moderate pumping $\Omega=1.5\omega_{m}$. Upper and lower panels show, respectively, the spectral outcome for the zero- and negative-detuning cases: $\Delta=0$ and $\Delta=-0.5\omega_{m}$. The remaining parameters are: $g=5$ meV and $T=300$ K.[]{data-label="figure5"}](figure_5a.pdf "fig:"){width="9.5cm" height="6.3cm"} ![Spectra of the intensity-field correlation, Eqs. (\[eq:spectaup\]), continuous line, and (\[eq:spectaum\]), dashed line, for the $\phi=\pi/2$ quadrature and moderate pumping $\Omega=1.5\omega_{m}$. Upper and lower panels show, respectively, the spectral outcome for the zero- and negative-detuning cases: $\Delta=0$ and $\Delta=-0.5\omega_{m}$. The remaining parameters are: $g=5$ meV and $T=300$ K.[]{data-label="figure5"}](figure_5b.pdf "fig:"){width="9.5cm" height="6.3cm"} ![Decomposition of the spectrum $S_{\phi=\pi/2}^{(\tau \ge 0)}$, shown in Fig. \[figure5\], into its second- and third-order constituents, $S_{\phi=\pi/2}^{(2)}$ (main panels) and $S_{\phi=\pi/2}^{(3)}$ (insets), respectively. Upper and lower panels show, respectively, the spectral outcome for the zero- and negative-detuning cases: $\Delta=0$ (black line) and $\Delta=-0.5\omega_{m}$ (green line). The remaining parameters are: $g=5$ meV and $T=300$ K.[]{data-label="figure6"}](figure_6a.pdf "fig:"){width="9.5cm" height="6.3cm"} ![Decomposition of the spectrum $S_{\phi=\pi/2}^{(\tau \ge 0)}$, shown in Fig. \[figure5\], into its second- and third-order constituents, $S_{\phi=\pi/2}^{(2)}$ (main panels) and $S_{\phi=\pi/2}^{(3)}$ (insets), respectively. 
Upper and lower panels show, respectively, the spectral outcome for the zero- and negative-detuning cases: $\Delta=0$ (black line) and $\Delta=-0.5\omega_{m}$ (green line). The remaining parameters are: $g=5$ meV and $T=300$ K.[]{data-label="figure6"}](figure_6b.pdf "fig:"){width="9.5cm" height="6.3cm"} Figure \[figure5\] illustrates the spectral profile of some of the intensity-field correlation outcomes described above via their Fourier cosine transform. Specifically, the upper panel of the figure displays the spectra $S^{(\tau \ge 0)}_{\phi=\pi/2}(\omega)$, main panel, and $S^{(\tau \le 0)}_{\phi=\pi/2}(\omega)$, inset, of the correlation shown in Fig. \[figure4\], upper panel, corresponding to the out-of-phase quadrature. For $\tau \ge 0$, the spectrum is composed of two distinctive dispersive-like profiles located at $\omega = \pm \omega_{m}$, each attaining large negative values in a region around these anti-Stokes/Stokes positions, whereupon the Raman-scattered light is confirmed as being highly non-classical. Even though the spectral function is positive over a wider frequency range in the $\tau \le 0$ case, see the inset, a significant reduction of fluctuations also takes place at $\omega=\pm \omega_{m}$, superimposed on what seems to be the cavity resonance contribution (wider spectral profile). The lower panel, on the other hand, exhibits the spectra of the correlation that corresponds to the particular case of $\Delta=-0.5 \omega_{m}$ displayed in the main bottom panel of Fig. \[figure4\] associated with the in-phase quadrature. For positive $\tau$ intervals, the spectral profile (continuous line) turns out to be quite similar to the previous case of zero detuning, whereas the spectrum $S^{(\tau \le 0)}_{\phi=\pi/2}(\omega)$ (dashed line) has inverted dispersive shapes, also with conspicuous negative features near the Stokes and anti-Stokes frequencies. The origin of the aforementioned non-classical features is pictorially uncovered in Fig.
\[figure6\], where the spectrum $S_{\phi=\pi/2}^{(\tau \ge 0)}$ is decomposed into its second- and third-order spectral constituents, $S_{\phi=\pi/2}^{(2)}$ (dot-dashed line) and $S_{\phi=\pi/2}^{(3)}$ (dashed line), respectively, for $\Delta = 0$ (top panel) and $\Delta=-0.5\omega_{m}$ (bottom panel). It transpires that the main contribution to the spectral profile and to the non-classicality is due almost entirely to the second-order spectral function, directly associated with the spectrum of squeezing; the barely noticeable third-order contribution to the whole spectrum is understood as due to the use of a moderate pump laser (see the insets).\ Quadrature variances -------------------- A customary approach to assessing squeezing entails employing the variance in a quadrature $$V_{\phi} = \langle : (\delta \hat{a}_{\phi})^{2}: \rangle = \textrm{Re} \left \{ e^{i\phi} \langle \delta \hat{a}^{\dagger} \delta \hat{a}_{\phi} \rangle \right \}, \label{eq:variance}$$ which takes on negative values for squeezed fluctuations. From the standpoint of CHD, signatures of nonclassical light can also be identified by integrating the spectra (\[eq:s2\]) and (\[eq:s3\]), namely, $\int_{-\infty}^{\infty}S_{\phi}^{(2)}(\omega)d\omega=4\pi F h_{\phi}^{(2)}(0)$, $\int_{-\infty}^{\infty}S_{\phi}^{(3)}(\omega)d\omega=4\pi F h_{\phi}^{(3)}(0)$ and (\[eq:spectaum\]), $\int_{-\infty}^{\infty}S_{\phi}^{(\tau \le 0)}d\omega=4\pi F h_{\phi}^{(n)}(0)$, obtained, correspondingly, from the positive- and negative-time interval parts of the intensity-field correlation.
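Numerically, the cosine transforms in Eqs. (\[eq:s2\]) and (\[eq:s3\]) reduce to a one-dimensional quadrature over the sampled correlation. The helper below, with the flux factor $F$ set to one and a hypothetical damped-cosine correlation standing in for $h^{(2)}_{\phi}(\tau)$, can be checked against the analytic pair of Lorentzians that this toy correlation must produce.

```python
import numpy as np

def cosine_spectrum(h, tau, omega, F=1.0):
    """S(w) = 4F * int_0^inf h(tau) cos(w tau) dtau on a uniform tau grid
    (composite trapezoid rule, written out to stay version-independent)."""
    dtau = np.diff(tau)
    out = []
    for w in omega:
        f = h * np.cos(w * tau)
        out.append(4.0 * F * np.sum((f[1:] + f[:-1]) * dtau) / 2.0)
    return np.array(out)

# toy fluctuation correlation: exponentially damped oscillation
gam, w0 = 0.1, 1.0
tau = np.linspace(0.0, 200.0, 40001)          # long enough that h ~ 0
h_toy = np.exp(-gam * tau) * np.cos(w0 * tau)
omega = np.array([0.0, 0.5, 1.0, 1.5])
S = cosine_spectrum(h_toy, tau, omega)

# analytic transform of the toy correlation: a pair of Lorentzians
S_exact = 2.0 * (gam / (gam**2 + (omega - w0)**2)
                 + gam / (gam**2 + (omega + w0)**2))
```

The same routine, applied to the computed $h^{(2)}_{\phi}$ and $h^{(3)}_{\phi}$, yields the decompositions plotted in Fig. \[figure6\].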
Given the foregoing, it suffices for our purposes to take quantities proportional to the unnormalized correlations in (\[eq:fluq1\]), (\[eq:fluq2\]) and (\[eq:fluqn\]) at $\tau=0$, i.e., $$\begin{aligned} H_{\phi}^{(2)} & = & 2 \textrm{Re} \{ \langle \hat{a} \rangle \langle \delta \hat{a}^{\dagger} \delta \hat{a}_{\phi} \rangle \}, \label{eq:H20}\\ H_{\phi}^{(3)} & = & \langle \delta \hat{a}^{\dagger} \delta \hat{a}_{\phi} \delta \hat{a} \rangle, \label{eq:H30} \\ H_{\phi}^{(n)} & = & \textrm{Re}\{e^{-i \phi} \langle \delta \hat{n} \delta \hat{a} \rangle \}, \label{eq:Hn}\end{aligned}$$ where, for succinctness, arguments following the operators were omitted and the steady-state correlations, $\langle \cdots \rangle_{ss} \to \langle \cdots \rangle$, are given by the following correspondences $$\begin{aligned} \langle \delta \hat{a}^{\dagger} \delta \hat{a} \rangle & = & \langle \hat{n} \rangle-|\langle \hat{a} \rangle|^{2}, \\ \langle (\delta \hat{a}^{\dagger})^{2} \rangle & = & \langle \hat{a}^{\dagger 2} \rangle-\langle \hat{a}^{\dagger} \rangle^{2}, \\ \langle \delta \hat{n} \delta \hat{a} \rangle & = & \langle \hat{n} \hat{a} \rangle-\langle \hat{n} \rangle \langle \hat{a} \rangle, \\ \langle \delta \hat{a}^{\dagger} \delta \hat{a}^{\dagger} \delta \hat{a} \rangle & = & \langle \hat{a}^{\dagger} \hat{n} \rangle-2\langle \hat{a}^{\dagger} \rangle \langle \hat{n} \rangle -\langle \hat{a}\rangle \left (\langle \hat{a}^{\dagger 2} \rangle-2\langle \hat{a}^{\dagger} \rangle^{2}\right ).\end{aligned}$$ As a function of the detuning, we see in Fig. \[figure7\] that, in general, the variance (\[eq:variance\]) itself does not manifest squeezing for either quadrature, except within very narrow ranges of detuning, approximately $-0.5 <\Delta/\omega_{m} <0$ for $V_{\phi=0}$, and within the vicinity of $\Delta=\omega_{m}$ for $V_{\phi=\pi/2}$.\ ![Variance, Eq.
(\[eq:variance\]), as a function of the scaled detuning $\Delta/\omega_{m}$, for $\phi=0$ (dashed line) and $\pi/2$ (continuous line). The parameters are: $\Omega=1.5\omega_{m}$, $\omega_{m}=0.1$ eV, $g=5$ meV, and $T=300$ K.[]{data-label="figure7"}](figure_7.pdf){width="9.5cm" height="6.3cm"} In a similar vein to the variance, one can see in Fig. \[figure8\] a complementary perspective by considering, separately, the noise stemming from the positive and the negative time intervals of the correlations, and also by splitting the second- and third-order constituents of the former case, as indicated from the top to the bottom panels in the figure. The second-order term, $H_{\phi}^{(2)}(0)$, is found to be the dominating source of noise, exhibiting, in the meanwhile, negativities within certain intervals of negative detuning for the out-of-phase quadrature, $\phi=\pi/2$. A much lower proportion of noise (two orders of magnitude) originates from the third-order term, $H_{\phi}^{(3)}(0)$, as seen by comparison with the previous case; some negativities are manifested mostly in the in-phase quadrature, $\phi=0$. Owing to the smallness of this term, the behavior of $H_{\phi}^{(n)}(0)$, see the bottom panel, is essentially identical to that of $H_{\phi}^{(2)}(0)$; at $\tau=0$, $H_{\phi}^{(n)}(0)=H_{\phi}^{(2)}(0)+H_{\phi}^{(3)}(0)$, whence it follows that $H_{\phi}^{(n)}(0) \approx H_{\phi}^{(2)}(0)$. The corresponding panel shows the result of the direct calculation of $H_{\phi}^{(n)}$ from Eq. (\[eq:Hn\]). A first glance at frequency-filtered correlations ------------------------------------------------- ![Noise from CHD calculated through Eqs. (\[eq:H20\]), (\[eq:H30\]) and (\[eq:Hn\]), from top to bottom panels, versus the scaled detuning $\Delta/\omega_{m}$, for $\phi=0$ (dashed line) and $\pi/2$ (continuous line). The parameters are the same as in Fig.
\[figure7\].[]{data-label="figure8"}](figure_8.pdf){width="9.5cm" height="9.0cm"} In section \[sec:3\], we presented the essential workings of the CHD scheme, with which we have so far revealed signatures of phase-dependent fluctuations in a measurement process that, as such, is indiscriminately sensitive to the “color” (frequency) of the correlated photons that our Raman signal is comprised of. Although this piece of information is valuable in itself, one could, in principle, go further and attempt to discern the frequency of the constituent Raman photons to be correlated. Indeed, tackling the problem of computing frequency-filtered intensity-field correlations in the context of CHD is not a simple endeavor, so the spirit of the present subsection is essentially exploratory and algebraic. At first sight, to deal with the problem of mathematically differentiating between Stokes and anti-Stokes photons in our calculations, one would be tempted to posit an intensity-field correlation, echoing Eq. (\[eq:htau\]), of the form: $$\tilde{h}_{\phi}(\omega_{1},\omega_{2};\tau) = \lim_{t\to \infty} \frac{\langle: \hat{A}^{\dagger}_{\omega_{1}}(t) \hat{A}_{\omega_{1}}(t) \hat{A}_{\phi; \omega_{2}}(t+\tau) : \rangle}{\langle \hat{A}^{\dagger}_{\omega_{1}}(t)\hat{A}_{\omega_{1}}(t) \rangle \langle \hat{A}_{\phi; \omega_{2}}(t+\tau) \rangle}, \label{eq:corrfreq1}$$ where $\hat{A}_{\omega_{i}}(t) = \int_{-\infty}^{t} e^{(i\omega_{i}-\Gamma_{i}/2)(t-t_{1})}\hat{a}(t_{1})dt_{1}$ is construed as the field amplitude detected at frequency $\omega_{i}$, within the frequency window $\Gamma_{i}$, at time $t$ [@gerard]. The frequency windows would be taken to be Lorentzian filters placed at frequencies $\omega_{1/2}=\omega_{S/aS}$, where the subscript stands for Stokes/anti-Stokes locations.
Notwithstanding the reasonableness of this proposal, its actual computation would itself be a cumbersome task in its present integral form.\ To circumvent the problem of computing correlations of the type (\[eq:corrfreq1\]), one can resort to the sensing method proposed by E. del Valle and co-workers [@elena], which consists of including ancillary frequency-resolved sensors subsumed into the whole dynamics of the system we seek to describe. According to this proposal, each sensor can be viewed, for instance, as a two-level system with an annihilation (creation) operator $\varsigma_{i}$ ($\varsigma_{i}^{\dagger}$) and a transition frequency $\omega_{i}$, whose lifetime is given by the inverse of the detector linewidth, $\Gamma_{i}^{-1}$. In order to prevent the dynamics of our central system from being altered by the sensors’ presence, the strength of the coupling, $\varepsilon_{i}$, between the sensors and the corresponding spectral component of the field to be measured must fulfill the condition $\varepsilon_{i} \ll \sqrt{\Gamma_{i}\gamma_{Q}/2}$, where $\gamma_{Q}$ denotes the decay rate associated with the rung of the energy ladder involved in the corresponding transition; in our case, $\gamma_{Q}=\gamma_{m}$. Although Eq. (\[eq:corrfreq1\]) is, strictly speaking, a third-order correlation function in the field amplitude, only two sensors are needed for the desired correlation to be emulated: one of them registers the signal of the field emission, say, at frequency $\omega_{2}$, while the other one records the intensity of the spectral component centered at $\omega_{1}$.
In doing so, the master equation (\[eq:mastereq1\]) has to be extended accordingly by adding to the original Hamiltonian (\[eq:ham2\]) the term $H_{s}=\sum_{i} [\omega_{i}\hat{\varsigma}_{i}^{\dagger}\hat{\varsigma}_{i}+\varepsilon_{i} (\hat{a} \hat{\varsigma}_{i}^{\dagger}+\hat{a}^{\dagger}\hat{\varsigma}_{i})]$, a dipole-like interaction reminiscent of that of the standard Jaynes-Cummings model, and by adding the decay terms $\sum_{i} \frac{\Gamma_{i}}{2} \mathcal{L}_{\hat{\varsigma}_{i}}[\hat{\rho}]$ to the dissipative part; for the sake of simplicity, the $\Gamma_{i}$’s are chosen to match the widths of the Stokes spectral components, the latter being approximately given by $\gamma_{m}$ [@javier1]. Since the structure of the aforesaid interaction makes us think of each sensor as playing the role of a two-level atom coupled dipolarly and very weakly to the field, it is surmised that, on the basis of this analogy, the sought correlation between sensors may be deployed as $$h_{\phi}(\omega_{1},\omega_{2},\tau) = \lim_{\varepsilon_{1}, \varepsilon_{2} \to 0} \frac{\langle : \hat{\varsigma}_{1}^{\dagger}(0) \hat{\varsigma}_{1}(0) \hat{\varsigma}_{\phi;2} (\tau) : \rangle_{ss}}{\langle \hat{\varsigma}_{1}^{\dagger}(0) \hat{\varsigma}_{1}(0)\rangle_{ss} \langle \hat{\varsigma}_{\phi;2}(0)\rangle_{ss}}, \label{eq:hext}$$ with $\hat{\varsigma}_{\phi;2}=(\hat{\varsigma}_{2} e^{i\phi}+\hat{\varsigma}_{2}^{\dagger}e^{-i\phi})/2$. Thus, the sensed information is now directly extracted from the dynamical variables of the two two-state systems; the construction itself is feasible from the viewpoint of the sensing method described in the supplemental material of Ref. [@elena], and its structure is inspired by the work of Marquina-Cruz and Castro-Beltrán [@marquina].
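A minimal sketch of this sensor construction is given below. It is not the authors' implementation: the Hilbert-space truncations (three cavity levels, two vibrational levels) are drastic and chosen only to keep the example small, the sensor frequencies are placed at $\mp\omega_{m}$ in the frame rotating at the laser frequency, the coupling $\varepsilon$ is taken an order of magnitude larger than the $10^{-4}\omega_{m}$ of Fig. \[figure9\] purely for numerical robustness, and only the zero-delay correlation of Eq. (\[eq:hsim\]) is evaluated.

```python
import numpy as np
from functools import reduce

# Sketch of the two-sensor construction for Eq. (hsim), under the assumed
# linearized SERS Hamiltonian; truncations and couplings are illustrative.
def kronN(*ops):
    return reduce(np.kron, ops)

Nc, Nm = 3, 2
ac = np.diag(np.sqrt(np.arange(1, Nc)), 1)
bm = np.diag(np.sqrt(np.arange(1, Nm)), 1)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # two-level lowering operator
i2, ic, im = np.eye(2), np.eye(Nc), np.eye(Nm)

a  = kronN(ac, im, i2, i2)                    # plasmon mode
b  = kronN(ic, bm, i2, i2)                    # molecular vibration
s1 = kronN(ic, im, sm, i2)                    # Stokes-side sensor
s2 = kronN(ic, im, i2, sm)                    # anti-Stokes-side sensor
I  = np.eye(Nc * Nm * 4)

wm, g, kappa, gm = 0.1, 0.005, 0.25, 0.001    # eV, values from the text
Delta, Omega, eps, Gam = 0.0, 0.3 * wm, 1e-3 * wm, gm
nbar = 1.0 / (np.exp(wm / 0.02585) - 1.0)     # thermal phonons, T = 300 K
w1, w2 = -wm, +wm                             # sensor frequencies (laser frame)

dag = lambda o: o.conj().T
H = (-Delta * dag(a) @ a + wm * dag(b) @ b
     - g * dag(a) @ a @ (b + dag(b))
     + 1j * Omega * (dag(a) - a)
     + w1 * dag(s1) @ s1 + w2 * dag(s2) @ s2
     + eps * (a @ dag(s1) + dag(a) @ s1)
     + eps * (a @ dag(s2) + dag(a) @ s2))

def D(c):
    cd = dag(c)
    return (np.kron(c.conj(), c) - 0.5 * np.kron(I, cd @ c)
            - 0.5 * np.kron((cd @ c).T, I))

L = -1j * (np.kron(I, H) - np.kron(H.T, I))
for c in (np.sqrt(kappa) * a, np.sqrt(gm * (nbar + 1)) * b,
          np.sqrt(gm * nbar) * dag(b), np.sqrt(Gam) * s1, np.sqrt(Gam) * s2):
    L += D(c)

M = L.copy(); M[0, :] = I.flatten(order='F')
rhs = np.zeros(len(M), dtype=complex); rhs[0] = 1.0
rho = np.linalg.solve(M, rhs).reshape(I.shape, order='F')

ex = lambda o: np.trace(o @ rho)
s0_2 = (s2 + dag(s2)) / 2                     # phi = 0 quadrature of sensor 2
# zero-delay S/aS correlation: Stokes intensity vs anti-Stokes quadrature
h_SaS = (ex(dag(s1) @ s1 @ s0_2) / (ex(dag(s1) @ s1) * ex(s0_2))).real
```

Sweeping `Delta` in this construction is, up to the stated simplifications, the kind of computation behind Fig. \[figure9\].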
Even though this criterion does not enable us to establish rigorously the possible equivalence between (\[eq:corrfreq1\]) and (\[eq:hext\]), the former being conjectural in itself, let us adopt the latter as an assay for exploring, as a further complement to the present fact-finding examination, simultaneous quadrature-photon detection, $\tau=0$, focusing on a given quadrature, say, $$h_{0}(\omega_{1},\omega_{2},0) = \lim_{\varepsilon_{1}, \varepsilon_{2} \to 0} \frac{\langle \hat{n}_{1/2}(0) \hat{\varsigma}_{0;2/1}(0) \rangle_{ss}}{\langle \hat{n}_{1/2}(0) \rangle_{ss} \langle \hat{\varsigma}_{0;2/1}(0)\rangle_{ss}}, \label{eq:hsim}$$ with the shorthand notation $\hat{n}_{1/2}=\hat{\varsigma}_{1/2}^{\dagger} \hat{\varsigma}_{1/2}$ and $\hat{\varsigma}_{0;2/1}=(\hat{\varsigma}_{2/1} +\hat{\varsigma}_{2/1}^{\dagger})/2$; the subindexes highlight the interchangeability of the corresponding quantities. ![Frequency-filtered intensity-field correlations at zero delay, $\tau=0$, as a function of the scaled detuning $\Delta/\omega_{m}$, with frequencies of detection located at $\omega_{1/2}=\mp \omega_{m}$, sensor linewidths $\Gamma_{1/2} = \gamma_{m}$ and couplings $\varepsilon_{1/2} =10^{-4}\omega_{m}$. The parameters used are: $g=5$ meV, $\phi=0$ and $\Omega=0.3\omega_{m}$. The upper (lower) panel shows the correlation between the quadrature of the anti-Stokes (Stokes) signal and the intensity of the Stokes (anti-Stokes) one.[]{data-label="figure9"}](figure_9.pdf){width="9.5cm" height="8cm"} We stress that this algebraic strategy would be tantamount to adjusting the actual detectors in the CHD setup previously delineated so as to operate them in a frequency-filtering mode, allowing us to probe at least two experimentally accomplishable scenarios: i) one of them, labeled as the S/aS-event for shortness, is that in which the two detectors constituting the BHD arm, see Fig.
\[figure3\], are in charge of registering the field amplitude corresponding to anti-Stokes photon emission, while the detector $D_{I}$ does the same for the intensity of the Stokes constituent; ii) the converse scenario, the aS/S-event, is such that the BHD port registers the quadrature of the Stokes signal, whereas the intensity of the anti-Stokes signal is measured by the actual detector $D_{I}$. The outcome of the aforesaid S/aS- and aS/S-events provided by Eq. (\[eq:hsim\]) is displayed, respectively, in the upper and lower panels of Fig. \[figure9\], in keeping with the spirit of looking into the effect of heating ($\Delta <0$) or cooling ($\Delta >0$) of the molecule; the calculation is also restricted to the particular case of a very weak pump laser, $\Omega=0.3 \omega_{m}$. One sees that the frequency selectiveness accentuates the nonclassical features of the scattered photons in the light of the clear violation of both classical inequalities (\[eq:ineq1\]) and (\[eq:ineq2\]); this is particularly emphasized in the case of the aS/S-events (lower panel), regardless of whether the molecule is cooled or heated. In the S/aS-events, on the other hand, the nonclassical features are most marked when cooling the molecule (upper panel).\ Concluding remarks ================== On the basis of the algebraic framework provided by the quantum-mechanical model of SERS, some aspects of phase-dependent fluctuations of the light inelastically scattered by a single molecule placed inside the plasmon field of a cavity were explored theoretically using the technique of conditional homodyne detection. Specifically, intensity-field correlations obtained by this technique made it possible to assess, in both time and frequency domains, the degree of non-Gaussianity and non-classicality of Raman photons, mainly from the perspective of color-blind correlation measurements.
Despite this unselective approach, given that we dealt with broadband detectors in the CHD setup in the first instance, it already reveals the extent to which the phase-dependent dynamics of the scattered light may be activated by the molecule’s presence in the cavity. In this regard, the following issues are highlighted: - It is found that, by splitting the correlation functions into their second- and third-order fluctuation components, the origin of the non-classicality, for a moderate pump laser, is due almost entirely to the second-order fluctuations, signaling squeezing within frequency domains located around the Stokes and anti-Stokes positions. Given this, the deviation from Gaussian noise is not significant. - The heating or cooling of the molecular vibrations also led to discernible effects on the correlation measurements, the former case yielding slightly larger fluctuations than the latter. However, for the out-of-phase quadrature of the field, even more conspicuous fluctuations, with an enhanced time asymmetry, are found in the case of zero detuning. - Our discussions have also been briefly supplemented by putting forward the possibility of carrying out frequency-filtered intensity-field correlations based on the same CHD setup. In the light of the outcome of a specific example concerning simultaneous intensity-field measurements, one can foresee the manifestation of high correlations between specific Raman photons, even if a very weak pump laser is taken into consideration. This fact, at variance with the color-blind scheme, heralds, in this context, highly nonclassical scattered light by virtue of the violation of the inequalities that take part in CHD (Eqs. (\[eq:ineq1\]) and (\[eq:ineq2\])). The feasibility of the proposed procedure for computing such correlations remains, however, a surmise, and a theoretical method to cope with their computation is expected to be firmly established elsewhere.
Unlike standard homodyne systems, CHD is regarded, owing to its conditioning attribute, as a particularly convenient tool to investigate non-classical and non-Gaussian properties of the light source it seeks to describe, going beyond the analysis of squeezing itself, even for very weak light sources. Although the partitioning of the corresponding correlation into second- and third-order fluctuations is not directly realizable from the experimental viewpoint, it represents valuable theoretical information that enables us to unveil the main contributions to the phase-dependent dynamics, non-Gaussianity and non-classicality of light in a given context-dependent circumstance. These advantages of using the CHD framework have already been highlighted in theoretical studies of two- and three-level atom resonance fluorescence [@castro1; @castro3].\ Another issue that would deserve to be addressed is to investigate the influence of the anharmonicity of the molecule (viewed, for instance, as a Morse-like oscillator [@frank; @carvajal]) on the outcome of either color-blind or frequency-filtered correlation measurements through the proper modification of the current optomechanical description of SERS; this more general situation will hopefully be treated elsewhere. The present results, for the time being, are expected to be a modest first step towards pointing out the importance of undertaking a deeper exploration and understanding of the role of phase-dependent correlations in single-molecule SERS studies, thus triggering the experimental and/or theoretical interest of the quantum optics and SERS communities [@mark]. Acknowledgments {#acknowledgments .unnumbered} =============== I want to thank Prof. H. M. Castro-Beltrán for reading the manuscript. Thanks also to I. Ramos-Prieto for introducing me to the use of the QuTiP library in Python and for his help with Figs. \[figure1\] and \[figure3\], and to Prof. J. Récamier for his kind hospitality at ICF. [99]{} E. C. Le
Ru and P. G. Etchegoin, Principles of surface-enhanced Raman spectroscopy and related plasmonic effects, Elsevier, 2009 F. Benz, M. K. Schmidt, A. Dreismann, R. Chikkaraddy, Y. Zhang, A. Demetriadou, C. Carnegie, H. Ohadi, B. de Nijs, R. Esteban, J. Aizpurua and J. J. Baumberg. Single-molecule optomechanics in picocavities, [*Science*]{} [**354**]{} 726 (2016) A. Lombardi, M. K. Schmidt, L. Weller, W. M. Deacon, F. Benz, B. de Nijs, J. Aizpurua and J. J. Baumberg. Pulsed molecular optomechanics in plasmonic nanocavities: From nonlinear vibrational instabilities to bond-breaking, [*Phys. Rev. X*]{}, [**8**]{}(1) 011016 (2018) A. M. Kern, D. Zhang, M. Brecht, A. I. Chizhik, A. V. Failla, F. Wackenhut and A. J. Meixner, Enhanced single-molecule spectroscopy in highly confined optical fields: from $\lambda/2$-Fabry-Pérot resonators to plasmonic nano-antennas, [*Chem. Soc. Rev.*]{} [**43**]{}, 1263 (2014) H. Nabika, M. Takase, F. Nagasawa and K. Murakoshi, Toward plasmon-induced photoexcitation of molecules, [*J. Phys. Chem. Lett.*]{} [**1**]{}(16), 2470 (2010) R. Zhang, Y. Zhang, Z. C. Dong, S. Jiang, C. Zhang, L. Chen, L. Zhang, Y. Liao, J. Aizpurua, Y. Luo, J. L. Yang and J. G. Hou, Chemical mapping of a single molecule by plasmon-enhanced Raman scattering, [*Nature*]{} [**498**]{}, 82 (2013) P. Alonso-González, P. Albella, M. Schnell, J. Chen, F. Huth, A. García-Etxarri, F. Casanova, F. Golmar, L. Arzubiaga, L. Hueso, J. Aizpurua, R. Hillenbrand, Resolving the electromagnetic mechanism of surface-enhanced light scattering at single hot spots, [*Nat. Commun.*]{} [**3**]{}, 684 (2012) S. Yampolsky, D. A. Fishman, S. Dey, E. Hulkko, M. Banik, E. O. Potma and V. A. Apkarian. Seeing a single molecule vibrate through time-resolved coherent anti-Stokes Raman scattering, [*Nat. Photonics*]{} [**8**]{}, 650 (2014) R. Chikkaraddy, B. de Nijs, F. Benz, S. J. Barrow, O. A. Scherman, E. Rosta, A. Demetriadou, P. Fox, O. Hess and J. J. Baumberg.
Single-molecule strong coupling at room temperature in plasmonic nanocavities, [*Nature*]{} [**535**]{}, 127 (2016) Y. Huang, Y. Fang, Z. Zhang, L. Zhu, and M. Sun. Nanowire-supported plasmonic waveguide for remote excitation of surface-enhanced Raman scattering, [*Light: Sci. Appl.*]{} [**3**]{}, e199 (2014) S. J. P. Kress, F. V. Antolinez, P. Richner, S. V. Jayanti, D. K. Kim, F. Prins, A. Riedinger, M. P. C. Fischer, S. Meyer, K. M. McPeak, D. Poulikakos, D. J. Norris. Wedge waveguides and resonators for quantum plasmonics, [*Nano. Lett.*]{} [**15**]{}, 6267 (2015) C. Lee, F. Dieleman, J. Lee, C. Rockstuhl, S. A. Maier, M. Tame. Quantum plasmonic sensing: Beyond the shot-noise and diffraction limit. [*ACS Photonics*]{} [**3**]{}, 992 (2016) F. Peyskens, A. Dhakal, P. van Dorpe, N. Le Thomas, and R. Baets. Surface enhanced Raman spectroscopy using a single mode nanophotonic-plasmonic platform. [*ACS Photonics*]{} [**3**]{}, 102 (2016) M. D. Baaske and F. Vollmer. Optical observation of single atomic ions interacting with plasmonic nanorods in aqueous solution, [*Nat. Photonics*]{} [**10**]{}, 733 (2016) M. Barth, S. Schietinger, S. Fisher, J. Becker, N. Nüsse, T. Aichele, B. Löchel, C. Sönnichsen and O. Benson. Nanoassembled plasmonic-photonic hybrid cavity for tailored light-matter coupling, [*Nano Lett.*]{} [**10**]{}, 891 (2010) H. Xu, E. J. Bjerneld, M. Käll and L. Börjesson. Spectroscopy of single hemoglobin molecules by surface enhanced Raman scattering, [*Phys. Rev. Lett.*]{} [**83**]{}(21) 4357 (1999) C. E. Talley, J. B. Jackson, C. Oubre, N. K. Grady, C. W. Hollars, S. M. Lane, T. R. Huser, P. Nordlander, and N. J. Halas. Surface-enhanced Raman scattering from individual Au nanoparticles and nanoparticle dimer substrates, [*Nano. Lett.*]{} [**5**]{}, 1569 (2005) W. Zhu and K. B. Crozier. Quantum mechanical limit to plasmonic enhancement as observed by surface-enhanced Raman scattering, [*Nat. Commun.*]{} [**5**]{}, 5228 (2014) M. Takase, S.
Yasuda and K. Murakoshi. Single-site surface-enhanced Raman scattering beyond spectroscopy, [*Front. Phys.*]{} [**11**]{}(2) 117803 (2016) R. Hanbury Brown and R. Q. Twiss. Correlations between photons in two coherent beams of light, [*Nature*]{} (London) [**177**]{}, 27 (1956) D. F. Walls and P. Zoller. Reduced quantum fluctuations in resonance fluorescence, [*Phys. Rev. Lett.*]{} [**47**]{}, 709 (1981) R. Loudon and P. L. Knight. Squeezed Light, [*J. Mod. Opt.*]{} [**34**]{}, 709 (1987) H. J. Carmichael, H. M. Castro-Beltrán, G. T. Foster and L. A. Orozco. Giant Violations of Classical Inequalities through Conditional Homodyne Detection of the Quadrature Amplitudes of Light, [*Phys. Rev. Lett.*]{} [**85**]{}(9) 1855 (2000) G. T. Foster, L. A. Orozco, H. M. Castro-Beltrán and H. J. Carmichael. Quantum State Reduction and Conditional Time Evolution of Wave-Particle Correlations in Cavity QED, [*Phys. Rev. Lett.*]{} [**85**]{}(15) 3149 (2000) G. T. Foster, W. P. Smith, J. E. Reiner and L. A. Orozco. Third-order correlations in cavity quantum electrodynamics, [*J. Opt. B: Quantum Semiclass. Opt.*]{} [**4**]{} S281 (2002) A. Denisov, H. M. Castro-Beltrán and H. J. Carmichael. Time-Asymmetric Fluctuations of Light and the Breakdown of Detailed Balance, [*Phys. Rev. Lett.*]{} [**88**]{}(24) 243601 (2002) H. J. Carmichael, G. T. Foster, L. A. Orozco, J. E. Reiner and P. R. Rice. Intensity-field correlations of nonclassical light, [*Progress in Optics*]{}, E. Wolf, ed. (Elsevier, 2004), Vol. [**46**]{} A. Shalabney, J. George, J. Hutchison, G. Pupillo, C. Genet and T. W. Ebbesen. Coherent coupling of Molecular Resonators with a Microcavity Mode, [*Nat. Commun.*]{} [**6**]{}, 5981 (2015) T. J. Kippenberg and K. J. Vahala. Cavity Opto-Mechanics, [*Opt. Express*]{} [**15**]{}, 17172 (2007) M. K. Schmidt, R. Esteban, A. González-Tudela, G. Giedke and J. Aizpurua.
Quantum Mechanical Description of Raman Scattering from Molecules in Plasmonic Cavities, [*ACS Nano*]{} [**10**]{}(6) 6291 (2016) M. K. Schmidt, R. Esteban, F. Benz, J. J. Baumberg and J. Aizpurua. Linking classical and molecular optomechanics descriptions of SERS [*Faraday Discuss.*]{} [**205**]{}, 31 (2017) P. Roelli, C. Galland, N. Piro and T. J. Kippenberg. Molecular cavity optomechanics as a theory of plasmon-enhanced Raman scattering, [*Nat. Nanotechnol.*]{} [**11**]{}, 164 (2016) M. Aspelmeyer, T. J. Kippenberg and F. Marquart. Cavity optomechanics, [*Rev. Mod. Phys.*]{} [**86**]{}(4) 1391 (2014) M. K. Dezfouli and S. Hughes. Quantum Optics Model of Surface-Enhanced Raman Spectroscopy for Arbitrarily Shaped Plasmonic Resonators [*ACS Photonics*]{} [**4**]{} (5) 1245 (2017) H. J. Carmichael. An Open Systems Approach to Quantum Optics, Springer-Verlag, 1993 C. A. Parra-Murillo, M. F. Santos, C. H. Monken and A. Jorio. Stokes-anti-Stokes correlation in the inelastic scattering of light by matter and generalization of the Bose-Einstein population function [*Phys. Rev.*]{} B [**93**]{}(12) 125141 (2016) E. R. Marquina-Cruz and H. M. Castro-Beltrán. Nonclassicality of resonance fluorescence via amplitude-intensity field correlations, [*Laser Phys.*]{} [**18**]{} 157 (2008) H. M. Castro-Beltrán. Phase-dependent fluctuations of resonance fluorescence beyond the squeezing regime [*Opt. Commun.*]{} [**283**]{}, 4680 (2010) H. M. Castro-Beltrán, L. Gutiérrez and L. Horvath. Squeezed Versus Non-Gaussian Fluctuations in Resonance Fluorescence, [*Appl. Math. Inf. Sci.*]{} [**9**]{}, 2849 (2015) H. M. Castro-Beltrán, R. Román-Ancheyta and L. Gutiérrez. Phase-dependent fluctuations of intermittent resonance fluorescence, [*Phys. Rev. A*]{} [**93**]{}(3) 033801 (2016) L. Gutiérrez, H. M. Castro-Beltrán, R. Román-Ancheyta and L. Horvath. 
Large time-asymmetric quantum fluctuations in amplitude-intensity correlation measurements of V-type three-level atom resonance fluorescence, [*J. Opt. Soc. Am.*]{} B [**34**]{}, 2301 (2017) J. E. Reiner, W. P. Smith, L. A. Orozco, H. J. Carmichael and P. R. Rice. Time evolution and squeezing of the field amplitude in cavity QED, [*J. Opt. Soc. Am. B*]{} [**18**]{}, 1911 (2001) C. E. Strimbu, J. Leach and P. R. Rice. Conditioned homodyne detection at the single-photon level: Intensity-field correlations for a two-level atom in an optical parametric oscillator, [*Phys. Rev.*]{} A [**71**]{}(1) 013807 (2005) F. Wang, X. Feng and C. H. Oh. Intensity-intensity and intensity-amplitude correlation of microwave photons from a superconducting artificial atom, [*Laser Phys. Lett.*]{} [**13**]{}, 105201 (2016) Q. Xu and K. Mølmer. Intensity and amplitude correlations in the fluorescence from atoms with interacting Rydberg states, [*Phys. Rev.*]{} A [**92**]{}(3) 033830 (2015) Q. Xu, E. Greplova, B. Julsgaard and K. Mølmer. Correlation functions and conditioned quantum dynamics in photodetection theory, [*Phys. Scr.*]{} [**90**]{} 128004 (2015) S. Gerber, D. Rotter, L. Slodicka, J. Eschner, H. J. Carmichael and R. Blatt. Intensity-Field Correlations of Single-Atom Resonance Fluorescence, [*Phys. Rev. Lett.*]{} [**102**]{}(18) 183601 (2009) M. J. Collet, D. F. Walls and P. Zoller. Spectrum of squeezing in resonance fluorescence, [*Opt. Commun.*]{} [**52**]{}, 145 (1984) P. R. Rice and H. J. Carmichael. Nonclassical effects in optical spectra, [*J. Opt. Soc. Am.*]{} B [**5**]{}, 1661 (1988) J. R. Johanson, P. D. Nation and F. Nori. QuTiP 2: A Python framework for the dynamics of open quantum systems, [*Comput. Phys. Comm.*]{} [**284**]{} 1234 (2013) G. Nienhuis. Spectral correlations in resonance fluorescence, [*Phys. Rev. A*]{} [**47**]{}, 510 (1993) E. del Valle, A. González-Tudela, F. P. Laussy, C. Tejedor and M. J. Hartmann. 
Theory of Frequency-Filtered and Time-Resolved N-Photon Correlations, [*Phys. Rev. Lett.*]{} [**109**]{}(18) 183601 (2012) A. Frank, R. Lemus, M. Carvajal, C. Jung and E. Ziemniak. SU(2) approximation to the coupling of Morse oscillators, [*Chem. Phys. Lett.*]{} [**308**]{}, 91 (1999) M. Carvajal, R. Lemus, A. Frank, C. Jung and E. Ziemniak. An extended SU(2) model for coupled Morse oscillators, [*Chem. Phys.*]{} [**105**]{}, 206 (2000) M. I. Stockman, K. Kneipp, S. I. Bozhevolnyi, S. Saha, A. Dutta, J. Ndukaife, N. Kinsey, H. Reddy, U. Guler, V. M. Shalaev, A. Boltasseva, B. Gholipour, H. N. S. Krishnamoorthy, K. F. MacDonald, C. Soci, N. I. Zheludev, V. Savinov, R. Singh, P. Grob, C. Lienau, M. Vadai, M. L. Solomon, D. R. Barton III, M. Lawrence, J. A. Dionne, S. V. Boriskina, R. Esteban, J. Aizpurua, X. Zhang, S. Yang, D. Wang, W. Wang, T. W. Odom, N. Accanto, P. M. de Roque, I. M. Hancu, L. Piatkowski, N. F. van Hulst and M. F. Kling. Roadmap on plasmonics, [*J. Opt.*]{} [**20**]{}, 043001 (2018)
--- abstract: 'We have conducted extensive investigations of the magnetization and its dynamical relaxation on a Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$ single crystal. It is found that the magnetization relaxation rate is rather weak compared with that in the cuprate superconductors, indicating a higher collective vortex pinning potential (or activation energy), although the intrinsic pinning potential $U_\mathrm{c}$ is weaker. Detailed analysis leads to the following discoveries: (1) A second-peak effect on the magnetization hysteresis loop was observed in a very wide temperature region, ranging from 2 K to 24 K. Its general behavior resembles that in YBa$_2$Cu$_3$O$_7$; (2) Associated with the second-peak effect, the magnetization relaxation rate is inversely related to the transient superconducting current density $J_\mathrm{s}$, revealing a quite general and similar mechanism for the second-peak effect in many high temperature superconductors; (3) A detailed analysis based on the collective creep model reveals a large glassy exponent $\mu$ and a small intrinsic pinning potential $U_\mathrm{c}$; (4) Investigation of the volume pinning force density shows that the data can be scaled to the formula $F_{p}\propto b^p(1-b)^q$ with $p=2.79$ and $q=3.14$, where $b$ is the magnetic field normalized to the irreversible magnetic field. The maximum normalized pinning force density appears near $b\approx0.47$. Finally, a vortex phase diagram is drawn to show the phase transitions or crossovers between different vortex phases.' author: - 'Jian Tao, Qiang Deng, Huan Yang, Zhihe Wang, Xiyu Zhu, and Hai-Hu Wen' title: 'Magnetization relaxation, critical current density and vortex dynamics in a Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$ single crystal' --- Introduction ============ Investigation of vortex physics is very important for the potential high-power applications of superconductors. 
In the cuprate superconductors, due to the very high superconducting transition temperature, layered structure, short coherence length, strong thermal fluctuations, etc., the vortex physics is extremely rich, which has led to an unprecedented development of this field[@BlatterReview]. Many new concepts and phenomena, such as collective vortex creep[@Vinokur], the vortex glass[@Fisher1; @Fisher2; @Koch], first-order vortex transitions[@ZeldovNature; @Kwok], vortex melting[@VortexmMelting], and the second-peak effect of the magnetization[@LingS; @Bhattacharya], have been proposed or discovered. In the iron based superconductors, the vortex physics looks quite similar to that of the cuprates[@BSPRB81; @vanderBeek; @Konczykowski], although the anisotropy $\xi_{ab}/\xi_c$ is only about 2-5, much smaller than in the cuprate system[@JiaYAPL; @WangZSPRB]. Preliminary experimental studies have revealed that the vortex dynamics in the iron pnictides may be understood with the model of thermally activated flux motion within the scenario of collective vortex pinning[@YangHPRB; @ProzorovPRB; @ProzorovPhysicaC]. A second-peak (SP) effect on the magnetization hysteresis loop (MHL) has also been observed in Ba$_{1-x}$K$_x$Fe$_2$As$_2$[@YangHAPL] and Ba(Fe$_{1-x}$Co$_x)_2$As$_2$ single crystals[@ProzorovPRB; @ProzorovPhysicaC; @ProzorovPRB2009; @YNakajida]. Besides the three typical high temperature superconductors, namely the cuprates, MgB$_2$ and the iron based superconductors, the Ba$_{1-x}$K$_x$BiO$_3$[@JohnsonPRB; @CavaNature332] (hereafter abbreviated as BKBO) superconductor is quite special in terms of its relatively high transition temperature (the highest $T_\mathrm{c}$ can reach about 34 K[@JonesJSSC78]) and its almost three-dimensional character[@SYPeiPRB; @SchneemeyerNature]. In the BKBO superconductors, the coherence length detected from scanning tunneling microscopy (STM) and other measurements is about 3-7 nm[@JangPRB; @AffrontePRB; @KwokPRB40; @Schweinfurth]. 
The structural characteristics[@SYPeiPRB], the coherence length, and the Ginzburg-Landau parameter[@SNBariloPRB] $\kappa=\lambda_\mathrm{L}/\xi$ seem to be very different from those of the cuprate superconductors[@VaugheyChemM; @ShepelevPhysicaC; @WorthingtonPRL; @GallagherJAP]. These peculiarities may bring new ingredients to our understanding of the vortex physics in high temperature superconductors. Therefore we have grown Ba$_{1-x}$K$_x$BiO$_3$ single crystals and investigated the vortex physics extensively with measurements of the magnetization and its dynamical relaxation. Experiment ========== ![(Color online) X-ray diffraction pattern of a crystal with composition Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$. The inset shows a photograph of BKBO crystals grown by the electrochemical method. []{data-label="fig1"}](fig1.eps){width="8cm"} The high-quality single crystals studied in this work were prepared by the molten salt electrochemical method presented previously[@NortonMRB; @Nishio]. In the electrochemical growth, the working electrode was made from a 0.5 mm-diameter platinum wire (Alfa Aesar, 4N), and the working current was 2 mA. We placed 43 g of KOH (J$\&$K Chemical Ltd.) in a 100 cm$^3$ Teflon container and heated it to 250 $^\circ$C for several hours until the KOH was completely melted; we then added 1.49 g of Ba(OH)$_2$$\cdot$8H$_2$O (J$\&$K Chemical Ltd., 2N) and 3.22 g of Bi$_2$O$_3$ to the molten KOH solution, and the growth began after stirring the solution for about two hours. In this way, crystals with sizes up to several millimeters can be obtained if the growing time is long enough. The best growth time in our experiment is around 48 hours. The inset of Fig. 1 shows a photograph of samples synthesized by this electrochemical method. All the samples we measured were polished to a proper thickness in order to guarantee homogeneity. 
The lattice structure of the sample was characterized by x-ray diffraction (XRD) at room temperature with a Bruker D8 diffractometer. The XRD pattern of a sample is shown in Fig. 1; the dominant intensity of the ($l$00) peaks demonstrates the $a$-axis orientation of the single crystal. The $a$-axis lattice constant is 4.2995 $\mathrm{\AA}$, determined from the indexed peaks. The sample composition was analyzed with an energy dispersive x-ray spectrometer (EDX/EDS). We conclude that the composition of the measured sample is about Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$, where the oxygen content cannot be accurately determined. ![(Color online)(a) Temperature dependence of the resistivity at different magnetic fields ranging from 0 T to 9 T. (b) Temperature dependence of the resistivity at different magnetic fields ranging from 0 T to 0.8 T. (c) Temperature dependence of the diamagnetic moment measured in the ZFC and FC processes at a magnetic field of 20 Oe.[]{data-label="fig2"}](fig2.eps){width="8cm"} The electric transport and magnetization measurements were performed with a physical property measurement system (PPMS, Quantum Design) and a SQUID vibrating sample magnetometer (SQUID-VSM, Quantum Design), respectively. Figs. 2(a) and (b) show the temperature dependence of the resistivity of the crystal Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$ under magnetic fields ranging from 0 T to 9 T. The onset transition temperature at zero field is about 27 K, taking a criterion of 90$\%\rho_\mathrm{n}$, where $\rho_\mathrm{n}$ is the normal state resistivity. The diamagnetic moment of the sample, measured in the zero-field-cooled (ZFC) and field-cooled (FC) modes under a DC magnetic field of 20 Oe, is shown in Fig. 2(c). The ZFC curve displays a transition with an onset temperature around 26.5 K. The transition temperature of the present sample is in agreement with the phase diagram reported by another group[@SamataJPCS]. 
From the results of the XRD, resistivity and diamagnetic measurements, the quality of the sample has been shown to be good enough for further study of the vortex dynamics. In Fig. 3 we show the MHL curves with magnetic field sweeping rates $dB/dt$ of 200 Oe/s and 50 Oe/s at different temperatures ranging from 2 K to 24 K (the magnetic field is perpendicular to the $ab$ plane of the sample). The symmetric MHL curves demonstrate that the measured sample is a bulk superconductor and the vortex pinning is bulk in nature. The diamagnetization here is thus not due to a surface shielding effect, and the Bean critical state model can be used. ![(Color online) Magnetization hysteresis loops of the BKBO single crystal at various temperatures ranging from 2 K to 15 K (a), and 18 K to 24 K (b). The solid lines stand for the magnetization measured with a field sweeping rate of 200 Oe/s while the dashed lines represent those measured with 50 Oe/s. Note that the magnetization at 4 K in the field-ascending process with 50 Oe/s experienced a small flux jump at a low field. This flux jump continues to influence the MHL at high field on this curve. 
For calculating $\Delta M$, $J_s$ and $Q$ at 4 K, we used the data on the left-hand side and in the second quadrant with negative magnetic field.[]{data-label="fig3"}](fig3.eps){width="8cm"} Models and analysis method ========================== Thermally activated flux motion and collective flux creep --------------------------------------------------------- To understand the vortex motion in the BKBO single crystal, we start from the model of thermally activated flux motion[@AndersonPRL]: $$E=v_0B \exp (-\frac{U(J_\mathrm{s},T,B_\mathrm{e})}{k_\mathrm{B}T}).$$ Here $E$ is the electric field induced by the vortex motion, $v_0$ is the attempt velocity of the hopping vortex lines, $U(J_\mathrm{s}, T, B_\mathrm{e})$ is the effective activation energy, $B_\mathrm{e}$ is the external magnetic field, and $B$ is the locally averaged magnetic induction. Based on the vortex glass [@Fisher1] and the collective pinning models[@Vinokur], it is predicted that $U(J_\mathrm{s},T,B_\mathrm{e})$ is positively related to $[J_\mathrm{c}(T,B_\mathrm{e})/J_\mathrm{s}]^\mu$, where $\mu$ is the glassy exponent describing the activation energy. In order to ensure that $U(J_\mathrm{s},T,B_\mathrm{e})$ reaches zero when the applied current $J_\mathrm{s}$ approaches the critical current $J_\mathrm{c}$, Malozemoff proposed to write the activation energy in a very general way as[@MalozemoffPHYSICAC] $$U(J_\mathrm{s},T,B_\mathrm{e})=\frac{U_\mathrm{c}(T,B_\mathrm{e})}{\mu(T,B_\mathrm{e})}[(\frac{J_\mathrm{c}(T,B_\mathrm{e})}{J_\mathrm{s}(T,B_\mathrm{e})})^{\mu(T,B_\mathrm{e})}-1]$$ where $U_\mathrm{c}$ and $J_\mathrm{c}$ are the characteristic (or intrinsic) pinning energy and the initial (unrelaxed) critical current density, respectively. The glassy exponent $\mu$ takes different values in different regimes of flux creep. 
From the elastic manifold theory [@Vinokur], it is predicted that $\mu = 1/7$, 3/2, and 7/9 for the motion of single vortices, small bundles, and large bundles of vortices, respectively. Interestingly, Eq. (2) covers many models for $U(J_\mathrm{s})$. For instance, $\mu= -1$ recovers the Kim-Anderson model[@AndersonPRL], and $\mu= 0$ corresponds to Zeldov's logarithmic model[@ZeldovAPL]. Furthermore, when $J_\mathrm{c}$ is much larger than $J_\mathrm{s}$, Eq. (2) reduces to the form of the collective pinning models. Therefore, the value of $\mu$ plays a significant role in understanding the vortex motion. Models for analyzing the magnetization relaxation ------------------------------------------------- For the sake of discussion, we calculate the transient current density $J_\mathrm{s}$ from the width $\Delta M$ of the MHLs, where $\Delta M = M^{-}-M^{+}$, with $M^{-}$ ($M^{+}$) the magnetization at a certain magnetic field in the increasing (decreasing)-field process. According to the Bean critical state model[@BEANRMP36], the transient superconducting current density $J_\mathrm{s}$ can be expressed as $$J_\mathrm{s}=20\frac{\Delta M}{w(1-\frac{w}{3l})},$$ where the unit of $\Delta M$ is emu/cm$^3$, and $w$, $l$ are the width and length of the sample measured in cm ($w<l$), respectively. In this work, we utilized the dynamical relaxation method to study the vortex dynamics, instead of the conventional relaxation method[@MJPRB; @WenHHPhysicaC95; @WENPRB52]. 
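The Bean-model conversion of Eq. (3) and the generalized activation energy of Eq. (2) can be sketched numerically. The following is a minimal illustration (not the authors' analysis code); the sample dimensions and energy values passed to the functions are arbitrary:

```python
import numpy as np

def bean_js(delta_m, w, l):
    """Transient current density J_s (A/cm^2) from the Bean critical state
    model, Eq. (3): delta_m in emu/cm^3, sample width w and length l in cm
    (w < l)."""
    return 20.0 * delta_m / (w * (1.0 - w / (3.0 * l)))

def activation_energy(js, jc, uc, mu):
    """Effective activation energy U(J_s) of Eq. (2); the mu -> 0 limit
    reproduces Zeldov's logarithmic model U = U_c ln(J_c/J_s)."""
    if abs(mu) < 1e-9:
        return uc * np.log(jc / js)
    return (uc / mu) * ((jc / js) ** mu - 1.0)

# U vanishes when J_s reaches J_c, as Eq. (2) is built to guarantee
assert activation_energy(1e5, 1e5, 200.0, 2.5) == 0.0

# for small mu the value approaches the logarithmic model
u_small = activation_energy(5e4, 1e5, 200.0, 1e-4)
u_log = 200.0 * np.log(2.0)
assert abs(u_small - u_log) / u_log < 1e-3
```

Setting $\mu=-1$ in the same function reproduces the Kim-Anderson form $U_\mathrm{c}(1-J_\mathrm{s}/J_\mathrm{c})$, as noted in the text.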
The corresponding physical quantity is the magnetization relaxation rate $Q$, defined as: $$Q \equiv \frac{\textrm{d} \ln J_\mathrm{s}}{\textrm{d} \ln (\textrm{d}B/\textrm{d}t)}= \frac{\textrm{d} \ln (\Delta M)}{\textrm{d} \ln(\textrm{d}B/\textrm{d}t)}.$$ The dynamical relaxation measurements are performed in this way: the sample is cooled down to a certain temperature in zero applied magnetic field, and then the MHL curves are measured with two different magnetic field sweeping rates. From the general formulas Eqs. (1) and (2) and the definition of $Q$, we employ the following expression to quantify the characteristic pinning energy, as derived by Wen et al.[@WENJAC] $$\frac{T}{Q(T,B_\mathrm{e})}=\frac{U_\mathrm{c}(T,B_\mathrm{e})}{k_B}+\mu(T,B_\mathrm{e})CT,$$ where $C=\ln(2v_0B/(l\,\textrm{d}B_\mathrm{e}/\textrm{d}t))$ is a weakly temperature dependent parameter and $l$ is the lateral dimension of the sample. According to Eq. (5), in the low temperature region the term $\mu(T,B_\mathrm{e})CT$ is much smaller than $U_\mathrm{c}(T,B_\mathrm{e})/k_B$, so we can neglect it and obtain $T/Q(T,B_\mathrm{e})\approx U_\mathrm{c}(T,B_\mathrm{e})/k_B$, i.e., $Q$ should show a linear dependence on $T$. However, as we will show below, in BKBO the term $U_\mathrm{c}$ is not large while the glassy exponent $\mu$ is sizeable, so the second term becomes quite important. In the low temperature limit, we can use this approximation to extrapolate the curve $T/Q$ down to zero temperature; the intercept gives $U_\mathrm{c}(0)/k_B$. The relaxation rate $Q$ is governed by the balance of the two terms $U_\mathrm{c}$ and $\mu CT$ and shows a complex temperature dependent behavior. 
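The two-sweep-rate procedure amounts to a finite-difference estimate of Eq. (4), and Eq. (5) is a straight line in $T$ whose intercept and slope give $U_\mathrm{c}/k_B$ and $\mu C$. A minimal numerical sketch of both steps, on synthetic data built from the 1 T values quoted later in the text ($U_\mathrm{c}/k_B=111$ K, $\mu C=63$), could look like:

```python
import numpy as np

def dynamical_q(dm_fast, dm_slow, rate_fast=200.0, rate_slow=50.0):
    """Finite-difference estimate of the dynamical relaxation rate of
    Eq. (4) from MHL widths measured at two sweep rates (Oe/s)."""
    return np.log(dm_fast / dm_slow) / np.log(rate_fast / rate_slow)

def uc_and_muc(T, Q):
    """Linear fit of Eq. (5), T/Q = U_c/k_B + (mu*C)*T; the intercept is
    U_c/k_B (in K) and the slope is mu*C."""
    slope, intercept = np.polyfit(T, np.asarray(T) / np.asarray(Q), 1)
    return intercept, slope

# synthetic Q(T) generated with U_c/k_B = 111 K and mu*C = 63
T = np.linspace(2.0, 15.0, 8)
Q = T / (111.0 + 63.0 * T)
uc, muc = uc_and_muc(T, Q)
assert abs(uc - 111.0) < 1e-6 and abs(muc - 63.0) < 1e-6
```

A 5% difference in $\Delta M$ between the 200 Oe/s and 50 Oe/s loops, for example, corresponds to $Q=\ln 1.05/\ln 4\approx 0.035$.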
Results and discussion ====================== Transient superconducting current density and second peak effect ---------------------------------------------------------------- ![(Color online) Field dependence of $\Delta M$ with different field sweeping rates at various temperatures ranging from 2 K to 10 K.[]{data-label="fig4"}](fig4.eps){width="8cm"} ![(Color online) (a) Magnetic field dependence of the transient superconducting current density $J_\mathrm{s}$ calculated from the Bean critical state model at temperatures ranging from 2 K to 10 K. (b) The same at temperatures ranging from 15 K to 24 K. []{data-label="fig5"}](fig5.eps){width="8cm"} In Fig. 4, we show the field dependent $\Delta M$ with field sweeping rates of 200 Oe/s and 50 Oe/s at different temperatures ranging from 2 K to 10 K. Fig. 5 shows the field dependence of $J_\mathrm{s}$ with the magnetic field sweeping rate of 50 Oe/s, obtained using Eq. (3). The calculated $J_\mathrm{s}$ at 2 K and zero magnetic field reaches 10$^5$ A/cm$^2$, an order of magnitude larger than the values reported in the literature [@BARILOphysicac254], although this $J_\mathrm{s}$ is much smaller than in the cuprate and iron-based superconductors. As presented in Fig. 5(b), in the high temperature region $J_\mathrm{s}$ decreases greatly due to severe flux motion. From the MHL curves in Fig. 3 and the $J_\mathrm{s}$-$B$ curves in Fig. 5, we can observe the second peak (SP) effect (fish-tail effect) in a very wide temperature region ranging from 2 K to 22 K. The second peak is named relative to the first one near zero magnetic field in the MHL or $J_\mathrm{s}$-$B$ curve, and the magnetic field of the second peak position is defined as $H_\mathrm{sp}$, which is temperature dependent. We can see clearly from Fig. 
3 that the peak position of the SP moves toward lower magnetic field as the temperature rises. This feature was also observed in the cuprate superconductors, such as YBa$_2$Cu$_3$O$_7$ (YBCO) [@LKPRB49], as well as in the iron-based superconductors[@YangHAPL; @BSPRB81]. The second peak effect has been intensively studied previously, and several possible mechanisms have been proposed. These include: (1) inhomogeneities in the sample, such as the nonuniform oxygen distribution in the cuprates, which generates oxygen-deficient regions acting as extra pinning centers at high magnetic field[@MDNATURE] and thus enhances the superconducting current density; (2) a crossover from fast to slow relaxation, with the transition occurring between the single vortex regime and the collective pinning and creep regimes, the latter relaxing more slowly at sufficiently high magnetic fields [@LKPRB49]; (3) a crossover in flux dynamics from elastic to plastic vortex creep [@YAPRL77]. It is well known that the SP effect manifests differently in YBCO and Bi$_2$Sr$_2$CaCu$_2$O$_8$ (Bi2212) samples. In Bi2212, the SP field is low (usually a few hundred oersted) and weakly temperature dependent, whereas the SP in YBCO occurs at a high magnetic field (usually a few to more than ten tesla) and is strongly temperature dependent. Our present results show that the SP in BKBO is more like that in YBCO. This probably suggests that the SP in BKBO and YBCO arises for similar reasons. One possible explanation is that both systems are more three dimensional and contain oxygen vacancies. The oxygen content may not be very uniform, leading to many local random pinning centers. Magnetization relaxation and its correlation with $J_\mathrm{s}$ ---------------------------------------------------------------- It can be clearly seen from Fig. 4 that the MHL curves differ in magnitude for different field sweeping rates at a given temperature. 
The larger the field sweeping rate, the larger the MHL width. The magnitude of the difference between the MHL curves measured at different field sweeping rates reflects how strong the vortex creep is. In the cuprate high temperature superconductors, it was found that the separation between different MHLs is quite large[@WenHHPhysicaC1998]. At high temperatures, however, the magnitude of the diamagnetic moment greatly decreases with increasing magnetic field. Based on Eq. (4) for treating the magnetization relaxation, we calculated the magnetic field and temperature dependence of the dynamical relaxation rate $Q$ from the data shown in Fig. 4. As presented in Fig. 6(a), $Q$ first decreases and then increases with increasing magnetic field at temperatures of 2 K and 10 K, showing a minimum in the intermediate magnetic field region. This corresponds qualitatively very well with the width of the MHLs, or the transient superconducting current density, as shown in Fig. 6(b). However, unlike what was reported in Ba(Fe$_{0.92}$Co$_{0.08}$)$_2$As$_2$ [@BSPRB81; @KonczykowskiPRB], we do not observe a clear crossover of the $Q$ value near zero field, which was there interpreted as a crossover from strong intrinsic pinning near zero field to collective pinning at high magnetic field. This may suggest that there is just one kind of pinning mechanism in the measured BKBO single crystal. The magnetization relaxation rate $Q$ is affected not only by the vortex pinning but also by the interaction between vortices. Under normal circumstances, $Q$ increases with increasing magnetic field at a given temperature because of the enhanced interaction between vortices, and the irreversible magnetization, or the transient critical current density $J_\mathrm{s}$, decreases with the magnetic field as the flux creep is enhanced. However, in Fig. 
6 we observe a non-monotonic magnetic field dependence of both $Q$ and $J_\mathrm{s}$. Indeed, one can clearly see that when the MHL gets wider versus magnetic field, the relaxation rate gets smaller, showing a slower magnetization relaxation; i.e., there is a close correlation between the field dependent behaviors of $Q$ and $\Delta M$ (or $J_\mathrm{s}$). This feature is very similar to those in the cuprate and iron based superconductors[@YeshurunSP; @BSPRB81]. We note that the positions of minimum $Q$ correspond roughly to the locations where $\Delta M$ takes its maximum. This also reminds us that the second peak effect appearing here just reflects a dynamical process of the vortex system, not the detailed characteristics of the pinning centers, because one cannot guarantee that the pinning mechanism is the same in all these different superconductors. It would be very interesting to measure the magnetization relaxation on a long time scale, to reveal whether the SP effect corresponds to a static vortex phase transition[@Yeshurun2]. ![(Color online)(a) The magnetic field dependence of the magnetization relaxation rate $Q$ at temperatures of 2 K and 10 K obtained from the curves in (b) with Eq. (4). (b) The MHL curves at temperatures of 2 K and 10 K. Solid lines stand for the magnetization measured with a magnetic field sweeping rate of 200 Oe/s while the dashed lines correspond to 50 Oe/s. Several sharp peaks of the magnetization in the field-ascending process below 0.5 T at 2 K are induced by the flux jump effect.[]{data-label="fig6"}](fig6.eps){width="8cm"} ![(Color online) (a) Temperature dependence of $\log J_\mathrm{s}$ at different magnetic fields ranging from 0.6 T to 6 T; the data are the same as those presented in Fig. 5. (b) Temperature dependence of the magnetization relaxation rate $Q$ at various magnetic fields from 0.6 T to 5.0 T obtained from the corresponding curves in Fig. 3 with Eq. (4).[]{data-label="fig7"}](fig7.eps){width="8cm"} Fig. 
7 shows the temperature dependence of the transient superconducting current density $J_\mathrm{s}$ and the dynamical relaxation rate $Q$, taken from the field dependent values at various temperatures. We notice that in the low and intermediate temperature regions, the curves of $\log J_\mathrm{s}(T)$ at different magnetic fields below 6 T almost merge together, while they disperse clearly in the high temperature region. This behavior is similar to that of Ba(Fe$_{0.88}$Co$_{0.12}$)$_2$As$_2$[@BSPRB81], and may be caused by the SP effect, which prevents a rapid decrease of $J_\mathrm{s}$. Associated with the SP effect, the magnetization relaxation rate $Q$ is inversely related to the transient superconducting current density $J_\mathrm{s}$, as shown in Fig. 7(b). As mentioned already, there is a plateau in the $Q$-$T$ curve below 18 K, and below 2 T the $Q$ value decreases with increasing field; all these features demonstrate that the $Q$-plateau and the SP effect are closely related. In the collective creep regime, the magnetization relaxation rate is low and exhibits a plateau, and thus the transient critical current density $J_\mathrm{s}$ decays slowly with temperature. It seems that all these features can be explained coherently. As addressed above, a plateau of $Q$ appears in the intermediate temperature region, followed by a steep increase in the high temperature region. This behavior of the magnetization relaxation rate was also observed in the cuprate superconductor YBCO[@RGprl72] and in iron-based superconductors, such as Ba(Fe$_{0.92}$Co$_{0.08}$)$_2$As$_2$ [@BSPRB81] and SmFeAsO$_{0.9}$F$_{0.1}$ [@YangHPRB]. This plateau cannot be understood within the picture of single vortex creep with a rigid hopping length as predicted by the Kim-Anderson model, but perhaps arises from collective flux creep. The reason is that, in the Kim-Anderson model, it is quite easy to derive that $T/Q(T)=U_\mathrm{c}(T)$. 
Supposing $U_\mathrm{c}(T)$ is weakly temperature dependent, we have $Q(T)\propto T$, which contradicts the observation of a $Q$-plateau in the intermediate temperature region. As described in Eq. (5), the relaxation rate $Q$ depends on both $U_\mathrm{c}(T)$ and $\mu CT$. In the intermediate temperature region, the term $\mu CT$ becomes much larger than $U_\mathrm{c}/k_B$ and we get an almost constant value of $Q$, which provides a simple explanation of the plateau. A coherent picture interpreting the complex temperature and magnetic field dependence of $Q$ and $J_s$ is thus proposed in the following way. When the magnetic field increases from zero, the vortex system enters the vortex glass regime with a much smaller relaxation rate. In the high field region, the magnetization relaxation rate $Q$ goes up drastically while the transient superconducting current $J_\mathrm{s}$ drops down quickly, as shown in Fig. 7(a), which can be interpreted as a crossover from elastic to plastic motion of the vortex system. Analysis based on the collective pinning/creep model ---------------------------------------------------- ![(Color online) Temperature dependence of the ratio $T/Q$ at different magnetic fields ranging from 0.6 T to 5.0 T.[]{data-label="fig8"}](fig8.eps){width="8cm"} Now let us consider the vortex dynamics quantitatively based on the collective pinning/creep model. According to Eq. (5), the intercept of the $T/Q$-$T$ curve gives $U_\mathrm{c}/k_B$ and the slope gives $\mu C$, if we assume $U_\mathrm{c}$ and $\mu$ are weakly temperature dependent. In Fig. 8 we present $T/Q$ vs. $T$ at different magnetic fields ranging from 0.6 T to 5 T. One can see that the intercept $U_\mathrm{c}/k_B$ is actually very small, about 111 K at 1 T and 198 K at 2 T, which is much smaller than the value of over 3000 K in MgB$_2$ [@MgB2Uc] and also smaller than the 300-500 K in YBCO [@WenHHPhysicaC95; @WENJAC]. 
However, since $J_\mathrm{s}$ is quite large and the relaxation rate is very small, collective pinning should play a role here. As discussed above, in the model of collective creep the glassy exponent $\mu$ becomes influential on the vortex dynamics. From Eq. (5), $\mu C$ can be determined from the slope of $T/Q$ vs $T$. From our data in Fig. 8, we obtained the value $\mu C = 63$ in the low temperature regime at 1 T. So in the intermediate temperature region, $U_\mathrm{c}/k_B\ll\mu CT$ and we get the plateau $Q\approx1/\mu C$ shown in Fig. 7(b). Meanwhile, the parameter $C$ can be determined from the slope of $-\textrm{d}\ln J_\mathrm{s}/\textrm{d}T$ vs $Q/T$[@WenHHPRL1997; @WenHHPhysicaC95], and we get $C=24.8$ and the glassy exponent $\mu\approx 2.54$. The value of $\mu$ is much larger than 1 and clearly shows a collective creep effect. Actually, $\mu$ can also be estimated[@WenHHPRL1997] from $\mu=-Q\,\textrm{d}^2\ln J_\mathrm{s}/\textrm{d}(\ln E)^2$. From this relation, one can see that a large $\mu$ means a stronger glassy effect, showing a strong downward curvature in the $\ln E$ vs $\ln J_\mathrm{s}$ curve. All these indicate that the vortex pinning and dynamics in BKBO can be described by the collective pinning/creep model with a weak characteristic pinning energy $U_\mathrm{c}$ but a large glassy exponent $\mu$. Characteristics of the pinning force density -------------------------------------------- ![(Color online) The scaling of the normalized pinning force density $F_p/F_p^\mathrm{max}$ as a function of the reduced field $b=H/H_\mathrm{irr}$. Curves at different temperatures almost scale together. A fitting result with Eq. (6) is presented as the red line, and the maximum is located at $b\approx 0.47$. []{data-label="fig9"}](fig9.eps){width="8cm"} In order to get further insight into the origin of the second peak effect, we analyze the pinning force density $F_p$, which is proportional to $J_\mathrm{s} H$. In Fig. 
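The arithmetic in the preceding paragraph can be checked directly. The following is only a consistency check of the quoted 1 T numbers, not new analysis:

```python
# mu*C is the slope of T/Q vs T (Fig. 8); C comes from the slope of
# -dlnJs/dT vs Q/T; the paper's 1 T values are mu*C = 63 and C = 24.8.
mu_C = 63.0
C = 24.8
mu = mu_C / C
assert abs(mu - 2.54) < 0.01  # glassy exponent quoted in the text

# plateau of the relaxation rate when U_c/k_B << mu*C*T, from Eq. (5)
Q_plateau = 1.0 / mu_C
assert 0.015 < Q_plateau < 0.017
```

The plateau value $Q\approx 1/\mu C\approx 0.016$ is what one would read off the flat part of the $Q$-$T$ curves in Fig. 7(b).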
9 we show $F_\mathrm{p}$ normalized to its maximum value as a function of the reduced field $b=H/H_\mathrm{irr}$, where $H_\mathrm{irr}$ is the irreversibility field determined using a criterion of $J_\mathrm{s}= 20$ A/cm$^2$. Although there is some uncertainty in determining $H_\mathrm{irr}$, the curves of the normalized pinning force at different temperatures scale well and peak at a similar position, in contrast to the poor scaling observed in thick films of BKBO [@BKBOfilm]. The maximum of the pinning force density is located at $b\approx{0.47}$. We use the following expression to study the pinning mechanism of the sample Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$ $$\frac{F_\mathrm{p}}{F_\mathrm{p}^\mathrm{max}}=A b^p(1-b)^q,$$ where $A$, $p$ and $q$ are fitting parameters; the values of $p$ and $q$ reflect the characteristic properties of the vortex pinning mechanism in the sample. We use this equation to fit the data, as shown by the red line in Fig. 9. It can be seen that the fitting curve with $p=2.79$ and $q=3.14$ captures the main features of the experimental data. The cuprate high-temperature superconductors usually satisfy the relation $q > p$; e.g., $q=4$, $p=2$ were obtained for YBCO single crystals [@JNPHYSICAC169], and $q=2$, $p=0.5$ for a BSCCO single crystal [@RosePHYSICAC170]. In this sample, the value of $b$ at the maximum of the pinning force density is near 0.5, which corresponds to $\delta\kappa$-type pinning [@KRAMERJAP44]; however, this kind of pinning requires $p=q=1$. Further studies are still required to resolve this discrepancy. Vortex phase and general discussion =================================== ![(Color online) The phase diagram of the BKBO sample. The open symbols are taken from the resistive measurements shown in Fig. 2, while the solid ones are taken from the $J_\mathrm{s}$-$\mu_0H$ curve in Fig. 5.
The inset shows a typical example of how to determine the characteristic fields $H_\mathrm{irr}$, $H_\mathrm{sp}$ and $H_\mathrm{min}$. All red lines in this figure are fitting curves with the formula $H(T) = H(0)(1-T/T_\mathrm{c})^n$. For the irreversibility field $H_\mathrm{irr}(T)$, the open circles correspond to the data determined from resistivity, while the filled green squares represent the data determined from the irreversible magnetization.[]{data-label="fig10"}](fig10.eps){width="8cm"} In Fig. 10, we present the vortex phase diagram of the sample Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$. The upper critical field $H_\mathrm{c2}$ and the irreversibility field $H_\mathrm{irr}$ shown as open symbols in Fig. 10 are determined by criteria of $90\%\rho_\mathrm{n}$ and $10\%\rho_\mathrm{n}$, respectively, in Fig. 2. The three characteristic fields are shown as solid symbols in Fig. 10, i.e., the second magnetization peak field $H_\mathrm{sp}$, the field $H_\mathrm{min}$ determined at the minimum of $\Delta M$ or $J_\mathrm{s}$ between the first and the second magnetization peak, and the irreversibility field $H_\mathrm{irr}$ determined from the field-dependent $J_\mathrm{s}$ curves in Fig. 5. In order to get more information, we use the expression $H(T) = H(0)(1-T/T_\mathrm{c})^n$ to fit these curves. We obtained the following values: $n=1.32$ for $H_\mathrm{min}$ with $H_\mathrm{min}(0)=1.4$ T, $n=0.723$ for $H_\mathrm{sp}$ with $H_\mathrm{sp}(0)=3.59$ T, and $n=1.07$ for $H_\mathrm{c2}$ with $H_\mathrm{c2}(0)=21.6$ T. The exponent $n$ from the $H_\mathrm{sp}$-$T$ curve is almost a factor of two smaller than that of YBCO [@LKPRB49]. It is clear that in BKBO the separation between the $H_\mathrm{irr}$-$T$ and $H_\mathrm{c2}$-$T$ curves is small, which is similar to YBCO and indicates rather weak vortex fluctuations. In addition, one can see that the separation between the $H_\mathrm{sp}$-$T$ and $H_\mathrm{irr}$-$T$ curves is quite large.
One cannot interpret this large region as due to a non-uniform distribution of disorder or pinning centers. It is, however, reasonable to understand this region as one of plastic flux motion, since such a phase can gradually “melt” through losing the rigidity of the vortex manifold. It is interesting to note that $H_\mathrm{sp}(T)$ looks very similar to that in YBCO, but very different from that in Bi2212. Therefore we believe that the second peak effect, at least in BKBO and YBCO, may be induced by a similar mechanism. It is quite possible that the elastic energy, which depends on the shear modulus $C_{66}$, is an influential factor for the occurrence of the second peak effect. Concluding Remarks ================== We have investigated the vortex dynamics and phase diagram through measuring the magnetization and its relaxation on a Ba$_{0.66}$K$_{0.32}$BiO$_{3+\delta}$ single crystal with transition temperature $T_\mathrm{c}=27$ K. A second magnetization peak has been observed in a wide temperature region from 2 K to 24 K. It is found that, throughout the non-monotonic magnetic field dependence of the magnetization, the relaxation rate is inversely related to the transient critical current density, indicating that the SP effect is dynamical in nature. Many of the observed features can be coherently described by the collective pinning/creep model. Through the fitting and analysis, we find that the characteristic pinning energy $U_\mathrm{c}$ is quite small (about 198 K at 2 T), but the glassy exponent $\mu$ is quite large, which leads to a relatively small magnetization relaxation rate. A universal scaling law for the pinning force density, $F_\mathrm{p}/F_\mathrm{p}^\mathrm{max}$ vs $H/H_\mathrm{irr}$, is found, which suggests that the pinning mechanism is probably of the $\delta \kappa$ type.
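The peak position of the scaling law $F_\mathrm{p}/F_\mathrm{p}^\mathrm{max}=Ab^p(1-b)^q$ follows directly from the exponents, since the maximum lies at $b=p/(p+q)$. A short numerical sketch (illustrative only, using the exponents quoted from the fit in Fig. 9):

```python
import numpy as np

p, q = 2.79, 3.14   # exponents from the Eq. (6) fit in Fig. 9

b = np.linspace(1e-4, 1 - 1e-4, 100000)
f = b**p * (1 - b)**q          # shape of Eq. (6); normalization irrelevant

b_max = b[np.argmax(f)]
print(b_max, p / (p + q))      # numerical peak vs analytic p/(p+q)
```

Setting the derivative of $p\ln b+q\ln(1-b)$ to zero gives $b=p/(p+q)\approx0.47$, which is exactly the peak position quoted for the data.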
The characteristics of the SP effect and magnetization relaxation as well as the vortex dynamics of the system allow us to conclude that it is more like those in the cuprate superconductor YBCO. Acknowledgments =============== We thank Xiang Ma and Dong Sheng for the assistance in SEM/EDS measurements. This work was supported by NSF of China with the project numbers 11034011 and 11190020, the Ministry of Science and Technology of China (973 projects: 2011CBA00102, 2012CB821403) and PAPD. [00]{} G. Blatter, V. M. Feigelman, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, Rev. Mod. Phys. **66**, 1125 (1994). M. V. Feigelman, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, Phys. Rev. Lett. **63**, 2303 (1989). M. P. A. Fisher, Phys. Rev. Lett. **62**, 1415 (1989). D. S. Fisher, M. P. A. Fisher, and D. A. Huse, Phys. Rev. B **43**, 130 (1991). R. H. Koch, V. Foglietti, W. J. Gallagher, G. Koren, A. Gupta, and M. P. A. Fisher, Phys. Rev. Lett. **63**, 1511 (1989). E. Zeldov, D. Majer, M. Konczykowski, V. B. Geshkenbein, V. M. Vinokur, H. Shtrikman, Nature (London) **375**, 373 (1995). A. M. Petrean, L. M. Paulius, W. K. Kwok, J. A. Fendrich, and G. W. Crabtree, Phys. Rev. Lett. **84**, 5852 (2000). P. L. Gammel, L. F. Schneemeyer, J. V. Wasczak, and D. J. Bishop, Phys. Rev. Lett. **61**, 1666 (1988). X. S. Ling, J. E. Berger, D. E. Prober, Phys. Rev. B **57**, 3249 (1998). S. Bhattacharya and M. J. Higgins, Phys. Rev. Lett. **70**, 2617 (1992). B. Shen, P. Cheng, Z. S. Wang, L. Fang, C. Ren, L. Shan, and H. H. Wen, Phys. Rev. B **81**, 014503 (2010). C. J. van der Beek, P. H. Kes, Phys. Rev. B **43**, 13032 (1991). D. Majer, E. Zeldov, and M. Konczykowski, Phys. Rev. Lett. **75**, 1166 (1995). Y. Jia, P. Cheng, L.Fang, H. Q. Luo, H. Yang, C. Ren, L. Shan, C. Z. Gu, and H. H. Wen, Appl. Phys. Lett. **93**, 032503 (2008). U. Welp, R. Xie, A. E. Koshelev, W. K. Kwok, H. Q. Luo, Z. S. Wang, G. Mu, and H. H. Wen, Phys. Rev. B **79**, 094505 (2009). H. Yang, C. 
Ren, L. Shan, H. H. Wen, Phys. Rev. B **78**, 092504 (2008). R. Prozorov, N. Ni, M. A. Tanatar, V. G. Kogan, R. T. Gordon, C. Martin, E. C. Blomberg, P. Prommapan, J. Q. Yan, S. L. Bud’ko, P. C. Canfield, Phys. Rev. B **78**, 224506 (2008). R. Prozorov, M. A. Tanatar, E. C. Blomberg, P. Prommapan, R. T. Gordon, N. Ni, S. L. Bud’ko, P. C. Canfield , Physica C **469**, 667 (2009). H. Yang, H. Q. Luo, Z. S. Wang, H. H. Wen, Appl. Phys. Lett. **93**, 142506 (2008). R. Prozorov, M. A. Tanatar, N. Ni, A. Kreyssig, S. Nandi, S. L. Bud’ko, A. I. Goldman, P. C. Canfield, Phys. Rev. B **80**, 174517 (2009). Y. Nakajima, Y. Tsuchiya, T. Taen, T. Tamegai, S. Okayasu, M. Sasase, Phys. Rev. B **80**, 012510 (2009). L. F. Mattheiss, E. M. Gyorgy, and D. W. Johnson, Phys. Rev. B **37**, 3745 (1988). R. J. Cava, B. Batlogg, J. J. Krajewski, R. Farrow, L. W. Rupp Jr, A. E. White, K. Short, W. F. Peck, T. Kometani, Nature (London) **332**, 814 (1988). N. L. Jones, J. B. Parise, R. B. Flippen, A. W. Sleight, J. Solid State Chem. **78**, 319 (1989). S. Pei, J. D. Jorgensen, B. Dabrowski, D. G. Hinks, D. R. Richards, A. W. Mitchell, J. M. Newsam, S. K. Sinha, D. Vaknin, and A. J. Jacobson, Phys. Rev. B **41**, 4126 (1990). L. F. Schneemeyer, J. K. Thomas, T. Siegrist, B. Batlogg, L. W. Rupp, R. L. Opila, R. J. Cava, and D. W. Murphy, Nature **335**, 421 (1988) H. C. Jang, M. H. Hsieh, D. S. Lee, and H. E. Horng, Phys. Rev. B, **42**, 2551 (1990). M. Affronte, J. Marcus, C. Escribe-Filippini, A. Sulpice, H. Rakoto, J. M. Broto, J. C. Ousset, S. Askenazy, and A. G. M. Jansen, Phys. Rev. B **49**, 3502 (1994) W. K. Kwok, U. Welp, G. Crabtree, K. G. Vandervoort, R. Hulsher, Y. Zheng, B. Dabrowski, and L. G. Hinks, Phys. Rev. B **40** , 9400 (1989). R. A. Schweinfurth, C. E. Platt, M. R. Teepl, and D. J. van Harlingen, Appl. Phys. Lett. **61**, 480(1992) S. N. Barilo, S. V. Shiryaev, V. I. Gatalskaya, J. W. Lynn, M. Baran, H. Szymczak, R. Szymczak, and D. Dew-Hughes, Phys. Rev. 
B **58**, 12355 (1998) J. T. Vaughey , J. P. Thiel , E. F. Hasty , D. A. Groenke , Charlotte L. Stern , K. R. Poeppelmeier , B. Dabrowski , D. G. Hinks , A. W. Mitchell, Chem. Mater. **3**, 935 (1991) Yu. F. Shepelev, A. A. Levin, Yu. I. Smolin, Physica C **215**, 371 (1993) T. K. Worthington, W. J. Gallagher, T. R. Dinger, Phys. Rev. Lett. **59**, 1160 (1987) W. J. Gallagher, J. Appl. Phys. **63**, 4216 (1988) M. L. Norton, Mat. Res. Bull. **24**, 1391 (1989). T. Nishio, H. Minami, H. Uwe, Physica C. **357**, 376 (2001). Y. Nagata, A. Mishiro, T. Uchida, M. Ohtsuka, H. Samata, J. Phys. Chem. Solids. **60**, 1933-1942 (1999). P. W. Anderson, Phys. Rev. Lett. **9**, 309 (1962). A. P. Malozemoff, Physica C. **185-189**, 264 (1991). E. Zeldov, N. M. Amer, G. Koren, A. Gupta, M. W. McElfresh and R. J. Gambino, Appl. Phys. Lett. **56**, 680 (1990). C. P. Bean, Rev. Mod. Phys. **36**, 31 (1964). M. Jirsa, L.Pust, D. Dlouhy, M. R. Koblischka, Phys. Rev. B **55**, 3276 (1997). H. H. Wen, H. G. Schnack, R. Griessen, B. Dam, J. Rector, Physica C **241** 353 (1995). H. H. Wen, Z. X. Zhao, R. J. Wijngaarden, J. Rector, B. Dam, and R. Griessen, Phys. Rev. B **52**, 4583 (1995). H. H. Wen, R. Griessen, D. G. de Groot, B. Dam, J. Rector, J. Alloys Compd. **195** 427 (1993). S. N. Barilo, V. I. Gatalskaya, S. V. Shiryaev, A. S. Shestaca, L. A. Kurochkina, T. V. Smirnovaa, V. T. Koyavab, N. S. Orlovaa, A. V. Pushkareva, Physica C **254** 181 (1995). L. Klein, E. R. Yacoby, E. R. Yacoby, Y. Yeshurun, A. Erb, G. Muller-Vogt, V. Breit, H. Wuhl, Phys. Rev. B **49**, 4403 (1994). M. Daumling, J. M. Senntjens and D. C. Larbalestier, Nature (London) **346** 332 (1980). Y. Abulafia, A. Shaulov, Y. Wolfus, R. Prozorov, L. Burlachkov, and Y. Yeshurun, D. Majer and E. Zeldov, H. Wuhl, V. B. Geshkenbein, V. M. Vinokur, Phys. Rev. Lett. **77**, 1596 (1996). H. H. Wen, P. Ziemann, H. A. Radovan, T. Herzog, Physica C **305**, 185 (1998). M. Konczykowski, C. J. van der Beek, M. A. Tanatar,H. Q. 
Luo, Z. S. Wang, B. Shen, H. H. Wen. Phys. Rev. B **86**, 024515(2012). Y. Yeshurun, A. P. Malozemoff, F. Holtzberg, T. R. Dinger, Phys. Rev. B **38**, 11828 (1988). Y. Yeshurun, A. P. Malozemoff, and F. Holtzberg, J. Appl. Phys. **64** 5797 (1988). R. Griessen, H. H. Wen, A. J. J. van Dalen, B. Dam, J. Rector, H. G. Schnack, S. Libbrecht, E. Osquiguil, and Y. Bruynseraede, Phys. Rev. Lett. **72**, 1910 (1993). H. Jin, H. H. Wen, H. P. Yang, Z. Y. Liu, Z. A. Ren, G. C. Che, and Z. X. Zhao, Appl. Phys. Lett. **83**, 2626 (2003). H. H. Wen, A. F. Th. Hoekstra, R. Griessen, S. L. Yan, L. Fang, and M. S. Si, Phys. Rev. Lett. **79**, 1559 (1997). S. N. Barilo, V. I. Gatalskaya, S. V. Shiryaev, T. V. Smirnova, H. Szymczak, R. Szymczak, and M. Baran, Phys. Status Solidi A **181**, 471 (2000). J. N. Li, F. R. De Boer, L. W. Roeland, M. J. V. Menken, K. Kadowaki, A. A. Menovsky and J. J. M. Franse,Physica C **169**, 81 (1990). R. A. Rose, S. B. Ota, P. A. J. De Groot and B. Jayaram, Physica C **170**, 51 (1990). E. J. Kramer, J. Appl. Phys. **44**, 1360 (1973).
--- abstract: 'A classical model is presented for the features of parametric down-conversion and homodyne detection relevant to recent proposed “loophole-free” Bell tests. The Bell tests themselves are uncontroversial: there are no obvious loopholes that might cause bias and hence, if the world does, after all, obey local realism, no violation of a Bell inequality will be observed. Interest centres around the question of whether or not the proposed criterion for “non-classical” light is valid. If it is not, then the experiments will fail in their initial concept, since both quantum theorists and local realists will agree that we are seeing a purely classical effect. The Bell test, though, is not the only criterion by which the quantum-mechanical and local realist models can be judged. If the experiments are extended by including a range of parameter values and by analysing, in addition to the proposed digitised voltage differences, the raw voltages, the models can be compared in their overall performance and plausibility.' author: - Caroline H Thompson title: 'Homodyne detection and parametric down-conversion: a classical approach applied to proposed “loophole-free” Bell tests' --- Introduction ============ No test of Bell’s inequalities [@Bell:1964; @Bell:1971] to date has been free of “loopholes”. This means that, despite the high levels of statistical significance frequently achieved, violations of the inequalities could be the effects of experimental bias of one kind or another, not evidence for the presence of quantum entanglement. Recent proposed experiments by García-Patrón Sánchez *et al.* [@Sanchez:2004] and Nha and Carmichael [@Nha:2004] show promise of being genuinely free from such problems. If the world in fact obeys local realism, they should *not*, therefore, infringe any Bell inequality. ![Proposed experimental set-up. In the current text, phase shifts $\theta $ and $\phi $ are renamed $\theta _{A}$ and $\theta _{B}$.
(*Reproduced with permission from Sánchez et al., Phys. Rev. Lett.* ***93****, 130409 (2004)*) []{data-label="fig1"}](ThompsonFig1.eps){width="2.6in" height="2.6in"} The current article presents a classical (local realist) model that should be able, once all relevant details are known, to explain the results. It uses standard classical theory for homodyne detection, but for the behaviour of the “Optical Parametric Amplification” (“Parametric Down-Conversion”) source introduces new ideas. As far as the “loophole-free” status of the proposed experiments is concerned, there would appear to be no problem. A difficulty that seems likely to arise, though, is that theorists may not agree that the test beams used were in fact “non-classical”, so the failure to infringe a Bell inequality will not in itself be interpreted as showing a failure of quantum mechanics [@prediction]. The criterion to be used to establish the non-classical nature of the light is the observation of negative values of the Wigner density, and there is reason to think that, even if the standard method of estimation seems to show that these are achieved, this may not in fact be so. In the quantum mechanical theory discussed in the proposals and in other recent papers [@Lvovsky:2001; @Wenger:2004], the presence of negative Wigner densities is seen almost as a corollary of the observation of double peaks in the distribution of the phase-averaged voltage differences recorded by the homodyne detectors. Such double peaks, however, can readily be shown to occur naturally, under purely classical assumptions, so long as noise levels are low. They are a consequence of the fact that a sine function spends proportionately more time near the extremes than around the central, zero, value. It would seem that the theory that leads from their observation to the deduction of non-classicality may be in error. This, though, is a problem that is of no direct relevance to the classical model under consideration. 
Far from being, as suggested by Sánchez and others, the “hidden variable” needed, Wigner density plays no part whatsoever. Regardless of the outcome of the Bell tests, and whether or not the light is declared to be non-classical, there are features of the experiments that can usefully be exploited to compare the strengths of quantum mechanics versus (updated) classical theory as viable models. The two theories approach the situation from very different angles. Classical theory traces the causal relationships between phenomena, starting with the simplest assumption and building in random factors later where necessary. Quantum mechanics starts with models of complete ensembles, all random factors included. This, it is argued, is inappropriate, since two features of the proposed experiments demand that we consider the behaviour of individual events, not whole ensembles: the process of homodyne detection itself, and the Bell test. The proposed experiments ======================== The experimental set-up proposed by Sánchez *et al.* is shown in Fig. 1, that of Nha and Carmichael being similar. In the words of the Sánchez *et al.* proposal: > The source (controlled by Sophie) is based on a master laser beam, which is converted into second harmonic in a nonlinear crystal (SHG). After spectral filtering (F), the second harmonic beam pumps an optical parametric amplifier (OPA) which generates two-mode squeezed vacuum in modes A and B. Single photons are conditionally subtracted from modes A and B with the use of the beam splitters BS$_{A}$ and BS$_{B}$ and single-photon detectors PD$_{A}$ and PD$_{B}$. Alice (Bob) measures a quadrature of mode A (B) using a balanced homodyne detector that consists of a balanced beam splitter BS$_{3}$ (BS$_{4 })$ and a pair of highly-efficient photodiodes. The local oscillators LO$_{A}$ and LO$_{B}$ are extracted from the laser beam by means of two additional beam splitters BS$_{1}$ and BS$_{2}$. 
The classical description, working from the same figure, is just a little different. Quantum theoretical terms such as “squeezed vacuum” are not used since they are not appropriate to the model and would cause confusion. The description might run as follows: The master laser beam (which is, incidentally, pulsed) is frequency-doubled in the crystal SHG. After filtering to remove the original frequency, the beam is used to pump the crystal OPA, which outputs pairs of classical wave pulses at half the input frequency, i.e. at the original laser frequency. The selection of pairs for analysis is done by splitting each output at an unbalanced beamsplitter (BS$_{A}$ or BS$_{B})$, the smaller parts going to sensitive detectors PD$_{A}$ or PD$_{B}$. Only if there are detections at both PD$_{A}$ and PD$_{B}$ is the corresponding homodyne detection included in the analysis. The larger parts proceed to balanced homodyne detectors, i.e. ones in which the intensities of local oscillator and test inputs are approximately equal. The source of the local oscillators LO$_{A}$ and LO$_{B}$ is the same laser that stimulated, after frequency doubling, the production of the test beams. Homodyne detection ================== In (balanced) homodyne detection, the test beam is mixed at a beamsplitter with a local oscillator beam of the same frequency and the two outputs sent to photodetectors that produce voltage readings for every input pulse. In the proposed “loophole-free” Bell test the difference between the two voltages will be converted into a digital signal by counting all positive values as +1, all negative as –1. ![Inputs and outputs at the beamsplitter in a homodyne detector.
E$_{L }$ is the local oscillator beam, E the test beam, E$_{t }$ and E$_{r}$ the transmitted and reflected beams respectively.[]{data-label="fig2"}](ThompsonFig2.eps){width="1.5in" height="1.5in"} Assuming the inputs are all classical waves of the same frequency and there are no losses, it can be shown (see below) that the difference between the intensities of the two output beams is proportional to the product of the two input amplitudes multiplied by $\sin \theta$, where $\theta$ is the phase difference between the test beam and local oscillator. If voltages are proportional to intensities then it follows that the voltage difference will be proportional to $\sin \theta $. When digitised, this transforms to a step function, taking the value $-1$ for $-\pi < \theta < 0$ and $+1$ for $0 < \theta < \pi $. (The function is not well defined for integral multiples of $\pi$.) Classical derivation of the predicted voltage difference -------------------------------------------------------- Assume the test and local oscillator signals have the same frequency, $\omega$, the time-dependent part of the test signal being modelled by $e^{i\phi }$, where (ignoring a constant phase offset [@phase; @offset]) $\phi =\omega t$ is the phase angle, and the local oscillator and test beam phases differ by $\theta$. \[Note that although complex notation is used here, only the real part has meaning: this is an ordinary wave equation, not a quantum-mechanical “wave function”. To allay any doubts on this score, the derivation is partially repeated with no complex notation in the Appendix.\] Let the electric fields of the test signal, local oscillator and reflected and transmitted signals from the beamsplitter have amplitudes $E$, $E_{L}$, $E_{r}$ and $E_{t}$ respectively, as shown in Fig. \[fig2\].
Then, after allowance for phase delays of $\pi/2$ at each reflection and assuming no losses, we have $$\label{eq1} E_r = \frac{1}{\sqrt 2} (Ee^{i(\phi +\pi /2)}+E_L e^{i(\phi +\theta )})$$ and $$\label{eq2} E_t = \frac{1}{\sqrt 2} (Ee^{i\phi }+E_L e^{i(\phi +\theta +\pi /2)}).$$ The intensity of the reflected beam is therefore $$\begin{aligned} \label{eq3} E_r E_r^\ast & = & \frac{1}{2} (Ee^{i(\phi +\pi /2)}+E_L e^{i(\phi +\theta )})(Ee^{-i(\phi +\pi /2)} \nonumber \\ & + & E_L e^{-i(\phi +\theta )}) \nonumber \\ & = & \frac{1}{2}(E^2+E_L ^2+EE_L e^{i(\pi /2-\theta )}+EE_L e^{-i(\pi /2-\theta )}) \nonumber \\ & = & \frac{1}{2} (E^2+E_L ^2+2EE_L \cos (\pi /2-\theta )) \nonumber \\ & = & \frac{1}{2} (E^2+E_L ^2+2EE_L \sin \theta ).\end{aligned}$$ Similarly, it can be shown that the intensity of the transmitted beam is $$\label{eq4} E_t E_t^\ast = \frac{1}{2} (E^2+E_L ^2-2EE_L \sin \theta ).$$ If the voltages registered by the photodetectors are proportional to the intensities, it follows that the difference in voltage is proportional to $2EE_L \sin \theta$. When digitised, this translates to the step function mentioned above. The probabilities for the two possible outcomes are, as shown in Fig. \[fig3\], \[eqs5\] $$p_{-} = \left\{ \begin{array}{ll} 1 & \mbox{for $-\pi < \theta < 0$} \\ 0 & \mbox{for $0 < \theta < \pi$} \end{array} \right. \label{subeq5a}$$\ \ $$p_{+} = \left\{ \begin{array}{ll} 0 & \mbox{for $-\pi < \theta < 0$} \\ 1 & \mbox{for $0 < \theta < \pi$} \end{array} \right. \label{subeq5b}$$ Note that the probabilities are undefined for integral multiples of $\pi$. In practice it would be reasonable to assume that, due to the presence of noise, all the values were 0.5, but for the present purposes the integral values will simply be ignored.
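The beamsplitter algebra of Eqs. (1)-(4) is easy to verify numerically. The following sketch (function name illustrative) builds the reflected and transmitted fields as complex numbers and confirms that the output-intensity difference equals $2EE_L\sin\theta$ for arbitrary overall phase $\phi$:

```python
import cmath, math, random

def homodyne_difference(E, EL, phi, theta):
    """Output-intensity difference at a lossless 50/50 beamsplitter,
    following Eqs. (1)-(4): each reflection adds a pi/2 phase delay."""
    Er = (E * cmath.exp(1j * (phi + math.pi / 2))
          + EL * cmath.exp(1j * (phi + theta))) / math.sqrt(2)
    Et = (E * cmath.exp(1j * phi)
          + EL * cmath.exp(1j * (phi + theta + math.pi / 2))) / math.sqrt(2)
    return abs(Er) ** 2 - abs(Et) ** 2

# The difference should equal 2*E*EL*sin(theta), independent of phi
E, EL = 1.3, 0.8
for _ in range(5):
    phi = random.uniform(0, 2 * math.pi)
    theta = random.uniform(-math.pi, math.pi)
    assert abs(homodyne_difference(E, EL, phi, theta)
               - 2 * E * EL * math.sin(theta)) < 1e-12
```

The sign of this difference is what the proposed digitisation maps to $\pm1$, reproducing the step functions of Eq. (5).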
![Probabilities of ‘+’ and ‘–’ outcomes versus phase difference, using digitised difference voltages from a perfect, noise-free, balanced homodyne detector.[]{data-label="fig3"}](ThompsonFig3.eps){width="1.8in" height="1.8in"} Fresh ideas on parametric down-conversion ========================================= If the frequencies and phases of both test beams and both local oscillators were all identical apart from the applied phase shifts, the experiment would be expected to produce step function relationships between counts and applied shifts both for the individual (singles) counts and for the coincidences. It may safely be assumed that this is not what is observed. It would have shown up in the preliminary trials on the singles counts (see ref. [@Wenger:2004]), which would have followed something suggestive of the basic predicted step function as the local oscillator phase shift was varied. What is observed in practice is more likely to be similar to the results obtained by Schiller *et al.* [@Schiller:1996]. Their Fig. 2a, reproduced here as Fig. \[fig4\], shows a distribution of photocurrents that is clustered around zero, for $\theta $ taking integer multiples of $\pi $, but is scattered fairly equally among positive and negative values in between. ![A typical scatter of “noise current” (related to voltage difference) for varying local oscillator phase. (*Reproduced from S. Schiller et al., Phys. Rev. Lett. 87 (14), 2933 (1996)*. Permission requested, June 12, 2005.)[]{data-label="fig4"}](ThompsonFig4.eps){width="2.7in" height="1.8in"} When digitised, the distribution would reduce to two straight horizontal lines, showing that for each choice of $\theta $ there is an equal chance of a ‘$+$’ or a ‘$-$’ result. As in any other Bell test setup, though, the absence of variations in the singles counts does not necessarily mean there is no variation in the coincidence rates. 
However, as explained in the next section, the predicted coincidence curves are not the zig-zag ones of standard classical theory. These would be expected if we had *full* “rotational invariance” [@rotational; @invariance]. If the ideas presented here are correct, we have instead, in the language of an article by the present author [@Thompson:1999], only *binary* rotational invariance. Schiller’s scatter of photocurrent differences is seen as evidence that the relative phase can (under perfect conditions) take just one of two values, 0 or $\pi$. The scatter is formed from a superposition of two sets of points, corresponding to two sine curves that are out of phase, together with a considerable amount of noise. It is suggested that the two “phase sets” arise from the way in which the pulses are produced, which involves, after the frequency doubling, the *degenerate* case of parametric down-conversion, the latter producing pulses that are (in contrast to the quantum-mechanical assumption of conjugate frequencies) of *exactly equal frequency*. Consider an initial pump laser of frequency $\omega $. In the proposed experiment, this will be doubled in the crystal SHG to 2$\omega $ then halved in OPA back to $\omega $. At the frequency doubling stage, one laser input wave peak gives rise to two output ones. Assuming that there are causal mechanisms involved, it seems inevitable that every other wave peak of the output will be exactly in phase with the input. When we now use this input to produce a down-conversion, the outputs will be in phase either with the even or with the odd peaks of the input, which will make them either in phase or exactly out of phase with the original laser. We thus have two classes of output, differing in phase by $\pi $. If we define the random variable $\alpha $ to be 0 for one class, $\pi $ for the other, this will clearly be an important “hidden variable” of the system. 
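On this picture the digitised singles statistics are flat even though each individual event is deterministic. A minimal simulation (illustrative only; the hidden variable $\alpha$ is drawn with equal probability from $\{0,\pi\}$ and the voltage difference taken proportional to $\sin(\theta-\alpha)$):

```python
import math, random

def outcome(theta, alpha):
    """Digitised homodyne sign for local oscillator phase shift theta and
    hidden phase alpha: voltage difference ~ sin(theta - alpha)."""
    return 1 if math.sin(theta - alpha) > 0 else -1

random.seed(0)
theta = 1.0          # any setting away from integer multiples of pi
trials = 20000
plus = sum(outcome(theta, random.choice([0.0, math.pi])) == 1
           for _ in range(trials))
print(plus / trials)   # close to 0.5: singles show no phase dependence
```

Each event gives a definite $\pm1$, yet averaging over the two phase sets washes out any $\theta$ dependence in the singles counts, as in the Schiller-type scatter.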
The theory of (degenerate) parametric down-conversion assumed here differs in several respects from the accepted quantum-mechanical one, and also from Stochastic Electrodynamics. No attempt is made to give a full explanation of the physics involved. The theory is rather of the nature of an empirical result, effectively forced on us by a number of different experiments in quantum optics (to be discussed in later papers). Accepted theory says (if, indeed, it allows at all for the existence of “photons” with definite frequencies [@Kwiat:1991]) that though the sum of the frequencies of the two outputs is equal to that of the pump laser, even in the degenerate case the two will differ slightly [@Tittel:1998]. The exact frequency of one will be $\frac{1}{2}(\omega _{0}+\delta \omega)$, that of the other $\frac{1}{2}(\omega _{0}-\delta \omega)$, where $\omega _{0}$ is the pump frequency. If this is the case in the proposed experiment, though, it will severely reduce the visibility of any coincidence curve observed when the experimental beam is mixed back with the source laser in the homodyne detector. The preliminary experiments (see ref. [@Wenger:2004]) using just one output beam may already be sufficient to show that the interference is stronger than would be the case if there were any difference between local oscillator and test beam frequencies. It is known that the source laser has quite a broad bandwidth, i.e. that $\omega _{0}$ is not constant. Though it is likely that it is only part of the pump spectrum that induces a down-conversion, so that the bandwidth of the test beam may be considerably narrower than that of the pump, it, too, is non-zero. It follows that agreement of frequency between the local oscillator and the test beam must be because we are always dealing, in the degenerate case, with *exact* frequency doubling and halving.
A classical model of the proposed Bell test =========================================== In the proposed Bell test of Sánchez *et al.*, positive voltage differences will be treated as +1, negative as –1. Applying this version of homodyne detection to both branches of the experiment, the CHSH test ($ -2 \leq S \leq 2$) will then be applied to the coincidence counts. Under quantum theory it is expected that, so long as “non-classical” beams are employed, the test will be violated. However, since there are no obvious loopholes in the actual Bell test (see later), local realism should win: the test should *not* be violated. In the classical view, this prediction is unrelated to any supposed non-classical nature of the light. The basic local realist model ----------------------------- If we take the simplest possible case, in which to all intents and purposes all the frequencies involved are the same, the hidden variable in the local realist model is clearly going to be the phase difference ($\alpha = 0$ or $\pi$) between the test signal and the local oscillator. If high visibility coincidence curves are seen, it must be because the values of $\alpha$ are identical for the A and B beams. Assuming no noise, the basic model is easily written down. From equation (\[subeq5a\]), the probability of a $-1$ outcome on side A is $$\label{eq6} p_{-} (\theta _{A}, \alpha )= \left\{ \begin{array}{ll} 1 & \mbox{for $-\pi < \theta _{A} - \alpha < 0$} \\ 0 & \mbox{for $ 0 < \theta _{A} - \alpha < \pi $}, \end{array} \right.$$ where $\theta _{A}$ is the phase shift applied to the local oscillator A, $\alpha$ is the hidden variable and all angles are reduced modulo $2\pi$. 
Similarly, the probability of a +1 outcome is $$\label{eq7} p_{+} (\theta _{A}, \alpha )= \left\{ \begin{array}{ll} 0 & \mbox{for $-\pi < \theta _{A} - \alpha < 0$} \\ 1 & \mbox{for $ 0 < \theta _{A} - \alpha < \pi $}, \end{array} \right.$$ Assuming equal probability $\frac{1}{2}$ for each of the two possible values of $\alpha$ [@equal; @probability], the standard “local realist” assumption that independent probabilities can be multiplied to give coincidence ones leads to a predicted coincidence rate of $$\begin{aligned} \label{eq8} P_{++} (\theta _A ,\theta _B)& = & \frac{1}{2} p_+ (\theta _A ,0)p_+ (\theta _B ,0) \nonumber \\ & + & \frac{1}{2} p_+ (\theta _A ,\pi )p_+ (\theta _B ,\pi ),\end{aligned}$$ with similar expressions for $P_{+-}$, $P_{-+ }$ and $P_{- -}$. The result for $\theta _{A}=\pi/2$, for example, is $$\label{eq9} P_{++} (\pi /2, \theta _{B}) = \left\{ \begin{array}{ll} 0 & \mbox{for $-\pi < \theta _{B} < 0$}\\ 1/2 & \mbox{for $0 < \theta _{B} < \pi$}. \end{array} \right.$$ For $\theta _{A}$ = –$\pi $/2 it is $$\label{eq10} P_{++} (-\pi /2, \theta _{B}) = \left\{ \begin{array}{ll} 1/2 & \mbox{for $-\pi < \theta _{B} < 0$}\\ 0 & \mbox{for $0 < \theta _{B} < \pi$}. \end{array} \right.$$ Note that, as illustrated in Fig. \[fig5\], the coincidence probabilities *cannot*, in this basic model, be expressed as functions of the difference in detector settings, $\theta _{B}- \theta _{A}$. This failure, marking a significant deviation from the quantum mechanical prediction, is an inevitable consequence of the fact that we have (as mentioned earlier) only binary, not full, rotational invariance. ![Predicted coincidence curves for the ideal experiment. (a) and (b) illustrate the settings most likely to be chosen in practice, giving the strongest correlations. $\theta _{A}$ is fixed at $\pi/2$ or $-\pi/2$ while $\theta _{B }$ varies. 
In theory, any value of $\theta _{A}$ between 0 and $\pi$ would give the same curve as (a), any between $-\pi$ and 0 the same as (b). An example is shown in (c), where $\theta _{A }$ is $\pi/4$ but the curve is identical to (a). We do not have rotational invariance: the curve is not a function of $\theta _{B }-\theta _{A}$. []{data-label="fig5"}](ThompsonFig5.eps){width="1.8in" height="2.7in"} Fine-tuning the model --------------------- Many practical considerations mean that the final local realist prediction will probably not look much like the above step function. It may not even be quite periodic. The main logical difference is that, despite all that has been said so far, the actual variable that is set for the local oscillators is not directly the phase shift but the path length. Since the frequency is likely to vary slightly from one signal to the next (though always remaining the same as that of the pump laser), the actual phase difference between test and local oscillator beams will depend on the path length difference *and* on the frequency. In a complete model, therefore, the important parameters will be path length and frequency, with phase derived from these. If frequency variations are sufficiently large, the situation may approach one of rotational invariance (RI), but it seems on the face of it unlikely that this can be achieved without loss of correlation. If we do have RI, the model becomes the standard realist one in which the predicted quantum correlation varies linearly with the difference in phase settings, but it is more likely that what will be found is curves that are *not* independent of the choice of individual phase setting. They will be basically the predicted step functions but converted to smooth curves as the result of the addition of noise.
![Likely appearance of coincidence curves in a real experiment with moderate noise.[]{data-label="fig6"}](ThompsonFig6.eps){width="2.0in" height="1.0in"} It is essential to know the actual experimental conditions. Several relevant factors can be discovered by careful analysis of the variations in the raw voltages in each homodyne detection system. If noise is low, the presence of the two phase sets, and whether or not they are equally represented, should become apparent. All this complexity, though, has no bearing on the basic fact of the existence of a hidden variable model and the consequent prediction that the CHSH Bell test will not be violated. The role of the “event-ready” detectors --------------------------------------- In the quantum-mechanical theory, the expectation of violation of the Bell test all hinges on the production of “non-classical” light. The light directly output from the crystal OPA is assumed to be Gaussian, i.e. it takes the form of pulses of light that have a Gaussian intensity profile and also, as a result of Fourier theory, a Gaussian spectrum. When this is passed through an unbalanced beamsplitter (BS$_{A}$ or BS$_{B}$) and a “single photon” detected by an “event-ready” detector, the theory says that the subtraction of one photon leaves the remaining beam “non-Gaussian”. Although there is mention here of single photons, the theory is concerned with the ensemble properties of the complete beams, not with the individual properties of its constituent pulses. In the local realist (classical) model, the shapes of the spectra are not relevant except insofar as a narrow band width is desirable for the demonstration of dramatic correlations. The event-ready detectors play, instead, the important role of selecting for analysis only the strongest down-converted output pairs, it being assumed that the properties of the transmitted and reflected light at the unbalanced beamsplitters are identical apart from their amplitudes. 
It is likely that those detected signals that are coincident with each other will be genuine “degenerate” ones, i.e. of exactly equal frequency, quasi-resonant with the pump laser. The unbalanced beamsplitters and the detectors PD$_{A}$ and PD$_{B}$ need to be set so that the intensity of the detected part is sufficient to be above the minimum for detection but low enough to ensure that all but the strongest pulses are ignored. In neither theory are the event-ready detectors really needed in their “Bell test” role of ensuring a fair test (see below), since the homodyne detectors are almost 100[%]{} efficient. Validity of the proposed Bell test ================================== Coincidences between the digitised voltage differences will be used in the CHSH Bell test [@Clauser:1969; @Thompson:1996], but avoiding the “post-selection” that has, since Aspect’s second experiment [@Aspect:1982], become customary. The Sánchez *et al.* proposal is to use “event-ready” detectors, as recommended by Bell himself for use in real experiments [@Clauser:1978]. None of the usual loopholes [@Thompson:2003] are expected to be applicable: 1. With the use of the event-ready detectors, non-detections are of little concern. The detectors (PD$_{A}$ and PD$_{B}$ in Fig. \[fig1\]) act to define the sample to be analysed, and the fact that they do so quite independently of whether or not any member of the sample is then also detected in coincidence at the homodyne detectors ensures that no bias is introduced here. The estimate of “quantum correlation” [@quantum; @correlation] to be used in calculating the CHSH test statistic is $E = (N_{++AB} + N_{--AB} - N_{+-AB} - N_{-+AB}) / N_{AB}$, where the $N$’s are coincidence counts and the subscripts are self-explanatory. This contrasts with the usual method, in which the denominator used is not $N_{AB}$ but the sum of observed coincidences, $N_{++AB} + N_{--AB} + N_{+-AB} + N_{-+AB}$. 
The use of the latter can readily be shown to introduce bias unless it can be assumed that the sample of detected pairs is a fair one. That such an assumption fails in some plausible local realist models has been known since 1970 or earlier [@Pearle:1970; @Thompson:1996]. 2. There is no possibility of synchronisation problems, since a pulsed source is used. 3. No “accidentals” will be subtracted. 4. The “locality” loophole can be closed by using long paths and a random system for choosing the “detector settings” (local oscillator phase shifts) during the propagation of the signals. The system is almost certainly not going to be “rotationally invariant” (not all phase differences will be equally likely), but this will not invalidate the Bell test. It may, however, be important in another way. It is likely that high visibilities will be observed in the coincidence curves (i.e. high values of (max – min)/(max + min) in plots of coincidence rate against difference in phase shift), leading to the impression that the Bell test ought to be violated. These visibilities, though, will depend on the absolute values of the individual settings. High ones will be balanced by low, with the net effect that violation does not in fact happen. Validity and significance of negative estimates for Wigner densities ==================================================================== Part of the evidence that is put forward as indicating that negative Wigner densities are likely to be obtained consists in the observation that, when $\theta $ is varied randomly, the distribution of observed voltage differences shows a double peak (see Fig. \[fig7\]). There is a tendency to observe roughly equal numbers of + and – results but relatively few near zero. The fact that the relationship depends on the sine of $\theta$ is, however, sufficient to explain why this should be so. 
![Observed distribution of voltage differences, X, using homodyne detection in an experiment similar to the proposed Bell test and averaging over a large number of applied phase shifts. (*Based on Fig. 4a of A. I. Lvovsky et al., Phys. Rev. Lett, 87, 050402 (2001).*)[]{data-label="fig7"}](ThompsonFig7.eps){width="2.7in" height="2.00in"} To illustrate, let us consider the following. The sine of an angle is between 0 and 0.5 whenever the angle is between 0 and $\pi/6$. It is between 0.5 and 1.0 when the angle is between $\pi/6$ and $\pi/2$. Since the second range of angles is twice the first yet the range of the values of the sine is the same, it follows that if all angles in the range 0 to $\pi$ are selected equally often there will be twice as many sine values seen above 0.5 as below. When allowance is made for the addition of noise, the production of a distribution such as that of Fig. \[fig7\] for the average when angles are sampled uniformly comes as no surprise. Clearly, as the experimenters themselves recognise, the dip is not in itself sufficient to prove the non-classical state of the light. For this, direct measurement of the Wigner density is required, but there is a problem here. No actual direct measurement is possible, so it has to be estimated, and the method proposed is the Radon transformation [@Leonhardt:1997]. It is claimed [@Grangier:2005] that in *other* experiments Wigner densities calculated by this procedure have shown negative regions, but perhaps the method should be checked for validity? In any event, as already explained, the natural hidden variable relevant to the proposed experiment is the phase of the individual pulse, not any statistical property of the whole ensemble. 
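The counting argument above is easily checked numerically. The sketch below (an illustrative toy model of ours, not the experimenters' analysis) samples a uniformly distributed phase, verifies the quoted 2:1 ratio, and shows that a voltage difference proportional to $\sin\theta$ plus weak noise already produces a double-peaked, zero-depleted histogram of the kind shown in Fig. \[fig7\]:

```python
import math, random

random.seed(0)
N = 100_000

# Counting argument from the text: for theta uniform in (0, pi), sin(theta)
# exceeds 0.5 on (pi/6, 5pi/6), an interval twice as long as the one where it
# stays below 0.5, so about 2/3 of the samples land above 0.5.
frac_hi = sum(math.sin(random.uniform(0, math.pi)) > 0.5
              for _ in range(N)) / N
print(frac_hi)  # close to 2/3

# Double peak: X ~ sin(theta) (plus weak Gaussian noise) for theta uniform
# over a full cycle piles up near +1 and -1 and is depleted near 0,
# qualitatively reproducing Fig. 7 without any quantum input.
xs = [math.sin(random.uniform(0, 2 * math.pi)) + random.gauss(0, 0.1)
      for _ in range(N)]
near_zero = sum(abs(x) < 0.2 for x in xs) / N
near_peaks = sum(0.7 < abs(x) < 1.1 for x in xs) / N
assert near_peaks > 2 * near_zero
```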
Suggestions for extending the experiment ======================================== The basic set-up would seem to present an ideal opportunity for investigation of some key aspects of the quantum and classical models, as well as the operation of the Bell test “detection loophole”. **The operation of the detection loophole** could be illustrated if, instead of using the digitised difference voltages of the homodyne detectors, the two separate voltages are passed through discriminators. The latter operate by applying a threshold voltage that can be set by the experimenter and counting those pulses that exceed it. These can be used in a conventional CHSH Bell test, i.e. using total observed coincidence count as denominator in the estimated quantum correlations $E$. The model that has been known since Pearle’s time (1970) predicts that, as the threshold voltage used in the discriminators is increased and hence the number of registered events decreased (interpreted in quantum theory as the detection efficiency being decreased), the CHSH test statistic $S$, if calculated using estimates $E = (N_{++} + N_{--} - N_{+-} - N_{-+}) / (N_{++} + N_{--} + N_{+-} + N_{-+})$, will increase. If noise levels are low, it may well exceed the Bell limit of 2. **The existence of the two phase sets** could also be investigated if either the raw voltages or the undigitised difference voltages are analysed. So long as the noise level is low, the existence of the two superposed curves, one for $\alpha = 0$ and the other for $\alpha =\pi$, should be apparent. It would be interesting to investigate how the pattern changed as optical path lengths were varied. Schiller’s pattern might be hard to reproduce using long path lengths, where exact equality is needed unless the light is monochromatic. 
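To illustrate the threshold effect described above, here is a minimal Pearle-type toy model (our own construction, not the model of Pearle's original paper nor the two-phase model of the preceding sections): a hidden phase uniform on $[0,2\pi)$, sign outcomes, and a discriminator that registers a pulse only when the signal amplitude exceeds a threshold. With the post-selected estimator $E$ (observed coincidences as denominator), raising the threshold pushes $S$ above the Bell limit of 2:

```python
import math, random

def chsh(threshold, n=100_000, seed=42):
    """CHSH statistic S for a toy Pearle-type local model.

    Hidden variable: a phase alpha, uniform on [0, 2*pi). Each arm produces an
    analogue signal cos(setting - alpha); the discriminator registers the pulse
    only when |signal| > threshold, and the outcome is its sign. E is estimated
    with the observed coincidences as denominator (the post-selected estimator
    criticised in the text)."""
    rng = random.Random(seed)
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

    def corr(tA, tB):
        num = den = 0
        for _ in range(n):
            alpha = rng.uniform(0, 2 * math.pi)
            xA, xB = math.cos(tA - alpha), math.cos(tB - alpha)
            if abs(xA) > threshold and abs(xB) > threshold:
                num += (1 if xA > 0 else -1) * (1 if xB > 0 else -1)
                den += 1
        return num / den

    return corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp)

S_all = chsh(0.0)   # every pulse registered: S stays at the local bound 2
S_cut = chsh(0.6)   # only strong pulses registered: S rises well above 2
print(round(S_all, 1), round(S_cut, 1))  # 2.0 4.0
```

The apparent "violation" at high threshold is entirely an artefact of the biased denominator, which is the point of using $N_{AB}$ instead in the proposed test.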
If the primary goal of the experimenter is clearly set out to be the comparison of the performance of the two rival models, rather than merely the conduct of a Bell test, further ideas for modifying the set-up will doubtless emerge when the first experiments have been done. Conclusion ========== The proposed experiments would, *if the “non-classicality” of the light could be demonstrated satisfactorily*, provide a definite answer one way or the other regarding the reality of quantum entanglement. They could usefully be extended to include empirical investigations into the operation of the Bell test detection loophole. Perhaps more importantly, though, they present valuable opportunities to compare the performance of the two theories in both their total predictive power and their comprehensibility. Are parameters such as “Wigner density” and “degree of squeezing” really the relevant ones, or would we gain more insight into the situation by talking only of frequencies, phases and intensities? Parameters such as the detection efficiency and the transmittance of the beamsplitters will undoubtedly affect the results, but do they do this in the way the quantum theoretical model suggests? It will take considerably more than just the minimum runs needed for the Bell test if we are to find the answers. The detailed predictions of the local realist model cannot be given until the full facts of the experimental set-up and the performance of the various parts are known, but it gives, in any event, a simple explanation of the double-peaked nature of the distribution of voltage differences. The peaks arise naturally from the way in which homodyne detection works, and the quantum theoretical idea that they are one of the indications of a non-classical beam or of negative Wigner density would not appear to be justifiable. The idea that a classical beam can become non-classical by the act of “subtracting a photon” is, equally, of doubtful validity. 
In the classical model, the only effect of the subtraction and detection of part of each beam is to aid the selection for coincidence analysis of those pulses that are likely to be most strongly correlated. Acknowledgements ================ I am grateful to Ph. Grangier for drawing my attention to his team’s proposed experiment [@Sanchez:2004], and to him and A. I. Lvovsky for helpful discussions. Alternative classical derivation of the homodyne detection formula ================================================================== A derivation is given here that does not involve complex numbers and hence confirms that the equations in the text are ordinary wave equations, not quantum-mechanical “wave functions”. The relationship between intensity difference and the local oscillator phase can be checked as follows: Assume the two input beams are Experimental beam: $E \cos \phi$ , where, as before, $\phi =\omega t$ Local oscillator: $E_{L} \cos (\phi +\theta )$ Then the output beams, assuming a 50-50 beamsplitter and no losses, can be written Reflected beam: $$\begin{aligned} \label{eqA1} E_r & = & \frac{1}{\sqrt 2} (E\cos (\phi + \pi /2) + E_L \cos (\phi + \theta)) \nonumber \\ & = & \frac{1}{\sqrt 2} (-E\sin \phi + E_L \cos (\phi + \theta))\end{aligned}$$ Transmitted beam: $$\begin{aligned} \label{eqA2} E_t & = & \frac{1}{\sqrt 2} (E\cos \phi + E_L \cos (\phi + \theta + \pi /2)) \nonumber \\ & = & \frac{1}{\sqrt 2} (E\cos \phi - E_L \sin (\phi + \theta)).\end{aligned}$$ Let us define a (constant) angle $\psi$ such that $\tan \psi = E / E_{L}$, making $E \cos \psi = E_{L } \sin \psi$. Consider the case when $\theta = 0$. We have $$\begin{aligned} \label{eqA3} E_r & = & \frac{1}{\sqrt 2} (E/\sin \psi )(-\sin \psi \sin \phi +\cos \psi \cos \phi ) \nonumber \\ & = & \frac{1}{\sqrt 2} (E/\sin \psi )\cos (\psi +\phi ),\end{aligned}$$ so that amplitude is proportional to $\frac{1}{\sqrt 2} E / \sin \psi$ and intensity to $\frac{1}{2} E^{2} / \sin^{2} \psi$. 
Similarly, the intensity of the transmitted beam is also proportional to $\frac{1}{2} E^{2} / \sin^{2} \psi$. The voltage difference from the homodyne detector is therefore expected to be zero. A zero difference is also found for $\theta =\pi $, but other values of $\theta $ produce more interesting results. For example, for $\theta =\pi /2$ we find $$\begin{aligned} \label{eqA4} E_r & = & \frac{1}{\sqrt 2} (E/\sin \psi )(-\sin \psi \sin \phi - \cos \psi \sin \phi ) \nonumber \\ & = & \frac{1}{\sqrt 2} (E/\sin \psi )\sin \phi (-\sin \psi -\cos \psi ),\end{aligned}$$ $$\begin{aligned} \label{eqA5} E_t & = & \frac{1}{\sqrt 2} (E/\sin \psi )(\sin \psi \cos \phi -\cos \psi \cos \phi ) \nonumber \\ & = & \frac{1}{\sqrt 2} (E/\sin \psi )\cos \phi (\sin \psi -\cos \psi ).\end{aligned}$$ The difference in intensities is therefore proportional to $$\begin{aligned} \label{eqA6} \mbox{Difference} & = & \frac{1}{2} (E^2/\sin ^2\psi )[(-\sin \psi - \cos \psi )^2 \nonumber \\ & - & (\sin \psi - \cos \psi )^2] \nonumber \\ & = & \frac{1}{2} (E^2/\sin ^2\psi )\,4\sin \psi \cos \psi \nonumber \\ & = & 2 E^2\cos \psi /\sin \psi \nonumber \\ & = & 2E^2E_L /E \nonumber \\ & = & 2EE_L .\end{aligned}$$ This is consistent with the result obtained by the method in the main text, so it seems safe to accept that as being correct. J. S. Bell, “On the Einstein-Podolsky-Rosen paradox”, Physics (Long Island City, N.Y.) **1**, 195 (1964). J. S. Bell, “Introduction to the hidden-variable question”, in J. S. Bell, *The Speakable and Unspeakable in Quantum Mechanics*, (Cambridge University Press 1987). R. García-Patrón, J. Fiurácek, N. J. Cerf, J. Wenger, R. Tualle-Brouri, and Ph. Grangier, “Proposal for a loophole-free Bell test using homodyne detection”, Phys. Rev. Lett. **93**, 130409 (2004). H. Nha and H. J. Carmichael, “Proposed test of quantum nonlocality for continuous variables”, Phys. Rev. Lett. **93**, 020401 (2004).
The predicted violation is in any case small, so failure may be put down to other “experimental imperfections”. A. I. Lvovsky, H. Hansen, T. Aichele, O. Benson, J. Mlynek and S. Schiller, “Quantum state reconstruction of the single-photon Fock state”, Phys. Rev. Lett. **87**, 050402 (2001). J. Wenger, R. Tualle-Brouri and Ph. Grangier, “Non-gaussian statistics from individual pulses of squeezed light”, Phys. Rev. Lett. **92**, 153601 (2004). The phase offset depends on the difference in optical path lengths, which will not in practice be exactly constant due to thermal oscillations. If a complete model is ever constructed, this should therefore be a parameter. S. Schiller, G. Breitenbach, S. F. Pereira, T. Müller, and J. Mlynek, “Quantum statistics of the squeezed vacuum by measurement of the density matrix in the number state representation”, Phys. Rev. Lett. **87** (14), 2933 (1996). “Rotational invariance” means the hidden variable takes all possible values with equal probability. In the current context, if the experiment does indeed produce high visibility coincidence curves, the hidden variable responsible will be the common phase difference between test beams and local oscillators before application of the phase shifts (“detector settings”) $\theta_A$ and $\theta_B$. It is argued that in the proposed experiment there will be at best *approximate* rotational invariance, if there is appreciable variation in the (again common) frequency. C. H. Thompson, “Rotational invariance, phase relationships and the quantum entanglement illusion”. See, for example, P. G. Kwiat and R. Y. Chiao, who write: “In the light of observed violations of Bell’s inequalities, it is incorrect to interpret these results in terms of an ensemble of conjugate signal and idler photons which possess definite, but unknown, conjugate energies before filtering and detection. Any observable, e.g.
energy or momentum, should not be viewed as a local, realistic property carried by the photon *before it is actually measured*.” Phys. Rev. Lett. **66**, 588 (1991). W. Tittel, J. Brendel, B. Gisin, T. Herzog, H. Zbinden and N. Gisin, “Experimental demonstration of quantum correlations over more than 10 kilometers”, Phys. Rev. A **57**, 3229 (1998). It is possible that there will not be quite perfect symmetry between the two values of $\alpha$, if the frequency doubling process does not quite eliminate the initial laser frequency. J. F. Clauser, M. A. Horne, A. Shimony and R. A. Holt, “Proposed experiment to test local hidden-variable theories”, Phys. Rev. Lett. **23**, 880-884 (1969). C. H. Thompson, “The chaotic ball: an intuitive analogy for EPR experiments”, Found. Phys. Lett. **9**, 357 (1996). A. Aspect, Ph. Grangier and G. Roger, “Experimental realization of Einstein-Podolsky-Rosen-Bohm gedankenexperiment: a new violation of Bell’s inequalities”, Phys. Rev. Lett. **49**, 91-94 (1982). J. F. Clauser and A. Shimony, “Bell’s theorem: experimental tests and implications”, Rep. Prog. Phys. **41**, 1881 (1978). C. H. Thompson, “Subtraction of ‘accidentals’ and the validity of Bell tests”, Galilean Electrodynamics **14**, 43-50 (May 2003). In the derivation of Bell inequalities such as the CHSH inequality, the statistic required is simply the expectation value of the product of the outcomes. This is commonly referred to in this context as the “quantum correlation”. Only when there are no null outcomes, possible outcomes being restricted to just $+1$ or $-1$, does it coincide with ordinary statistical correlation. P. Pearle, “Hidden-variable example based upon data rejection”, Phys. Rev. D **2**, 1418-25 (1970). U. Leonhardt, *Measuring the quantum state of light* (Cambridge University Press, Cambridge, 1997). P. Grangier (private correspondence, June 2005).
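As a cross-check, the classical beamsplitter algebra of the Appendix, Eqs. (\[eqA1\])–(\[eqA6\]), can be verified numerically. The sketch below (our own check, not part of the original paper) averages the squared output fields over one optical cycle, using the same amplitude-squared convention for intensity as the Appendix (intensity = twice the mean-square field):

```python
import math

def homodyne_difference(E, EL, theta, nsteps=10_000):
    """Intensity difference I_r - I_t for the classical 50-50 beamsplitter
    model of Eqs. (A1)-(A2); intensities are squared amplitudes, i.e. twice
    the mean square of the field over one cycle."""
    Ir = It = 0.0
    for k in range(nsteps):
        phi = 2 * math.pi * k / nsteps
        Er = (E * math.cos(phi + math.pi / 2) + EL * math.cos(phi + theta)) / math.sqrt(2)
        Et = (E * math.cos(phi) + EL * math.cos(phi + theta + math.pi / 2)) / math.sqrt(2)
        Ir += 2 * Er * Er / nsteps
        It += 2 * Et * Et / nsteps
    return Ir - It

E, EL = 0.3, 5.0
# theta = 0: equal intensities, zero difference
assert abs(homodyne_difference(E, EL, 0.0)) < 1e-9
# theta = pi/2: difference equals 2*E*E_L, as in Eq. (A6)
assert abs(homodyne_difference(E, EL, math.pi / 2) - 2 * E * EL) < 1e-9
# and for general theta the difference follows 2*E*E_L*sin(theta)
assert abs(homodyne_difference(E, EL, 1.0) - 2 * E * EL * math.sin(1.0)) < 1e-9
```

The last assertion reproduces the sine dependence on the local oscillator phase used throughout the main text.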
--- abstract: 'By studying numerically the phase-ordering kinetics of a two-dimensional ferromagnetic Ising model with quenched disorder – either random bonds or random fields – we show that a critical percolation structure forms in an early stage and is then progressively compactified by the ensuing coarsening process. Results are compared with the non-disordered case, where a similar phenomenon is observed, and interpreted within a dynamical scaling framework.' author: - 'Federico Corberi$^{1,2}$, Leticia F. Cugliandolo$^3$, Ferdinando Insalata$^{1,3}$, Marco Picco$^{3}$.' title: Coarsening and percolation in a disordered ferromagnet --- Introduction {#intro} ============ The phase ordering kinetics following an abrupt change of a control parameter, as for instance when a magnetic system is quenched from above to below the critical temperature, is, probably, the simplest non-equilibrium phenomenon in which interactions among elementary constituents – spins in the previous example – drive the system from disordered configurations towards an ordered state. The main feature of such evolution is the coarsening kinetics whereby initially small domains of the different equilibrium phases compete and their typical size, $R(t)$, increases in order to reduce surface tension. Quite generally, this process is endowed with a dynamical scaling symmetry [@Bray94; @Puri-Book] that physically amounts to the statistical self-similarity of configurations visited at different times, so that the only effect of the kinetics is to increase the size of the domains preserving their morphology. Using the terminology of magnetic systems, for the equal time spin-spin correlation function $G(r,t)=\langle s_i(t)s_j(t)\rangle$ with $r\equiv |\vec r_i - \vec r_j|$, scaling implies $$\label{scal} G(r,t)={\cal G}\left(\frac{r}{R(t)}\right).$$
This scaling form is expected to apply at distances $r$ much larger than some reference length $r_0$ that is usually associated with the lattice spacing, and for times longer than some time $t_0$ that is supposed to end a non-universal transient regime where scaling does not hold yet. Notice that, in principle, Eq. (\[scal\]) should be more properly written as $$\label{scal2} G(r,t)=\widetilde {\cal G}\left (\frac{r}{R(t)},\frac{L}{R(t)}\right ),$$ where the second entry regulates the presence of finite-size effects when the domains’ size grows comparable to the system’s linear dimension $L$. However, one is usually concerned with the very large system size limit $R(t)\ll L$ in which the second entry in $\widetilde {\cal G}$ diverges, $\widetilde {\cal G}\left (\frac{r}{R(t)},\frac{L}{R(t)}\right )\simeq \widetilde {\cal G}\left (\frac{r}{R(t)}, \infty \right )$, and Eq. (\[scal\]) holds with $\widetilde {\cal G}\left (\frac{r}{R(t)},\infty \right )= {\cal G}\left (\frac{r}{R(t)}\right )$. The scaling function ${\cal G}$ goes rapidly to zero for $r>R(t)$, meaning that correlations have only been established up to distances comparable with the typical domain size. Regions occupied by the various phases grow similarly in the coarsening stage, therefore none of them prevails at any finite time if the thermodynamic limit is taken from the onset. For a magnetic system this implies that no magnetisation is developed. The discussion above applies both to clean systems and to those with quenched disorder, provided it is sufficiently weak not to destroy the ordered phase or to introduce frustration.
It was shown in a number of papers [@Arenzon07; @Sicilia07; @Sicilia09; @Barros09; @Olejarz12; @Blanchard13; @Blanchard14] that, starting from a certain time $t_p$, some geometrical features of the growing domains in a clean two-dimensional coarsening spin system are very accurately described by the well-known properties of random (site) percolation [@Stauffer; @Christensen; @Saberi] at the critical threshold $p_c$, where a fractal spanning or wrapping cluster exists. This fact comes as a surprise, since percolation is a paradigm of a non-interacting problem, whereas coarsening embodies exactly the opposite. However, although apparently paradoxical, the interactions among the spins provide the mechanism whereby percolative features are built at times of order $t_p$. Indeed, since the initial state is uncorrelated, it is an instance of random percolation, and the absence of magnetisation sets the concentration of up and down spins to $1/2$. Clusters in such a state, therefore, are too thin to extend across the entire system, because $1/2<p_c\simeq 0.59$ on the square lattice used in the simulation. Hence, the formation of a spanning or wrapping object at $t_p$ is certainly an effect of the ordering kinetics. Quite intriguingly, in fact, as correlations stretch to larger and larger distances, the droplet structure manages to connect initially disjoint parts until the largest domain crosses the entire system and acquires the geometric properties of the percolation cluster at threshold. If $t\ge t_p$ such a domain is surely stable and persists up to the longest times (although, obviously, some of its geometrical features change in time, as we will further discuss). This is indeed how $t_p$ is defined. From $t=t_p$ onward, phase-ordering exhibits the fractal properties of uncorrelated critical percolation on sufficiently large scales. As a matter of fact, since the magnetisation remains zero at any time, such properties occur in a correlated system with $p=1/2<p_c$.
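The statement that the initial concentration $p=1/2$ lies safely below the site-percolation threshold $p_c\simeq 0.59$ of the square lattice is easy to verify with a standard union-find spanning check (an independent illustration, not the authors' code):

```python
import random

def spans(L, p, rng):
    """True if the randomly occupied sites (occupation prob. p) of an L x L
    square lattice contain a cluster joining the top and bottom rows
    (union-find with two virtual nodes)."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))
    TOP, BOTTOM = L * L, L * L + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            k = i * L + j
            if i == 0:
                union(k, TOP)
            if i == L - 1:
                union(k, BOTTOM)
            if i > 0 and occ[i - 1][j]:
                union(k, k - L)
            if j > 0 and occ[i][j - 1]:
                union(k, k - 1)
    return find(TOP) == find(BOTTOM)

rng = random.Random(7)
trials = 40
below = sum(spans(64, 0.50, rng) for _ in range(trials))   # p = 1/2 < p_c: rarely spans
above = sum(spans(64, 0.70, rng) for _ in range(trials))   # p > p_c: almost always spans
print(below, above)
assert below < above
```

On a $64\times 64$ lattice the occupied sites at $p=1/2$ almost never connect the two boundaries, which is why the spanning cluster observed at $t_p$ must be built by the ordering kinetics.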
These two apparently contradictory instances – uncorrelated percolation akin to the one occurring at $p=p_c$ on the one hand and correlated coarsening with $p=1/2$ on the other hand – can actually be reconciled because they occur on well separated distances: Percolative properties are confined to length scales $r \gg R(t)$ larger than those where, due to the interactions, some correlation has already developed. Instead, on length scales $r< R(t)$, where the interaction is at work, the correlations compactify the fractal domains and the percolative geometry is lost. Interactions, therefore, play the two contrasting roles of promoting the formation of the critical percolation cluster and of progressively destroying it at scales of order $R(t)$. This whole framework is quite clearly exhibited in the clean (i.e., non-disordered) two-dimensional kinetic Ising model with single spin-flip dynamics for which $t_p$ was shown to scale with the system size $L$ as [@Blanchard14] $$\label{tp} t_p \simeq L^{z_p}$$ where $z_p$ is a new dynamical exponent that depends on the properties of the lattice populated by the spins. This relation suggests the introduction of a growing length associated with the approach to critical percolation $$R_p(t) \simeq t^{1/z_p} \label{eqrp}$$ that saturates reaching the linear system size $L$ at $t=t_p$, namely $R_p(t_p)=L$. Furthermore, for such non-conserved scalar order parameter dynamics, $R(t) \simeq t^{1/z_d}$ with $z_d=2$ quite generally. Then $$R_p(t) \simeq (t^{1/z_d})^{z_d/z_p} \simeq [R(t)]^{z_d/z_p} \; . \label{eq:Rp-Rd}$$ On the basis of extensive numerical simulations, it was conjectured in [@Blanchard14] that $z_d/z_p=n$, the coordination of the lattice, for this kind of dynamic rule. Other clean systems evolving with different microscopic rules (local and non-local spin exchange and voter) on various lattices are treated in [@Blanchard16], and more details are given in this manuscript, where the guess $R_p(t) \simeq [R(t)]^n$ is revisited.
Here we use Glauber dynamics, which we define below. Notice that $R_p(t_p)=L$ implies $$\label{zpzd} R(t_p)=L^{z_p/z_d}.$$ Interestingly, the existence of a percolative structure in coarsening systems is at the heart of one of the few analytical results in finite dimensional phase-ordering in two dimensions [@Arenzon07; @Sicilia07]. In fact, the distribution of cluster areas in percolation is exactly known [@Cardy03] and, since the evolution of a single hull-enclosed area in a non-conserved scalar order parameter dynamics can be inferred, an exact expression for the area distribution at any time in the scaling regime can be derived building upon the one at random percolation. As mentioned above, the relevance of percolation in phase-ordering kinetics was initially addressed in a homogeneous system without quenched disorder. Real systems, on the other hand, are seldom homogeneous, due to the occurrence of lattice defects, because of position-dependent external disturbances, or for other reasons. This fact deeply perturbs the scaling properties [@Corberi15c]. Indeed, although some kind of symmetry comparable to Eq. (\[scal\]) is reported in some experiments [@Likodimos00; @Likodimos01], the asymptotic growth-law $R(t)$ is observed to be much slower than in a clean system [@Likodimos00; @Likodimos01; @Ikeda90; @Schins93; @Shenoy99]. Similar conclusions are reached in the numerical studies of models for disordered ferromagnets, such as the kinetic Ising model in the presence of random external fields [@Fisher98; @Fisher01; @Corberi02; @Rao93; @Rao93b; @Aron08; @Corberi12; @Puri93; @Oguz90; @Oguz94; @Paul04; @Paul05], with varying coupling constants [@Corberi11; @Paul05; @Paul07; @Henkel06; @Henkel08; @Oh86; @Corberi15; @Gyure95], or in the presence of dilution [@Corberi15; @Puri91; @Puri91b; @Puri92; @Bray91; @Biswal96; @Paul07; @Park10; @Corberi13; @Castellano98; @Corberi15b].
A natural question, then, is whether the percolation effects observed in two-dimensional clean systems show up also in the disordered ones. The answer is not immediate because our knowledge in this field is still rather scarce and even simpler questions regarding the nature of the dynamical scaling or the form of the growth law remain, in the presence of disorder, largely unanswered. In [@Sicilia08] the geometric properties of the domain areas in a coarsening Ising model with weak quenched disorder were studied, but the time needed to reach the critical percolation structure and the detailed evolution during the approach to this state were not analyzed. In this Article we settle this issue. By means of extensive numerical simulations of the kinetics of both the random field Ising model (RFIM) and the random bond Ising model (RBIM) we clearly show the existence, starting from a certain time $t_p$, of a spanning cluster with the geometry of critical percolation. Not only is this feature akin to what was observed in the clean system, but other quantitative properties, such as the size-dependence (\[tp\]) of the characteristic time $t_p$ and many other details, also turn out to be exactly reproduced in the presence of disorder as they are in its absence. This qualifies the relation between coarsening and percolation as a robust phenomenon whose origin is deeply rooted in the physics of order-disorder transitions in two dimensions. Besides being interesting and informative in their own right, the results of this paper also represent a step towards a theory of phase-ordering in two-dimensional disordered ferromagnets, in the same spirit as the one developed [@Arenzon07; @Sicilia07] for clean systems. Needless to say, this would represent an important achievement in such a largely unsettled field. This paper is organised as follows. In Sec.
\[themodel\] we introduce the model that will be considered throughout the paper and the basic observables that will be computed. In Sec. \[numres\] we discuss the outcome of our simulations, both for the clean case (Sec. \[pure\]) and the disordered ones (Sec. \[disorder\]). We conclude the paper in Sec. \[concl\] by summarising the results and discussing some issues that remain open. Model and observable quantities {#themodel} =============================== We will consider a ferromagnetic system described by the Hamiltonian $$\label{isham} {\cal H}(\{S_i\})=-\sum_{\langle ij\rangle}J_{ij}S_iS_j+\sum_i H_iS_i,$$ where $S_i=\pm 1$ are Ising spin variables defined on a two-dimensional $L\times L$ square lattice (different lattices were considered in [@Blanchard16]) and $\langle ij\rangle$ denotes a pair of nearest neighbor sites. The properties of the coupling constants $J_{ij}$ and of the external field $H_i$, specified below, define the two disordered models that will be studied in this paper. [**RBIM:**]{} In this case $H_i\equiv 0$ and the coupling constants are $J_{ij}=J_0+\delta _{ij}$, where $\delta _{ij}$ are independent random numbers extracted from a flat distribution in $[-\delta,+\delta]$, with $\delta <J_0$ in order to keep the interactions ferromagnetic and avoid frustration effects. [**RFIM:**]{} In this model the ferromagnetic bonds are fixed $J_{ij}\equiv J_0$ (i.e. $\delta _{ij}\equiv 0$), while the external field $H_i=\pm h$ is uncorrelated in space and sampled from a symmetric bimodal distribution. The ordered state, typical of the clean system below the critical temperature $T_c$, is preserved in the RBIM because the randomness of the coupling constants maintains them all ferromagnetic. Something else occurs in the RFIM, because the external fields efficiently counteract the ordering tendency down to $T=0$ in any $d\le 2$.
That the random fields destroy long-range order is easily explained by the Imry-Ma argument [@Imry75]: comparing the energy increase $\Delta E _{surf} \propto J_0\ell ^{d-1}$, due to the reversal of a bubble of size $\ell$ in an ordered phase, to the energy decrease $\Delta E_{field}\propto h \ell ^{d/2}$, due to the possible alignment of its spins with the majority of the random fields, and balancing the two contributions shows the existence of a threshold length

$$\ell _{IM}\sim \left(\frac{J_0}{h}\right)^{\frac{2}{2-d}} \label{im}$$

above which flipping droplets becomes favorable, thus disordering the system for $d<2$ (it can be shown that ordering is suppressed also right at $d=2$). Moreover, still for $d<2$, $\ell _{IM} \to \infty$ as $\frac{h}{J_0} \to 0$, so that, if this limit is taken from the outset, the system orders in any dimension, even down to $d=1$ [@Corberi02]. In the following, as we will discuss further below, we will consider this limit.

Dynamics can be introduced in the Ising model with Hamiltonian (\[isham\]) by flipping single spins according to the Glauber transition rates at temperature $T$,

$$w(S_i\to -S_i)=\frac{1}{2}\left[1-\tanh \left(\frac{S_i\left(H^W_i-H_i\right)}{T}\right)\right], \label{trate}$$

where the local Weiss field

$$H^W_i=\sum _{j\in nn(i)}J_{ij}S_j=J_0\sum _{j\in nn(i)}S_j+\sum _{j\in nn(i)}\delta _{ij}S_j= H_{i,det}^W+H_{i,rand}^W$$

is decomposed, for the RBIM, into a deterministic and a random part (the sums run over the nearest neighbours $nn(i)$ of $i$). The quenching protocol amounts to evolving a system prepared at time $t=0$ in an uncorrelated state with zero magnetisation – representing an equilibrium configuration at $T=\infty$ – by means of spin flips regulated by the transition rates (\[trate\]) evaluated at the final temperature $T=T_f$ of the quench. In this paper we focus on the limit $T_f \to 0$ while keeping finite the ratio $\epsilon =\frac{\delta}{T_f}$ (for the RBIM) or $\epsilon =\frac{h}{T_f}$ (for the RFIM). We will also set $J_0=1$. The discussion below Eq. (\[im\]) points out that in this limit $\ell _{IM}\to \infty$, so that phase-ordering always occurs also in the RFIM.
Moreover, the transition rates (\[trate\]) take the simple form

$$w(S_i\to -S_i)=\left\{ \begin{array}{lcr} 1, & & H_{i,det}^W S_i <0 \\ 0, & & H_{i,det}^W S_i >0 \\ \frac{1}{2}\left[1-\tanh \left(\frac{S_i\left(H^W_{i,rand}-H_i\right)}{T_f}\right)\right], & & H_{i,det}^W S_i =0 \end{array} \right. \label{simplrates}$$

which shows that the model depends only on the ratio $\epsilon$. Having a theory with a single parameter is only one of the advantages of the limit we are considering. The low-temperature limit also reduces thermal noise and allows for an accelerated updating rule, since Eq. (\[simplrates\]) shows that only spins with $H_{i,det}^W S_i\le 0$ need to be updated. These are, basically, the spins on the corners of interfaces, a small fraction of the total number of spins in the sample, particularly at long times.

At given times, we have computed the average size of the domains $R(t)$ as the inverse density of defects [@Bray94], namely by dividing the number $L^2$ of lattice sites by the number of spins which are not aligned with all their neighbours. An ensemble average is then performed. The equal-time pair correlation function, which was defined above Eq. (\[scal\]), is computed as

$$G(r,t)=\frac{1}{4L^2}\sum _{i}\sum _{i_r}\langle S_iS_{i_r}\rangle, \label{defg}$$

where the value of all the spins is measured at time $t$, the $i_r$'s are the four sites at distance $r$ from $i$ along the horizontal and vertical directions, and $\langle \dots \rangle$ means a non-equilibrium ensemble average, taken over different realisations of the initial configuration and over various thermal histories, namely the outcomes of the probabilistic updates of the spins. Let us mention here that we adopt periodic boundary conditions, so in Eq. (\[defg\]) $i_r$ is to be taken [*modulo*]{} $L$. As explained in Sec. \[intro\], assuming that $R(t)\ll L$, the two-parameter scaling form (\[scal2\]) reduces to the more conventional single-parameter one in Eq. (\[scal\]). However, Eq. (\[eq:Rp-Rd\]) shows that another growing length, $R_p(t)$, is on the scene.
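As an illustration of the update rule (\[simplrates\]) in its simplest, clean-case limit ($\epsilon =0$, where the marginal case $H^W_{i,det}S_i=0$ is accepted with probability $1/2$), and of the defect-density estimate of $R(t)$, a pure-Python sketch could read as follows (our own illustrative code, not the accelerated implementation used for the production runs):

```python
import random

def zero_T_sweep(S, rng):
    """One Monte Carlo sweep of zero-temperature Glauber dynamics for the
    clean Ising model: flip if the local field opposes the spin, flip with
    probability 1/2 if the field vanishes, never flip otherwise."""
    L = len(S)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        hloc = S[(i+1) % L][j] + S[(i-1) % L][j] + S[i][(j+1) % L] + S[i][(j-1) % L]
        if hloc * S[i][j] < 0:
            S[i][j] = -S[i][j]
        elif hloc == 0 and rng.random() < 0.5:
            S[i][j] = -S[i][j]

def domain_size(S):
    """R(t) = L^2 / (# spins not aligned with all four neighbours);
    returns L by convention when no defect is left (fully ordered)."""
    L = len(S)
    defects = sum(
        1 for i in range(L) for j in range(L)
        if S[i][j] != S[(i+1) % L][j] or S[i][j] != S[(i-1) % L][j]
        or S[i][j] != S[i][(j+1) % L] or S[i][j] != S[i][(j-1) % L]
    )
    return L * L / defects if defects else float(L)
```

A checkerboard configuration, where every spin has at least one misaligned neighbour, gives $R=1$, while a fully ordered lattice has no defects at all.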
Equation (\[eq:Rp-Rd\]) also shows that this quantity grows faster than $R(t)$, so that, given a finite size $L$ of the system, $R_p(t)$ may become comparable to $L$ even if the condition $R(t)\ll L$, which is usually considered the hallmark of a finite-size-free situation, is met. In this case a simple scaling form such as the one in Eq. (\[scal\]) must be upgraded to a form akin to Eq. (\[scal2\]) (with $R_p$ playing the role of $R$ in the second entry) in order to take the two characteristic lengths into account. Indeed, it was shown in [@Blanchard14] that a two-parameter scaling

$$G(r,t)=g\left(\frac{r}{R(t)},\frac{L}{R_p(t)}\right) \label{scal2par}$$

is more appropriate to describe the space-time correlation data. The correlation function (\[defg\]) is shown in Fig. \[fig\_G\] for the clean model quenched to $T_f=0$ (more simulation details are given in Sec. \[numres\]). A simple scaling form as in Eq. (\[scal\]) would imply the collapse, in this kind of plot, of the curves relative to different values of $R(t)$. Instead one sees here a systematic deviation of the curves at large $r$ as $R(t)$ increases. As anticipated, this is because the correct scaling form is Eq. (\[scal2par\]) instead of Eq. (\[scal\]). With a two-parameter scaling as in Eq. (\[scal2par\]), plotting against $r/R(t)$ only fixes the first entry, while the second one, namely $L/R_p(t)$, changes as time elapses. This produces the failure of the collapse observed in Fig. \[fig\_G\]. In order to superimpose the curves one should keep the second entry of the scaling function fixed as well. This can only be done by using a different system size $L$ for each value of $R_p(t)$. It was shown in [@Blanchard14] that in so doing one obtains an excellent scaling in the clean system. This confirms the validity of the form (\[scal2par\]).
An important quantity to assess percolative properties is the pair connectedness function, which in percolation theory is defined as the probability that two points separated by a distance $r$ belong to the same cluster. In two dimensions, random percolation theory at $p=p_c$ gives [@Stauffer; @Saberi; @Christensen]

$$C_{perc}(r,r_0)\sim \left(\frac{r}{r_0}\right)^{-2\Delta} \label{connectperc}$$

where $r_0$ is a reference distance, usually the lattice spacing. Equation (\[connectperc\]) holds true for large $r/r_0$, with the critical exponent $\Delta =5/48$. The dotted black curve in Fig. \[fig\_connect\] is a numerical evaluation of $C_{perc}$ on a square lattice with $r_0=1$ and size $L=512$. The curve is indistinguishable from Eq. (\[connectperc\]) except at large distances, where finite-size effects set in. In the coarsening system we evaluate this quantity as follows: at a given time, after identifying all the domains of positive and negative spins, the connectivity is computed as

$$C(r,t)=\frac{1}{4L^2}\sum _i \sum _{i_r} \langle \delta _{S_i,S_{i_r}}\rangle , \label{connect}$$

where $\delta _{S_i,S_j}=1$ if the two spins belong to the same cluster – namely they are aligned and there is a path of aligned spins connecting them – and $\delta _{S_i,S_j}=0$ otherwise. Another quantity that will be considered is the (average) squared winding angle $\langle \theta ^2\rangle$. Its definition is the following: at a given time, we choose two points $i,j$ on the external perimeter – the hull – of a cluster and we compute the winding angle $\theta _{ij}$, namely the angle (measured counterclockwise) between the tangent to the perimeter in $i$ and the one in $j$. After repeating the procedure for all the pairs of perimeter points at distance $r$ (measured along the hull), taking the square and averaging over the non-equilibrium ensemble, one ends up with $\langle \theta ^2(r) \rangle$.
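The cluster identification required by Eq. (\[connect\]) can be sketched with a standard union-find labelling of aligned nearest-neighbour spins on the periodic lattice (an illustrative, unoptimised implementation of ours):

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def label_clusters(S):
    """Label clusters of aligned nearest-neighbour spins on an L x L
    periodic lattice; returns the root label of each site."""
    L = len(S)
    parent = list(range(L * L))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
    for i in range(L):
        for j in range(L):
            if S[i][j] == S[(i+1) % L][j]:
                union(i * L + j, ((i+1) % L) * L + j)
            if S[i][j] == S[i][(j+1) % L]:
                union(i * L + j, i * L + (j+1) % L)
    return [find(parent, x) for x in range(L * L)]

def pair_connectedness(S, r):
    """C(r): fraction of site pairs at axial distance r in the same cluster."""
    L = len(S)
    lab = label_clusters(S)
    same = tot = 0
    for i in range(L):
        for j in range(L):
            for di, dj in ((r, 0), (-r, 0), (0, r), (0, -r)):
                tot += 1
                if lab[i * L + j] == lab[((i+di) % L) * L + (j+dj) % L]:
                    same += 1
    return same / tot
```

For instance, on a lattice of alternating single-spin columns, vertical pairs at distance $1$ are connected while horizontal ones are not, giving $C(1)=1/2$.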
The behavior of $\langle \theta ^2(r)\rangle$ is exactly known in $2d$ critical percolation at $p=p_c$ [@Duplantier; @Wilson], where one has

$$\langle \theta ^2\rangle _{perc}(r,r_0) =a+\frac{4k}{8+k}\,\ln \left(\frac{r}{r_0}\right), \label{windexact}$$

with $k=6$ and $a$ a non-universal constant. We will compute this same quantity in the coarsening systems upon moving along the hulls of the growing domains (in this case, we will only consider the largest cluster, for numerical convenience).

Numerical results {#numres}
=================

In this Section we will present the results of our numerical simulations of the kinetic Ising model. We will start in Sec. \[pure\] with the clean system since, although percolation effects have already been reported in this case [@Arenzon07; @Sicilia07; @Sicilia09; @Barros09; @Olejarz12; @Blanchard13; @Blanchard14], we use here tools, such as the connectivity (\[connect\]), that were not considered before. This case, therefore, serves not only for comparison with the disordered systems considered further on, but also as a benchmark for these new quantities. We will then turn to the behavior of the disordered models in Sec. \[disorder\]. Let us remark that the very slow growth of $R(t)$ in such systems, as compared to the clean case (see Fig. \[fig\_rt\]), severely limits the range of values of $R(t)$ that can be accessed. In particular, choosing large values of $t_p$ (meaning large system sizes, see Eq. (\[tp\])) would prevent one from accessing the region with $t\gg t_p$ (or, equivalently, $R_p(t)\gg L$), the one in which we are mainly interested, where the percolation structure has been fully established. For this reason we will always work with small or moderate values of $L$. In order to compare the clean system to the disordered ones in the same regimes, the same choice will be made for the pure system. All the results contained in this Section are obtained with periodic boundary conditions, by averaging over a non-equilibrium ensemble of the order of $10^5-10^6$ realisations.
The system linear size is $L=512$ in units of the lattice spacing.

Clean case {#pure}
----------

Let us consider a quench of the clean Ising model to $T_f=0$. The wrapping cluster that develops around $t_p$ can cross the system in different ways. The first possibility is to span the system from one side to the other, horizontally or vertically. We denote the probabilities of such configurations by $\pi_{1,0}$ and $\pi_{0,1}$, respectively. Obviously, for a lattice with unit aspect ratio, $\pi_{0,1}=\pi_{1,0}$. Another possibility is to have a domain traversing the sample in both the horizontal and the vertical direction, the probability of which we indicate by $\pi_{0,0}$. Finally, clusters can percolate along one of the two diagonal directions, with equal probabilities $\pi_{1,-1}$ and $\pi_{-1,1}$. Since we operate with periodic boundary conditions, other wrapping shapes – winding the torus more than once – are also possible, but these will not be considered in the following since they occur with an extremely small probability. The quantities mentioned above are exactly known for two-dimensional critical percolation. They are [@Pinson] $$\begin{aligned} \pi_{0,1}+\pi_{1,0}&\simeq& 0.3388 \; , \nonumber \\ \pi_{0,0}&\simeq& 0.61908 \; , \label{spanpperc} \\ \pi_{1,-1}+\pi_{1,1} &\simeq& 0.04196 \; . \nonumber\end{aligned}$$ Upon computing the wrapping probabilities defined above during the phase-ordering process and plotting them against $R(t)$ (shown in Fig. \[fig\_rt\]), we find the curves shown in Fig. \[fig\_probcross\] (black curves with circles, which perfectly superimpose on the others, represent the clean case at hand). A first observation is that these quantities, which start from zero immediately after the quench ($R(t)\simeq 0$) – since, as already mentioned, there cannot be any crossing in the initial state – saturate at long times (large $R(t)$) to values which are very precisely consistent with those given in Eq.
(\[spanpperc\]) of critical percolation (bold orange segment on the far right). This fact was first pointed out in [@Barros09; @Olejarz12]. The second observation is that all these probabilities attain their asymptotic values around a certain $R(t)$ that can be used as a rough estimate of $R(t_p)$. Concretely, this occurs at $R(t)\gtrsim 5$. Repeating the numerical experiment in systems with different sizes, indeed, one can check that this determination of $R(t_p)$ increases with $L$, as expected from Eq. (\[eq:Rp-Rd\]).

The next quantity we consider is the pair connectedness defined in Eq. (\[connect\]). In Fig. \[fig\_connect\] we plot this quantity against $r$ for different values of $R(t)$, namely of time measured with the relevant clock (black curves with circles, which perfectly superimpose with the others, represent the clean case at hand). The area $S(t)=\sum _rC(r,t)$ below the curves increases as time elapses, because $S(t)$ is the probability that two points chosen at random in the system belong to the same cluster, and the number of domains decreases during coarsening. Regarding the form of each curve, after a first transient that we identify as $t\lesssim t_p$, one observes the typical power-law behavior of critical percolation, Eq. (\[connectperc\]) (dotted black line). This, however, is only true for values of $r$ larger than a certain time-dependent scale. This can be explained by the fact that on scales smaller than $R(t)$ correlations set in and domains become compact. It is therefore natural to observe the percolative behavior only at distances larger than $R(t)$. Let us now discuss the properties of $C$ more quantitatively, concerning in particular its scaling behavior. According to the discussion around Eq. (\[scal2par\]), we expect a two-parameter scaling to hold also for the pair connectivity, namely

$$C(r,t)=c\left(\frac{r}{R(t)},\frac{L}{R_p(t)}\right), \label{scalC}$$

where $c$ is a scaling function.
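Coming back for a moment to the wrapping probabilities of Fig. \[fig\_probcross\]: measuring them requires knowing whether a cluster winds around the periodic directions, which plain cluster labelling does not reveal. A common trick, sketched below in an illustrative implementation of ours (not necessarily the one used for the data), is a union-find that stores each site's displacement from its cluster root: a bond that closes a loop with a mismatched displacement signals a winding, and the mismatch, in units of $L$, gives the winding vector.

```python
def wrapping_vectors(S):
    """Return the set of winding vectors (in units of L) detected on an
    L x L periodic lattice, via union-find with displacement tracking."""
    L = len(S)
    parent = list(range(L * L))
    dx = [0] * (L * L)   # displacement of each site from its root
    dy = [0] * (L * L)
    def find(x):
        if parent[x] == x:
            return x, 0, 0
        r, rx, ry = find(parent[x])
        dx[x] += rx; dy[x] += ry          # path compression keeps offsets exact
        parent[x] = r
        return r, dx[x], dy[x]
    windings = set()
    for i in range(L):
        for j in range(L):
            a = i * L + j
            for ni, nj, vx, vy in (((i+1) % L, j, 1, 0), (i, (j+1) % L, 0, 1)):
                if S[i][j] != S[ni][nj]:
                    continue              # bond only joins aligned spins
                ra, ax, ay = find(a)
                rb, bx, by = find(ni * L + nj)
                if ra != rb:              # merge: enforce pos(b) = pos(a) + v
                    parent[rb] = ra
                    dx[rb] = ax + vx - bx
                    dy[rb] = ay + vy - by
                else:                     # loop closed: nonzero mismatch = winding
                    wx, wy = ax + vx - bx, ay + vy - by
                    if (wx, wy) != (0, 0):
                        windings.add((wx // L, wy // L))
    return windings
```

A single uniform column wraps only along its own direction, while a fully ordered lattice winds around both directions of the torus.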
In order to simplify the discussion, let us start from times much longer than the one – $t_p$ – at which the percolating structure sets in. In this regime the second entry in Eq. (\[scalC\]) is small and one has

$$C(r,t)\simeq c\left(\frac{r}{R(t)},0\right)={\cal C}\left(\frac{r}{R(t)}\right), \label{scalC1}$$

where ${\cal C}$ is a single-variable scaling function. One can infer the form of the scaling function ${\cal C}(x)$ for large values of $x$ as follows. After $t_p$ the system has percolative properties for $r>R(t)$, and it can be thought of as a percolation problem on a lattice with spacing $R(t)$, for which Eq. (\[connectperc\]) must hold with the replacement $r_0 \to R(t)$, namely

$$C(r,t)\sim \left(\frac{r}{R(t)}\right)^{-2\Delta}, \qquad r\gg R(t). \label{firstreason}$$

In order to match Eqs. (\[scalC1\]) and (\[firstreason\]) one must have

$${\cal C}(x)\sim x^{-2\Delta}, \qquad x\gg 1. \label{beh1}$$

In the opposite situation of small distances $r\ll R(t)$ we are exploring the properties of the domains well inside their correlated region, where there is no percolative structure and the scaling function $c$ decays faster. All the above holds for $t\gg t_p$, where the second entry in the scaling function of Eq. (\[scalC\]) is very small. In the opposite situation $t\ll t_p$ (but $t$ larger than the microscopic time $t_0$ at which scaling sets in) one has

$$C(r,t)\simeq c\left(\frac{r}{R(t)},\frac{L}{R_p(t)}\right)\simeq \Xi \left(\frac{r}{R(t)}\right). \label{scalC2}$$

For $r\gg R(t)$ the scaling function $\Xi$ is expected to be different from ${\cal C}$, because these two functions describe two contrasting situations in which the percolative structure is, respectively, absent or present. For $r\ll R(t)$, instead, the scaling function $\Xi$ should be akin to ${\cal C}$, because in any case the domain structure is shaped by correlations on these short lengthscales. Let us submit these scaling ideas to a numerical test. In Fig. \[fig\_scalingC\] we plot $C(r,t)$ against $r/R(t)$. We start by discussing the regime of sufficiently long times $t\gg t_p$. Considering the previous rough estimate $R(t_p)\gtrsim 5$, obtained by inspection of the wrapping probabilities (see Fig.
\[fig\_probcross\]), we can assume that such a regime is sufficiently well represented by the curves with $R(t)=4.8$ onward in Fig. \[fig\_scalingC\]. According to Eq. (\[scalC1\]) we should find collapse of such curves at different times onto a unique mastercurve ${\cal C}$, with the behavior (\[beh1\]). The numerical data do show superposition, except in a region of large $r$ (which moves towards smaller values of $r$ as time elapses) where data collapse is lost due to finite-size effects. Notice also that, for $x\gtrsim x_{perc}=1.6$, the mastercurve behaves as in Eq. (\[beh1\]), as can be checked by comparing the numerical data with the dotted black line. The departure from the behavior (\[beh1\]) at very large $x$ is always accompanied by the failure of data collapse, a fact which strengthens the idea that both these effects are due to finite-size corrections. Indeed, in the inset of the same figure we plot $(r/R(t))^{2\Delta} C(r,t)$ against $r$ for all data in the main panel except the ones for $R(t)=2.1$. The data collapse onto a master curve that is flat at not too large $r$ and then bends upwards due to finite-size corrections, which can be captured by a correction factor $f(r/L)$ multiplying the scaling law (\[firstreason\]). In the inset we also show the data for actual critical percolation, which superimpose on the dynamic data, thus confirming that the bending of the curves is not a dynamical effect but just a conventional finite-size correction. Let us now move to the regime with $t\ll t_p$. The curves with $R(t)=2.1$ and $R(t)=2.7$ in Fig. \[fig\_scalingC\] are representative of this regime. We find that these two curves do not collapse for values of $x>x_{perc}$. This is because, since we work with relatively small values of $L$ (the reasons having been explained at the beginning of Sec. \[numres\]), the regime with $R(t)\ll L$ where Eq. (\[scalC2\]) holds is not fully reached in our simulations.
However, it is interesting to observe that in the region $x<x_{perc}$ data collapse is obeyed on a mastercurve $\Xi$ which is almost indistinguishable from ${\cal C}$, as expected according to the discussion below Eq. (\[scalC2\]). On the other hand, in the region $x> x_{perc}$ a marked difference is observed between the curves with $R(t)\le 2.7$ and those with $R(t)\ge 4.8$, due to the fact that the percolative structure is still absent when the former are computed, whereas it is already established when the latter are. Finally, let us comment on the fact that the quantity $C$, besides being very informative about the twofold properties of the system – compact and correlated at small distances vs fractal and uncorrelated at large distances – is also very well suited to assess the role of the [*extra*]{} growing length $R_p(t)$ in determining the scaling properties. Indeed, although indications were given in [@Blanchard14] that Eq. (\[scal2par\]) reproduces the data for $G$ better than the simple form (\[scal\]), the deviations from Eq. (\[scal\]) were relatively small and located in the region of large $r$ where $G$ becomes very small. Instead, as Fig. \[fig\_scalingC\] clearly shows, a simple scaling form for $C$ completely fails to describe the data, due to the differences between ${\cal C}$ and $\Xi$ in the region $x>x_{perc}$. For this quantity the improvement of Eq. (\[scalC\]) over a single-parameter scaling is conspicuous. The last quantity that we will consider is the winding angle, plotted in Fig. \[fig\_wind\] for various choices of $R(t)$ (the clean case corresponds to the black lines with circles, which perfectly superimpose on the others). Following the discussion concerning the connectivity, for $t>t_p$ we expect to observe the percolative behavior of Eq. (\[windexact\]) with the replacement $r_0 \to R(t)$, namely [@Duplantier; @Wilson]

$$\langle \theta ^2(r,t)\rangle =a+\frac{4k}{8+k}\,\ln \left(\frac{r}{R(t)}\right), \label{windcoars}$$

for $r\gg R(t)$.
If this is true, we should find data collapse of the curves for $\langle \theta ^2(r,t) \rangle$ at different times upon plotting them against $r/R(t)$, provided $r\gg R(t)$ and $R(t)\gg R(t_p)$ (similarly to what was previously observed for $C(r,t)$, see Fig. \[fig\_scalingC\]). In addition, the mastercurve should be the one of Eq. (\[windcoars\]) for $r$ sufficiently larger than $R(t)$. This kind of plot is shown in Fig. \[fig\_wind2\]. Interestingly, not only do we observe data collapse on the mastercurve (\[windcoars\]) when $t\gg t_p$, but this occurs with good precision also for the value $R(t)=2.1$, which was shown in Fig. \[fig\_scalingC\] to be representative of a situation with $t< t_p$. This implies that, for this particular quantity, a single-parameter scaling

$$\langle \theta ^2(r,t)\rangle ={\cal T}\left(\frac{r}{R(t)}\right), \label{scaltheta}$$

where ${\cal T}(x)$ is a scaling function, accounts for the data. This does not contradict the general fact that – due to the existence of the extra length $R_p$ – a two-parameter scaling has to be expected. Indeed, Eq. (\[scaltheta\]) is a particular case of a two-parameter form

$$\langle \theta ^2(r,t)\rangle =\Theta ^2\left(\frac{r}{R(t)},\frac{L}{R_p(t)}\right),$$

where $\Theta ^2$ is a scaling function with a weak dependence on the second entry.

Quenched disorder {#disorder}
-----------------

In this Section we will discuss the effects of quenched disorder on the approach to percolation by considering the dynamics of the RFIM and the RBIM. We will investigate the same quantities analyzed in the clean limit in Sec. \[pure\]. Before starting this analysis, let us briefly review what is known about the coarsening behavior of weakly disordered models. When weak disorder is added, the growth law $R(t)$ is markedly slowed down with respect to the pure case. This is observed both in the RFIM and in the RBIM. In particular, in the low-temperature limit we are considering in this paper (i.e. $T\to 0$ with fixed $\epsilon$, see the discussion around Eq.
(\[simplrates\])), after a microscopic time a regime is entered in which an effective algebraic law, $R(t)\sim t^{1/\zeta(\epsilon)}$, holds. The effective growth exponent, starting from the value $\zeta (\epsilon =0)=2$ of the clean case, increases monotonically with $\epsilon$ [@Paul07; @Kolton]. This shows that in this stage $R(t)$ grows more slowly in the disordered models than in the clean case. All these features are quite neatly observed in Fig. \[fig\_rt\], where the behavior of $R(t)$ is shown. At rather long times, for $t\gg t_{cross}$, a crossover occurs to an even slower growth, with $R(t)$ increasing logarithmically, which represents the asymptotic behavior [@Fisher88; @Kolton]. The simulations presented in this paper never enter such an asymptotic stage; they are restricted to $t< t_{cross}$. The slowing down of $R(t)$ reflects a fundamental difference between the coarsening mechanisms at $\epsilon=0$ and $\epsilon \neq 0$. In the clean system phase-ordering proceeds by softening and flattening of interfaces, which is promoted by surface tension and is a non-thermal effect. This means that activated processes are irrelevant and, as a consequence, the kinetics has the same basic features at any $T_f$, including $T_f=0$ (which is indeed the case presented in the previous Section). On the other hand, as soon as some disorder – no matter how small – is present, interfaces get stuck in pinned configurations and the evolution can only proceed by thermal activation. These systems are frozen at $T_f=0$ (this is why we work at finite, although vanishingly small, temperatures $T_f\to 0$). The basic result of this Section is that, despite the dynamics in the presence of disorder being fundamentally different from that of the pure case, and notwithstanding the fact that $R(t)$ increases in a radically different way, the percolative features observed in the clean case occur with the same qualitative and quantitative characteristics in the presence of disorder.
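The effective exponent $\zeta(\epsilon)$ quoted above can be extracted from measured pairs $(t, R(t))$ in the algebraic regime by a least-squares fit in log-log coordinates; a minimal sketch (our own helper, applied here to hypothetical input data):

```python
import math

def effective_zeta(times, sizes):
    """Effective growth exponent zeta, assuming R(t) ~ t^{1/zeta}:
    least-squares slope of ln R versus ln t, inverted. Inputs are
    equal-length lists of times and measured domain sizes R(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(R) for R in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / slope
```

For synthetic data with $R(t)=t^{1/4}$ the helper recovers $\zeta=4$; on real data the result is only an effective exponent valid in the fitted window, before the logarithmic crossover.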
Moreover, this agreement with the clean case holds both for random fields and for random bonds. In order to prove the statement in the previous paragraph, let us start by discussing the behaviour of the crossing probabilities introduced at the beginning of Sec. \[pure\]. These quantities are plotted in Fig. \[fig\_probcross\]. For both the RFIM and the RBIM, and for any strength of disorder considered (parametrized by $\epsilon$), these probabilities are basically indistinguishable from those of the clean $\epsilon=0$ case. This shows that disorder spoils neither the occurrence of the percolative structure, nor how and when (provided time is parametrized in terms of $R$) it is formed. As a byproduct, the independence of the crossing probabilities from $\epsilon $ implies that Eq. (\[zpzd\]) is valid beyond the pure case with a unique value of the exponent $z_p/z_d$, at least for the non-conserved order-parameter dynamics on the square lattice considered here. Let us mention that, since $R(t)$ is strongly $\epsilon$-dependent, we would not find the nice collapse of Fig. \[fig\_probcross\] if we plotted against time instead of against $R(t)$. Nor would Eq. (\[zpzd\]) look $\epsilon$-independent upon expressing $R_p$ as a function of time, namely in the form of Eq. (\[tp\]). This shows that the typical size of correlated domains is the natural parametrization of time in this problem. It will also help us draw conclusions about the system-size dependence of the percolation time $t_p$ without having to simulate different system sizes, as we will explain below. Let us move on to the connectedness function. A comparison between the clean case and the disordered ones is presented in Fig. \[fig\_connect\]. In this figure one sees that all the disordered cases fall onto the clean one, provided that times are chosen so as to have the same growing length.
For instance, the curve for $t=103$ without disorder is indistinguishable from the one at $t=1446$ for the RBIM with $\epsilon =1.2$, because at these times the size $R(t)$ of the domains in the two models is the same, $R(t)=14.0$ (last curve on the top in Fig. \[fig\_connect\]). Notice that collapse of the various curves is obtained for the RFIM and the RBIM, and for any strength $\epsilon$ of disorder. This confirms that the way in which the percolation structure develops, and the scaling properties associated with it, are those discussed in the previous Section for the clean case, and are largely independent of the presence of disorder. Finally, we arrive at the same conclusion by considering the winding angle. This quantity is shown in Fig. \[fig\_wind\], where the clean system and different disordered cases are compared. As in Fig. \[fig\_connect\], the comparison is made by choosing times so as to have the same $R(t)$. Also in this case one observes an excellent superposition of all curves, further supporting the conclusion that the presence of disorder does not modify the percolative properties observed in the clean case. As a byproduct, this also means that a scaling plot like the one presented in Fig. \[fig\_wind2\] for the pure case would look the same if data for the disordered models were used. This implies that the scaling properties discussed in the previous Section for the winding angle apply in the presence of quenched disorder as well. The same holds for the connectedness function. As a final important point, let us discuss the role of the crossover time $t_{cross}$ at which the growth law turns to a logarithmic form. As we said at the beginning of this Section, our simulations do not even approach times of the order of $t_{cross}$. However, the existence of this crossover time is associated with a further characteristic length $\lambda (\epsilon)=R(t_{cross})$ which might be relevant to the scaling properties discussed in Sec. \[pure\].
Indeed, considering the connectedness function for example, the dependence on $\lambda $ is expected to enter a scaling form in the following way:

$$C(r,t)=c\left(\frac{r}{R(t)},\frac{L}{R_p(t)},\frac{R(t)}{\lambda(\epsilon)}\right), \label{scalC2par}$$

where for simplicity we use the same symbol $c$ also for this new, three-entry, scaling function. Clearly, since in the range of times considered in our simulations we have $t\ll t_{cross}$, and consequently $R(t)\ll \lambda(\epsilon)$, the last entry in the above equation is always close to zero and can be neglected. This, in turn, makes our results independent of the disorder strength $\epsilon $, as we have already discussed, a fact that has been called [*superuniversality*]{} [@Fisher88]. Notice that this does not mean that disorder is ineffective, since the growth law $R(t)$ changes dramatically due to the quenched randomness. However, it is quite natural to expect that such insensitivity of $c$ to $\epsilon $ could be spoiled if times were pushed to such late regimes as to give $R(t)\gtrsim \lambda(\epsilon)$. In this case the last entry in Eq. (\[scalC2par\]) could start, in principle, to play a role, introducing an $\epsilon $-dependence and spoiling the superuniversality we have shown to hold in the regime accessed in our simulations. The investigation of such a late regime requires a huge numerical effort that is beyond the scope of this paper and remains a challenge for future research.

Conclusions {#concl}
===========

In this paper we have investigated the relevance of percolative effects in the phase-ordering kinetics of the two-dimensional Ising model quenched from the disordered phase to a very low final temperature. We have considered the clean case as a benchmark, and two forms of quenched randomness, random bonds and random fields. The presence and the properties of the percolation cluster have been detected by inspection of quantities, such as the pair connectedness, that represent an efficient tool to detect the percolative wrapping structure hidden in the patchwork of growing domains.
The main finding of this paper is that the addition of weak quenched randomness, while significantly changing the speed of the ordering process, neither impedes the occurrence of percolation nor changes the way in which it sets in, even at a quantitative level. Indeed, we find that quantities such as the wrapping probabilities, the connectivity function and the winding angle behave in the same way with great accuracy for any choice of the disorder strength, including the clean case, once time is measured in terms of the size $R(t)$ of the ordered regions. All of the above can be accounted for in a scaling framework where coarsening, percolation and disorder are associated with three characteristic lengths, $R$, $R_p$, and $\lambda $ respectively, and the interplay between them depends on how these lengths compare with one another and with the linear system size. The results in [@Sicilia08] and in this paper show that the relevance of percolation, which was previously pointed out for $2d$ clean systems, extends to the much less understood realm of disordered systems, giving this issue a quite general character. This not only opens the way to further studies on more general disordered systems (for instance, randomly diluted models, where a more complex scaling structure has been recently observed [@Corberi13]), but also draws attention to a possible generalisation to the disordered cases of analytical theories, originally developed for clean systems, in which the properties of phase-ordering are traced back to percolation effects. [**Acknowledgements**]{} F. Corberi and F. Insalata thank the LPTHE Jussieu for hospitality during the preparation of this work. L. F. Cugliandolo is a member of Institut Universitaire de France, and she thanks the KITP University of California at Santa Barbara for hospitality. This research was supported in part by the National Science Foundation under Grant No. PHY11-25915. [0]{} A. J. Bray, Adv. Phys. [**43**]{}, 357 (1994).
, edited by S. Puri and V. Wadhawan, CRC Press, Boca Raton (2009). J. J. Arenzon, A. J. Bray, L. F. Cugliandolo, and A. Sicilia, Phys. Rev. Lett. [**98**]{}, 145701 (2007). A. Sicilia, J. J. Arenzon, A. J. Bray, and L. F. Cugliandolo, Phys. Rev. E [**76**]{}, 061116 (2007). A. Sicilia, Y. Sarrazin, J. J. Arenzon, A. J. Bray, and L. F. Cugliandolo, Phys. Rev. E [**80**]{}, 031121 (2009). K. Barros, P. L. Krapivsky, and S. Redner, Phys. Rev. E [**80**]{}, 040101 (2009). J. Olejarz, P. L. Krapivsky, and S. Redner, Phys. Rev. Lett. [**109**]{}, 195702 (2012). T. Blanchard and M. Picco, Phys. Rev. E [**88**]{}, 032131 (2013). T. Blanchard, F. Corberi, L. F. Cugliandolo, and M. Picco, EPL [**106**]{}, 66001 (2014). D. Stauffer and A. Aharony, [*Introduction to Percolation Theory*]{} (Taylor and Francis, London, 1994). K. Christensen, [*Percolation Theory*]{} (Imperial College Press, London, 2002). A. A. Saberi, Phys. Rep. [**578**]{}, 1 (2015). T. Blanchard, L. F. Cugliandolo, M. Picco, and A. Tartaglia, [*Critical percolation in bidimensional kinetic spin models*]{}, arXiv:16 J. Cardy and R. M. Ziff, J. Stat. Phys. [**110**]{}, 1 (2003). F. Corberi, Comptes rendus - Physique [16]{}, 332 (2015). V. Likodimos, M. Labardi, and M. Allegrini, Phys. Rev. B [**61**]{}, 14440 (2000). V. Likodimos, M. Labardi, X. K. Orlik, L. Pardi, M. Allegrini, S. Emonin, and O. Marti, Phys. Rev. B [**63**]{}, 064104 (2001). H. Ikeda, Y. Endoh, and S. Itoh, Phys. Rev. Lett. [**64**]{}, 1266 (1990). A. G. Schins, A. F. M. Arts, and H. W. de Wijn, Phys. Rev. Lett. [**70**]{}, 2340 (1993). D. K. Shenoy, J. V. Selinger, K. A. Grüneberg, J. Naciri, and R. Shashidhar, Phys. Rev. Lett. [**82**]{}, 1716 (1999). D. S. Fisher, P. Le Doussal, and C. Monthus, Phys. Rev. Lett. [**80**]{}, 3539 (1998). D. S. Fisher, P. Le Doussal, and C. Monthus, Phys. Rev. E [**64**]{}, 066107 (2001). F. Corberi, A. de Candia, E. Lippiello, and M. Zannetti, Phys. Rev. E [**65**]{}, 046114 (2002). M. Rao and A. 
Chakrabarti, Phys. Rev. E [**48**]{}, R25(R) (1993). M. Rao and A. Chakrabarti, Phys. Rev. Lett. [**71**]{}, 3501 (1993). C. Aron, C. Chamon, L. F. Cugliandolo, and M. Picco, J. Stat. Mech. P05016 (2008). F. Corberi, E. Lippiello, A. Mukherjee, S. Puri, and M. Zannetti, Phys. Rev. E [**85**]{}, 021141 (2012). S. Puri and N. Parekh, J. Phys. A [**26**]{}, 2777 (1993). E. Oguz, A. Chakrabarti, R. Toral, and J. D. Gunton, Phys. Rev. B [**42**]{}, 704 (1990). E. Oguz, J. Phys. A [**27**]{}, 2985 (1994). R. Paul, S. Puri, and H. Rieger, Europhys. Lett. [**68**]{}, 881 (2004). R. Paul, S. Puri, and H. Rieger, Phys. Rev. E [**71**]{}, 061109 (2005). R. Paul, G. Schehr, and H. Rieger, Phys. Rev. E [**75**]{}, 030104 (2007). F. Corberi, E. Lippiello, A. Mukherjee, S. Puri, and M. Zannetti, J. Stat. Mech.: Theory and Experiment P03016 (2011). M. Henkel and M. Pleimling, Europhys. Lett. [**76**]{}, 561 (2006). M. Henkel and M. Pleimling, Phys. Rev. B [**78**]{}, 224419 (2008). J. H. Oh and D. Choi, Phys. Rev. B [**33**]{}, 3448 (1986). F. Corberi, R. Burioni, E. Lippiello, A. Vezzani, and M. Zannetti, Phys. Rev. E [**91**]{}, 062122 (2015). M. F. Gyure, S. T. Harrington, R. Strilka, and H. E. Stanley, Phys. Rev. E [**52**]{}, 4632 (1995). S. Puri, D. Chowdhury, and N. Parekh, J. Phys. A [**24**]{}, L1087 (1991). S. Puri and N. Parekh, J. Phys. A [**25**]{}, 4127 (1992). A. J. Bray and K. Humayun, J. Phys. A [**24**]{}, L1185 (1991). B. Biswal, S. Puri, and D. Chowdhury, Physica A [**229**]{}, 72 (1996). R. Paul, G. Schehr, and H. Rieger, Phys. Rev. E [**75**]{}, 030104(R) (2007). H. Park and M. Pleimling, Phys. Rev. B [**82**]{}, 144406 (2010). F. Corberi, E. Lippiello, A. Mukherjee, S. Puri, and M. Zannetti, Phys. Rev. E [**88**]{}, 042129 (2013). C. Castellano, F. Corberi, U. Marini Bettolo Marconi, and A. Petri, Journal de Physique IV [**8**]{}, 93 (1998). A. Sicilia, J. J. Arenzon, A. J. Bray, and L. F. 
Cugliandolo, EPL [**82**]{}, 10001 (2008). F. Corberi, E. Lippiello, and M. Zannetti, J. Stat. Mech., P10001 (2015). Y. Imry and S.-k. Ma, Phys. Rev. Lett. [**35**]{}, 1399 (1975). D. S. Fisher and D. A. Huse, Phys. Rev. B [**38**]{}, 373 (1988). J. L. Iguain, S. Bustingorry, A. B. Kolton, and L. F. Cugliandolo, Phys. Rev. B [**80**]{}, 094201 (2009). H. Pinson, J. Stat. Phys. [**75**]{}, 1167 (1994). B. Duplantier and H. Saleur, Phys. Rev. Lett. [**60**]{}, 2343 (1988). B. Wieland and D. B. Wilson, Phys. Rev. E [**68**]{}, 056101 (2003).
--- abstract: 'We shall introduce the [*singular curvature function*]{} on cuspidal edges of surfaces, which is related to the Gauss-Bonnet formula and which characterizes the shape of cuspidal edges. Moreover, it is closely related to the behavior of the Gaussian curvature of a surface near cuspidal edges and swallowtails.' address: - ' Department of Mathematics, Hokkaido University, Sapporo 060-0810, Japan ' - ' Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan ' - ' Faculty of Mathematics, Kyushu University, Higashi-ku, Fukuoka 812-8581, Japan' author: - Kentaro Saji - Masaaki Umehara - Kotaro Yamada date: 'December 09, 2006' title: The geometry of Fronts --- Introduction {#introduction .unnumbered} ============ Let $M^2$ be an oriented $2$-manifold and $f\colon{}M^2\to {\boldsymbol{R}}^3$ a $C^\infty$-map. A point $p\in M^2$ is called a [*singular point*]{} if $f$ is not an immersion at $p$. A singular point is called a [*cuspidal edge*]{} or [*swallowtail*]{} if it is locally diffeomorphic to $$\tag{1}\label{eq:cuspidal-swallow} f_C(u,v):=(u^2,u^3,v) \quad\text{or}\quad f_S(u,v):=(3u^4+u^2v,4u^3+2uv,v)$$ at $(u,v)=(0,0)$, respectively. These two types of singular points characterize the generic singularities of wave fronts (cf. [@AGV]; for example, parallel surfaces of immersed surfaces in ${\boldsymbol{R}}^3$ are fronts), and we have a useful criterion (Fact \[fact:intrinsic-criterion\]; cf. [@KRSUY]) for determining them. It is of interest to investigate these singularities from the viewpoint of differential geometry. In this paper, we shall distinguish two types of cuspidal edges as in Figure \[fig:parabola\]. More precisely, we shall define the singular curvature function $\kappa_s$ along cuspidal edges. The left-hand figure in Figure \[fig:parabola\] is positively curved and the right-hand figure is negatively curved (see Corollary \[cor:positive-negative-cuspidal-edge\]). 
![Positively and negatively curved cuspidal edges (Example \[ex:parabola\]).[]{data-label="fig:parabola"}](figure-0-1-a.eps "fig:"){width="3.5cm"} ![Positively and negatively curved cuspidal edges (Example \[ex:parabola\]).[]{data-label="fig:parabola"}](figure-0-1-b.eps "fig:"){width="3.5cm"} The definition of the singular curvature function depends neither on the orientation nor on the co-orientation of the front, and is closely related to the following two Gauss-Bonnet formulas, given by Langevin-Levitt-Rosenberg and by Kossowski when $M^2$ is compact: $$\begin{aligned} {2} 2\deg(\nu)&= \chi(M_+)-\chi(M_-) +\#S_+-\#S_- \qquad &&(\text{\cite{LLR},\cite{K1}}) \tag{2}\label{eq:GB-signed}\\ 2\pi\chi(M^2) &=\int_{M^2}K\,dA+2\int_{\text{Singular set}}\kappa_s\, ds \qquad &&(\text{\cite{K1}}), \tag{3}\label{eq:GB-unsigned}\end{aligned}$$ where $\deg(\nu)$ is the degree of the Gauss map $\nu$, $\#S_+,\#S_-$ are the numbers of positive and negative swallowtails respectively (see Section \[sec:GB\]), and $M_+$ (resp. $M_-$) is the open submanifold of $M^2$ on which the co-orientation is compatible (resp. not compatible) with the orientation. In the proofs of these formulas in [@LLR] and [@K1], the singular curvature implicitly appeared as a form $\kappa_s\,ds$. (Formula \eqref{eq:GB-signed} was stated in [@LLR]; proofs of both \eqref{eq:GB-signed} and \eqref{eq:GB-unsigned} are given in [@K1].) 
Recently, global properties of fronts were investigated via flat surfaces in hyperbolic $3$-space $H^3$ ([@KUY1; @KRSUY]), via maximal surfaces in Minkowski $3$-space ([@UY]), and via constant mean curvature one surfaces in de Sitter space ([@F], see also Lee and Yang [@LY]). Such surfaces satisfy certain Osserman-type inequalities, for which equality characterizes the proper embeddedness of their ends. We also note that Martínez [@Mar] investigated global properties of improper affine spheres with singularities, which are related to flat fronts in $H^3$. (See also Ishikawa and Machida [@IM].) The purpose of this paper is to give a geometric meaning to the singular curvature function and to investigate its properties. For example, it diverges to $-\infty$ at swallowtails (Corollary \[cor:singular-curvature-peak\]). Moreover, we shall investigate the behavior of the Gaussian curvature $K$ near singular points. For example, the Gaussian curvature $K$ is generically unbounded near cuspidal edges and swallowtails, and takes different signs on the left-hand and right-hand sides of a singular curve. However, in the special case that $K$ is bounded, the shape of these singularities is very restricted: for example, [*singular curvature is non-positive if the Gaussian curvature is non-negative*]{} (Theorem \[thm:gaussian-singular\]). A similar phenomenon holds in the case of hypersurfaces (Section \[sec:hyper\]). The paper is organized as follows: In Section \[sec:curvature\], we define the singular curvature and give its fundamental properties. In Section \[sec:GB\], we generalize the two Gauss-Bonnet formulas \eqref{eq:GB-signed} and \eqref{eq:GB-unsigned} to fronts which admit finitely many corank one “peak” singularities. In Section \[sec:gaussian-curvature\], we investigate the behavior of the Gaussian curvature. Section \[sec:zigzag\] is devoted to formulating a topological invariant of closed fronts called the “zig-zag number” (introduced in [@LLR]) from the viewpoint of differential geometry. 
We shall generalize the results of Section \[sec:gaussian-curvature\] to hypersurfaces in Section \[sec:hyper\]. Finally, in Section \[sec:intrinsic\], we introduce an intrinsic formulation of the geometry of fronts. The authors thank Shyuichi Izumiya, Go-o Ishikawa, Osamu Saeki, Osamu Kobayashi and Wayne Rossman for fruitful discussions and valuable comments. Singular curvature {#sec:curvature} ================== Let $M^2$ be an oriented $2$-manifold and $(N^3,g)$ an oriented Riemannian $3$-manifold. The unit cotangent bundle $T_1^*N^3$ has the canonical contact structure and can be identified with the unit tangent bundle $T_1 N^3$. A smooth map $f\colon{}M^2 \to N^3$ is called a [*front*]{} if there exists a unit vector field $\nu$ of $N^3$ along $f$ such that $L:=(f,\nu)\colon{}M^2\to T_1N^3$ is a Legendrian immersion (which is also called an isotropic immersion), that is, the pull-back of the canonical contact form of $T_1N^3$ vanishes on $M^2$. This condition is equivalent to the following orthogonality condition: $$\label{eq:orthogonal} g(f_*X,\nu)=0 \qquad (X\in TM^2),$$ where $f_*$ is the differential map of $f$. The vector field $\nu$ is called the [*unit normal vector*]{} of the front $f$. The [*first fundamental form $ds^2$*]{} and the [*second fundamental form $h$*]{} of the front are defined in the same way as for surfaces: $$\label{eq:first-second} ds^2(X,Y) := g(f_*X,f_*Y),~ h(X,Y) := -g(f_*X,D_Y\nu)\qquad \bigl(X,Y\in TM^2\bigr),$$ where $D$ is the Levi-Civita connection of $(N^3,g)$. We denote by $\mu_g$ the Riemannian volume element of $(N^3,g)$. 
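Before proceeding, it may help to see these definitions at work in the simplest example. The following sketch (Python/SymPy; an illustrative check of our own, not part of the paper, with the smooth extension of the normal direction computed by hand) verifies that the cuspidal edge $f_C(u,v)=(u^2,u^3,v)$ of \eqref{eq:cuspidal-swallow} satisfies the orthogonality condition \eqref{eq:orthogonal} and that $L=(f_C,\nu)$ is an immersion, so that $f_C$ is indeed a front.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# the cuspidal edge f_C(u, v) = (u^2, u^3, v) from the introduction
f = sp.Matrix([u**2, u**3, v])
fu, fv = f.diff(u), f.diff(v)

# f_u x f_v = (3u^2, -2u, 0) = u*(3u, -2, 0); dividing off the factor u gives
# a nowhere-vanishing normal direction, so nu extends smoothly across {u = 0}
n_raw = sp.Matrix([3*u, -2, 0])
nu = n_raw / sp.sqrt(n_raw.dot(n_raw))

# orthogonality condition g(f_* X, nu) = 0 for X = d/du, d/dv
print(sp.simplify(fu.dot(nu)), sp.simplify(fv.dot(nu)))  # 0 0

# L = (f, nu) is an immersion: its Jacobian keeps rank 2 even on the
# singular set {u = 0}, where f itself drops rank
L = sp.Matrix.vstack(f, nu)
J = L.jacobian([u, v])
print(J.subs(u, 0).rank())  # 2, hence f_C is a front
```

The same pattern (divide the cross product by the vanishing factor, then normalize) produces a smooth unit normal for the other examples below.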
Let $f\colon{}M^2\to N^3$ be a front and $\nu$ the unit normal vector of $f$, and set $$\label{eq:signed-area-form} d\hat A:=f^*(\iota_\nu\mu_g)= \mu_g(f_u,f_v,\nu)\,du\wedge dv \quad \left(f_u=f_*\left(\frac{\partial}{\partial u}\right), f_v=f_*\left(\frac{\partial}{\partial v}\right) \right),$$ called the [*signed area form*]{}, where $(u,v)$ is a local coordinate system of $M^2$ and $\iota_\nu$ is the interior product with respect to $\nu\in TN^3$. Suppose now that $(u,v)$ is compatible with the orientation of $M^2$. Then the function $$\label{eq:signed-area-density} \lambda(u,v):=\mu_g(f_u,f_v,\nu)$$ is called the ([*local*]{}) [*signed area density function*]{}. We also set $$\begin{gathered} \label{eq:absolute-area-form} d A:=|\mu_g(f_u,f_v,\nu)|\,du\wedge dv =\sqrt{EG-F^2}\,du\wedge dv = |\lambda|\,du\wedge\,dv\\ \bigl(E:=g(f_u,f_u), F:=g(f_u,f_v), G:=g(f_v,f_v)\bigr),\end{gathered}$$ which is independent of the choice of orientation-compatible coordinate system $(u,v)$ and is called the ([*absolute*]{}) [*area form*]{} of $f$. Let $M_+$ (resp. $M_-$) be the open submanifold where the ratio $(d\hat A)/(dA)$ is positive (resp. negative). If $(u,v)$ is a coordinate system compatible with the orientation of $M^2$, the point $(u,v)$ belongs to $M_+$ (resp. $M_-$) if and only if $\lambda(u,v)>0$ (resp. $\lambda(u,v)<0$), where $\lambda$ is the signed area density function. \[def:nondeg\] Let $f\colon{}M^2\to N^3$ be a front. A point $p\in M^2$ is called a [*singular point*]{} if $f$ is not an immersion at $p$. We call the set of singular points of $f$ the [*singular set*]{} and denote it by $\Sigma_f:=\{p\in M^2\,|\,\text{$p$ is a singular point of $f$}\}$. A singular point $p\in \Sigma_f$ is called [*non-degenerate*]{} if the derivative $d\lambda$ of the signed area density function does not vanish at $p$. This condition does not depend on the choice of coordinate system. 
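For the cuspidal edge $f_C$ of the introduction these notions are easy to check symbolically. The sketch below (SymPy, an illustration of our own) computes the signed area density in the Euclidean case, where $\mu_{g}(X,Y,Z)=\det(X,Y,Z)$, and confirms that every singular point of $f_C$ is non-degenerate.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# f_C(u, v) = (u^2, u^3, v) with its smooth unit normal
f = sp.Matrix([u**2, u**3, v])
fu, fv = f.diff(u), f.diff(v)
n_raw = sp.Matrix([3*u, -2, 0])          # smooth extension of f_u x f_v, factor u removed
nu = n_raw / sp.sqrt(n_raw.dot(n_raw))

# signed area density lambda = mu_g(f_u, f_v, nu), Euclidean volume form
lam = fu.cross(fv).dot(nu)

print(lam.subs(u, 0))           # 0: the axis {u = 0} is the singular set
print(lam.diff(u).subs(u, 0))   # 2: d(lambda) != 0, so the singular points are non-degenerate
```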
It is well-known that a front can be considered locally as a projection of a Legendrian immersion $L\colon{}U^2\to P(T^*N^3)$, where $U^2$ is a domain in ${\boldsymbol{R}}^2$ and $P(T^*N^3)$ is the projective cotangent bundle. The canonical contact structure of the unit cotangent bundle $T^*_1N^3$ is the pull-back of that of $P(T^*N^3)$. Since the contact structure on $P(T^*N^3)$ does not depend on the Riemannian metric, the definition of front does not depend on the choice of the Riemannian metric $g$ and is invariant under diffeomorphisms of $N^3$. \[def:limittangent\] Let $f\colon{}M^2\to N^3$ be a front and $TN^3|_M$ the restriction of the tangent bundle of $N^3$ to $M^2$. The subbundle ${\mathcal{E}}$ of rank $2$ on $M^2$ that is perpendicular to the unit normal vector field $\nu$ of $f$ is called the [*limiting tangent bundle*]{} with respect to $f$. There exists a canonical vector bundle homomorphism $$\psi \colon{} TM^2\ni X \longmapsto f_*X \in {\mathcal{E}}.$$ The non-degenerateness in Definition \[def:nondeg\] is also independent of the choice of $g$ and can be described in terms of the limiting tangent bundle: \[prop:limit\] Let $f\colon{}U\to N^3$ be a front defined on a domain $U$ in ${\boldsymbol{R}}^2$ and ${\mathcal{E}}$ the limiting tangent bundle. Let $\mu\colon{}(U;u,v)\to {\mathcal{E}}^* \wedge {\mathcal{E}}^*$ be an arbitrary fixed nowhere vanishing section. Then a singular point $p\in M^2$ is non-degenerate if and only if the derivative $dh$ of the function $h:=\mu\bigl(\psi(\partial/\partial u),\psi(\partial/\partial v)\bigr)$ does not vanish at $p$. Let $\mu_0$ be the $2$-form that is the restriction of the $2$-form $\iota_\nu\mu_g$ to $M^2$, where $\iota_\nu$ denotes the interior product and $\mu_g$ is the volume element of $g$. 
Then $\mu_0$ is a nowhere vanishing section of ${\mathcal{E}}^* \wedge {\mathcal{E}}^*$, and the local signed area density function $\lambda$ is given by $\lambda=\mu_0(\psi(\partial/\partial u),\psi(\partial/\partial v))$. On the other hand, let $\mu\colon{}(U;u,v)\to {\mathcal{E}}^* \wedge {\mathcal{E}}^*$ be an arbitrary fixed nowhere vanishing section. Then there exists a smooth function $\tau\colon{}U\to {\boldsymbol{R}}\setminus\{0\}$ such that $\mu=\tau \cdot \mu_0$ (namely $h=\tau\lambda$) and $$dh(p)=d\tau(p) \cdot \lambda(p)+\tau(p) \cdot d\lambda(p)=\tau(p) \cdot d\lambda(p) ,$$ since $\lambda(p)=0$ for each singular point $p$. Hence $dh$ vanishes at $p$ if and only if $d\lambda$ does. \[rem:newadd\] A $C^\infty$-map $f\colon{}U^2\to N^3$ is called a [*frontal*]{} if it is a projection of an isotropic map $L\colon{}U^2\to T^*_1N^3$, that is, a map such that the pull-back of the canonical contact form of $T_1N^3$ by $L$ vanishes on $U^2$. The definition of non-degenerate singular points and the above lemma do not use the property that $L$ is an immersion, so they hold for arbitrary frontals. Let $p\in M^2$ be a non-degenerate singular point. Then by the implicit function theorem, the singular set near $p$ consists of a regular curve in $M^2$. This curve is called the [*singular curve*]{} at $p$. We denote the singular curve by $$\gamma\colon{}(-\varepsilon,\varepsilon) \ni t \longmapsto \gamma(t)\in M^2 \qquad (\gamma(0)=p).$$ For each $t\in (-\varepsilon,\varepsilon)$, there exists a $1$-dimensional linear subspace of $T_{\gamma(t)}M^2$, called the [*null direction*]{}, which is the kernel of the differential map $f_*$. A non-zero vector belonging to the null direction is called a [*null vector*]{}. One can choose a smooth vector field $\eta(t)$ along $\gamma(t)$ such that $\eta(t)\in T_{\gamma(t)}M^2$ is a null vector for each $t$; such an $\eta$ is called a [*null vector field*]{}. 
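Continuing with the cuspidal edge $f_C$, the null direction is simply the kernel of the differential $f_*$ along the singular curve, which can be computed directly (again a SymPy check of our own):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
f = sp.Matrix([u**2, u**3, v])   # the cuspidal edge f_C again

J = f.jacobian([u, v])           # the differential f_*
J0 = J.subs(u, 0)                # restricted to the singular curve {u = 0}
print(J0.rank())                 # 1: f_* has corank one there
print(J0.nullspace()[0].T)       # the null direction
```

So along the singular curve of $f_C$ the kernel is spanned by $(1,0)^T$, i.e. $\eta=\partial/\partial u$ is a null vector field.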
The tangential direction of the singular curve $\gamma(t)$ is called the [*singular direction*]{}. \[fact:intrinsic-criterion\] Let $p$ be a non-degenerate singular point of a front $f$, $\gamma$ the singular curve passing through $p$, and $\eta$ a null vector field along $\gamma$. Then 1. \[item:intrinsic-criterion-cuspidal\] $p=\gamma(t_0)$ is a cuspidal edge [(]{}that is, $f$ is locally diffeomorphic to $f_C$ of \eqref{eq:cuspidal-swallow} in the introduction[)]{} if and only if the null direction and the singular direction are transversal, that is, $\det\bigl(\gamma'(t),\eta(t)\bigr)$ does not vanish at $t=t_0$, where $\det$ denotes the determinant of $2\times 2$ matrices and where we identify the tangent spaces $T_{\gamma(t)}M^2$ with ${\boldsymbol{R}}^2$. 2. \[item:intrinsic-criterion-swallow\] $p=\gamma(t_0)$ is a swallowtail [(]{}that is, $f$ is locally diffeomorphic to $f_S$ of \eqref{eq:cuspidal-swallow} in the introduction[)]{} if and only if $$\det\bigl(\gamma'(t_0),\eta(t_0)\bigr)=0 \qquad\text{and}\qquad \left.\frac{d}{dt}\right|_{t=t_0}\!\! \det\bigl(\gamma'(t),\eta(t)\bigr)\neq 0$$ hold. For later computation, it is convenient to take a local coordinate system $(u,v)$ centered at a given non-degenerate singular point $p\in M^2$ as follows: - the coordinate system $(u,v)$ is compatible with the orientation of $M^2$, - the $u$-axis is the singular curve, and - there are no singular points other than the $u$-axis. We call such a coordinate system $(u,v)$ an [*adapted coordinate system*]{} with respect to $p$. In these coordinates, the signed area density function $\lambda(u,v)$ vanishes on the $u$-axis. Since $d\lambda\ne 0$, $\lambda_v$ never vanishes on the $u$-axis. This implies that $$\label{eq:area-density-sign} \text{the signed area density function $\lambda$ changes sign on singular curves,}$$ that is, the singular curve belongs to the boundary of $M_+$ and $M_-$. Now we suppose that a singular curve $\gamma(t)$ on $M^2$ consists of cuspidal edges. 
Then we can choose the null vector fields $\eta(t)$ such that $\bigl(\gamma'(t),\eta(t)\bigr)$ is a positively oriented frame field along $\gamma$. We then define the [*singular curvature function*]{} along $\gamma(t)$ as follows: $$\label{eq:def-singular-curvature} \kappa_s(t):={\operatorname{sgn}}\bigl(d\lambda(\eta)\bigr)\, \frac{\mu_g\bigl(\hat \gamma'(t),\hat \gamma''(t),\nu\bigr)} {|\hat \gamma'(t)|^3}.$$ Here, we denote $|\hat\gamma'(t)|=g\bigl(\hat\gamma'(t),\hat\gamma'(t)\bigr)^{1/2}$, $$\label{eq:singular-curve-image} \hat \gamma(t)=f(\gamma(t)),\qquad \hat \gamma'(t)=\frac{d\hat \gamma(t)}{dt},\quad \text{and}\quad \hat \gamma''(t)=D_{t}\hat \gamma'(t),$$ where $D$ is the Levi-Civita connection and $\mu_g$ the volume element of $(N^3,g)$. We take an adapted coordinate system $(u,v)$ and write the null vector field $\eta(t)$ as $$\label{eq:null-vector-adapted} \eta(t)= a(t)\frac{\partial}{\partial u}+ e(t)\frac{\partial}{\partial v},$$ where $a(t)$ and $e(t)$ are $C^\infty$-functions. Since $(\gamma',\eta)$ is a positive frame, we have $e(t)>0$. Here, $$\label{eq:non-degenerate-adapted} \lambda_u=0\qquad\text{and}\qquad \lambda_v\neq 0 \qquad \text{(on the $u$-axis)}$$ hold, and then $d\lambda\bigl(\eta(t)\bigr)=e(t)\lambda_v$. In particular, we have $$\label{eq:singular-curvature-sign-adapted} {\operatorname{sgn}}\bigl(d\lambda(\eta)\bigr)= {\operatorname{sgn}}(\lambda_v) = \begin{cases} +1 & \text{if the left-hand side of $\gamma$ is $M_+$}, \\ -1 & \text{if the left-hand side of $\gamma$ is $M_-$}. \end{cases}$$ So we have the following expression: in an adapted coordinate system $(u,v)$, $$\label{eq:singular-curvature-adapted} \kappa_s(u):={\operatorname{sgn}}(\lambda_v) \frac{\mu_g(f_u,f_{uu},\nu)} {|f_u|^3},$$ where $f_{uu}=D_uf_u$ and $|f_u|=g(f_u,f_u)^{1/2}$. 
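As an illustration of the adapted-coordinate expression \eqref{eq:singular-curvature-adapted}, consider a toy front of our own making (not taken from the paper): a cuspidal edge placed along the unit circle, $f(u,v)=((1+v^2)\cos u,(1+v^2)\sin u,v^3)$, which is singular exactly on $\{v=0\}$ with null direction $\partial/\partial v$. A SymPy computation gives constant singular curvature $\kappa_s\equiv -1$ with these orientation conventions.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# toy example (ours): a cuspidal edge along the unit circle
f = sp.Matrix([(1 + v**2)*sp.cos(u), (1 + v**2)*sp.sin(u), v**3])
fu, fv = f.diff(u), f.diff(v)

# f_u x f_v = v(1+v^2) * (3v cos u, 3v sin u, -2); divide off v(1+v^2)
n_raw = sp.Matrix([3*v*sp.cos(u), 3*v*sp.sin(u), -2])
nu = n_raw / sp.sqrt(n_raw.dot(n_raw))

lam = fu.cross(fv).dot(nu)                          # signed area density
sgn = sp.sign(sp.simplify(lam.diff(v).subs(v, 0)))  # sgn(lambda_v) on the u-axis

# kappa_s = sgn(lambda_v) * mu_g(f_u, f_uu, nu) / |f_u|^3, evaluated at v = 0
mu = sp.Matrix.hstack(fu, fu.diff(u), nu).det()
kappa_s = sp.simplify((sgn*mu/sp.sqrt(fu.dot(fu))**3).subs(v, 0))
print(kappa_s)  # -1
```

This matches the interpretation of $\kappa_s$ as a limiting geodesic curvature of the singular curve, here the unit circle.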
\[thm:invariance-singular-curvature\] The definition of the singular curvature does not depend on the parameter $t$, nor on the orientation of $M^2$, nor on the choice of $\nu$, nor on the orientation of the singular curve. If the orientation of $M^2$ reverses, then $\lambda$ and $\eta$ both change sign. If $\nu$ is changed to $-\nu$, so does $\lambda$. If $\gamma$ changes orientation, both $\gamma'$ and $\eta$ change sign. In all cases, the sign of $\kappa_s$ is unchanged. \[rem:geodesic-curvature\] We have the following expression $$\begin{gathered} \kappa_s={\operatorname{sgn}}\bigl(d\lambda(\eta)\bigr)\, \frac{ \mu_0(\hat \gamma'',\nu,\hat \gamma'/|\hat \gamma'|)}{ |\hat \gamma'|^2} ={\operatorname{sgn}}\bigl(d\lambda(\eta)\bigr)\, \frac{g(\hat \gamma'',n)}{|\hat \gamma'|^2} \quad \left( n:=\nu\times_g \frac{\hat\gamma'}{|\hat \gamma'|} \right). \end{gathered}$$ Here, the vector product operation $\times_g$ in $T_xN^3$ is defined by $a\times_g b:=*(a\wedge b)$, under the identification $TN^3\ni X \leftrightarrow g(X,~)\in T^*N^3$, where $*$ is the Hodge $*$-operator. If $\gamma(t)$ is not a singular curve, $n(t)$ is just the conormal vector of $\gamma$. We call $n(t)$ the [*limiting conormal vector*]{}, and $\kappa_s(t)$ can be considered as the limiting geodesic curvature of (regular) curves with the singular curve on their right-hand sides. \[prop:intrinsic-singular-curvature\] Let $p$ be a point of a cuspidal edge of a front $f$, and $(u,v)$ an adapted coordinate system at $p$ such that $\partial/\partial v$ gives the null direction. Then the singular curvature is given by $$\kappa_s(u)=\frac{-F_vE_u+2EF_{uv}-E E_{vv}}{2E^{3/2}\lambda_v},$$ where $E=g(f_u,f_u),F=g(f_u,f_v),G=g(f_v,f_v)$, and where $\lambda$ is the signed area density function with respect to $(u,v)$. Fix $v>0$ and denote by $\gamma(u)=(u,v)$ the $u$-curve. 
Then the unit vector $$n(u)=\frac{1}{\sqrt{E}\sqrt{EG-F^2}} \left(-F \frac{\partial}{\partial u}+ E \frac{\partial}{\partial v}\right)$$ gives the conormal vector such that $\bigl(\gamma'(u),n(u)\bigr)$ is a positive frame. Let $\nabla$ be the Levi-Civita connection on $\{v>0\}$ with respect to the induced metric $ds^2=Edu^2+2Fdudv+Gdv^2$, and $s$ the arclength parameter of $\gamma(u)$. Then we have $$\nabla_{\gamma'(s)}^{}\gamma'(s) = \frac{1}{\sqrt{E}} \nabla_{\partial/\partial u}^{} \left( \frac{1}{\sqrt{E}}\frac{\partial}{\partial u}\right) \equiv \frac{\Gamma_{11}^2}{E} \frac{\partial}{\partial v} \mod \frac{\partial}{\partial u},$$ where $\Gamma_{11}^2$ is the Christoffel symbol given by $$\Gamma_{11}^2=\frac{-F E_{u}+2E F_u-EE_v}{2(EG-F^2)}.$$ Since $\lambda^2=EG-F^2$ and $g(f_u,n)=0$, the geodesic curvature of $\gamma$ is given by $$\kappa_g=g\bigl(\nabla_{\gamma'(s)}\gamma'(s),n(s)\bigr) =\frac{\sqrt{EG-F^2}\, \Gamma_{11}^2}{E^{3/2}} =\frac{-F E_{u}+2E F_u-EE_v}{2|\lambda| E^{3/2}}.$$ Hence, by Remark \[rem:geodesic-curvature\], the singular curvature along the $u$-axis is $$\kappa_s={\operatorname{sgn}}(\lambda_v)\lim_{v\to 0}\kappa_g ={\operatorname{sgn}}(\lambda_v) \lim_{v\to 0} \frac{-F E_{u}+2E F_u-EE_v}{2|\lambda| E^{3/2}}.$$ It is clear that all of $\lambda$, $F$ and $F_u$ tend to zero as $v\to 0$. Moreover, we have $$E_v =2g(D_vf_u,f_u)=2g(D_uf_v,f_u) =2\frac{\partial}{\partial u}g(f_v,f_u)-2g(f_v,D_uf_u)\to 0$$ as $v\to 0$, and the one-sided derivative $|\lambda|_v$ (taken from $v>0$) is equal to $|\lambda_v|$, since $\lambda(u,0)=0$. By L’Hospital’s rule, we have $$\kappa_s ={\operatorname{sgn}}(\lambda_v) \frac{-F_v E_{u}+2E F_{uv}-EE_{vv}}{2|\lambda|_v E^{3/2}} =\frac{-F_v E_{u}+2E F_{uv}-EE_{vv}}{2\lambda_v E^{3/2}},$$ which is the desired conclusion. 
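The intrinsic formula (with the overall factor $2E^{3/2}\lambda_v$ in the denominator) can be cross-checked against the extrinsic expression \eqref{eq:singular-curvature-adapted} on a concrete front. The sketch below uses a toy front of our own choosing, $f(u,v)=(u,\,u^2+v^2,\,v^3)$, singular exactly on $\{v=0\}$ with null direction $\partial/\partial v$, and confirms symbolically that the two expressions agree.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# toy front (ours): singular exactly on {v = 0}, null direction d/dv
f = sp.Matrix([u, u**2 + v**2, v**3])
fu, fv = f.diff(u), f.diff(v)

# f_u x f_v = v * (6uv, -3v, 2); dividing off v gives a smooth normal direction
n_raw = sp.Matrix([6*u*v, -3*v, 2])
nu = n_raw / sp.sqrt(n_raw.dot(n_raw))
lam = fu.cross(fv).dot(nu)                     # signed area density

# extrinsic singular curvature, eq. (singular-curvature-adapted)
sgn = sp.sign(sp.simplify(lam.diff(v).subs(v, 0)))
mu = sp.Matrix.hstack(fu, fu.diff(u), nu).det()
kappa_ext = sp.simplify((sgn*mu/sp.sqrt(fu.dot(fu))**3).subs(v, 0))

# intrinsic formula of the proposition above
E, F = fu.dot(fu), fu.dot(fv)
num = -F.diff(v)*E.diff(u) + 2*E*F.diff(u, v) - E*E.diff(v, 2)
kappa_int = sp.simplify((num/(2*E**sp.Rational(3, 2)*lam.diff(v))).subs(v, 0))

print(sp.simplify(kappa_ext - kappa_int))      # 0: the two expressions agree
```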
\[ex:parabola\] Define a map $f$ from ${\boldsymbol{R}}^2$ to the Euclidean $3$-space $({\boldsymbol{R}}^3,g_0)$ as $$\label{eq:parabola} f(u,v)=(au^2+v^2,bv^2+v^3,u) \qquad (a,b\in {\boldsymbol{R}}).$$ Then we have $f_u=(2 a u, 0, 1)$, $f_v=(2v, 2bv + 3v^2, 0)$. This implies that the $u$-axis is the singular curve, and the $v$-direction is the null direction. The unit normal vector and the signed area density $\lambda=\mu_{g_0}(f_u,f_v,\nu)$ are given by $$\begin{gathered} \label{eq:parabola-normal} \nu = \frac{1}{\delta} \bigl(-3v-2b,2,2au(3v+2b)\bigr),\qquad \lambda = v\delta, \\ \text{where}\qquad \delta = \sqrt{4+(1+4a^2u^2)(4b^2+12bv+9v^2)}. \end{gathered}$$ In particular, since $d\nu(\partial/\partial v)=\nu_v\neq 0$ on the $u$-axis, $(f,\nu)\colon{}{\boldsymbol{R}}^2\to {\boldsymbol{R}}^3\times S^2=T_1{\boldsymbol{R}}^3$ is an immersion, i.e. $f$ is a front, and each point of the $u$-axis is a cuspidal edge. The singular curvature is given by $$\label{eq:curvature-parabola} \kappa_s(u)= \frac{2a}{(1+4a^2u^2)^{3/2}\sqrt{1+b^2(1+4a^2u^2)}}.$$ When $a>0$ (resp. $a<0$), that is, when the singular curvature is positive (resp. negative), we shall call $f$ a [*cuspidal elliptic*]{} (resp. [*hyperbolic*]{}) [*parabola*]{}, since the figure looks like an elliptic (resp. hyperbolic) parabola, as seen in Figure \[fig:parabola\] in the introduction. \[def:peak\] A singular point $p\in M^2$ (which is not a cuspidal edge) is called a [*peak*]{} if there exists a coordinate neighborhood $(U;u,v)$ of $p$ such that 1. \[item:peak-1\] there are no singular points other than cuspidal edges on $U\setminus \{p\}$, 2. \[item:peak-2\] the rank of the derivative $f_*\colon{}T_pM^2\to T_{f(p)}N^3$ at $p$ is equal to $1$, and 3. \[item:peak-3\] the singular set in $U$ consists of finitely many regular $C^1$-curves starting at $p$. The number $2m(p)$ of these curves is called the [*number of cuspidal edges*]{} starting at $p$. 
If a peak is a non-degenerate singular point, it is called a [*non-degenerate peak*]{}. Swallowtails are examples of non-degenerate peaks. A front whose singular points are all either cuspidal edges or peaks is called [*a front which admits at most peaks*]{}. There are degenerate singular points which are not peaks. Typical examples are cone-like singularities, which appear in rotationally symmetric surfaces in ${\boldsymbol{R}}^3$ of positive constant Gaussian curvature. However, since generic fronts (in the local sense) have only cuspidal edges and swallowtails, the class of fronts which admit at most peaks is sufficiently wide. \[ex:degenerate-peak\] Define a map $f\colon{}{\boldsymbol{R}}^2\to{\boldsymbol{R}}^3$ as $$f(u,v) := (2u^3-uv^2,3u^4-u^2v^2,v).$$ Then $$\nu = \frac{1}{\sqrt{1+4u^2(1+u^2v^2)}}(-2u,1,-2u^2v)$$ is the unit normal vector to $f$. The pull-back of the canonical metric of $T_1{\boldsymbol{R}}^3={\boldsymbol{R}}^3\times S^2$ by $(f,\nu)\colon{}{\boldsymbol{R}}^2\to{\boldsymbol{R}}^3\times S^2$ is positive definite. Hence $f$ is a front. The signed area density function is $\lambda = (v^2-6u^2)\sqrt{1+4u^2(1+u^2v^2)}$, and then the singular set is $\Sigma_f=\{v=\sqrt{6}u\}\cup \{v=-\sqrt{6}u\}$. In particular, $d\lambda =0$ at $(0,0)$. The first fundamental form of $f$ is expressed as $ds^2=dv^2$ at the origin, which is of rank one. Hence the origin is a degenerate peak (see Figure \[fig:double-swallow\]). To analyze the behavior of the singular curvature near a peak, we prepare the following proposition. \[Boundedness of the singular curvature measure\] \[prop:bounded-curvature-measure\] Let $f\colon{}M^2\to (N^3,g)$ be a front with a peak $p$, and let $\gamma\colon{}[0,\varepsilon)\to M^2$ be a singular curve of $f$ starting at the singular point $p$. Then $\gamma(t)$ is a cuspidal edge for $t>0$, and the singular curvature measure $\kappa_s\,ds$ is continuous on $[0,\varepsilon)$, where $ds$ is the arclength-measure. 
In particular, the limiting tangent vector $\displaystyle\lim_{t\to 0}\hat\gamma'(t)/|\hat\gamma'(t)|$ exists, where $\hat\gamma=f\circ \gamma$. Let $ds^2$ be the first fundamental form of $f$. Since $p$ is a peak, the rank of $ds^2$ is $1$ at $p$, so one of its eigenvalues is $0$ and the other is not. The eigenvalues of $ds^2$ therefore have multiplicity one on a neighborhood of $p$, and one can choose a local coordinate system $(u,v)$ around $p$ such that each coordinate curve is tangent to an eigendirection of $ds^2$. In particular, we can choose $(u,v)$ such that $\partial/\partial v$ is the null vector field on $\gamma$. In such a coordinate system, $f_v=0$ and $D_t f_v=0$ hold on $\gamma$. Then the derivatives of $\hat\gamma=f\circ\gamma$ are $$\hat\gamma' = u'f_u,\qquad D_t\hat\gamma' = u'' f_u + u'D_t f_u\qquad \left('=\frac{d}{dt}\right),$$ where $\gamma(t)=\bigl(u(t),v(t)\bigr)$. Hence $$\label{eq:singular-curvature-peak} \kappa_s = \pm \frac{\mu_g(\hat\gamma',D_t\hat\gamma',\nu)}{|\hat\gamma'|^3} = \pm \frac{\mu_g(f_u,D_t f_u,\nu)}{|u'|\,|f_u|^3},$$ where $|X|^2=g(X,X)$ for $X\in TN^3$. Since $ds=|\hat\gamma'|\,dt=|u'|\,|f_u|\,dt$ and $f_u\neq 0$, $$\kappa_s\,ds =\pm \frac{\mu_g(f_u,D_tf_u,\nu)}{|f_u|^2}\,dt$$ is bounded. To analyze the behavior of the singular curvature near a non-degenerate peak, we give another expression of the singular curvature measure: \[prop:bdd2\] Let $(u,v)$ be an adapted coordinate system of $M^2$. Suppose that $(u,v)=(0,0)$ is a non-degenerate peak. Then the singular curvature measure has the expression $$\label{eq:sing-3} \kappa_s(u)ds={\operatorname{sgn}}(\lambda_v)\frac{\mu_g(f_v,f_{uv},\nu)}{|f_v|^2}du,$$ where $ds$ is the arclength-measure and $f_{uv}:=D_uf_v=D_vf_u$. In particular, the singular curvature measure is smooth along the singular curve. We can take the null direction $\eta(u)=a(u)(\partial/\partial u)+ e(u)(\partial/\partial v)$ as in \eqref{eq:null-vector-adapted}. 
Since the peak is not a cuspidal edge, $\eta(0)$ must be proportional to $\partial/\partial u$. In particular, we can multiply $\eta(u)$ by a non-vanishing function and may assume that $a(u)=1$. Then $f_u+e(u)f_v=0$, and by differentiation we have $f_{uu}+e_uf_v+e f_{uv}=0$, that is, $$f_u=-e f_v,\quad f_{uu}=-e_u f_v-e f_{uv}.$$ Substituting these into \eqref{eq:singular-curvature-peak}, we obtain \eqref{eq:sing-3}, using the relation $ds=|\hat \gamma'|\,dt=|f_u|\,dt$. \[Behavior of the singular curvature near a non-degenerate peak\] \[cor:singular-curvature-peak\]  At a non-degenerate peak, the singular curvature diverges to $-\infty$. We take an adapted coordinate $(u,v)$ centered at the peak. Then $$\kappa_s(u) ={\operatorname{sgn}}(\lambda_v) \frac{\mu_g(f_{v},f_{uv},\nu)}{|e(u)|\, |f_v|^3}.$$ On the other hand, $$\mu_g(f_{v},f_{uv},\nu)= \mu_g(f_{v},f_{u},\nu)_v-\mu_g(f_{vv},f_{u},\nu) = (-\lambda)_v-\mu_g(f_{vv},f_{u},\nu).$$ Since $f_u(0,0)=0$ we have $$\left.{\operatorname{sgn}}(\lambda_v) \frac{\mu_g(f_{v},f_{uv},\nu)}{|f_v|^3}\right|_{(u,v)=(0,0)} \!\!= -\frac{|\lambda_v(0,0)|\hphantom{^3}}{|f_v(0,0)|^3}<0.$$ Since $e(u)\to 0$ as $u\to 0$, we have the assertion. \[ex:swallowtail\] The typical example of peaks is a swallowtail. We shall compute the singular curvature of the swallowtail $f(u,v)=(3u^4+u^2v ,4u^3+2uv,v)$ at $(u,v)=(0,0)$ given in the introduction, which is the discriminant set $\{(x,y,z)\in {\boldsymbol{R}}^3\,;\, F(x,y,z,s)=F_{s}(x,y,z,s)=0~ \text{for some $s\in {\boldsymbol{R}}$}\}$ of the polynomial $F:=s^4+zs^2+ys+x$ in $s$. Since $f_u\times f_v=2(6u^2+v)(1,-u,u^2)$, the singular curve is $\gamma(t)=(t,-6t^2)$ and the unit normal vector is given by $\nu=(1,-u,u^2)/\sqrt{1+u^2+u^4}$. We have $$\kappa_s(t)= \frac{\det(\hat\gamma',\hat\gamma'',\nu)}{|\hat\gamma'|^3} =-\frac{\sqrt{1+t^2+t^4}}{6|t|(1+4t^2+t^4)^{3/2}},$$ which shows that the singular curvature tends to $-\infty$ as $t\to 0$. \[def:null\] Let $f\colon{}M^2\to N^3$ be a front. 
A regular curve $\sigma(t)$ in $M^2$ is called a [*null curve*]{} of $f$ if $\sigma'(t)$ is a null vector at each singular point. In fact, even though $\hat \sigma'=0$ at the singular point, the image $\hat \sigma(t)=f\bigl(\sigma(t)\bigr)$ looks like a curve transversal to the cuspidal edge, and $D_t\hat \sigma'$ gives the “tangential” direction of the surface at the singular point. \[thm:PN-cuspidal-edge\] Let $p$ be a cuspidal edge, $\gamma(t)$ a singular curve parametrized by the arclength $t$ with $\gamma(0)=p$, and $\sigma(s)$ a null curve passing through $p=\sigma(0)$. Then the sign of $$g\bigl(\ddot{\hat \sigma}(0),\hat \gamma''(0)\bigr)$$ coincides with that of the singular curvature at $p$, where $\hat\sigma=f(\sigma)$, $\hat\gamma=f(\gamma)$, $$\dot{\hat\sigma}=\frac{d\hat\sigma}{ds},\qquad \hat\gamma'=\frac{d\hat\gamma}{dt},\qquad \ddot{\hat\sigma}=D_s\left(\frac{d\hat\sigma}{ds}\right), \quad\text{and}\quad \hat\gamma''=D_t\left(\frac{d\hat\gamma}{dt}\right).$$ We can take an adapted coordinate system $(u,v)$ around $p$ such that $\eta:=\partial/\partial v$ is a null vector field on the $u$-axis. Then $f_v=f_*\eta$ vanishes on the $u$-axis, and it holds that $f_{uv}:=D_v f_u=D_u f_v=0$ on the $u$-axis. Since the $u$-axis is parametrized by the arclength, we have $$\label{eq:u-axis-unit-length} g(f _{uu},f_u)=0 \qquad \text{on the $u$-axis} \qquad\left(f_{uu}=D_u f_u\right).$$ Now let $\sigma(s)=\bigl(u(s),v(s)\bigr)$ be a null curve such that $\sigma(0)=(0,0)$. Since $\dot\sigma(0)$ is a null vector, $\dot u(0)=0$, where $\dot{~}=d/ds$. 
Moreover, since $f_v(0,0)=0$ and $f_{uv}(0,0)=0$, we have $$\begin{aligned} \ddot{\hat \sigma}(0) &= D_s (\dot u f_u+ \dot v f_v) = \ddot u f_u+ \ddot v f_v+ \dot u^2 D_{u} f_u + 2\dot u \dot v D_uf_v +\dot v^2D_v f_v \\ &= \ddot u f_u+ \dot v^2 D_v f_v = \ddot u f_u(0,0)+ \dot v^2 f_{vv}(0,0), \end{aligned}$$ and by the relation $g(f_{uu},f_u)=0$, $$g\bigl(\ddot{\hat\sigma}(0),\hat\gamma''(0)\bigr) =g\bigl(f_{uu}(0,0),\ddot u f_u+ \dot v^2 f_{vv}(0,0)\bigr) =\dot v^2 g\bigl(f_{uu}(0,0),f_{vv}(0,0)\bigr).$$ Now we can write $f_{vv}=a f_u+b(f_u\times_g \nu)+c \nu$, where $a,b,c\in {\boldsymbol{R}}$. Then $$\begin{aligned} c&=g(f_{vv},\nu)=g(f_v,\nu)_v-g(f_v,\nu_{v})=0,\\ b&=g(f_{vv}, f_u\times_g \nu)= g(f_v, f_u\times_g \nu)_v=-\lambda_v, \end{aligned}$$ where we apply the scalar triple product formula $g(X, Y\times_g Z)=\mu_g(X,Y,Z)$ for $X,Y,Z\in T_{f(0,0)}N^3$. Thus $$\begin{aligned} g\bigl(\ddot{\hat\sigma}(0),\hat\gamma''(0)\bigr) &=\dot v^2 g\bigl(f_{uu}, a f_u-\lambda_v(f_u\times_g \nu)\bigr) =-\dot v^2 \lambda_v\,g(f_{uu}, f_u\times_g \nu)\\ &=\dot v^2 \lambda_v\,\mu_g(\hat \gamma',\hat \gamma'',\nu) = \dot v^2|\lambda_v|\,\kappa_s(0). \end{aligned}$$ This proves the assertion. In the case of fronts in the Euclidean $3$-space ${\boldsymbol{R}}^3=({\boldsymbol{R}}^3,g_0)$, positively curved cuspidal edges and negatively curved cuspidal edges look like a cuspidal elliptic parabola or a cuspidal hyperbolic parabola (see Example \[ex:parabola\] and Figure \[fig:parabola\]), respectively. More precisely, we have the following: \[cor:positive-negative-cuspidal-edge\] Let $f\colon{}M^2\to({\boldsymbol{R}}^3,g_0)$ be a front, $p\in M^2$ a cuspidal edge point and $\gamma$ a singular curve with $\gamma(0)=p$. Let $T$ be the rectifying plane of the singular curve $\hat\gamma=f\circ\gamma$ at $p$, that is, the plane perpendicular to the principal normal vector of $\hat\gamma$. When the singular curvature at $p$ is positive [(]{}resp.
negative[)]{}, every null curve $\sigma(s)$ passing through $\sigma(0)=p$ lies on the same side $D_+$ [(]{}resp. the opposite side $D_-$[)]{} of the principal normal vector of $\hat\gamma$ at $p$ for sufficiently small $s$. Moreover, if the singular curvature is positive, the image of a neighborhood of $p$ itself lies in $D_+$ [(]{}see Figures \[fig:parabola\] and \[fig:principal-space\][)]{}. [Figure: a positively curved cuspidal edge (left) and a negatively curved one (right); the principal half-spaces are behind the rectifying plane.] \[def:principal\] The half-space in Corollary \[cor:positive-negative-cuspidal-edge\] bounded by the rectifying plane of the singular curve and in which the null curves lie is called the [*principal half-space*]{} at the cuspidal edge. The surface lies mostly in this half-space. When the singular curvature is positive, the surface is locally inside the principal half-space. Let $(u,v)$ be the same coordinate system at $p$ as in the proof of Proposition \[thm:PN-cuspidal-edge\] and assume $f(0,0)=0$. Since $N^3={\boldsymbol{R}}^3$, with $f_{uu}=\partial^2 f/\partial u^2$ etc., we have the following Taylor expansion: $$\label{eq:taylor} f(u,v) = u\,f_u(0,0) + \frac{1}{2}\bigl(f_{uu}(0,0)u^2 + f_{vv}(0,0)v^2\bigr)+ o(u^2+v^2).$$ Here, $u$ is the arclength parameter of $\hat\gamma(u)=f(u,0)$. Then $g_0(f_u,f_{uu})=0$ holds on the $u$-axis. Thus $$g_0\bigl(f(u,v),f_{uu}(0,0)\bigr) =\frac{1}{2}u^2|f_{uu}(0,0)|^2 + \frac{1}{2}v^2 g_0\bigl(f_{vv}(0,0),f_{uu}(0,0)\bigr) + o(u^2+v^2).$$ If the singular curvature is positive, Theorem \[thm:PN-cuspidal-edge\] implies $g_0\bigl(f(u,v),f_{uu}(0,0)\bigr)>0$ on a neighborhood of $p$. Since $f_{uu}(0,0)$ is the principal curvature vector of $\hat\gamma$ at $p$, $f(u,v)$ lies in the same side of $T$ as the principal normal. Next we suppose that the singular curvature is negative at $p$. We can choose a coordinate system in which the null curve is written as $\sigma(v)=(0,v)$.
Then by the Taylor expansion above and Theorem \[thm:PN-cuspidal-edge\], $$g_0\bigl(f(0,v),f_{uu}(0,0)\bigr) = \frac{v^2}{2}\, g_0\bigl(f_{vv}(0,0),f_{uu}(0,0)\bigr)+o(v^2)<0$$ for sufficiently small $v$. Hence we have the conclusion. \[ex:chebyshev\] A front $f\colon{}M^2\to {\boldsymbol{R}}^3$ is said to be of [*constant Gaussian curvature $-1$*]{} if the set $W=M^2\setminus\Sigma_f$ of regular points is dense in $M^2$ and $f$ has constant Gaussian curvature $-1$ on $W$. Then $f$ is a projection of the Legendrian immersion $L_f\colon{}M^2\to T_1{\boldsymbol{R}}^3$, and the pull-back $d\sigma^2=|df|^2+|d\nu|^2$ of the Sasakian metric on $T_1{\boldsymbol{R}}^3$ by $L_f$ is flat. Thus for each $p\in M^2$, there exists a coordinate neighborhood $(U;u,v)$ such that $d\sigma^2=2(du^2+dv^2)$. The two different families of asymptotic curves on $W$ are all geodesics of $d\sigma^2$, giving two foliations of $W$. Moreover, they are mutually orthogonal with respect to $d\sigma^2$. Then one can choose the $u$-curves and $v$-curves to all be asymptotic curves on $W\cap U$. For such a coordinate system $(u,v)$, the first and second fundamental forms are $$\label{eq:chebyshev-front} ds^2 = du^2 + 2\cos\theta\,du\,dv + dv^2 ,\qquad h= 2\sin\theta\,du\,dv,$$ where $\theta=\theta(u,v)$ is the angle between the two asymptotic curves. Such a coordinate system $(u,v)$ is called the [*asymptotic Chebyshev net*]{} around $p$. The sine-Gordon equation $\theta_{uv}=\sin\theta$ is the integrability condition of this system, that is, if $\theta$ satisfies the sine-Gordon equation, then there exists a corresponding front $f=f(u,v)$. For such a front, we can choose the unit normal vector $\nu$ such that $f_u \times f_v = \sin\theta \,\nu$ holds, that is, $\lambda=\sin\theta$. The singular set is characterized by $\theta\in \pi{\boldsymbol{Z}}$. We write $\varepsilon = e^{i \theta}=\pm 1$ at a singular point. A given singular point is non-degenerate if and only if $d\theta\ne 0$.
Moreover, the cuspidal edges are characterized by $\theta_u-\varepsilon\theta_v\ne 0$, and the swallowtails are characterized by $\theta_u+\varepsilon\theta_v\ne 0$, $\theta_u-\varepsilon\theta_v= 0$ and $\theta_{uu}+\theta_{vv}\ne 0$. By a straightforward calculation applying Proposition \[prop:intrinsic-singular-curvature\], we have $$\kappa_s=-\varepsilon\frac{\theta_u\theta_v}{ |\theta_u-\varepsilon \theta_v|} \qquad (\varepsilon=e^{i \theta}).$$ Recently Ishikawa-Machida [@IM] showed that the generic singularities of such fronts are cuspidal edges or swallowtails, as an application of Fact \[fact:intrinsic-criterion\]. The Gauss-Bonnet theorem {#sec:GB} ======================== In this section, we shall generalize the two types of Gauss-Bonnet formulas mentioned in the introduction to compact fronts which admit at most peaks. \[thm:gaussian-curvature-extended\] Let $f\colon{}M^2\to (N^3,g)$ be a front, and $K$ the Gaussian curvature of $f$ which is defined on the set of regular points of $f$. Then $K\,d\hat A$ can be continuously extended as a globally defined $2$-form on $M^2$, where $d\hat A$ is the signed area form. Let $(u,v)$ be a local coordinate system compatible with the orientation of $M^2$, and $S=(S^i_j)$ (the matrix representation of) the shape operator of $f$ which is defined on the set of regular points $M^2\setminus\Sigma_f$. That is, the Weingarten equation holds: $$\nu_u = - S^1_1 f_u - S^2_1 f_v,\quad \nu_v = - S^1_2 f_u - S^2_2 f_v,\quad \text{where } \nu_u=D_{u}\nu,~ \nu_v=D_{v}\nu.$$ Since the extrinsic curvature is defined as $K_{{\operatorname{ext}}}=\det S$, we have $$\mu_g(\nu_u,\nu_v,\nu)= (\det S)\,\mu_g(f_u,f_v,\nu)= K_{{\operatorname{ext}}}\,\lambda,$$ where $\lambda$ is the signed area density. Thus, $$K_{{\operatorname{ext}}}\,d\hat A = K_{{\operatorname{ext}}}\,\lambda\,du\wedge dv = \mu_g(\nu_u,\nu_v,\nu)\,du\wedge dv$$ is a well-defined smooth $2$-form on $M^2$.
By the Gauss equation, the Gaussian curvature $K$ satisfies $$\label{eq:k-int-ext} K = c_{N^3}+K_{{\operatorname{ext}}},$$ where $c_{N^3}$ is the sectional curvature of $(N^3,g)$ with respect to the tangent plane. Since the limiting tangent plane containing $f_*T_pM^2\subset T_{f(p)}N^3$ is the orthogonal complement $\nu(p)^{\perp}$ of the normal vector $\nu(p)$, the tangent plane is well-defined on all of $M^2$. Thus $c_{N^3}$ is a smooth function, and $$K\,d\hat A = c_{N^3}\,d\hat A + K_{{\operatorname{ext}}}\,d\hat A$$ is a smooth $2$-form defined on $M^2$. \[rmk:abs\] On the other hand, $$K\,dA = \begin{cases} \hphantom{-}K\,d\hat A\qquad & (\text{on $M_+$}),\\ -K\,d\hat A\qquad & (\text{on $M_-$}) \end{cases}$$ is bounded, and extends continuously to the closure of $M_+$ and also to the closure of $M_-$. (However, $K\,dA$ cannot be extended continuously to all of $M^2$.) Now we suppose that $M^2$ is compact and $f\colon{}M^2\to {\boldsymbol{R}}^3$ is a front which admits at most peak singularities. Then the singular set coincides with $\partial M_+=\partial M_-$, and $\partial M_+$ and $\partial M_-$ are piecewise $C^1$-differentiable because all singularities are at most peaks, and the limiting tangent vector of each singular curve starting at a peak exists by Proposition \[prop:bounded-curvature-measure\]. For a given peak $p$, let $\alpha_+(p)$ (resp. $\alpha_-(p)$) be the sum of all the interior angles of $f(M_+)$ (resp. $f(M_-)$) at $p$. Then by definition, we have $$\label{eq:sum-interior-angle} \alpha_+(p)+\alpha_-(p)=2\pi.$$ Moreover, since the rank of $f_*$ is one at $p$, we have (see [@SUY2]) $$\label{eq:angle-range} \alpha_+(p),\,\,\alpha_-(p) \in \{0,\pi,2\pi\}.$$ For example, $\alpha_+(p)=\alpha_-(p)=\pi$ when $p$ is a cuspidal edge. If $p$ is a swallowtail, $\alpha_+(p)=2\pi$ or $\alpha_-(p)=2\pi$. If $\alpha_+(p)=2\pi$, $p$ is called a [*positive swallowtail*]{}, and is called a [*negative swallowtail*]{} if $\alpha_-(p)=2\pi$ (see Figure \[fig:positive-negative-swallowtail\]).
If the image of $M_+$ is the tail part, $\alpha_+(p)=0$. Since $K\,dA$, $K\,d\hat A$ and $\kappa_s\, ds$ are all bounded, we get two Gauss-Bonnet formulas as follows: \[thm:Gauss-Bonnet-compact\] Let $M^2$ be a compact oriented $2$-manifold and $f\colon{}M^2\to (N^3,g)$ a front which admits at most peak singularities, and $\Sigma_f$ the singular set of $f$. Then $$\begin{aligned} \int_{M^2}K\, dA+2\int_{\Sigma_f} \kappa_s\, ds&= 2\pi \chi(M^2), \label{eq:Gauss-Bonnet-unsigned}\\ \int_{M^2}K\, d\hat A-\sum_{p:\text{peak}} \bigl(\alpha_+(p)-\alpha_-(p)\bigr) &=2\pi\bigl(\chi(M_+)-\chi(M_-) \bigr) \label{eq:Gauss-Bonnet-signed} \end{aligned}$$ hold, where $ds$ is the arclength measure on the singular set. \[rem:euler-characteristic\] The integral $\int_{M^2}K\, d\hat A$ is $2\pi$ times the Euler number $\chi_{{\mathcal{E}}}^{}$ of the limiting tangent bundle ${\mathcal{E}}$ (see Section \[sec:intrinsic\]). When $N^3={\boldsymbol{R}}^3$, $\chi_{\mathcal{E}}^{}/2$ is equal to the degree of the Gauss map. \[rem:generalization\] These formulas are generalizations of the two Gauss-Bonnet formulas in the introduction. If the surface is regular, the limiting tangent bundle ${\mathcal{E}}$ coincides with the tangent bundle, and the two Gauss-Bonnet formulas are the same. Although $\partial M_+$ and $\partial M_-$ are the same set, their orientations are opposite. The singular curvature $\kappa_s$ does not depend on the orientation of the singular curve and coincides with the limit of the geodesic curvature if we take the conormal vector in the positive direction with respect to the velocity vector of the singular curve. Thus we have $$\label{eq:integral-of-singular-curvature} \int_{\partial M_+}\!\! \kappa_s\, ds +\int_{\partial M_-}\!\!
\kappa_s\, ds= 2\int_{\Sigma_f} \kappa_s\, ds.$$ Then by the classical Gauss-Bonnet theorem, we have $$\begin{aligned} 2\pi\chi(M_+) &=\int_{M_+} K\,dA+\int_{\partial M_+}\!\!\kappa_s ds + \sum_{p:\text{peak}}\bigl(\pi m(p)-\alpha_+(p)\bigr),\\ 2\pi\chi(M_-) &=\int_{M_-} K\,dA+\int_{\partial M_-}\!\!\kappa_s\, ds + \sum_{p:\text{peak}}\bigl(\pi m(p)-\alpha_-(p)\bigr), \end{aligned}$$ where $2m(p)$ is the number of cuspidal edges starting at $p$ (see Definition \[def:peak\]). Hence by the relation above, $$\begin{aligned} 2\pi\chi(M^2) &= \int_{M^2} K\,dA + 2\int_{\Sigma_f}\kappa_s\,ds,\\ 2\pi\bigl(\chi(M_+)-\chi(M_-)\bigr) &= \int_{M^2} K\,d\hat A - \sum_{p:\text{peak}} \bigl(\alpha_+(p)-\alpha_-(p)\bigr), \end{aligned}$$ where we used the relation $\alpha_+(p)+\alpha_-(p)=2\pi$ and $\chi(M^2)=\chi(M_+)+\chi(M_-)- \sum_{p:\text{peak}}\bigl(m(p)-1\bigr)$. We shall now define the completeness of fronts and give Gauss-Bonnet formulas for non-compact fronts: As defined in [@KUY2], a front $f\colon{}M^2\to N^3$ is called [*complete*]{} if the singular set is compact and there exists a symmetric tensor $T$ with compact support such that $ds^2+T$ gives a complete Riemannian metric on $M^2$, where $ds^2$ is the first fundamental form of $f$. On the other hand, as defined in [@KRSUY], a front $f\colon{}M^2\to N^3$ is called [*weakly complete*]{} if the pull-back of the Sasakian metric of $T_1N^3$ by the Legendrian lift $L_f\colon{}M^2\to T_1N^3$ is complete. Completeness implies weak completeness. Let $f\colon{}M^2\to N^3$ be a complete front with finite absolute total curvature. Then there exists a compact $2$-manifold $\overline M^2$ without boundary and finitely many points $p_1,\dots,p_k$ such that $M^2$ is diffeomorphic to $\overline M^2\setminus \{p_1,\dots,p_k\}$. We call the $p_i$'s the [*ends*]{} of the front $f$.
According to Theorem A of Shiohama [@S], we define the limiting area growth order $$\label{eq:area-growth} a(p_i)=\lim_{r\to \infty} \frac{{\operatorname{Area}}\bigl(B_0(r)\cap E_i\bigr)}{ {\operatorname{Area}}\bigl(B_{{\boldsymbol{R}}^2}(r)\cap E_i\bigr)},$$ where $E_i$ is the punctured neighborhood of $p_i$ in $\overline{M}^2$. \[thm:Gauss-Bonnet-complete\] Let $f\colon{}M^2\to (N^3,g)$ be a complete front with finite absolute total curvature, which has at most peak singularities, and write $M^2=\overline M^2\setminus\{p_1,\dots,p_k\}$. Then $$\begin{aligned} &\int_{M^2}\!\!K\, dA+2\int_{\Sigma_f} \kappa_s\, ds +\sum_{i=1}^ka(p_i)=2\pi\chi(M^2), \label{eq:Gauss-Bonnet-complete-unsigned}\\ &\int_{M^2}\!\!K\, d\hat A-\sum_{p:\text{peak}} \bigl(\alpha_+(p)-\alpha_-(p)\bigr) +\sum_{i=1}^k\varepsilon(p_i)a(p_i) =2\pi\bigl(\chi(M_+)-\chi(M_-)\bigr) \label{eq:Gauss-Bonnet-complete-signed} \end{aligned}$$ hold, where $\varepsilon(p_i)=1$ [(]{}resp. $\varepsilon(p_i)=-1$[)]{} if the neighborhood $E_i$ of $p_i$ is contained in $M_{+}$ $($resp. $M_{-})$. \[ex:pseudosphere\] Define $f\colon{}{\boldsymbol{R}}^2\to{\boldsymbol{R}}^3$ as $$f(x,y):= \left( {\operatorname{sech}}x\, \cos y, {\operatorname{sech}}x\, \sin y, x - \tanh x \right).$$ If we set $\nu:=(\tanh x\,\cos y, \tanh x\,\sin y, {\operatorname{sech}}x)$, then $\nu$ is the unit normal vector and $f$ is a front whose singular set $\{x=0\}$ consists of cuspidal edges. The Gaussian curvature of $f$ is $-1$, and the coordinate system $(u,v)$ defined as $x=u-v$, $y=u+v$ is the asymptotic Chebyshev net (see Example \[ex:chebyshev\]) with $\theta=4\arctan\exp(u-v)-\pi$. Since $f(x,y+2\pi)=f(x,y)$, $f$ induces a smooth map $f_1$ from the cylinder $M^2={\boldsymbol{R}}^2/\{(0,2\pi m)\,;\,m\in{\boldsymbol{Z}}\}$ into ${\boldsymbol{R}}^3$. The front $f_1\colon{}M^2\to {\boldsymbol{R}}^3$ has two ends $p_1$, $p_2$ with growth order $a(p_j)=0$.
Hence by Theorem \[thm:Gauss-Bonnet-complete\], we have $$2\int_{\Sigma_{f_1}}\!\!\kappa_s\,ds = {\operatorname{Area}}(M^2)=4\pi.$$ In fact, the singular curvature is positive. \[ex:kuen\] The smooth map $f\colon{}{\boldsymbol{R}}^2\to{\boldsymbol{R}}^3$ defined as $$f(x,y)= \frac{1}{1+2(1+2y^2)e^{2x}+e^{4x}} \begin{pmatrix} 4 e^x(1+e^{2x})(\cos y+y\sin y)\\ 4 e^x(1+e^{2x})(\sin y+y\cos y)\\ 2 + 2x(1+2y^2)e^{2x}+(x-2)e^{4x} \end{pmatrix}$$ is called [*Kuen's surface*]{}, which is considered as a weakly complete front with the unit normal vector $$\nu(x,y)= \frac{1}{1+2(1+2y^2)e^{2x}+e^{4x}} \begin{pmatrix} 8e^{2x} y \cos y - (1+2(1-2y^2)e^{2x}+e^{4x})\sin y\\ 8e^{2x} y \sin y + (1+2(1-2y^2)e^{2x}+e^{4x})\cos y\\ 4e^{x}(1-e^{2x})y \end{pmatrix},$$ and has Gaussian curvature $-1$. The coordinate system $(u,v)$ such that $x=u-v$ and $y=u+v$ is the asymptotic Chebyshev net with $\theta=-4\arctan\bigl(2ye^x/(1+e^{2x})\bigr)$. Since the singular set $\Sigma_f=\{y=0\}\cup \{y=\pm\cosh x\}$ is non-compact, $f$ is not complete. \[ex:cone\] Define $f\colon{}{\boldsymbol{R}}^2\setminus\{(0,0)\}\to {\boldsymbol{R}}^3$ as $$f (x,y) = (\log r\cos\theta,\log r\sin\theta,a\log r), \qquad (x,y) = (r\cos\theta,r\sin\theta),$$ where $a\neq 0$ is a constant. Then $f$ is a front with $\nu=(a\cos\theta,a\sin\theta,-1)/\sqrt{1+a^2}$. The singular set is $\Sigma_f=\{r=1\}$, which corresponds to the single point $(0,0,0)\in{\boldsymbol{R}}^3$. That is, all points in $\Sigma_f$ are degenerate singular points. The image of the singular points is a cone of angle $\mu=2\pi/\sqrt{1+a^2}$ and the area growth order of each of the two ends is $1/\sqrt{1+a^2}$. Theorem \[thm:Gauss-Bonnet-complete\] cannot be applied to this example because the singularities degenerate. However, this example suggests that it might be natural to define the “singular curvature measure” at a cone-like singularity as the cone angle.
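The closed-form singular curvature of Example \[ex:swallowtail\] can also be checked numerically. The following sketch (in Python with numpy; an illustration added here, not part of the original text) evaluates $\kappa_s(t)=\det(\hat\gamma',\hat\gamma'',\nu)/|\hat\gamma'|^3$ along the singular curve of the swallowtail and compares it with the closed form stated there:

```python
import numpy as np

# Numerical check of Example [ex:swallowtail]: the swallowtail
# f(u,v) = (3u^4 + u^2 v, 4u^3 + 2uv, v) has singular curve gamma(t) = (t, -6t^2),
# whose image is gamma_hat(t) = f(t, -6t^2) = (-3t^4, -8t^3, -6t^2).

def kappa_s(t):
    """kappa_s(t) computed as det(gamma_hat', gamma_hat'', nu) / |gamma_hat'|^3."""
    d1 = np.array([-12*t**3, -24*t**2, -12*t])   # gamma_hat'(t)
    d2 = np.array([-36*t**2, -48*t, -12.0])      # gamma_hat''(t)
    nu = np.array([1.0, -t, t**2]) / np.sqrt(1 + t**2 + t**4)  # unit normal
    det = np.linalg.det(np.column_stack([d1, d2, nu]))         # scalar triple product
    return det / np.linalg.norm(d1)**3

def kappa_s_closed(t):
    """Closed form from the example."""
    return -np.sqrt(1 + t**2 + t**4) / (6*abs(t)*(1 + 4*t**2 + t**4)**1.5)

for t in (0.1, -0.2, 0.5):
    assert abs(kappa_s(t) - kappa_s_closed(t)) < 1e-10
```

The two expressions agree up to rounding error, and their common value becomes arbitrarily negative as $t\to 0$, illustrating the divergence $\kappa_s\to-\infty$ at the peak (Corollary \[cor:singular-curvature-peak\]).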
Behavior of the Gaussian curvature {#sec:gaussian-curvature} ================================== Firstly, we shall prove the following assertion, which says that the shape of singular points is very restricted when the Gaussian curvature is bounded. \[thm:gaussian-singular\] Let $f\colon{}M^2\to (N^3,g)$ be a front, $p\in M^2$ a singular point, and $\gamma(t)$ a singular curve consisting of non-degenerate singular points with $\gamma(0)=p$ defined on an open interval $I\subset{\boldsymbol{R}}$. Then the Gaussian curvature $K$ is bounded on a sufficiently small neighborhood of $\gamma(I)$ if and only if the second fundamental form vanishes on $\gamma(I)$. Moreover, if the extrinsic curvature $K_{{\operatorname{ext}}}$ [(]{}i.e. the product of the principal curvatures[)]{} is non-negative on $U\setminus\gamma(I)$ for a neighborhood $U$ of $p$, then the singular curvature is non-positive. Furthermore, if $K_{{\operatorname{ext}}}$ is bounded below by a positive constant on $U\setminus\gamma(I)$ then the singular curvature at $p$ takes a strictly negative value. In particular, when $(N^3,g)=({\boldsymbol{R}}^3,g_0)$, the singular curvature is non-positive if the Gaussian curvature $K$ is non-negative near the singular set. \[Proof of the first part of Theorem \[thm:gaussian-singular\]\] We shall now prove the first part of the theorem. Take an adapted coordinate system $(u,v)$ such that the singular point $p$ corresponds to $(0,0)$, and write the second fundamental form of $f$ as $$\label{eq:second-coef} h=L\,du^2+2\,M\,du\,dv+N\,dv^2\quad \left( \begin{array}{r@{}l} L&=-g(f_u,\nu_u),~ N=-g(f_v,\nu_v),\\ M&=-g(f_v,\nu_u)=-g(f_u,\nu_v) \end{array} \right).$$ Since $f_u$ and $f_v$ are linearly dependent on the $u$-axis, $LN-(M)^2$ vanishes on the $u$-axis, as does the area density function $\lambda(u,v)$.
Then by the Malgrange preparation theorem (see [@GG page 91]), there exist smooth functions ${\varphi}(u,v)$, $\psi(u,v)$ such that $$\label{eq:expand-margrange} \lambda(u,v)=v{\varphi}(u,v)\qquad\text{and}\qquad LN-(M)^2=v \psi(u,v).$$ Since the singular points on the $u$-axis are non-degenerate, $\lambda_v\neq 0$ holds. Hence ${\varphi}(u,v)\neq 0$ on a neighborhood of the origin. Firstly, we consider the case where $p$ is a cuspidal edge point. Then we can choose $(u,v)$ so that $\partial/\partial v$ gives the null direction. Since $f_v=0$ holds on the $u$-axis, we have $M=N=0$. By the Gauss equation and these expressions, we have $K=c_{N^3}+\psi(u,v)/\bigl(v {\varphi}(u,v)^2\bigr)$. Thus the Gaussian curvature is bounded if and only if $$L(u,0)N_v(u,0)= \left.\bigl(LN-(M)^2\bigr)_v\right|_{v=0}=\psi(u,0)=0$$ holds on the $u$-axis. To prove the assertion, it is sufficient to show that $N_v(0,0)\ne 0$. Since $\lambda_v=\mu_g(f_{u},f_{vv},\nu)\neq 0$, $\{f_{u},f_{vv},\nu\}$ is linearly independent. Here, we have $$2g(\nu_v,\nu)=g(\nu,\nu)_v=0\qquad\text{and}\qquad \left. g(\nu_v,f_{u})\right|_{v=0}=-M=0.$$ Thus $\nu_v=0$ if and only if $g(\nu_v,f_{vv})=0$. On the other hand, $\nu_v(0,0)\ne 0$ holds, since $f$ is a front and $f_v=0$. Thus we have $$\label{eq:Nv-neq-zero} N_v(0,0)=-g(f_v,\nu_v)_v=-g(\nu_v,f_{vv})\ne 0.$$ Hence the first part of Theorem \[thm:gaussian-singular\] is proved for cuspidal edges. Next we consider the case that $p$ is not a cuspidal edge point. We again take an adapted coordinate system $(u,v)$; since $p$ is not a cuspidal edge, the null direction at $p$ is tangent to the singular curve, and hence $f_u(0,0)=0$ holds. Then we have $M(0,0)=L(0,0)=0$, and thus the Gaussian curvature is bounded if and only if $$L_v(u,0)N(u,0)= \left.\bigl(LN-(M)^2\bigr)_v\right|_{v=0}=\psi(u,0)=0$$ holds on the $u$-axis. Thus, to prove the assertion, it is sufficient to show that $L_v(0,0)\ne 0$. Since $\lambda_v=\mu_g(f_{uv},f_{v},\nu)$ does not vanish, $\{f_{uv},f_{v},\nu\}$ is linearly independent. On the other hand, $\nu_u(0,0)\ne 0$, because $f$ is a front and $f_u(0,0)=0$.
Since $g(\nu,\nu)_v=0$ and $g(\nu_u,f_{v})=-M=0$, we have $$\label{eq:Lv-neq-0} L_v(0,0)=-g(\nu_{u},f_{u})_v=-g(\nu_{u},f_{uv})\ne 0.$$ Hence the first part of the theorem is proved. Before proving the second part of Theorem \[thm:gaussian-singular\], we prepare the following lemma: \[Existence of special adapted coordinates along cuspidal edges\] \[lem:normal-coordinate\] Let $p$ be a cuspidal edge of a front $f\colon{}M^2\to (N^3,g)$. Then there exists an adapted coordinate system $(u,v)$ satisfying the following properties[:]{} 1. \[item:normal-1\] $g(f_u,f_u)=1$ on the $u$-axis, 2. \[item:normal-2\] $f_v$ vanishes on the $u$-axis, 3. \[item:normal-3\] $\lambda_v=1$ holds on the $u$-axis, 4. \[item:normal-4\] $g(f_{vv},f_u)$ vanishes on the $u$-axis, and 5. \[item:normal-5\] $\{f_u,f_{vv},\nu\}$ is a positively oriented orthonormal basis along the $u$-axis. We shall call such a coordinate system $(u,v)$ a [*special adapted coordinate system*]{}. One can easily take an adapted coordinate system $(u,v)$ at $p$ satisfying \[item:normal-1\] and \[item:normal-2\]. Since $\lambda_v\neq 0$ on the $u$-axis, we can choose $(u,v)$ so that $\lambda_v>0$ on the $u$-axis. In this case, $r:=\sqrt{\lambda_v}$ is a smooth function on a neighborhood of $p$. Now we set $$u_1=u,\qquad v_1=\sqrt{\lambda_v(u,0)}\,v.$$ Then the Jacobian matrix is given by $$\frac{\partial (u_1,v_1)}{\partial (u,v)} = \begin{pmatrix} 1 & 0 \\ r'(u)\,v & r(u) \end{pmatrix}, \quad \text{where $r(u):=\sqrt{\lambda_v(u,0)}$.}$$ Thus we have $$\left. (f_{u_1},f_{v_1})\right |_{v=0}= \left. (f_u,f_v) \begin{pmatrix} 1 & 0 \\ -\frac{r'(u)}{r(u)}v & \frac{1}{r(u)} \end{pmatrix} \right |_{v=0}= (f_u,f_v) \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{r(u)} \end{pmatrix}.$$ This implies that $f_{u_1}=f_u$ and $f_{v_1}=0$ on the $u$-axis. Thus the new coordinates $(u_1,v_1)$ satisfy \[item:normal-1\] and \[item:normal-2\].
The signed area density function with respect to $(u_1,v_1)$ is given by $\lambda_1:=\mu_g(f_{u_1},f_{v_1},\nu)$. Since $f_{v_1}=0$ on the $u$-axis, we have $$\label{eq:e1} (\lambda_1)_{v_1}= \mu_g(f_{u_1},D_{v_1}f_{v_1},\nu).$$ On the other hand, we have $$\label{eq:e2} f_{v_1}=\frac{f_v}{r(u)} \quad\text{and}\quad D_{v_1}f_{v_1}=\frac{D_{v}f_{v_1}}{r(u)} =\frac{f_{vv}}{r^2}=\frac{f_{vv}}{\lambda_v}$$ on the $u_1$-axis. By these two expressions, we have $(\lambda_1)_{v_1}=\lambda_v/\lambda_v=1$ and have shown that $(u_1,v_1)$ satisfies \[item:normal-1\], \[item:normal-2\] and \[item:normal-3\]. Next, we set $$u_2:=u_1+v_1^2\,s(u_1), \qquad v_2:=v_1,$$ where $s(u_1)$ is a smooth function in $u_1$. Then we have $$\frac{\partial (u_2,v_2)}{\partial (u_1,v_1)} =\begin{pmatrix} 1+v_1^2s' & 2v_1 s(u_1) \\ 0 & 1 \end{pmatrix},$$ and $$\left.\frac{\partial (u_1,v_1)}{\partial (u_2,v_2)} \right |_{v_1=0} =\left. \frac{1}{1+v_2^2s'} \begin{pmatrix} 1 & -2v_1 s(u_1) \\ 0 & 1+v_2^2s' \end{pmatrix} \right |_{v_2=0} =\begin{pmatrix} 1& 0 \\ 0 & 1 \end{pmatrix}.$$ Thus the new coordinates $(u_2,v_2)$ satisfy \[item:normal-1\] and \[item:normal-2\]. On the other hand, the area density function $\lambda_2:=\mu_g(f_{u_2},f_{v_2},\nu)$ satisfies $$(\lambda_2)_{v_2}= \mu_g(f_{u_2},f_{v_2},\nu)_{v_2} =\mu_g(f_{u_2},D_{v_2}f_{v_2},\nu).$$ We have on the $u_2$-axis that $f_{u_2}=f_{u_1}$ and $$\begin{aligned} f_{v_2}&=\frac{-2 v_1s }{1+v_1^2s'}f_{u_1}+f_{v_1} \label{eq:l-1} \\ D_{v_2}f_{v_2}&=D_{v_1}f_{v_2}= \frac{-2 s }{1+v_1^2s'}f_{u_1}+D_{v_1}f_{v_1}. \label{eq:l-2} \end{aligned}$$ Thus one can easily check that $(\lambda_2)_{v_2}=1$ on the $u$-axis. By the latter expression, we have $g(f_{u_2},D_{v_2}f_{v_2})=-2s +g(f_{u_1},D_{v_1}f_{v_1})$. Hence, if we set $$s(u_1):=\frac{1}{2}g\bigl(f_{u_1}(u_1,0),(D_{v_1}f_{v_1})(u_1,0)\bigr),$$ then the coordinate $(u_2,v_2)$ satisfies \[item:normal-1\], \[item:normal-2\], \[item:normal-3\] and \[item:normal-4\].
Since $g\bigl((f_{v_2})_{v_2},\nu\bigr)=-g(f_{v_2},\nu_{v_2})=0$, $f_{v_2v_2}(u_2,0)$ is perpendicular to both $\nu$ and $f_{u_2}$. Moreover, we have on the $u_2$-axis $$1=(\lambda_2)_{v_2} =\mu_g(f_{u_2},f_{v_2},\nu)_{v_2} =\mu_g(f_{u_2},D_{v_2}f_{v_2},\nu) =g(D_{v_2}f_{v_2},f_{u_2}\times_g \nu),$$ and can conclude that $D_{v_2}f_{v_2}$ is a unit vector. Thus $(u_2,v_2)$ satisfies \[item:normal-5\]. Using the existence of the special adapted coordinate system, we shall show the second part of the theorem. \[Proof of the second part of Theorem \[thm:gaussian-singular\]\] We suppose $K\geq c_{N^3}$, where $c_{N^3}$ is the sectional curvature of $(N^3,g)$ with respect to the tangent plane. Then by the Gauss equation, $K_{{\operatorname{ext}}}\geq 0$ holds. If a given non-degenerate singular point $p$ is not a cuspidal edge, the singular curvature is negative by Corollary \[cor:singular-curvature-peak\]. Hence it is sufficient to consider the case that $p$ is a cuspidal edge. So we may take a special adapted coordinate system as in Lemma \[lem:normal-coordinate\]. We take smooth functions $\varphi$ and $\psi$ as in the proof of the first part. Since $K_{{\operatorname{ext}}}=\psi/(v{\varphi}^2)$ is non-negative on both sides of the singular curve, $\psi(u,0)=0$ holds, as seen in the proof of the first part. By the Malgrange preparation theorem again, we may put $LN-M^2=v^2\psi_1(u,v)$, and have the expression $K_{{\operatorname{ext}}}={\psi_1}/{{\varphi}^2}$. Since $K_{{\operatorname{ext}}}\geq 0$, we have $\psi_1(u,0)\geq 0$. Moreover, if $K_{{\operatorname{ext}}}\geq \delta>0$ on a neighborhood of $p$, then $\psi_1(u,0)>0$. Since $L=M=N=0$ on the $u$-axis, we have $$\label{eq:uv-positive} 0\leq 2\psi_1(u,0)=\bigl(LN-(M)^2\bigr)_{vv}=2\bigl(L_vN_v-(M_v)^2\bigr) \leq 2L_vN_v.$$ Here, $\{f_u, f_{vv},\nu\}$ is an orthonormal basis, and $g(f_{uu},f_u)=0$ and $L=g(f_{uu},\nu)=0$ on the $u$-axis.
Hence $$f_{uu}=g(f_{uu},f_{vv})f_{vv}+g(f_{uu},\nu) \nu=g(f_{uu},f_{vv})f_{vv}.$$ Similarly, since $2g(\nu_v,\nu)=g(\nu,\nu)_v=0$ and $g(\nu_v,f_u)=-M=0$, we have $$\nu_v=g(\nu_v,f_{vv}) f_{vv}.$$ Since $\lambda_v=1>0$ and $|f_u|=1$, the singular curvature is given by $$\label{eq:singular-curvature-normal} \kappa_s=\mu_g(f_u,f_{uu},\nu) =g(f_{uu},f_{vv})\mu_g(f_u,f_{vv},\nu)=g(f_{uu},f_{vv}) =\frac{g(f_{uu},\nu_v)}{g(f_{vv},\nu_v)}.$$ On the other hand, we have on the $u$-axis that $$-L_v=g(f_u,\nu_u)_v=g(f_{uv},\nu_u)+g(f_u,\nu_{uv})= g(f_u,\nu_{uv}),$$ because $g(f_{uv},\nu_u)|_{v=0}=-M_u(u,0)=0$. Moreover, we have $$\nu_{uv}=D_vD_u \nu = D_uD_v \nu+R(f_v,f_u)\nu =D_uD_v \nu=\nu_{vu}$$ since $f_v=0$, where $R$ is the Riemannian curvature tensor of $(N^3,g)$. Thus, $$L_v= -g(f_u,\nu_{uv}) =-g(f_u,\nu_v)_u+g(f_{uu},\nu_v) =M_u+g(f_{uu},\nu_v)=g(f_{uu},\nu_v)$$ holds. Since we have on the $u$-axis that $$-N_v=g(f_v,\nu_v)_v=g(f_{vv},\nu_v) +g(f_v,\nu_{vv})=g(f_{vv},\nu_v),$$ these computations imply that $$\kappa_s=-\frac{L_v}{N_v}=-\frac{L_vN_v}{N_v^2} \leq 0.$$ If $K_{{\operatorname{ext}}}\geq \delta>0$, the inequality above becomes $0<L_vN_v$, and we have $\kappa_s<0$. \[rem:alexsandrov\] Let $f\colon{}M^2\to {\boldsymbol{R}}^3$ be a compact front with positive Gaussian curvature. For example, parallel surfaces of compact immersed constant mean curvature surfaces (e.g. Wente tori) give such examples. In this case, we have the following opposite of the Cohn-Vossen inequality by Theorem \[thm:Gauss-Bonnet-compact\]: $$\int_{M^2}K dA> 2\pi\chi(M^2).$$ On the other hand, the total curvature of a compact $2$-dimensional Alexandrov space is bounded from above by $2\pi\chi(M^2)$ (see Machigashira [@Mac]). This implies that a front with positive curvature cannot be a limit of Riemannian $2$-manifolds whose Gaussian curvature is uniformly bounded from below.
We can give another explanation of this phenomenon as follows: Since $K>0$, we have $\kappa_s<0$, and near its singular points the surface looks like a cuspidal hyperbolic parabola. So if the front is a limit of a sequence of immersions $f_n$, the infimum of the Gaussian curvature of $f_n$ must diverge to $-\infty$. \[ex:positive-curvature\] Let $f_0\colon{}M^2\to {\boldsymbol{R}}^3$ be an immersion of constant mean curvature $1$ and $\nu$ the unit normal vector of $f_0$. Then the parallel surface $f:=f_0-\nu$ gives a front of constant Gaussian curvature $1$. If we take isothermal principal curvature coordinates $(u,v)$ on $M^2$ with respect to $f_0$, the first and second fundamental forms of $f$ are given by $$ds^2=dz^2+2\cosh \theta\, dz d\bar z +d\bar z^2, \qquad h=2\sinh\theta\, dz d \bar z,$$ where $z=u+iv$ and $\theta$ is a real-valued function in $(u,v)$; such a coordinate system is called the [*complex Chebyshev net*]{}. The sinh-Gordon equation $\theta_{uu}+\theta_{vv}+4\sinh \theta=0$ is the integrability condition. In this case, the singular curve is characterized by $\theta=0$, and the condition for non-degenerate singular points is given by $d\theta\ne 0$. Moreover, the cuspidal edges are characterized by $\theta_v\ne 0$, and the swallowtails are characterized by $\theta_u\ne 0, \theta_v=0$ and $\theta_{vv}\ne 0$. The singular curvature on cuspidal edges is given by $$\kappa_s=-\frac{(\theta_u)^2+(\theta_v)^2}{4 |\theta_v|}< 0.$$ The negativity of $\kappa_s$ has been shown in Theorem \[thm:gaussian-singular\]. As in the case of fronts of constant negative curvature, Ishikawa-Machida [@IM] showed that the generic singularities of fronts of constant positive Gaussian curvature are cuspidal edges or swallowtails. Here we remark on the behavior of the mean curvature function near non-degenerate singular points. Let $f:M^2\to (N^3,g)$ be a front and $p\in M^2$ a non-degenerate singular point. Then the mean curvature function of $f$ is unbounded near $p$.
The mean curvature function $H$ is given by $$2H=\frac{EN-2FM+GL}{EG-F^2}=\frac{EN-2FM+GL}{\lambda^2}.$$ We may assume that the $u$-axis is a singular curve. By applying L'Hospital's rule, we have $$\lim_{v\to 0}2H= \lim_{v\to 0}\frac{E_vN+EN_v-2F_vM-2FM_v+G_vL-GL_v}{2\lambda \lambda_v}.$$ Firstly, we consider the case where $(0,0)$ is a cuspidal edge. Then by the proof of the first part of Theorem \[thm:gaussian-singular\], we have $$F(0,0)=G(0,0)=M(0,0)=N(0,0)=G_v(0,0)=0.$$ Thus $$\lim_{v\to 0}2H= \lim_{v\to 0}\frac{EN_v}{2\lambda \lambda_v}.$$ Since $\lambda(0,0)=0$ and $N_v(0,0)\ne 0$ as shown in the proof of Theorem \[thm:gaussian-singular\], $H$ diverges. Next, we consider the case where $(0,0)$ is not a cuspidal edge. By the proof of the first part of Theorem \[thm:gaussian-singular\], we then have $$E(0,0)=F(0,0)=L(0,0)=M(0,0)=E_v(0,0)=0, \qquad L_v(0,0)\ne 0.$$ Thus $$\lim_{v\to 0}2H=- \lim_{v\to 0}{GL_v}/{(2\lambda \lambda_v)}$$ diverges, since $\lambda(0,0)=0$ and $L_v(0,0)\ne 0$. Generic behavior of the curvature near cuspidal edges {#generic-behavior-of-the-curvature-near-cuspidal-edges .unnumbered} ----------------------------------------------------- As an application of Theorem \[thm:gaussian-singular\], we shall investigate the generic behavior of the Gaussian curvature near cuspidal edges and swallowtails in $({\boldsymbol{R}}^3,g_0)$. We call a given cuspidal edge $p\in M^2$ [*generic*]{} if the second fundamental form does not vanish at $p$. Theorem \[thm:gaussian-singular\] implies that fronts with bounded Gaussian curvature have only non-generic cuspidal edges. In the proof of the theorem for cuspidal edges, $L=0$ if and only if $f_{uu}$ is perpendicular to both $\nu$ and $f_u$, which implies that the osculating plane of the singular curve coincides with the limiting tangent plane, and we get the following: \[cor;cusp\] Let $f\colon{}M^2\to {\boldsymbol{R}}^3$ be a front.
Then a cuspidal edge $p\in M^2$ is generic if and only if the osculating plane of the singular curve does not coincide with the limiting tangent plane at $p$. Moreover, the Gaussian curvature is unbounded and changes sign between the two sides of a generic cuspidal edge. By the formulas in the proof of Theorem \[thm:gaussian-singular\], $K=\psi/\bigl(v\varphi^2\bigr)$, where $\psi(0,0)\neq 0$ if $(0,0)$ is generic. Hence $K$ is unbounded and changes sign between the two sides along the generic cuspidal edge. We shall now determine which side has positive Gaussian curvature: Let $\gamma$ be a singular curve of $f$ consisting of cuspidal edge points, and let $\hat\gamma=f\circ\gamma$. Define $$\label{eq:normal-curvature} \kappa_{\nu}:=\frac{g_0(\hat \gamma'',\nu)}{|\hat\gamma'|^2}$$ on the singular curve, which is independent of the choice of the parameter $t$. We call it the [*limiting normal curvature*]{} of the cuspidal edge $\gamma(t)$. Then one can easily check that [*$p$ is a generic cuspidal edge if and only if $\kappa_{\nu}(p)$ does not vanish*]{}. Let $\Omega({\nu})$ (resp. $\Omega({-\nu})$) be the half-space bounded by the limiting tangent plane such that $\nu$ (resp. $-\nu$) points into $\Omega(\nu)$ (resp. $\Omega(-\nu)$). Then the singular curve lies in $\Omega(\nu)$ if $\kappa_{\nu}(p)>0$ and lies in $\Omega(-\nu)$ if $\kappa_{\nu}(p)<0$. We call $\Omega(\nu)$ (resp. $\Omega(-\nu)$) the [*half-space containing the singular curve*]{} at the cuspidal edge point $p$. This half-space is in general different from the principal half-space (see Definition \[def:principal\] and Figure \[fig:outward-normal\]). We set $$\begin{gathered} {\operatorname{sgn}}_0(\nu):={\operatorname{sgn}}(\kappa_\nu)\\ = \begin{cases} \hphantom{-}1 & \text{ (if $\Omega(\nu)$ is the half-space containing the singular curve)}\\ -1 & \text{ (if $\Omega(-\nu)$ is the half-space containing the singular curve)}.
\end{cases}\end{gathered}$$ On the other hand, one can choose the [*outward normal vector*]{} [$\nu_0$]{} near a given cuspidal edge $p$ as in the middle figure of Figure \[fig:outward-normal\]. (Figure \[fig:outward-normal\], from left to right: the half-space containing the singular curve is $\Omega(-\nu)$; the outward normal $\nu_0$; the vector $\tau_0$.) Let $\Delta$ be a sufficiently small domain consisting of regular points sufficiently close to $p$ that lies only to one side of the cuspidal edge. For a given unit normal vector $\nu$ of the front, we define its sign ${\operatorname{sgn}}_{\Delta}(\nu)$ by ${\operatorname{sgn}}_{\Delta}(\nu)=1$ (resp. ${\operatorname{sgn}}_{\Delta}(\nu)=-1$) if $\nu$ coincides (resp. does not coincide) with the outward normal $\nu_0$ on $\Delta$. The following assertion holds: \[thm:generic-c\] Let $f\colon{}M^2\to ({\boldsymbol{R}}^3,g_0)$ be a front, $p$ a cuspidal edge and $\Delta$ a sufficiently small domain consisting of regular points sufficiently close to $p$ that lies only to one side of the cuspidal edge. Then ${\operatorname{sgn}}_{\Delta}(\nu)$ coincides with the sign of the function $g_0({\hat\sigma}'',{\hat \nu}')$ at $p$, namely $$\label{eq:domain-sign} {\operatorname{sgn}}_{\Delta}(\nu)= {\operatorname{sgn}}g_0({\hat\sigma}'',{\hat \nu}') \qquad \left({}'=\frac{d}{ds}, ~''=D_s\frac{d}{ds}\right),$$ where $\sigma(s)$ is an arbitrarily fixed null curve starting at $p$ and moving into $\Delta$, and $\hat\sigma(s)=f(\sigma(s))$ and $\hat \nu=\nu(\sigma(s))$. Moreover, if $p$ is a generic cuspidal edge, then $${\operatorname{sgn}}_0({\nu})\cdot {\operatorname{sgn}}_{\Delta}(\nu)$$ coincides with the sign of the Gaussian curvature on $\Delta$. We take a special adapted coordinate system $(u,v)$ as in Lemma \[lem:normal-coordinate\] at the cuspidal edge. The vector $\tau_0:=-f_{vv}=f_u\times \nu$ lies in the limiting tangent plane and points in the direction opposite to the image of the null curve (see Figure \[fig:outward-normal\], right side).
Without loss of generality, we may assume that $\Delta=\{v>0\}$. The unit normal $\nu$ is the outward normal on $\Delta$ if and only if $g_0(\nu_{v},\tau_0)>0$, namely $N_v=-g_0(f_{vv},\nu_v)>0$. Thus we have ${\operatorname{sgn}}_{\{v>0\}}(\nu)={\operatorname{sgn}}(N_v)$, which proves \[eq:domain-sign\]. Since $p$ is generic, we have $\kappa_{\nu}(p)\ne 0$, and $\kappa_{\nu}(p)=L$ holds. On the other hand, the sign of $K$ on $v>0$ is equal to the sign of $$\left.\bigl(LN-(M)^2\bigr)_v \right |_{v=0}=L(u,0)N_v(u,0),$$ which proves the assertion. \[ex:parabola2\] Consider again the cuspidal parabola $f(u,v)$ as in Example \[ex:parabola\]. Then $(u,v)$ gives an adapted coordinate system such that $\partial/\partial v$ gives the null direction, and we have $$L = g_0(f_{uu},\nu) = \frac{-2ab}{\sqrt{1+b^2(1+4a^2u^2)}},\qquad N_v=\frac{6}{\sqrt{1+b^2(1+4a^2u^2)}}>0.$$ The cuspidal edges are generic if and only if $ab\neq 0$. In this case, let $\Delta$ be a domain in the upper half-plane $\{(u,v)\,;\,v>0\}$. Then the unit normal vector is the outward normal to the cuspidal edge, that is, ${\operatorname{sgn}}_{\Delta}(\nu)=+1$. The limiting normal curvature as in \[eq:normal-curvature\] is computed as $\kappa_\nu = -ab/(2|a|^2\sqrt{1+b^2(1+4a^2u^2)})$, and hence ${\operatorname{sgn}}_0(\nu)=-{\operatorname{sgn}}(ab)$. Then ${\operatorname{sgn}}(K)=-{\operatorname{sgn}}(ab)$ holds on the upper half-plane. In fact, the Gaussian curvature is computed as $$K = \frac{-12(ab+3av)}{v\bigl(4+(1+4a^2u^2)(4b^2+12bv+9v^2)\bigr)}.$$ On the other hand, if $b=0$ the Gaussian curvature is bounded, and it is positive if and only if $a<0$. In this case the singular curvature is negative, in accordance with Theorem \[thm:gaussian-singular\].
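Both claims of this example can be checked mechanically from the displayed formula for $K$ alone; the following sympy sketch (our illustration, not part of the original computation) verifies them:

```python
# Check the two claims about K for the cuspidal parabola (sympy):
# (1) for b = 0 the factor v cancels, so K is bounded near the singular
#     curve {v = 0}, with limit -9a there (hence positive iff a < 0);
# (2) for ab != 0 the coefficient of the 1/v term of K as v -> 0+ is
#     -12ab/(4 + 4b^2(1 + 4a^2u^2)), whose sign is -sgn(ab).
import sympy as sp

u, v, a, b = sp.symbols('u v a b', real=True)
K = -12*(a*b + 3*a*v) / (v*(4 + (1 + 4*a**2*u**2)*(4*b**2 + 12*b*v + 9*v**2)))

K0 = sp.cancel(K.subs(b, 0))           # the 1/v singularity cancels when b = 0
assert sp.simplify(K0.subs(v, 0) + 9*a) == 0

lead = sp.limit(v*K, v, 0, '+')        # coefficient of the 1/v term
assert sp.simplify(lead + 12*a*b/(4 + 4*b**2*(1 + 4*a**2*u**2))) == 0
# the denominator is positive, so sgn(K) = -sgn(ab) for small v > 0
```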
Generic behavior of the curvature near swallowtails {#generic-behavior-of-the-curvature-near-swallowtails .unnumbered} --------------------------------------------------- We call a swallowtail $p\in M^2$ of a front $f\colon{}M^2\to ({\boldsymbol{R}}^3,g_0)$ [*generic*]{} if the second fundamental form does not vanish at $p$. \[prop:swallow-half-plane\] Let $f\colon{}M^2\to ({\boldsymbol{R}}^3,g_0)$ be a front and $p$ a generic swallowtail. Then we can take a half-space $H\subset{\boldsymbol{R}}^3$ bounded by the limiting tangent plane such that any null curve at $p$ lies in $H$ near $p$ [(]{}see Figure \[fig:generic-swallowtails\][)]{}. ![The half-space containing the singular curve for generic swallowtails: the swallowtail $f_+$ (left) and the swallowtail $f_-$ (right) (Example \[ex:positive-negative-swallowtail\]).[]{data-label="fig:generic-swallowtails"}](figure-3-2-a.eps "fig:"){width="3.8cm"} The half-space containing the singular curve is the closer side of the limiting tangent plane for the left-hand figure, and the farther side for the right-hand figure. We shall call $H$ the [*half-space containing the singular curve*]{} at the generic swallowtail. At the end of this section, we shall see that the singular curve is in fact contained in this half-space in a neighborhood of the swallowtail (see Figure \[fig:generic-swallowtails\] and Corollary \[cor:limit-tangent\]). For a given unit normal vector $\nu$ of the front, we define its sign ${\operatorname{sgn}}_0(\nu)$ by ${\operatorname{sgn}}_0(\nu)=1$ (resp.
${\operatorname{sgn}}_0(\nu)=-1$) if $\nu$ points (resp. does not point) into the half-space containing the singular curve. Take an adapted coordinate system $(u,v)$ and assume $f(0,0)=0$ by translating in ${\boldsymbol{R}}^3$ if necessary. Write the second fundamental form as before. Since $f_u(0,0)=0$, we have $L(0,0)=M(0,0)=0$, and we have the following Taylor expansion: $$g_0\bigl(f(u,v),\nu\bigr)= \frac{v^2}{2} g_0\bigl(f_{vv}(0,0),\nu(0,0)\bigr) +o(u^2+v^2) = \frac{1}{2}N(0,0) v^2 +o(u^2+v^2).$$ Thus the assertion holds. Moreover, we have $$\label{eq:primary-signature} {\operatorname{sgn}}(N)={\operatorname{sgn}}_0(\nu).$$ \[cor:primary-signature\] Let $\sigma(s)$ be an arbitrary curve starting at the swallowtail such that $\sigma'(0)$ is transversal to the singular direction. Then $${\operatorname{sgn}}_0(\nu)={\operatorname{sgn}}\bigl(g_0({\hat\sigma}''(0),\nu(0,0))\bigr)$$ holds, where $\hat\sigma=f\circ\sigma$. We let $\Delta$ be a sufficiently small domain consisting of regular points sufficiently close to a swallowtail $p$. The domain $\Delta$ is called the [*tail part*]{} if $\Delta$ lies on the side opposite to the self-intersection of the swallowtail. We define ${\operatorname{sgn}}_{\Delta}(\nu)$ by ${\operatorname{sgn}}_{\Delta}(\nu)=1$ (resp. ${\operatorname{sgn}}_{\Delta}(\nu)=-1$) if $\nu$ is (resp. is not) the outward normal of $\Delta$. Now we have the following assertion: \[thm:secondary-signature\] Let $f\colon{}M^2\to ({\boldsymbol{R}}^3,g_0)$ be a front, $p$ a generic swallowtail and $\Delta$ a sufficiently small domain consisting of regular points sufficiently close to $p$. Then the Gaussian curvature is unbounded and changes sign between the two sides along the singular curve. Moreover, ${\operatorname{sgn}}_0(\nu){\operatorname{sgn}}_{\Delta}(\nu)$ coincides with the sign of the Gaussian curvature on $\Delta$. If we change $\Delta$ to the opposite side, the product of ${\operatorname{sgn}}_{\Delta}(\nu)$ and the sign of $K$ on $\Delta$ does not change.
So we may assume that $\Delta$ is the tail part. We take an adapted coordinate system $(u,v)$ at the swallowtail and write the null vector field as $\eta(u)=(\partial/\partial u)+e(u)(\partial/\partial v)$, where $e(u)$ is a smooth function. Then $$f_u(u,0) + e(u)f_v(u,0)=0\quad \text{and}\quad f_{uu}(u,0)+e_u(u)f_v(u,0)+e(u)f_{uv}(u,0)=0$$ hold. Since $(0,0)$ is a swallowtail, $e(0)=0$ and $e'(0)\neq 0$ hold, where $'=d/du$. The vector $f_{uu}$ points toward the tail part $\Delta$. Thus $f_v$ points toward $\Delta$ if and only if $g_0(f_v,f_{uu})$ is positive. Since $f_u=-e(u)f_v$ and $e(0)=0$, we have $f_{uu}(0,0)=-e'(0) f_v(0,0)$ and $$g_0\bigl(f_{uu}(0,0),f_v(0,0)\bigr)= -e'(0)\, g_0\bigl( f_v(0,0),f_v(0,0)\bigr).$$ Thus $g_0\bigl(f_{uu}(0,0),f_v(0,0)\bigr)$ is positive (that is, the tail part is $v>0$) if and only if $e'(0)<0$. Changing $v$ to $-v$ if necessary, we may assume $e'(0)<0$, that is, that the tail part lies in $v>0$. For each fixed value of $u\neq 0$, we take a curve $$\sigma(s)=\bigl(u+\varepsilon s, s|e(u)|\bigr) =\bigl(u+\varepsilon s, \varepsilon e(u)\,s\bigr) \qquad \bigl(\varepsilon = {\operatorname{sgn}}e(u)\bigr)$$ and let $\hat\sigma= f\circ\sigma$. Then $\sigma$ travels into the upper half-plane $\{v>0\}$, that is, $\hat\sigma$ travels into $\Delta$. Here, we have $$\begin{aligned} {\hat\sigma}'(0) &= \varepsilon\bigl(f_u(u,0) + e(u)f_v(u,0)\bigr) = 0 \qquad\text{and}\\ {\hat\sigma}''(0) &= \left.\varepsilon\bigl( \varepsilon (f_u+ef_v)_u + \varepsilon e (f_u+ef_v)_v\bigr)\right|_{v=0}\\ &= e(u) \bigl(f_{uv}(u,0)+e(u)f_{vv}(u,0)\bigr), \end{aligned}$$ where $'=d/ds$. In particular, $\sigma$ is a null curve starting at $(u,0)$ and traveling into $\Delta$. Then by Theorem \[thm:generic-c\], we have $${\operatorname{sgn}}_{\Delta}(\nu)=\lim_{u\to 0} {\operatorname{sgn}}\bigl(g_0({\hat\sigma}''(s),{\hat\nu}'(s))\bigr).$$ Here, the derivative of $\hat\nu(s)=\nu(\sigma(s))$ is computed as ${\hat\nu}'=\varepsilon\bigl(\nu_u(u,0)+e(u)\nu_v(u,0)\bigr)$.
Since $e(0)=0$, we have $$\left. g_0\bigl({\hat\sigma}''(s),{\hat \nu}'(s)\bigr) \right|_{s=0}= |e(u)|\, g_0\bigl(f_{uv}(u,0), \nu_u(u,0)\bigr) +\{e(u)\}^2\varphi(u),$$ where $\varphi(u)$ is a smooth function in $u$. Then we have $${\operatorname{sgn}}_{\Delta}(\nu)=\lim_{u\to 0} {\operatorname{sgn}}\bigl(g_0({\hat\sigma}''(s), {\hat \nu}'(s))\bigr) ={\operatorname{sgn}}\bigl(g_0(f_{uv}(0,0), \nu_u(0,0))\bigr).$$ Here, $L_v(0,0)=-g_0(f_u,\nu_u)_v =-g_0(f_{uv},\nu_u)$ because $f_u=0$, which implies that $${\operatorname{sgn}}_{\Delta}(\nu)={\operatorname{sgn}}(L_v(0,0)).$$ On the other hand, the sign of $K$ on $v>0$ is equal to the sign of $$\left.\bigl(LN-(M)^2\bigr)_v \right |_{v=0}= N(0,0)L_v(0,0).$$ Then \[eq:primary-signature\] implies the assertion. \[ex:positive-negative-swallowtail\] Let $$f_{\pm}(u,v)=\frac{1}{12} \bigl(3u^4-12u^2v\pm (6u^2-12v)^2,8u^3-24uv,6u^2-12v\bigr).$$ Then one can see that $f_{\pm}$ is a front and $(0,0)$ is a swallowtail with the unit normal vector $$\begin{gathered} \nu_{\pm}=\frac{1}{\delta} \bigl(1,u,u^2\pm12(2v-u^2)\bigr )\\ \left(\delta = \sqrt{1+u^2+145u^4+576v(v-u^2)\pm 24u^2(2v-u^2)}\right). \end{gathered}$$ In particular, $(u,v)$ is an adapted coordinate system. Since the second fundamental form is $\pm 24\, dv^2$ at the origin, the swallowtail is generic, and ${\operatorname{sgn}}_0(\nu_{\pm})=\pm 1$ because of \[eq:primary-signature\]. The images of $f_{\pm}$ are shown in Figure \[fig:generic-swallowtails\]. Moreover, since $L_v=\pm 2$ at the origin, ${\operatorname{sgn}}_{\Delta}(\nu_{\pm})=\pm 1$ for the tail part $\Delta$. Then by Theorem \[thm:secondary-signature\], the Gaussian curvature on the tail side of $f_{+}$ (resp. $f_{-}$) is positive (resp. negative). Summing up the previous two theorems, we get the following: \[cor:limit-tangent\] Let $\gamma(t)$ be a singular curve such that $\gamma(0)$ is a swallowtail. Then the half-space containing the singular curve at $\gamma(t)$ converges to the half-space at the swallowtail $\gamma(0)$ as $t\to 0$.
Zigzag numbers {#sec:zigzag} ============== In this section, we introduce a geometric formula for a topological invariant called the [*zigzag number*]{}. We remark that Langevin, Levitt and Rosenberg [@LLR] gave topological upper bounds for zigzag numbers of generic compact fronts in ${\boldsymbol{R}}^3$ (see Remark \[rem:add\]). Zigzag number for fronts in the plane {#zigzag-number-for-fronts-in-the-plane .unnumbered} ------------------------------------- First, we recall the Maslov index (see [@A]), which is also called the zigzag number, for fronts in the Euclidean plane $({\boldsymbol{R}}^2,g_0)$. Let $\gamma\colon{}S^1\to{\boldsymbol{R}}^2$ be a generic front, that is, one whose self-intersections are all double points and whose singularities are all $3/2$-cusps, and let $\nu$ be the unit normal vector field of $\gamma$. Then $\gamma$ is Legendrian isotopic (isotopic as the Legendrian lift $(\gamma,\nu)\colon{}S^1\to T_1{\boldsymbol{R}}^2\simeq {\boldsymbol{R}}^2\times S^1$) to one of the fronts in Figure \[fig:zig-zag\] (a). The non-negative integer $m$ is called the [*rotation number*]{}, which is the rotational index of the unit normal vector field $\nu\colon{}S^1\to S^1$. The number $k$ is called the [*Maslov index*]{} or [*zigzag number*]{}. We shall give a precise definition and a formula to calculate the number: a $3/2$-cusp $\gamma(t_0)$ of $\gamma$ is called [*zig*]{} (resp. [*zag*]{}) if the leftward normal vector of $\gamma$ points to the outside (resp. inside) of the cusp (see Figure \[fig:zig-zag\] (b)). We define a $C^\infty$-function $\lambda$ on $S^1$ by $\lambda:=\det(\gamma',\nu)$, where $'=d/dt$. Then the leftward normal vector is given by $({\operatorname{sgn}}\lambda)\,\nu$. Since $\gamma''(t_0)$ points to the inside of the cusp, $t_0$ is zig (resp.
zag) if and only if $$\label{eq:zigzag-criterion} {\operatorname{sgn}}\bigl(\lambda'g_0(\gamma'',\nu')\bigr) <0 \qquad (\text{resp.}~>0).$$ Let $\{t_0,t_1,\dots,t_l\}$ be the set of singular points of $\gamma$, ordered by their appearance along the curve, and define $\zeta_j=a$ (resp. $\zeta_j=b$) if $\gamma(t_j)$ is zig (resp. zag), and set $\zeta_{\gamma}:=\zeta_0\zeta_1\dots \zeta_l$, which is a word in the letters $a$ and $b$. The projection of $\zeta_{\gamma}$ to the free product ${\boldsymbol{Z}}_2*{\boldsymbol{Z}}_2$ (reduction by the relations $a^2=b^2=1$) is of the form $(ab)^k$ or $(ba)^k$. The non-negative integer $k_{\gamma}:=k$ is called the [*zigzag number*]{} of $\gamma$. (Figure \[fig:zig-zag\]: (a) canonical forms of plane fronts; (b) zig and zag cusps; (c) counterclockwise passage in $P^1({\boldsymbol{R}})$.) We shall give a geometric formula for the zigzag number via the curvature map defined by the second author: \[def:curvature-map\] Let $\gamma\colon{}S^1\to {\boldsymbol{R}}^2$ be a front with unit normal vector $\nu$. The [*curvature map*]{} of $\gamma$ is the map $$\kappa_{\gamma}\colon{} S^1\setminus \Sigma_{\gamma}\ni t\longmapsto \left[ g_0(\gamma',\gamma'): g_0(\gamma',\nu') \right]\in P^1({\boldsymbol{R}}),$$ where $'=d/dt$, $\Sigma_{\gamma}\subset S^1$ is the set of singular points of $\gamma$, and $[~:~]$ denotes the homogeneous coordinates of $P^1({\boldsymbol{R}})$. \[prop:plane-zigzag\] Let $\gamma$ be a generic front with unit normal vector $\nu$. Then the curvature map $\kappa_{\gamma}$ can be extended to a smooth map on $S^1$. Moreover, the rotation number of $\kappa_{\gamma}$ is the zigzag number of $\gamma$. Let $t_0$ be a singular point of $\gamma$. Since $\gamma$ is a front, $\nu'(t)\neq 0$ holds on a neighborhood of $t_0$. As $\nu'$ is perpendicular to $\nu$, we have $\det(\nu,\nu')\neq 0$. Here, using $\lambda=\det(\gamma',\nu)$, we have $\gamma'=-(\lambda/\det(\nu,\nu'))\nu'$.
Hence we have $$\kappa_{\gamma}=[g_0(\gamma',\gamma'):g_0(\gamma',\nu')] = \left[\lambda^2:-\frac{\lambda g_0(\nu',\nu')}{\det(\nu,\nu')}\right] =\left[\lambda:-\frac{g_0(\nu',\nu')}{\det(\nu,\nu')}\right],$$ which is well-defined on a neighborhood of $t_0$. Moreover, $\kappa_{\gamma}(t)=[0:1](=\infty)$ if and only if $t$ is a singular point. Here, we choose the inhomogeneous coordinate $y/x$ for $[x:y]$. Since $g_0(\gamma',\nu')' = g_0(\gamma'',\nu')$ holds at a singular point $t_0$, $\kappa_{\gamma}$ passes through $[0:1]$ in the counterclockwise (resp. clockwise) direction if $g_0(\gamma'',\nu')>0$ (resp. $<0$); see Figure \[fig:zig-zag\] (c). Let $t_0$ and $t_1$ be two adjacent zigs, and suppose $\lambda'(t_0)<0$. Since $\lambda$ changes sign at each cusp, we have $\lambda'(t_1)>0$. Then by \[eq:zigzag-criterion\], $g_0(\gamma'',\nu')(t_0)>0$ and $g_0(\gamma'',\nu')(t_1)<0$. Hence $\kappa_{\gamma}$ passes through $[0:1]$ in the counterclockwise direction at $t_0$, and in the clockwise direction at $t_1$. Thus, this interval does not contribute to the rotation number of $\kappa_{\gamma}$. On the other hand, if $t_0$ and $t_1$ are zig and zag respectively, $\kappa_{\gamma}$ passes through $[0:1]$ counterclockwise at both $t_0$ and $t_1$. Then the rotation number of $\kappa_{\gamma}$ is $1$ on the interval $[t_0,t_1]$. Summing up, the proposition holds. Zigzag number for fronts in Riemannian $3$-manifolds {#zigzag-number-for-fronts-in-riemannian-3-manifolds .unnumbered} ---------------------------------------------------- Let $M^2$ be a manifold and $f\colon{}M^2 \to N^3$ a front with unit normal vector $\nu$ into a Riemannian $3$-manifold $(N^3,g)$. Let $\Sigma_f\subset M^2$ be the singular set, and ${\nu}_0$ the unit normal vector field of $f$ defined on $M^2\setminus\Sigma_f$ which is compatible with the orientations of $M^2$ and $N^3$, that is, ${\nu}_0= (f_u\times_g f_v)/|f_u\times_g f_v|$, where $(u,v)$ is a local coordinate system on $M^2$ compatible with the orientation.
Then ${\nu}_0(p)$ is $\nu(p)$ if $p\in M_+$ and $-\nu(p)$ if $p\in M_-$. We assume all singular points of $f$ are non-degenerate. Then each connected component $C\subset\Sigma_f$ must be a regular curve on $M^2$. Let $p\in C$ be a cuspidal edge. Then $p$ is called [*zig*]{} (resp. [*zag*]{}) if ${\nu}_0$ points towards the outward (resp. inward) side of the cuspidal edge (see Figure \[fig:zig-zag\] (d)). As this definition does not depend on $p\in C$, we call $C$ [*zig*]{} (resp. [*zag*]{}) if $p\in C$ is zig (resp. zag). Now, we define the zigzag number for loops on $M^2$. Take a [*null loop*]{} $\sigma\colon{}S^1\to M^2$, that is, the intersection of $\sigma(S^1)$ and $\Sigma_f$ consists of cuspidal edges and $\sigma'$ points in the null direction at each singular point. We remark that there exists a null loop in each homotopy class. Let $Z_{\sigma}=\{t_0,\dots, t_l\}\subset S^1$ be the set of singular points of $\sigma$ ordered by their appearance along the loop. Define $\zeta_j=a$ (resp. $b$) if $\sigma(t_j)$ is zig (resp. zag), and set $\zeta_{\sigma}:=\zeta_0\zeta_1\dots \zeta_l$, which is a word consisting of the letters $a$ and $b$. The projection of $\zeta_{\sigma}$ to the free product ${\boldsymbol{Z}}_2*{\boldsymbol{Z}}_2$ (reduction with the relation $a^2=b^2=1$) is of the form $(ab)^k$ or $(ba)^k$. The non-negative integer $k_{\sigma}:=k$ is called the [*zigzag number*]{} of $\sigma$. It is known that the zigzag number is a homotopy invariant, and the greatest common divisor $k_f$ of $\{k_{\sigma}\,| \text{$\sigma$ is a null loop on $M^2$}\}$ is the [*zigzag number of $f$*]{} (see [@LLR]). \[Langevin-Levitt-Rosenberg’s inequality [@LLR]\] \[rem:add\] Let $M^2$ be a compact orientable 2-manifold of genus $g$ and $f:M^2\to N^3$ a front. 
When $N^3={\boldsymbol{R}}^3$, [@LLR] proved the following inequality: $$\label{eq:add} a_f+\frac{q_f}{2}\ge \frac{\chi_{{\mathcal{E}}}^{}}{2}+1-g +2k_f,$$ where $a_f$ is the number of connected components of the singular set $\Sigma_f$, $q_f$ is the number of swallowtails, and $\chi_{{\mathcal{E}}}^{}/{2}$, half the Euler number of the limiting tangent bundle, is equal to the degree of the Gauss map. Their proof is valid in the general case, so the inequality holds for any $N^3$. In this section, we shall give a geometric formula for zigzag numbers of loops. First, we define the normal curvature map, in analogy with the curvature map for fronts in ${\boldsymbol{R}}^2$: \[def:front-curvature-map\] Let $f\colon{}M^2\to (N^3,g)$ be a front with unit normal vector $\nu$ and $\sigma\colon{}S^1\to M^2$ a null loop. The [*normal curvature map*]{} of $\sigma$ is the map $$\kappa_{\sigma}\colon{} S^1\setminus Z_{\sigma}\ni t\longmapsto \left[ g(\hat\sigma',\hat\sigma'): g(\hat\sigma',\hat\nu') \right]\in P^1({\boldsymbol{R}}),$$ where $\hat\sigma=f\circ \sigma$, $\hat\nu=\nu\circ \sigma$, $'=d/dt$, $Z_{\sigma}\subset S^1$ is the set of singular points of $\sigma$, and $[~:~]$ denotes the homogeneous coordinates of $P^1({\boldsymbol{R}})$. Then we have the following: \[thm:zigzag-front\] Let $f\colon{}M^2\to (N^3,g)$ be a front with unit normal vector $\nu$, whose singular points are all non-degenerate, and $\sigma\colon{}S^1\to M^2$ a null loop. Then the normal curvature map $\kappa_{\sigma}$ can be extended to $S^1$, and the rotation number of $\kappa_{\sigma}$ is equal to the zigzag number of $\sigma$. Let $t_0$ be a singular point of $\sigma$, and take a normalized coordinate system $(u,v)$ of $M^2$ on a neighborhood $U$ of $\sigma(t_0)$. Then $f_v=0$ and $f_{vv}\neq 0$ hold on the $u$-axis, and by the Malgrange preparation theorem, there exists a smooth function $\alpha$ such that $g(f_v,f_v)=v^2\alpha(u,v)$ and $\alpha(u,0)\neq 0$.
On the other hand, $g(f_v,\nu_v)=-N$ vanishes and $N_v\neq 0$ on the $u$-axis. Hence there exists a function $\beta$ such that $g(f_v,\nu_v)=v \beta(u,v)$ and $\beta(u,0)\neq 0$. Thus $$\label{eq:normal-curvature-extend} \kappa_{\sigma}=[g(f_v,f_v):g(f_v,\nu_v)] =[v^2\alpha(u,v):v\beta(u,v)] =[v\alpha(u,v):\beta(u,v)]$$ can be extended to the singular point $v=0$. Namely, $\kappa_{\sigma}(t_0)=[0:1](=\infty)$, where we choose the inhomogeneous coordinate $y/x$ for $[x:y]$. Moreover, $g(\hat\sigma',\hat\sigma')\neq 0$ at regular points, and $\kappa_{\sigma}(t)=[0:1]$ if and only if $t$ is a singular point. Since ${\nu}=({\operatorname{sgn}}\lambda){\nu}_0$, a singular point $t_0$ is zig (resp. zag) if and only if $${\operatorname{sgn}}(\lambda) {\operatorname{sgn}}_{\Delta}(\nu)>0\qquad (\text{resp.\ }<0),$$ where ${\operatorname{sgn}}(\lambda)$ is evaluated on $\Delta$, $\varepsilon$ is a sufficiently small number, and $\Delta$ is a domain containing $\sigma(t_0+\varepsilon)$ which lies only to one side of the cuspidal edge. Since ${\operatorname{sgn}}_{\Delta}(\nu)={\operatorname{sgn}}g(\hat\sigma'',\hat\nu')$ by Theorem \[thm:generic-c\], and the sign of $\lambda$ on $\Delta$ equals ${\operatorname{sgn}}\hat\lambda'(t_0)$, the point $t_0$ is zig (resp. zag) if and only if $$\label{eq:zig-zag-criterion} {\operatorname{sgn}}\bigl(\hat\lambda' g(\hat\sigma'',\hat\nu')\bigr) >0 \qquad (\text{resp.}\ <0),$$ where $\hat\lambda=\lambda\circ\sigma$. Since $g(\hat\sigma',\hat\nu')'=g(\hat\sigma'',\hat\nu')$ holds at singular points, we have - if $t_0$ is zig and $\hat\lambda'(t_0)>0$ (resp. $<0$), then $\kappa_{\sigma}$ passes through $[0:1]$ counterclockwise (resp. clockwise). - if $t_0$ is zag and $\hat\lambda'(t_0)>0$ (resp. $<0$), then $\kappa_{\sigma}$ passes through $[0:1]$ clockwise (resp. counterclockwise). Let $Z_{\sigma}=\{t_0,\dots,t_l\}$ be the set of singular points. Since the function $\lambda$ has alternating signs on adjacent domains, $\hat\lambda'(t_j)$ and $\hat\lambda'(t_{j+1})$ have opposite signs.
Thus, if both $t_j$ and $t_{j+1}$ are zigs and $\hat\lambda'(t_j)>0$, then $\kappa_{\sigma}$ passes through $[0:1]$ counterclockwise (resp. clockwise) at $t=t_j$ (resp. $t_{j+1}$). Hence the interval $[t_j,t_{j+1}]$ does not contribute to the rotation number of $\kappa_{\sigma}$. Similarly, two consecutive zags do not affect the rotation number. On the other hand, if $t_j$ is zig and $t_{j+1}$ is zag and $\hat\lambda'(t_j)>0$, then $\kappa_{\sigma}$ passes through $[0:1]$ counterclockwise at both $t_j$ and $t_{j+1}$. Hence the rotation number of $\kappa_{\sigma}$ on the interval $[t_j,t_{j+1}]$ is $1$. Similarly, a zag followed by a zig increases the rotation number by $1$. Hence we have the conclusion. Singularities of hypersurfaces {#sec:hyper} ============================== In this section, we shall investigate the behavior of sectional curvature on fronts that are hypersurfaces. Let $U^n$ ($n\ge 3$) be a domain in $({\boldsymbol{R}}^n;u_1,u_2,\dots,u_n)$ and $$f\colon{}U^n\longrightarrow ({\boldsymbol{R}}^{n+1},g_0)$$ a front; that is, there exists a unit vector field $\nu$ (called the [*unit normal vector*]{}) such that $g_0(f_*X,\nu)=0$ for all $X\in TU^n$ and $(f,\nu)\colon{}U^n\to {\boldsymbol{R}}^{n+1}\times S^n$ is an immersion. We set $$\lambda:=\det(f_{u_1},\dots,f_{u_n},\nu),$$ and call it the [*signed volume density function*]{}. A point $p\in U^n$ is called a [*singular point*]{} if $f$ is not an immersion at $p$. Moreover, if $d\lambda\ne 0$ at $p$, we call $p$ a [*non-degenerate singular point*]{}. On a sufficiently small neighborhood of a non-degenerate singular point $p$, the singular set is an $(n-1)$-dimensional submanifold, called the [*singular submanifold*]{}. The $1$-dimensional vector space at the non-degenerate singular point $p$ which is the kernel of the differential map $(f_*)_p\colon{}T_pU^n\to {\boldsymbol{R}}^{n+1}$ is called the [*null direction*]{}.
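These notions can be tried out on a concrete model; in the following sympy sketch the front $f$ and its normal $\nu$ (with $n=3$) are our own illustrative choices, not taken from the text:

```python
# Illustration (n = 3) of the signed volume density: for the model front
# f(u1,u2,u3) = (u1^2, u1^3, u2, u3) in R^4 (an assumed example),
# lambda = det(f_{u1}, f_{u2}, f_{u3}, nu) vanishes exactly on {u1 = 0},
# with d(lambda) != 0 there, so every singular point is non-degenerate
# and the null direction is d/du1.
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3', real=True)
f = sp.Matrix([u1**2, u1**3, u2, u3])
nu = sp.Matrix([-3*u1, 2, 0, 0]) / sp.sqrt(4 + 9*u1**2)   # candidate unit normal

X = [u1, u2, u3]
df = [f.diff(x) for x in X]
assert all(sp.simplify(d.dot(nu)) == 0 for d in df)        # nu is normal
assert sp.simplify(nu.dot(nu)) == 1                        # nu is unit

lam = sp.Matrix.hstack(*df, nu).det()                      # signed volume density
assert sp.simplify(lam**2 - u1**2*(4 + 9*u1**2)) == 0      # lambda = 0 iff u1 = 0
assert sp.simplify(lam.diff(u1).subs(u1, 0)) == 2          # d(lambda) != 0: non-degenerate
```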
We call $p\in U^n$ a [*cuspidal edge*]{} if the null direction is transversal to the singular submanifold. Then, by an argument similar to the proof of Fact \[fact:intrinsic-criterion\] in [@KRSUY], one can prove that a cuspidal edge is an $A_2$-singularity, that is, locally diffeomorphic at the origin to the front $f_C(u_1,\dots,u_n)=(u_1^2,u_1^3,u_2,\dots,u_n)$. \[thm:hyper\] Let $f\colon{}U^n\to ({\boldsymbol{R}}^{n+1},g_0)$ $(n\ge 3)$ be a front whose singular points are all cuspidal edges. If the sectional curvature $K$ at the regular points is bounded, then the second fundamental form on the singular submanifold vanishes. Moreover, if $K$ is positive everywhere on the regular set, then the sectional curvature of the singular submanifold is non-negative. Furthermore, if $K\geq \delta(>0)$, then the sectional curvature of the singular submanifold is positive. The previous Theorem \[thm:gaussian-singular\] is deeper than this theorem. When $n\ge 3$ we can consider the sectional curvature on the singular set, but when $n=2$ the singular set is $1$-dimensional, so the sectional curvature cannot be defined; one defines the singular curvature instead. We do not define singular curvature for fronts when $n\ge 3$. Without loss of generality, we may assume that the singular submanifold of $f$ is the $(u_1,\dots,u_{n-1})$-plane and that $\partial_n:=\partial/\partial {u_n}$ is the null direction. To prove the first assertion, it is sufficient to show that $h(X,X)=0$ for an arbitrary fixed tangent vector $X$ of the singular submanifold. By changing coordinates if necessary, we may assume that $X=\partial_1=\partial/\partial u_1$.
The sectional curvature $K(\partial_1\wedge \partial_n)$ with respect to the $2$-plane spanned by $\{\partial_1,\partial_{n}\}$ is given by $$K(\partial_1\wedge \partial_n)= \frac{h_{11}h_{nn}-(h_{1n})^2}{g_{11}g_{nn}-(g_{1n})^2}\qquad \bigl( g_{ij}=g_0(\partial_i,\partial_j),~ h_{ij}=h(\partial_i,\partial_j) \bigr),$$ where $h$ is the second fundamental form. By the same reasoning as in the proof of Theorem \[thm:gaussian-singular\], the boundedness of $K(\partial_1\wedge \partial_n)$ implies $$0=\left. \left (h_{11}h_{nn}-(h_{1n})^2\right)_{u_n} \right |_{u_n=0}\!\!\! =h_{11} \left.\frac{\partial h_{nn}}{\partial u_n}\right |_{u_n=0} \!\!\! =h_{11} \left. g_0(D_{u_n}f_{u_n},\nu_{u_n})\right |_{u_n=0}.$$ To show $h_{11}=h(X,X)=0$, it is sufficient to show that $g_0(D_{u_n}f_{u_n},\nu_{u_n})$ does not vanish when $u_n=0$. Since $f$ is a front with non-degenerate singularities, we have $$0\ne \lambda_{u_n}=\det(f_{u_1},\dots,f_{u_{n-1}},D_{u_n}f_{u_n},\nu),$$ which implies that $f_{u_1},\dots,f_{u_{n-1}},D_{u_n}f_{u_n}$, and $\nu$ are linearly independent when $u_n=0$, and then $\nu_{u_n}$ can be written as a linear combination of them. Since $f$ is a front, $\nu_{u_n}\ne 0$ holds when $u_n=0$, and we have $2g_0(\nu_{u_n},\nu)=g_0(\nu,\nu)_{u_n}=0$ and $$g_0(\nu_{u_n},f_{u_j})=-g_0(\nu,D_{u_n}f_{u_j})= g_0(\nu_{u_j},f_{u_n})=0 \qquad (j=1,\dots,n-1).$$ Thus, if $g_0(D_{u_n}f_{u_n},\nu_{u_n})$ vanished at $u_n=0$, then $\nu_{u_n}$ would be orthogonal to every term of such a linear combination, so $g_0(\nu_{u_n},\nu_{u_n})=0$, contradicting $\nu_{u_n}\ne 0$. Hence $g_0(D_{u_n}f_{u_n},\nu_{u_n})$ never vanishes at $u_n=0$. Next we show the non-negativity of the sectional curvature $K_S$ of the singular submanifold. It is sufficient to show $K_S(\partial_1\wedge \partial_2)\ge 0$ at $u_n=0$. Since the sectional curvature $K$ is non-negative, we have $$\label{eq:h1} \left. \frac{\partial^2}{(\partial u_n)^2} (h_{11}h_{22}-(h_{12})^2)\right |_{u_n=0}\ge 0,$$ by the same argument as in the proof of Theorem \[thm:gaussian-singular\].
Since the restriction of $f$ to the singular submanifold is an immersion, the Gauss equation yields $$K_S(\partial_1\wedge \partial_2) =\frac{g_0(\alpha_{11},\alpha_{22})-g_0(\alpha_{12},\alpha_{12})} {g_{11}g_{22}-(g_{12})^2},$$ where $\alpha$ is the (vector-valued) second fundamental form of the singular submanifold in ${\boldsymbol{R}}^{n+1}$ and $\alpha_{ij}=\alpha(f_{u_i},f_{u_j})$. On the other hand, since the second fundamental form $h$ of $f$ vanishes, $g_0(\nu_{u_n},f_{u_j})=0$ holds for $j=1,\dots,n$; hence $\nu$ and $\nu_{u_n}$ are linearly independent vectors perpendicular to the singular submanifold. Moreover, we have $$\begin{aligned} \alpha_{ij} &=g_0(\alpha_{ij},\nu)\nu+ \frac{1}{|\nu_{u_n}|^2}g_0(\alpha_{ij},\nu_{u_n})\nu_{u_n} \\ &=h_{ij}\nu+ \frac{1}{|\nu_{u_n}|^2} g_0(\alpha_{ij},\nu_{u_n})\nu_{u_n} =\frac{1}{|\nu_{u_n}|^2}(h_{ij})_{u_n}\nu_{u_n}, \end{aligned}$$ since the second fundamental form $h$ of $f$ vanishes and $$\begin{aligned} g_0(\alpha_{ij},\nu_{u_n})= g_0(D_{u_j}f_{u_i},\nu_{u_n}) =(h_{ij})_{u_n}-g_0(D_{u_i}D_{u_j}f_{u_n},\nu)=(h_{ij})_{u_n} \end{aligned}$$ for $i,j=1,\dots,n-1$. Thus we have $$K_S(\partial_1\wedge \partial_2) =\frac{1} {2|\nu_{u_n}|^2\bigl(g_{11}g_{22}-(g_{12})^2\bigr)} \left. \frac{\partial^2}{(\partial u_n)^2} (h_{11}h_{22}-(h_{12})^2)\right |_{u_n=0}\ge 0.$$ \[ex:hyper\] We set $$f(u,v,w):=(v,w,u^2+a v^2+b w^2,u^3+c u^2):{\boldsymbol{R}}^3\to {\boldsymbol{R}}^4,$$ which gives a front with the unit normal vector $$\begin{gathered} \nu = \frac{1}{\delta}\bigl(2av(2c+3u),2bw(2c+3u),-2c-3u,2\bigr),\\ \qquad\text{where}\quad \delta=\sqrt{4+(3u+2c)^2(1+4a^2v^2+4b^2w^2)}. \end{gathered}$$ The singular set is the $vw$-plane and the $u$-direction is the null direction. Then all singular points are cuspidal edges. The second fundamental form is given by $h = \delta^{-1}\{6u\,du^2-2(3u+2c)(a\,dv^2+b\,dw^2)\}$, which vanishes on the singular set if and only if $ac=bc=0$.
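The stated unit normal and second fundamental form of this example can be checked symbolically; a minimal sympy sketch, using only the data displayed above:

```python
# Symbolic check of the hypersurface example: the stated nu is a unit
# normal of f(u,v,w) = (v, w, u^2 + a v^2 + b w^2, u^3 + c u^2), and the
# second fundamental form is diagonal with the stated entries.
import sympy as sp

u, v, w, a, b, c = sp.symbols('u v w a b c', real=True)
f = sp.Matrix([v, w, u**2 + a*v**2 + b*w**2, u**3 + c*u**2])
delta = sp.sqrt(4 + (3*u + 2*c)**2 * (1 + 4*a**2*v**2 + 4*b**2*w**2))
nu = sp.Matrix([2*a*v*(2*c + 3*u), 2*b*w*(2*c + 3*u), -2*c - 3*u, 2]) / delta

X = [u, v, w]
df = [f.diff(x) for x in X]
assert all(sp.simplify(d.dot(nu)) == 0 for d in df)   # nu is normal
assert sp.simplify(nu.dot(nu)) == 1                   # nu is unit

# second fundamental form h_ij = g0(f_{x_i x_j}, nu)
h = sp.Matrix(3, 3, lambda i, j: sp.simplify(f.diff(X[i], X[j]).dot(nu)))
assert sp.simplify(h[0, 0] - 6*u/delta) == 0
assert sp.simplify(h[1, 1] + 2*a*(3*u + 2*c)/delta) == 0
assert sp.simplify(h[2, 2] + 2*b*(3*u + 2*c)/delta) == 0
assert h[0, 1] == 0 and h[0, 2] == 0 and h[1, 2] == 0
```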
On the other hand, the sectional curvatures are computed as $$\begin{aligned} K(\partial_u\wedge\partial_v) &= \frac{12a(3u+2c)}{u\delta^2\bigl(4+(3u+2c)^2(1+4a^2v^2)\bigr)}, \\ K(\partial_u\wedge\partial_w) &= \frac{12b(3u+2c)}{u\delta^2\bigl(4+(3u+2c)^2(1+4b^2w^2)\bigr)}, \end{aligned}$$ which are bounded in a neighborhood of the singular set if and only if $ac=bc=0$. If $ac=bc=0$, $K\geq 0$ if and only if $a\geq 0$ and $b\geq 0$, which implies $K_S=4ab(3u+2c)^2/(\delta^2|\partial_v\wedge\partial_w|^2)\geq 0$. Intrinsic formulation {#sec:intrinsic} ===================== The Gauss-Bonnet theorem is intrinsic in nature, and it is quite natural to formulate the singularities of wave fronts intrinsically. We can characterize the limiting tangent bundles of the fronts and can give the following abstract definition: Let $M^2$ be a $2$-manifold. An orientable vector bundle ${\mathcal{E}}$ of rank $2$ with a metric ${\left\langle{~},{~}\right\rangle}$ and a metric connection $D$ is called [*an abstract limiting tangent bundle*]{} or [*a coherent tangent bundle*]{} if there is a bundle homomorphism $$\psi\colon{}TM^2\longrightarrow {\mathcal{E}}$$ such that $$D_{X}\psi(Y)-D_{Y}\psi(X)=\psi([X,Y]) \qquad (X,Y\in TM^2).$$ In this setting, the pull-back of the metric $ds^2:=\psi^*{\left\langle{~},{~}\right\rangle}$ is called [*the first fundamental form*]{} of ${\mathcal{E}}$. A point $p\in M^2$ is called a [*singular point*]{} if the first fundamental form is not positive definite. Since ${\mathcal{E}}$ is orientable, there exists a skew-symmetric bilinear form $\mu_p\colon{}{\mathcal{E}}_p\times{\mathcal{E}}_p\to {\boldsymbol{R}}$ for each $p\in M^2$, where ${\mathcal{E}}_p$ is the fiber of ${\mathcal{E}}$ at $p$, such that $\mu(e_1,e_2)=\pm 1$ for any orthonormal frame $\{e_1,e_2\}$ on ${\mathcal{E}}$. A frame $\{e_1,e_2\}$ is called positive if $\mu(e_1,e_2)=1$.
A singular point $p$ is called [*non-degenerate*]{} if the derivative $d\lambda$ of the function $$\label{eq:intlambda} \lambda:=\mu\left( \psi\left(\frac{\partial}{\partial u}\right), \psi\left(\frac{\partial}{\partial v}\right) \right)$$ does not vanish at $p$, where $(U;u,v)$ is a local coordinate system of $M^2$ at $p$. On a neighborhood of a non-degenerate singular point, the singular set consists of a regular curve, called the [*singular curve*]{}. The tangential direction of the singular curve is called the [*singular direction*]{}, and the direction of the kernel of $\psi$ is called the [*null direction*]{}. Then we can define [*intrinsic cuspidal edges*]{} and [*intrinsic swallowtails*]{} according to Fact \[fact:intrinsic-criterion\]. For a given singular curve $\gamma(t)$ consisting of intrinsic cuspidal edge points, the singular curvature function is defined by $$\kappa_s(t):={\operatorname{sgn}}\bigl(\lambda(\eta)\bigr)\hat \kappa_g(t),$$ where $\hat\kappa_g(t):={\left\langle{D_t\psi(\gamma'(t))},{n(t)}\right\rangle}$ is the [*limiting geodesic curvature*]{}, $n(t)\in {\mathcal{E}}_{\gamma(t)}$ is a unit vector such that $\mu\bigl(\psi(\gamma'(t)),n(t)\bigr)=1$, and $\eta(t)$ is the null direction such that $\bigl(\gamma'(t),\eta(t)\bigr)$ is a positive frame on $M^2$. Then Theorem \[thm:invariance-singular-curvature\] and Proposition \[prop:intrinsic-singular-curvature\] hold. Let $(U;e_1,e_2)$ be an orthonormal frame field of ${\mathcal{E}}$ such that $\mu(e_1,e_2)=1$. Then there exists a unique $1$-form $\alpha$ on $U$ such that $$D_Xe_1=-\alpha(X)e_2, \qquad D_Xe_2=\alpha(X)e_1\qquad (X\in TM^2),$$ which is called the [*connection form*]{}. Moreover, the exterior derivative $d\alpha$ does not depend on the choice of a positive frame $(U;e_1,e_2)$ and gives a (globally defined) $2$-form on $M^2$. 
When $M^2$ is compact, the integration $$\label{eq:Euler} \chi_{{\mathcal{E}}}^{}:=\frac{1}{2\pi}\int_{M^2} d\alpha$$ is an integer called the [*Euler number*]{} of ${\mathcal{E}}$. Let $(U;e_1,e_2)$ be a positive orthonormal frame field of ${\mathcal{E}}$ and $\gamma(s)$ a curve in $U(\subset M^2)$ such that ${\left\langle{\psi(\gamma'(s))},{\psi(\gamma'(s))}\right\rangle}=1$. Let ${\varphi}(s)$ be the angle of $\psi(\gamma'(s))$ from $e_1(\gamma(s))$. Then we have $$\label{eq:GB-key} \hat\kappa_g\,ds=d{\varphi}-\alpha.$$ Let $\Delta$ be a triangle with interior angles $A,B,C$. In the interior of $\Delta$, we suppose that there are no singular points and that $\psi^*d\alpha$ is compatible with respect to the orientation of $M^2$. We give an orientation to $\partial \Delta$ such that the conormal vector points into the domain $\Delta$. By using the same argument as in the classical proof of the Gauss-Bonnet Theorem, we get the formulas in the introduction intrinsically. This intrinsic formulation is meaningful if we consider the following examples: A map $f\colon{}M^2\to {\boldsymbol{R}}^3$ is called a [*frontal*]{} if there exists a unit normal vector field $\nu$ such that $f_*X$ is perpendicular to $\nu$ for all $X\in TM^2$. A frontal is a front if $(f,\nu)\colon{}M^2\to {\boldsymbol{R}}^3\times S^2$ is an immersion. A cuspidal cross cap is a singular point locally diffeomorphic to the map $(u,v)\mapsto (u,v^2,uv^3)$ and is a frontal but not a front. In [@FSUY], a useful criterion for cuspidal cross caps is given. Though a cuspidal cross cap is not a cuspidal edge, the limiting tangent bundle is well defined and the singular point is an intrinsic cuspidal edge. In particular, our Gauss-Bonnet formulas hold for a frontal that admits only cuspidal edges, swallowtails and cuspidal cross caps, and degenerate peaks such as a double swallowtail.
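In the classical regular case ${\mathcal{E}}=TM^2$ with the Levi-Civita connection, $d\alpha$ is the Gaussian curvature times the area form, and \[eq:Euler\] reduces to the usual Gauss-Bonnet count of the Euler characteristic. A quick numerical sanity check of this (Python; an illustration under the standard identification, with the round unit sphere, $K=1$):

```python
import math

# (1/2pi) \int_{S^2} K dA for the round unit sphere, K = 1:
# the phi-integral contributes 2*pi, the theta-integral is done by the midpoint rule.
N = 400
total = 0.0
for i in range(N):
    theta = (i + 0.5) * math.pi / N            # polar angle
    total += math.sin(theta) * (math.pi / N) * (2 * math.pi)

euler = total / (2 * math.pi)
assert abs(euler - 2.0) < 1e-3                 # chi(S^2) = 2
```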
A smooth map $f\colon{}M^2\to {\boldsymbol{R}}^n$ ($n>3$) defined on a $2$-manifold $M^2$ is called [*an admissible map*]{} if there exists a map $\nu\colon{}M^2\to G_2({\boldsymbol{R}}^n)$ into the oriented $2$-plane Grassmann manifold $G_2({\boldsymbol{R}}^n)$, such that it coincides with the Gauss map of $f$ on regular points of $f$. For an admissible map, the limiting tangent bundle is canonically defined and we can apply our intrinsic formulation to it. A realization problem for abstract limiting tangent bundles is investigated in [@SUY2]. The realization of first fundamental forms with singularities has been treated in [@K2]. V. I. Arnol’d, [Topological Invariants of Plane Curves and Caustics]{}, University Lecture Series [**5**]{}, Amer. Math. Soc. (1991). V. I. Arnol’d, S. M. Gusein-Zade and A. N. Varchenko, [Singularities of differentiable maps, Vol. $1$]{}, Monographs in Math. [**82**]{}, Birkhäuser (1985). J. W. Bruce and P. J. Giblin, [Curves and singularities]{}, Cambridge University Press (1984). S. Fujimori, [*Spacelike CMC $1$ surfaces with elliptic ends in de Sitter $3$-Space,* ]{} Hokkaido Math. J. [**35**]{} (2006), 289–320. S. Fujimori, K. Saji, M. Umehara and K. Yamada, [*Singularities of maximal surfaces*]{}, preprint, math.DG/0510366. M. Golubitsky and V. Guillemin, [Stable Mappings and Their Singularities]{}, Graduate Texts in Math. [**14**]{}, Springer-Verlag (1973). G. Ishikawa and Y. Machida, [*Singularities of improper affine spheres and pseudo-spherical surfaces*]{}, preprint, math.DG/0502154, to appear in International Journal of Mathematics. M. Kokubu, W. Rossman, K. Saji, M. Umehara and K. Yamada, [*Singularities of flat fronts in hyperbolic $3$-space*]{}, Pacific J. Math. [**221**]{} (2005), 303–351. M. Kokubu, M. Umehara and K.
Yamada, [*An elementary proof of Small’s formula for null curves in [${\operatorname{PSL}}(2,{\boldsymbol{C}})$]{} and an analogue for Legendrian curves in [${\operatorname{PSL}}(2,{\boldsymbol{C}})$]{}*]{}, Osaka J. Math. [**40**]{} (2003), 697–715. M. Kokubu, M. Umehara and K. Yamada, [*Flat fronts in hyperbolic $3$-space*]{}, Pacific J. Math. [**216**]{} (2004), 149–175. M. Kossowski, [*The Boy-Gauss-Bonnet theorems for $C^{\infty}$-singular surfaces with limiting tangent bundle*]{}, Annals of Global Analysis and Geometry [**21**]{} (2002), 19–29. M. Kossowski, [*Realizing a singular first fundamental form as a nonimmersed surface in Euclidean $3$-space*]{}, J. Geom. [**81**]{} (2004), 101–113. R. Langevin, G. Levitt and H. Rosenberg, [*Classes d’homotopie de surfaces avec rebroussements et queues d’aronde dans $\mathbb R^3$*]{}, Canad. J. Math. [**47**]{} (1995), 544–572. S. Lee and S.-D. Yang, [*A spinor representation for spacelike surfaces of constant mean curvature $-1$ in de Sitter three-space*]{}, Osaka J. Math. [**43**]{} (2006), 641–663. Y. Machigashira, [*The Gaussian curvature of Alexandrov spaces*]{}, J. Math. Soc. Japan [**50**]{} (1998), 859–878. A. Martínez, [*Improper Affine maps*]{}, Math. Z. [**249**]{} (2005), 755–766. K. Shiohama, [*Total curvatures and minimal area of complete open surfaces*]{}, Proc. Amer. Math. Soc. [**94**]{} (1985), 310–316. K. Saji, M. Umehara and K. Yamada, [*Behavior of cuspidal edges at corank one singular points and the realization of intrinsic wave fronts*]{}, preprint. M. Umehara, [*Geometry of curves and surfaces with singularities*]{}, in [Mathematics in the 21st century—unscaled peaks of geometry]{}, edited by R. Miyaoka and M. Kotani, Nihon-Hyoronsha, 2004 (in Japanese). M. Umehara and K. Yamada, [*Maximal surfaces with singularities in Minkowski space*]{}, Hokkaido Math. J. [**35**]{} (2006), 13–40.
--- abstract: 'Recent precise measurements of CP violation parameters in kaon decays at the NA48 experiment are presented: the indirect CPV parameter $|\eta_{+-}|$, and the charge asymmetries in $K^\pm\to3\pi$ decays.' address: 'School of Physics and Astronomy, University of Birmingham, B15 2TT, United Kingdom' author: - Evgueni Goudzovski title: | Measurements of CP violation parameters\ at the NA48 experiment at CERN --- CP violation, kaon decays Introduction {#intro} ============ The CERN programme in experimental kaon physics of the last decade has been carried out by the NA48 series of experiments. NA48 has accomplished several physics subprogrammes based on data taken with $K_L$, $K_S$, $K^\pm$ and neutral hyperon beams in 1997–2004. The principal components of the experimental setup (modified and upgraded in the course of operation) are a beam line followed by a vacuum decay volume, a magnetic spectrometer consisting of four drift chambers, a trigger scintillator hodoscope, a liquid krypton electromagnetic calorimeter, a hadron calorimeter, and a muon detector [@fa07]. The present paper reports a number of recent precise measurements of CP violation (CPV) parameters: 1) the indirect CPV parameter $|\eta_{+-}|$ with $K_L\to\pi^+\pi^-$ decays; 2) the direct CP violating charge asymmetries of the Dalitz plot slopes $A_g$ in $K^\pm\to\pi^\pm\pi^+\pi^-$ and $K^\pm\to\pi^\pm\pi^0\pi^0$ decays. Measurement of the indirect CP violation parameter $|\eta_{+-}|$ ================================================================ \[eta\] The interest in a precise measurement of the indirect CPV parameter $|\eta_{+-}|=A(K_L\to\pi^+\pi^-)/A(K_S\to\pi^+\pi^-)$ stems, in particular, from the fact that its recent measurements by the KTeV and KLOE experiments, published in 2004 and 2006, respectively, differ by $\sim\!5\%$, or more than four standard deviations, from the previous world average.
The NA48 measurement of $|\eta_{+-}|$ [@la07] is based on a data set taken during two days of dedicated running in 1999. The directly measured quantity is the ratio of the decay rates $R=\Gamma(K_L\to\pi^+\pi^-)/\Gamma(K_L\to\pi e\nu)$; these decays are characterized by similar signatures involving two reconstructed tracks of charged particles. Then $|\eta_{+-}|$ is computed as $$\label{eta-main} |\eta_{+-}| = \sqrt{\frac{\Gamma(K_L\to\pi^+\pi^-)}{\Gamma(K_S\to\pi^+\pi^-)}}= \sqrt{\frac{\textrm{BR}(K_L\to\pi^+\pi^-)}{\textrm{BR}(K_S\to\pi^+\pi^-)} \cdot\frac{\tau_{KS}}{\tau_{KL}}}.$$ In this approach the $K_L$ and $K_S$ lifetimes $\tau_{KL}$ and $\tau_{KS}$, and the branching fractions $\textrm{BR}(K_L\to\pi e\nu)$ and $\textrm{BR}(K_S\to\pi^+\pi^-)$ are external inputs taken from the best single measurements. The data sample contains about $80\times10^6$ 2-track triggers. Event selection is similar for the $K_L\to\pi^+\pi^-$ and $K_L\to\pi e\nu$ modes. A crucial difference is electron vs pion identification based on the ratio of particle energy deposition in the EM calorimeter to its momentum measured by the spectrometer (expected to be close to 1 for electrons). Particle identification efficiencies were directly measured and corrected for. Samples of $47\times 10^3$ $K_L\to\pi^+\pi^-$ and $5.0\times 10^6$ $K_L\to\pi e\nu$ candidates were selected, with about $0.5\%$ background contamination in each. Acceptance corrections and background subtraction were performed by Monte Carlo simulation. Trigger efficiencies were measured directly with the data and corrected for. The most relevant systematic uncertainties come from the precision of the simulation of the kaon momentum spectrum, of the radiative corrections, and of the trigger efficiency measurement.
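Equation \[eta-main\] can be illustrated numerically. A minimal sketch in Python, using PDG-like values for the external inputs (illustrative figures, not the exact inputs of the NA48 analysis), reproduces the order and scale of the published result:

```python
import math

# Illustration of |eta_{+-}| = sqrt( BR(K_L->pi+pi-)/BR(K_S->pi+pi-) * tau_S/tau_L ).
# All input values below are illustrative PDG-like numbers, not the analysis inputs.
BR_KL_pipi = 1.941e-3      # BR(K_L -> pi+ pi-)
BR_KS_pipi = 0.692         # BR(K_S -> pi+ pi-)
tau_KS = 0.8954e-10        # K_S lifetime [s]
tau_KL = 5.116e-8          # K_L lifetime [s]

eta = math.sqrt(BR_KL_pipi / BR_KS_pipi * tau_KS / tau_KL)
assert 2.20e-3 < eta < 2.23e-3     # close to the published 2.223e-3
```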
The final result is $$\Gamma(K_L\to\pi^+\pi^-)/\Gamma(K_L\to\pi e\nu) = (4.835\pm0.022_{stat.}\pm0.016_{syst.})\times 10^{-3}.$$ This leads, subtracting the $K_L\to\pi^+\pi^-\gamma$ direct emission contribution, but retaining the inner bremsstrahlung contribution, to $$\textrm{BR}(K_L\to\pi^+\pi^-) = (1.941\pm0.019)\times 10^{-3}.$$ Finally, the CP violation parameter is computed according to (\[eta-main\]) to be $$|\eta_{+-}| = (2.223\pm0.012)\times 10^{-3}.$$ The result is in agreement with the recent KLOE and KTeV measurements, while it contradicts the 2004 PDG average. The latter disagreement is understood to be due to the improved treatment of the radiative corrections in the recent analyses. Measurement of the direct CPV parameter $A_g$ in $K^\pm\to3\pi$ decays ====================================================================== $K^\pm\to\pi^\pm\pi^+\pi^-$ and $K^\pm\to\pi^\pm\pi^0\pi^0$ decays are among the most promising processes in kaon physics to search for CPV phenomena. The decay density is parameterized (up to radiative and $\pi\pi$ rescattering effects studied separately [@slopes; @cusp]) by a polynomial expansion $$d^2\Gamma/dudv\sim 1+gu+hu^2+kv^2, \label{slopes}$$ where $g$, $h$, $k$ are the so-called linear and quadratic Dalitz plot slope parameters ($|h|,|k|\ll |g|$), and the two Lorentz invariant kinematic variables $u$ and $v$ are defined as $$u=\frac{s_3-s_0}{m_\pi^2},~~v=\frac{s_2-s_1}{m_\pi^2},~~ s_i=(P_K-P_i)^2,~i=1,2,3;~~s_0=\frac{s_1+s_2+s_3}{3}. \label{uvdef}$$ Here $m_\pi$ is the charged pion mass, $P_K$ and $P_i$ are the kaon and pion four-momenta, the indices $i=1,2$ correspond to the two pions of the same electrical charge, and the index $i=3$ to the pion of different charge.
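For a kaon decaying at rest, $s_i = (P_K - P_i)^2 = m_K^2 + m_\pi^2 - 2 m_K E_i$ depends only on the pion energies, so $u$ and $v$ can be evaluated from any energy partition $E_1+E_2+E_3=m_K$. A short Python sketch (masses in GeV; the all-charged mode with three equal pion masses is assumed for simplicity):

```python
# Dalitz variables u, v for K -> 3pi with the kaon at rest, all-charged mode.
m_K, m_pi = 0.493677, 0.139570

def dalitz_uv(E1, E2, E3):
    # s_i = m_K^2 + m_pi^2 - 2 m_K E_i (kaon at rest)
    s = [m_K**2 + m_pi**2 - 2 * m_K * E for E in (E1, E2, E3)]
    s0 = sum(s) / 3
    u = (s[2] - s0) / m_pi**2       # s_3 is the odd-charge pion's invariant
    v = (s[1] - s[0]) / m_pi**2
    return u, v, s

# identity: s_1 + s_2 + s_3 = m_K^2 + 3 m_pi^2
u, v, s = dalitz_uv(0.17, 0.16, m_K - 0.17 - 0.16)
assert abs(sum(s) - (m_K**2 + 3 * m_pi**2)) < 1e-12

# fully symmetric configuration E_i = m_K/3: the Dalitz plot centre, u = v = 0
u, v, s = dalitz_uv(m_K/3, m_K/3, m_K/3)
assert abs(u) < 1e-12 and abs(v) < 1e-12
```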
A non-zero difference $\Delta g$ between the slope parameters $g^+$ and $g^-$ describing the decays of $K^+$ and $K^-$, respectively, is a manifestation of direct CP violation expressed by the corresponding slope asymmetry $$A_g = (g^+ - g^-)/(g^+ + g^-) \approx \Delta g/(2g). \label{agdef}$$ The above slope asymmetry is expected to be strongly enhanced with respect to the asymmetry of integrated decay rates. A recent full next-to-leading order ChPT computation [@ga03] predicts $A_g$ to be of the order of $10^{-5}$ within the SM. Calculations involving processes beyond the SM [@sh98; @ga00] allow a wider range of $A_g$, including substantial enhancements up to a few $10^{-4}$. A measurement of the quantity $A_g$ was performed with a record data sample collected in 2003–04 with simultaneous $K^+$ and $K^-$ beams [@ba07]. The measurement method is based on the study of ratios of $u$ spectra of $K^+$ and $K^-$ decays, and exploits cancellations of major systematic effects due to the simultaneous collection of $K^+$ and $K^-$ decays, and regular inversions of magnetic fields in the beam line and the spectrometric magnet, which allows achieving $\sim10^{-4}$ precision. The event samples are practically background-free, and contain $3.11\times 10^9$ $K^\pm\to\pi^\pm\pi^+\pi^-$ candidates, and $9.13\times 10^7$ $K^\pm\to\pi^\pm\pi^0\pi^0$ candidates (the $K^+$/$K^-$ flux ratio, on which however the results do not depend, is 1.8). The CP violating charge asymmetries of the linear slope parameter of the Dalitz plot of the $K^\pm\to\pi^\pm\pi^+\pi^-$ and $K^\pm\to\pi^\pm\pi^0\pi^0$ decays were measured to be $$\begin{array}{rcrllllcrcl} A_g^c &=& (-1.5 &\pm& 1.5_{stat.} &\pm& 1.6_{syst.})\times 10^{-4} &=& (-1.5&\pm&2.2)\times 10^{-4},\\ A_g^n &=& (1.8 &\pm& 1.7_{stat.}&\pm& 0.6_{syst.})\times 10^{-4}&=& (1.8&\pm&1.8)\times 10^{-4}. \end{array}$$ The achieved precision is more than an order of magnitude better than that of the previous measurements.
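The total uncertainties quoted above follow from combining the statistical and systematic components in quadrature, which can be checked directly:

```python
import math

# Quadrature combination of statistical and systematic uncertainties
# for the quoted A_g^c and A_g^n results (in units of 10^-4).
def combine(stat, syst):
    return math.sqrt(stat**2 + syst**2)

assert round(combine(1.5, 1.6), 1) == 2.2   # A_g^c: (-1.5 +- 2.2) x 10^-4
assert round(combine(1.7, 0.6), 1) == 1.8   # A_g^n: ( 1.8 +- 1.8) x 10^-4
```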
The results do not show evidence of large enhancements due to non-SM physics, and can be used to constrain certain SM extensions predicting enhanced CPV effects. Summary ======= A number of recent measurements of CPV parameters in kaon decays by the NA48 collaboration at CERN are presented. The achieved precisions are similar to or better than the best previous ones. [00]{} V. Fanti [*et al.*]{} (NA48), . A. Lai [*et al.*]{} (NA48), Phys. Lett. [**B645**]{} (2007) 26. E. Gámiz, J. Prades, I. Scimemi, JHEP [**10**]{} (2003) 042. E.P. Shabalin, ITEP preprint [**8-98**]{} (1998). G. D’Ambrosio, G. Isidori, G. Martinelli, Phys. Lett. [**B480**]{} (2000) 164. J.R. Batley [*et al.*]{} (NA48/2), Phys. Lett. [**B649**]{} (2007) 349. J.R. Batley [*et al.*]{} (NA48/2), Phys. Lett. [**B633**]{} (2006) 173. J.R. Batley [*et al.*]{} (NA48/2), Eur. Phys. J. [**C52**]{} (2007) 875.
--- abstract: 'Given a positive integer $n$ and a positive semidefinite matrix $A = (A_{ij}) \in {\mathbb{R}}^{m \times m}$, the positive semidefinite Grothendieck problem with rank-$n$-constraint $(\operatorname{SDP}_n)$ is $$\text{maximize } \sum_{i=1}^m \sum_{j=1}^m A_{ij} \; x_i \cdot x_j,\qquad\text{where } x_1, \ldots, x_m \in S^{n-1}.$$ In this paper we design a randomized polynomial-time approximation algorithm for $\operatorname{SDP}_n$ achieving an approximation ratio of $$\gamma(n) = \frac{2}{n}\left(\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}\right)^2 = 1 - \Theta(1/n).$$ We show that under the assumption of the unique games conjecture the achieved approximation ratio is optimal: There is no polynomial-time algorithm which approximates $\operatorname{SDP}_n$ with a ratio greater than $\gamma(n)$. We improve the approximation ratio of the best known polynomial-time algorithm for $\operatorname{SDP}_1$ from $2/\pi$ to $2/(\pi\gamma(m)) = 2/\pi + \Theta(1/m)$, and we show a tighter approximation ratio for $\operatorname{SDP}_n$ when $A$ is the Laplacian matrix of a graph with nonnegative edge weights.' author: - 'Jop Briët[^1], Fernando Mário de Oliveira Filho[^2] and Frank Vallentin[^3]' date: 'February 8, 2010' title: The positive semidefinite Grothendieck problem with rank constraint --- Introduction ============ Given a positive integer $n$ and a positive semidefinite matrix $A = (A_{ij}) \in {\mathbb{R}}^{m \times m}$, the [*positive semidefinite Grothendieck problem with rank-$n$-constraint*]{} is defined as $$\operatorname{SDP}_n(A) = \max\biggl\{\sum_{i=1}^m \sum_{j=1}^m A_{ij} \; x_i \cdot x_j : x_1, \ldots, x_m \in S^{n-1}\biggr\},$$ where $S^{n-1} = \{x \in {\mathbb{R}}^n : x \cdot x = 1\}$ is the unit sphere. Note that the inner product matrix of the vectors $x_1, \ldots, x_m$ has rank at most $n$. This problem was introduced by Briët, Buhrman, and Toner [@BrietBuhrmanToner] in the context of quantum nonlocality where they applied it to nonlocal XOR games.
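The ratio $\gamma(n)$ from the abstract is easy to evaluate numerically; a quick check in Python (using the log-gamma function to avoid overflow for large $n$) against the closed forms $2/\pi$, $\pi/4$ and $8/(3\pi)$ for $n = 1, 2, 3$, and against the asymptotics $\gamma(n) = 1 - \Theta(1/n)$:

```python
import math

# gamma(n) = (2/n) * (Gamma((n+1)/2) / Gamma(n/2))^2, computed via lgamma
# so that large arguments do not overflow.
def gamma_ratio(n):
    return 2.0 / n * math.exp(2 * (math.lgamma((n + 1) / 2) - math.lgamma(n / 2)))

assert abs(gamma_ratio(1) - 2 / math.pi) < 1e-12       # gamma(1) = 2/pi
assert abs(gamma_ratio(2) - math.pi / 4) < 1e-12       # gamma(2) = pi/4
assert abs(gamma_ratio(3) - 8 / (3 * math.pi)) < 1e-12 # gamma(3) = 8/(3 pi)
assert gamma_ratio(1000) > 0.999                       # approaches 1 from below
```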
The case $n = 1$ is the classical positive semidefinite Grothendieck problem where $x_1, \ldots, x_m \in \{-1,+1\}$. It was introduced by Grothendieck [@Grothendieck] in the study of norms of tensor products of Banach spaces. It is an $\mathrm{NP}$-hard problem: If $A$ is the Laplacian matrix of a graph then $\operatorname{SDP}_1(A)$ coincides with the value of a maximum cut of the graph. The maximum cut problem (MAX CUT) is one of Karp’s 21 $\mathrm{NP}$-complete problems. Over the last years, there has been a lot of work on algorithmic applications, interpretations, and generalizations of the Grothendieck problem and the companion Grothendieck inequalities. For instance, Nesterov [@Nesterov] showed that it has applications to finding and analyzing semidefinite relaxations of nonconvex quadratic optimization problems. Ben-Tal and Nemirovski [@BenTalNemirovski] showed that it has applications to quadratic Lyapunov stability synthesis in system and control theory. Alon and Naor [@AlonNaor] showed that it has applications to constructing Szemerédi partitions of graphs and to estimating the cut norm of matrices. Linial and Shraibman [@LinialSchraibman] showed that it has applications to finding lower bounds in communication complexity. Khot and Naor [@KhotNaor], [@KhotNaor2] showed that it has applications to kernel clustering. For other applications, see also Alon, Makarychev, Makarychev, and Naor [@AlonMakarychevMakarychevNaor], and Raghavendra and Steurer [@RaghavendraSteurer]. 
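The connection between $\operatorname{SDP}_1$ and MAX CUT can be made concrete on a toy instance. For the graph Laplacian $L$ one has $x^{\sf T} L x = \sum_{\text{edges } (i,j)} w_{ij}(x_i - x_j)^2$, so with this common normalization $\operatorname{SDP}_1(L)$ equals four times the maximum cut value (the exact factor depends on the Laplacian convention used). A brute-force check in Python for the triangle $K_3$:

```python
from itertools import product

# Laplacian of the triangle K_3 with unit edge weights
L = [[2, -1, -1],
     [-1, 2, -1],
     [-1, -1, 2]]

def quad(L, x):
    # quadratic form x^T L x
    return sum(L[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

# SDP_1(L): maximize over all +-1 labelings
sdp1 = max(quad(L, x) for x in product([-1, 1], repeat=3))
assert sdp1 == 8    # max cut of K_3 cuts 2 edges, and 4 * 2 = 8
```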
One can reformulate the positive semidefinite Grothendieck problem with rank-$n$-constraint as a semidefinite program with an additional rank constraint: $$\begin{split} \text{maximize } & \sum_{i = 1}^m \sum_{j = 1}^m A_{ij} X_{ij}\\ \text{subject to } & \text{$X = (X_{ij}) \in {\mathbb{R}}^{m \times m}$ is positive semidefinite,}\\ & X_{ii} = 1,\quad\text{for $i = 1, \ldots, m$,}\\ & \text{$X$ has rank at most $n$.} \end{split}$$ When $n$ is a constant that does not depend on the matrix size $m$ there is no polynomial-time algorithm known which solves $\operatorname{SDP}_n$. It is also not known if the problem $\operatorname{SDP}_n$ is $\mathrm{NP}$-hard when $n \geq 2$. On the other hand the [*semidefinite relaxation*]{} of $\operatorname{SDP}_n(A)$ defined by $$\operatorname{SDP}_{\infty}(A) = \max\biggl\{\sum_{i = 1}^m \sum_{j = 1}^m A_{ij} \; u_i \cdot u_j : u_1, \ldots, u_m \in S^{\infty}\biggr\}$$ can be computed in polynomial time to any desired precision by using, e.g., the ellipsoid method. Here $S^{\infty}$ denotes the unit sphere of the Hilbert space $l^2({\mathbb{R}})$ of square summable sequences, which contains ${\mathbb{R}}^n$ as the subspace of the first $n$ components. Clearly, it would suffice to use unit vectors in ${\mathbb{R}}^m$ for solving $\operatorname{SDP}_{\infty}(A)$ when $A \in {\mathbb{R}}^{m \times m}$, but using $S^{\infty}$ will simplify many formulations in this paper. Rietz [@Rietz] (in the context of the Grothendieck inequality) and Nesterov [@Nesterov] (in the context of approximation algorithms for $\mathrm{NP}$-hard problems) showed that $\operatorname{SDP}_1$ and $\operatorname{SDP}_{\infty}$ are always within a factor of at most $2/\pi$ from each other. 
That is, for all positive semidefinite matrices $A \in {\mathbb{R}}^{m \times m}$ we have $$\label{eq:rietznesterov} 1 \geq \frac{\operatorname{SDP}_1(A)}{\operatorname{SDP}_{\infty}(A)} \geq \frac{2}{\pi}.$$ By exhibiting an explicit series of positive semidefinite matrices, Grothendieck [@Grothendieck] (see also Alon and Naor [@AlonNaor Section 5.2]) showed that one cannot improve the constant $2/\pi$ to $2/\pi + \varepsilon$ for any positive $\varepsilon$ which is independent of $m$. Nesterov [@Nesterov] gave a randomized polynomial-time approximation algorithm for $\operatorname{SDP}_1$ with approximation ratio $2/\pi$ which can be derandomized using the techniques presented by Mahajan and Ramesh [@MahajanRamesh]. This algorithm is optimal in the following sense: Khot and Naor [@KhotNaor] showed that under the assumption of the unique games conjecture (UGC) there is no polynomial-time algorithm which approximates $\operatorname{SDP}_1$ to within a ratio of $2/\pi + \varepsilon$ for any positive $\varepsilon$ independent of $m$. The unique games conjecture was introduced by Khot [@Khot] and by now many tight UGC hardness results are known, see e.g. Khot, Kindler, Mossel, and O’Donnell [@KhotKindlerMosselODonnel] for the maximum cut problem, Khot and Regev [@KhotRegev] for the minimum vertex cover problem, and Raghavendra [@Raghavendra] for general constrained satisfaction problems. The aim of this paper is to provide a corresponding analysis for $\operatorname{SDP}_n$. Our results {#our-results .unnumbered} ----------- In Section \[sec:psd\] we start by reviewing our methodological contributions: Our main contribution is the analysis of a rounding scheme which can deal with rank-$n$-constraints in semidefinite programs. For this we use the Wishart distribution from multivariate statistics (see e.g. Muirhead [@Muirhead]). We believe this analysis is of independent interest and will turn out to be useful in different contexts, e.g. 
for approximating low dimensional geometric embeddings. Our second contribution is that we improve the constant in inequality slightly by considering functions of positive type for the unit sphere $S^{m-1}$ and applying a characterization of Schoenberg [@Schoenberg]. This slight improvement is the key for our UGC hardness result of approximating $\operatorname{SDP}_n$ given in Theorem \[th:hardness\]. We analyze our rounding scheme in Section \[sec:algorithm\]. \[th:algorithm\] For all positive semidefinite matrices $A \in {\mathbb{R}}^{m \times m}$ we have $$1 \geq \frac{\operatorname{SDP}_n(A)}{\operatorname{SDP}_{\infty}(A)} \geq \gamma(n) = \frac{2}{n}\left(\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}\right)^2 = 1 - \Theta(1/n),$$ and there is a randomized polynomial-time approximation algorithm for $\operatorname{SDP}_n$ achieving this ratio. The first three values of $\gamma(n)$ are: $$\begin{split} & \gamma(1) = 2/\pi = 0.63661\ldots\\ & \gamma(2) =\pi/4 = 0.78539\ldots\\ & \gamma(3) = 8/(3\pi)= 0.84882\ldots \end{split}$$ In Section \[sec:improved\] we show that one can improve inequality  slightly: \[th:improvement\] For all positive semidefinite matrices $A \in {\mathbb{R}}^{m \times m}$ we have $$1 \geq \frac{\operatorname{SDP}_1(A)}{\operatorname{SDP}_{\infty}(A)} \geq \frac{2}{\pi\gamma(m)} = \frac{m}{\pi} \left(\frac{\Gamma(m/2)}{\Gamma((m+1)/2)}\right)^2 = \frac{2}{\pi} + \Theta\left(\frac{1}{m}\right),$$ and there is a polynomial-time approximation algorithm for $\operatorname{SDP}_1$ achieving this ratio. With this, the current complexity status of the problem $\operatorname{SDP}_1$ is similar to the one of the minimum vertex cover problem. Karakostas [@Karakostas] showed that one can approximate the minimum vertex cover problem for a graph having vertex set $V$ with an approximation ratio of $2-\Theta(1/\sqrt{\log |V|})$ in polynomial time. 
On the other hand, Khot and Regev [@KhotRegev] showed, assuming the unique games conjecture, that there is no polynomial-time algorithm which approximates the minimum vertex cover problem with an approximation factor of $2 - \varepsilon$ for any positive $\varepsilon$ which is independent of $|V|$. In Section \[sec:hardness\] we show that the approximation ratio $\gamma(n)$ given in Theorem \[th:algorithm\] is optimal for $\operatorname{SDP}_n$ under the assumption of the unique games conjecture. By using the arguments of the proof of Theorem \[th:improvement\] and by the UGC hardness of approximating $\operatorname{SDP}_1$ due to Khot and Naor [@KhotNaor] we get the following tight UGC hardness result for approximating $\operatorname{SDP}_n$. \[th:hardness\] Under the assumption of the unique games conjecture there is no polynomial-time algorithm which approximates $\operatorname{SDP}_n$ with an approximation ratio greater than $\gamma(n) + \varepsilon$ for any positive $\varepsilon$ which is independent of the matrix size $m$. In Section \[sec:Laplacian\] we show that a better approximation ratio can be achieved when the matrix $A$ is the Laplacian matrix of a graph with nonnegative edge weights. Rounding schemes and functions of positive type {#sec:psd} =============================================== In this section we discuss our rounding scheme which rounds an optimal solution of $\operatorname{SDP}_{\infty}$ to a feasible solution of $\operatorname{SDP}_n$. In the case $n = 1$ our rounding scheme is equivalent to the classical scheme of Goemans and Williamson [@GoemansWilliamson]. To analyze the rounding scheme we use functions of positive type for unit spheres. The randomized polynomial-time approximation algorithm which we use in the proofs of the theorems is the following three-step process. The last two steps are our rounding scheme. 1. Solve $\operatorname{SDP}_{\infty}(A)$, obtaining vectors $u_1, \ldots, u_m \in S^{m-1}$. 2. 
Choose $X = (X_{ij}) \in {\mathbb{R}}^{n \times m}$ so that every matrix entry $X_{ij}$ is distributed independently according to the standard normal distribution with mean $0$ and variance $1$, that is, $X_{ij} \sim N(0,1)$. 3. Set $x_i = Xu_i/\|Xu_i\|\in S^{n-1}$ with $i = 1, \ldots, m$. The quality of the feasible solution $x_1, \ldots, x_m$ for $\operatorname{SDP}_n$ is measured by the expectation $${\mathbb{E}}\biggl[\sum_{i=1}^m\sum_{j=1}^m A_{ij} \; x_i \cdot x_j\biggr] = \sum_{i=1}^m\sum_{j=1}^m A_{ij} {\mathbb{E}}\biggl[\frac{Xu_i}{\|Xu_i\|} \cdot \frac{Xu_j}{\|Xu_j\|} \biggr],$$ which we analyze in more detail. For vectors $u, v \in S^{\infty}$ we define $$\label{eq:expectation} E_n(u,v) = {\mathbb{E}}\biggl[\frac{Xu}{\|Xu\|} \cdot \frac{Xv}{\|Xv\|} \biggr],$$ where $X = (X_{ij})$ is a matrix with $n$ rows and infinitely many columns whose entries are distributed independently according to the standard normal distribution. Of course, if $u, v \in S^{m-1}$, then it suffices to work with finite matrices $X \in {\mathbb{R}}^{n \times m}$. The first important property of the expectation $E_n$ is that it is [*invariant under $\operatorname{O}(\infty)$*]{}, i.e. for every $m$ it is invariant under the orthogonal group $\operatorname{O}(m) = \{T \in {\mathbb{R}}^{m \times m}: T^{\sf T} T = I_m\}$, where $I_m$ denotes the identity matrix. More specifically, for every $m$ and every pair of vectors $u, v \in S^{m-1}$ we have $$E_n(Tu,Tv) = E_n(u,v)\qquad\text{for all $T \in \operatorname{O}(m)$.}$$ If $n = 1$, then $$E_1(u,v) = {\mathbb{E}}[\operatorname{sign}(\xi \cdot u) \operatorname{sign}(\xi \cdot v)],$$ where $\xi \in {\mathbb{R}}^m$ is chosen at random from the $m$-dimensional standard normal distribution. By Grothendieck’s identity (see e.g.
[@Jameson Lemma 10.2]) $${\mathbb{E}}[\operatorname{sign}(\xi \cdot u) \operatorname{sign}(\xi \cdot v)] = \frac{2}{\pi} \arcsin u \cdot v.$$ Hence, the expectation $E_1$ only depends on the inner product $t = u \cdot v$. For general $n$, the $\operatorname{O}(\infty)$ invariance implies that this is true also for $E_n$. The second important property of the expectation $E_n$ (now interpreted as a function of the inner product) is that it is a function of positive type for $S^{\infty}$, i.e. it is of positive type for any unit sphere $S^{m-1}$, independent of the dimension $m$. In general, a continuous function $f : [-1,1] \to {\mathbb{R}}$ is called [*a function of positive type for $S^{m-1}$*]{} if the matrix $(f(v_i \cdot v_j))_{1 \leq i, j \leq N}$ is positive semidefinite for every positive integer $N$ and every choice of vectors $v_1, \ldots, v_N \in S^{m-1}$. The expectation $E_n$ is of positive type for $S^{\infty}$ because one can write it as a sum of squares. Schoenberg [@Schoenberg] characterized the continuous functions $f : [-1,1] \to {\mathbb{R}}$ which are of positive type for $S^{\infty}$: They are of the form $$f(t) = \sum_{i = 0}^\infty f_i t^i,$$ with nonnegative $f_i$ and $\sum_{i = 0}^{\infty} f_i < \infty$. In the case $n = 1$ we have the series expansion $$E_1(t) = \frac{2}{\pi} \arcsin t = \frac{2}{\pi} \sum_{i=0}^\infty \frac{(2i)!}{2^{2i}(i!)^2(2i+1)} t^{2i+1}.$$ In Section \[sec:algorithm\] we treat the cases $n \geq 2$. Suppose we develop the expectation $E_n(t)$ into the series $E_n(t) = \sum_{i = 0}^{\infty} f_i t^i$. Then because of Schoenberg’s characterization the function $t \mapsto E_n(t) - f_1 t$ is of positive type for $S^{\infty}$ as well. 
This together with the inequality $\sum_{i,j} X_{ij} Y_{ij} \geq 0$, which holds for all positive semidefinite matrices $X, Y \in {\mathbb{R}}^{m \times m}$, implies $$\label{eq:approxineq} \operatorname{SDP}_n(A) \geq \sum_{i=1}^m \sum_{j=1}^m A_{ij} E_n(u_i,u_j) \geq f_1 \sum_{i=1}^m \sum_{j=1}^m A_{ij} \; u_i \cdot u_j = f_1 \operatorname{SDP}_{\infty}(A).$$ When $n = 1$ the series expansion of $E_1$ gives $f_1 = 2/\pi$ and the above argument is essentially the one of Nesterov [@Nesterov]. To improve on this (and in this way to improve the constant $2/\pi$ in inequality \[eq:approxineq\]) one can refine the analysis by working with functions of positive type which depend on the dimension $m$. In Section \[sec:improved\] we show that $t \mapsto 2/\pi (\arcsin t - t/\gamma(m))$ is a function of positive type for $S^{m-1}$. For the cases $n \geq 2$ we show in Section \[sec:algorithm\] that $f_1 = \gamma(n)$. Analysis of the approximation algorithm {#sec:algorithm} ======================================= In this section we show that the expectation $E_n$ defined in \[eq:expectation\] is a function of positive type for $S^{\infty}$ and that in the series expansion $E_n(t) = \sum_{i=0}^{\infty} f_i t^i$ one has $f_1 = \gamma(n)$. These two facts combined with the discussion in Section \[sec:psd\] imply Theorem \[th:algorithm\]. Let $u, v \in S^{m-1}$ be unit vectors and let $X = (X_{ij}) \in {\mathbb{R}}^{n \times m}$ be a random matrix whose entries are independently sampled from the standard normal distribution. Because of the invariance under the orthogonal group, for computing $E_n(u,v)$ we may assume that $u$ and $v$ are of the form $$\begin{split} & u = (\cos\theta, \sin\theta, 0,\ldots, 0)^{\sf T}\\ & v = (\cos\theta, -\sin\theta, 0, \ldots, 0)^{\sf T}. \end{split}$$ Then by the double-angle formula $\cos 2\theta = t$ with $t = u \cdot v$.
We have $$Xu = \begin{pmatrix} X_{11} & X_{12}\\ \vdots & \vdots\\ X_{n1} & X_{n2} \end{pmatrix} \begin{pmatrix} \cos\theta\\ \sin\theta \end{pmatrix}, \quad Xv = \begin{pmatrix} X_{11} & X_{12}\\ \vdots & \vdots\\ X_{n1} & X_{n2} \end{pmatrix} \begin{pmatrix} \cos\theta\\ -\sin\theta \end{pmatrix}.$$ Hence, $$\frac{Xu}{\|Xu\|} \cdot \frac{Xv}{\|Xv\|} = \frac{x^{\sf T} Y y}{\sqrt{(x^{\sf T}Yx) (y^{\sf T}Y y)}},$$ where $x = (\cos\theta,\sin\theta)^{\sf T}$, $y = (\cos\theta,-\sin\theta)^{\sf T}$, and $Y \in {\mathbb{R}}^{2 \times 2}$ is the Gram matrix of the two vectors $(X_{11}, \ldots, X_{n1})^{\sf T}$, $(X_{12}, \ldots, X_{n2})^{\sf T} \in {\mathbb{R}}^n$. By definition, $Y$ is distributed according to the Wishart distribution from multivariate statistics. This distribution is defined as follows (see e.g. Muirhead [@Muirhead]). Let $p$ and $q$ be positive integers so that $p \geq q$. The (standard) [*Wishart distribution*]{} $W_q(p)$ is the probability distribution of random matrices $Y = X^{\sf T} X \in {\mathbb{R}}^{q \times q}$, where the entries of the matrix $X = (X_{ij}) \in{\mathbb{R}}^{p\times q}$ are independently chosen from the standard normal distribution $X_{ij} \sim N(0,1)$. The density function of $Y \sim W_q(p)$ is $$\frac{1}{2^{pq/2} \Gamma_q(p/2)}e^{-\operatorname{Tr}(Y)/2}(\det Y)^{(p-q-1)/2},$$ where $\Gamma_q$ is the [*multivariate gamma function*]{}, defined as $$\Gamma_q(x) = \pi^{q(q-1)/4}\prod_{i=1}^q\Gamma\Big(x - \frac{i-1}{2}\Big).$$ We denote the cone of positive semidefinite matrices of size $q \times q$ by $S^q_{\geq 0}$. In our case $p = n$ and $q = 2$. We can write $E_n(t)$ as $$E_n(t) = \frac{1}{2^n \Gamma_2(n/2)} \int_{S^{2}_{\geq 0}} \frac{x^{\sf T} Y y}{\sqrt{(x^{\sf T}Yx) (y^{\sf T} Y y)}} e^{-\operatorname{Tr}(Y)/2} (\det Y)^{(n-3)/2} dY,$$ where $t =\cos 2\theta$, and $x$ as well as $y$ depend on $\theta$. 
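As a quick sanity check on this construction (again an illustrative aside): for $Y = X^{\sf T}X \sim W_q(p)$ one has ${\mathbb E}[Y] = p\,I_q$, since distinct columns of $X$ are independent standard normal vectors. A minimal simulation, with an arbitrary sample count:

```python
import numpy as np

# Empirical mean of Wishart samples Y = X^T X, for X a (p x q) standard
# normal matrix; the average should be close to p * I_q.
rng = np.random.default_rng(0)
p, q, trials = 4, 2, 50_000
X = rng.standard_normal((trials, p, q))
Y_mean = np.einsum('tpq,tpr->qr', X, X) / trials   # average of X^T X
print(Y_mean)
```

This only checks the first moment, not the full density, but it confirms the normalization of the sampling scheme used above.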
The parameterization of the cone $S^2_{\geq 0}$ given by $$\label{psdmatrix} S^2_{\geq 0} = \left\{ Y = \begin{pmatrix} \frac{a}{2} + \alpha\cos\phi & \alpha\sin\phi\\ \alpha\sin\phi & \frac{a}{2} - \alpha\cos\phi \end{pmatrix} : \phi \in [0,2\pi], \alpha \in [0,a/2], a \in {\mathbb{R}}_{\geq 0}\right\}$$ allows us to write the integral in a more explicit form. With this parameterization we have $$\operatorname{Tr}(Y) = a, \quad \det(Y) = \frac{a^2}{4} - \alpha^2, \quad dY =\alpha \; d\phi d\alpha da,$$ and $$\begin{split} & x^{\sf T}Yy = \frac{at}{2} + \alpha\cos\phi,\\ & x^{\sf T}Yx = \frac{a}{2} + \alpha(t\cos\phi + 2\sin\theta\cos\theta\sin\phi),\\ & y^{\sf T}Yy = \frac{a}{2} + \alpha(t\cos\phi - 2\sin\theta\cos\theta\sin\phi).\\ \end{split}$$ So, $$\begin{split} E_n(t) = \frac{1}{2^n\Gamma_2(n/2)} \int_{0}^{\infty} \int_0^{a/2} \int_{0}^{2\pi} \frac{\frac{at}{2}+\alpha\cos\phi}{\sqrt{(\frac{a}{2}+\alpha t\cos\phi)^2-\alpha^2(1-t^2)(\sin \phi)^2}}\qquad&\\ {}\cdot e^{-a/2} \left(\frac{a^2}{4} - \alpha^2\right)^{(n-3)/2} \alpha \; d\phi d\alpha da.& \end{split}$$ Substituting $\alpha = (a/2)r$ and integrating over $a$ yields $$E_n(t) = \frac{\Gamma(n)}{2^{n-1}\Gamma_2(n/2)} \int_{0}^1 \int_{0}^{2\pi}\frac{(t+r\cos\phi)r (1-r^2)^{(n-3)/2}}{\sqrt{(1+rt\cos\phi)^2-r^2(1-t^2)(\sin\phi)^2}} d\phi dr.$$ Using Legendre’s duplication formula (see [@AndrewsAskeyRoy Theorem 1.5.1]) $ \Gamma(2x)\Gamma(1/2) = 2^{2x-1}\Gamma(x)\Gamma(x+1/2) $ one can simplify $$\frac{\Gamma(n)}{2^{n-1}\Gamma_2(n/2)} = \frac{n-1}{2\pi}.$$ Recall from \[eq:approxineq\] that the approximation ratio is given by the coefficient $f_1$ in the series expansion $E_n(t) = \sum_{i=0}^{\infty} f_i t^i$.
Now we compute $f_1$: $$\begin{aligned} f_1 & = & \frac{\partial E_n}{\partial t}(0)\\ & = & \frac{n-1}{2\pi} \int_{0}^1 \int_{0}^{2\pi} \frac{r(1-r^2)^{(n-1)/2}}{(1 - r^2(\sin \phi)^2)^{3/2}}d\phi dr.\\\end{aligned}$$ Using Euler’s integral representation of the hypergeometric function [@AndrewsAskeyRoy Theorem 2.2.1] and by substitution we get $$\begin{aligned} f_1 & = & \frac{n-1}{2\pi}\int_{0}^{2\pi} \frac{\Gamma(1)\Gamma((n+1)/2)}{2\Gamma((n+3)/2)} {}_{2}F_{1} \left(\begin{array}{cc} 3/2, 1\\ (n+3)/2\end{array}; \sin^2\phi \right) d\phi\\ & = & \frac{n-1}{4\pi} \frac{\Gamma((n+1)/2)}{\Gamma((n+3)/2)} 4 \int_0^1 {}_{2}F_{1} \left(\begin{array}{cc} 3/2, 1\\ (n+3)/2\end{array}; t^2 \right) (1-t^2)^{-1/2} dt\\ & = & \frac{n-1}{\pi} \frac{\Gamma((n+1)/2)}{\Gamma((n+3)/2)}\frac{1}{2} \int_0^1 {}_{2}F_{1} \left(\begin{array}{cc} 3/2, 1\\ (n+3)/2\end{array}; t \right) (1-t)^{-1/2} t^{-1/2} dt.\end{aligned}$$ This simplifies further by Euler’s generalized integral [@AndrewsAskeyRoy (2.2.2)], and Gauss’s summation formula [@AndrewsAskeyRoy Theorem 2.2.2] $$\begin{aligned} f_1 & = & \frac{n-1}{2\pi} \frac{\Gamma((n+1)/2)}{\Gamma((n+3)/2)} \frac{\Gamma(1/2)\Gamma(1/2)}{\Gamma(1)} {}_{3}F_{2} \left(\begin{array}{cc} 3/2, 1,1/2\\ (n+3)/2,1\end{array}; 1 \right)\\ & = & \frac{n-1}{2} \frac{\Gamma((n+1)/2)}{\Gamma((n+3)/2)} {}_{2}F_{1} \left(\begin{array}{cc} 3/2, 1/2\\ (n+3)/2\end{array}; 1 \right)\\ & = & \frac{n-1}{2} \frac{\Gamma((n+1)/2)}{\Gamma((n+3)/2)} \frac{\Gamma((n+3)/2)\Gamma((n-1)/2)}{\Gamma(n/2)\Gamma((n+2)/2)}\\ & = & \frac{2}{n}\left(\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}\right)^2.\end{aligned}$$ Improved analysis {#sec:improved} ================= Nesterov’s proof of inequality \[eq:approxineq\] relies on the fact that the function $t \mapsto 2/\pi (\arcsin t - t)$ is of positive type for $S^{\infty}$. Now we determine the largest value $c(m)$ so that the function $t \mapsto 2/\pi (\arcsin t - c(m) t)$ is of positive type for $S^{m-1}$.
By this we improve the approximation ratio of the algorithm given in Section \[sec:psd\] for $\operatorname{SDP}_1$ from $2/\pi$ to $(2/\pi) c(m)$. The following lemma showing $c(m) = 1/\gamma(m)$ implies Theorem \[th:improvement\]. \[lem:psd\] The function $$t \mapsto \frac{2}{\pi} \left(\arcsin t - \frac{t}{\gamma(m)} \right)$$ is of positive type for $S^{m-1}$. We equip the space of all continuous functions $f : [-1,1] \to {\mathbb{R}}$ with the inner product $$(f, g)_{\alpha} = \int_{-1}^1 f(t) g(t) (1-t^2)^\alpha dt,$$ where $\alpha = (m-3)/2$. With this inner product the Jacobi polynomials satisfy the orthogonality relation $$(P_i^{(\alpha,\alpha)}, P_j^{(\alpha,\alpha)})_{\alpha} = 0, \quad \text{if $i \neq j$},$$ where $P_i^{(\alpha,\alpha)}$ is the Jacobi polynomial of degree $i$ with parameters $(\alpha, \alpha)$, see e.g. Andrews, Askey, and Roy [@AndrewsAskeyRoy]. Schoenberg [@Schoenberg] showed that a continuous function $f : [-1,1] \to {\mathbb{R}}$ is of positive type for $S^{m-1}$ if and only if it is of the form $$f(t) = \sum_{i = 0}^\infty f_i P_i^{(\alpha,\alpha)}(t),$$ with nonnegative coefficients $f_i$ such that $\sum_{i = 0}^{\infty} f_i < \infty$. Now we interpret $\arcsin$ as a function of positive type for $S^{m-1}$ where $m$ is fixed. By the orthogonality relation and because of Schoenberg’s result the function $\arcsin t - c(m)t$ is of positive type for $S^{m-1}$ if and only if $$(\arcsin t - c(m) t, P_i^{(\alpha,\alpha)})_{\alpha} \geq 0,\qquad\text{for all $i = 0$, $1$, $2$, \dots.}$$ We have $P_1^{(\alpha,\alpha)}(t) = (\alpha + 1)t$. By the orthogonality relation and because the $\arcsin$ function is of positive type we get, for $i \ne 1$, $$(\arcsin t -c(m)t , P_i^{(\alpha,\alpha)})_{\alpha} = (\arcsin t, P_i^{(\alpha,\alpha)})_{\alpha} \ge 0.$$ This implies that the maximum $c(m)$ such that $\arcsin t - c(m)t$ is of positive type for $S^{m-1}$ is given by $c(m) = (\arcsin t, t)_{\alpha}/(t,t)_{\alpha}$. 
The numerator of $c(m)$ equals $$\begin{split} (\arcsin t, t)_{\alpha} & = \int_{-1}^1 \arcsin(t) t (1-t^2)^{\alpha} dt \\ & = \int_{-\pi/2}^{\pi/2} \theta \sin \theta (\cos \theta)^{2\alpha+1} d\theta\\ & = \frac{\Gamma(1/2)\Gamma(\alpha+3/2)}{(2\alpha+2) \Gamma(\alpha + 2)}. \end{split}$$ The denominator of $c(m)$ equals $$(t,t)_{\alpha} = \int_{-1}^1 t^2 (1-t^2)^{\alpha} dt = \frac{\Gamma(3/2) \Gamma(\alpha+1)}{\Gamma(\alpha + 5/2)},$$ where we used the beta integral (see e.g. Andrews, Askey, and Roy [@AndrewsAskeyRoy (1.1.21)]) $$\int_{0}^1t^{2x-1}(1-t^2)^{y-1}dt = \int_0^{\pi/2} (\sin \theta)^{2x-1}(\cos \theta)^{2y-1}d\theta = \frac{\Gamma(x)\Gamma(y)}{2\Gamma(x+y)}.$$ Now, by using the functional equation $x \Gamma(x) = \Gamma(x+1)$, the desired equality $c(m) = 1/\gamma(m)$ follows. Hardness of approximation {#sec:hardness} ========================= Suppose that $\rho$ is the largest approximation ratio a polynomial-time algorithm can achieve for $\operatorname{SDP}_n$. Let $u_1, \ldots, u_m \in S^{n-1}$ be an approximate solution to $\operatorname{SDP}_n(A)$ coming from such a polynomial-time algorithm. Then, $$\sum_{i=1}^m \sum_{j=1}^m A_{ij} \; u_i \cdot u_j \geq \rho \operatorname{SDP}_n(A).$$ Applying the rounding scheme to $u_1, \ldots, u_m \in S^{n-1}$ gives $x_1, \ldots, x_m \in \{-1,+1\}$ with $$\begin{split} {\mathbb{E}}\biggl[\sum_{i=1}^m\sum_{j=1}^m A_{ij} \; x_i x_j\biggr] & = \frac{2}{\pi} \sum_{i=1}^m \sum_{j=1}^m A_{ij} \arcsin u_i \cdot u_j\\ & \geq \frac{2 \rho}{\pi \gamma(n)}\operatorname{SDP}_n(A), \end{split}$$ where we used that the matrix $A$ and the matrix $${\left(\frac{2}{\pi}\left(\arcsin u_i \cdot u_j - \frac{u_i \cdot u_j}{\gamma(n)}\right)\right)}_{1 \leq i,j \leq m}$$ are both positive semidefinite. The last statement follows from Lemma \[lem:psd\] applied to the vectors $u_1, \ldots, u_m $ lying in $S^{n-1}$.
Since $\operatorname{SDP}_n(A) \geq \operatorname{SDP}_1(A)$, this is a polynomial-time approximation algorithm for $\operatorname{SDP}_1$ with approximation ratio at least $(2\rho)/(\pi\gamma(n))$. The UGC hardness result of Khot and Naor now implies that $\rho \leq \gamma(n)$. The case of Laplacian matrices {#sec:Laplacian} ============================== In this section we show that one can improve the approximation ratio of the algorithm if the positive semidefinite matrix $A = (A_{ij}) \in {\mathbb{R}}^{m \times m}$ has the following special structure: $$\begin{split} A_{ij} \leq 0, & \quad \text{if $i \neq j$,}\\ \sum_{i = 1}^m A_{ij} = 0, & \quad \text{for every $j = 1, \ldots, m$}. \end{split}$$ This happens for instance when $A$ is the Laplacian matrix of a weighted graph with nonnegative edge weights. A by now standard argument due to Goemans and Williamson [@GoemansWilliamson] shows that the algorithm has the approximation ratio $$v(n) = \min\left\{\frac{1-E_n(t)}{1-t} : t \in [-1,1]\right\}.$$ To see this, we write out the expected value of the approximation and use the properties of $A$: $$\begin{aligned} {\mathbb{E}}\Big[\sum_{i,j=1}^mA_{ij}x_i\cdot x_j\Big] &=& \sum_{i,j=1}^mA_{ij}E_n(u_i\cdot u_j)\\ &=& \sum_{i\not= j}(-A_{ij})\left(\frac{1 - E_n(u_i\cdot u_j)}{1-u_i\cdot u_j}\right)(1-u_i\cdot u_j)\\ &\geq& v(n)\sum_{i, j=1}^mA_{ij}u_i\cdot u_j\\ &=& v(n)\operatorname{SDP}_{\infty}(A).\end{aligned}$$ The case $n = 1$ corresponds to the MAX CUT approximation algorithm of Goemans and Williamson [@GoemansWilliamson].
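For $n = 1$, where $E_1(t) = \frac{2}{\pi}\arcsin t$, the minimum defining $v(1)$ can be reproduced by a direct grid search (an illustrative sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

# v(1) = min over t of (1 - (2/pi) * arcsin t) / (1 - t); the ratio tends
# to 1 as t -> -1 and blows up as t -> 1, so the minimum is interior.
t = np.linspace(-1.0, 0.999, 400_001)
ratio = (1 - (2 / np.pi) * np.arcsin(t)) / (1 - t)
i = np.argmin(ratio)
print(ratio[i], t[i])   # about 0.8785, attained near t = -0.689
```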
For this we have $$v(1) = 0.8785\dots,\quad \text{minimum attained at $t_0 = -0.689\dots$.}$$ We computed the values $v(2)$ and $v(3)$ numerically and got $$\begin{split} & v(2) = 0.9349\dots,\quad \text{minimum attained at $t_0 = -0.617\dots$,}\\ & v(3) = 0.9563\dots, \quad \text{minimum attained at $t_0 = -0.584\dots$.}\\ \end{split}$$ Acknowledgements {#acknowledgements .unnumbered} ================ We thank Joe Eaton, Monique Laurent, Robb Muirhead, and Achill Schürmann for helpful discussions. The third author thanks the Institute for Pure & Applied Mathematics at UCLA for its hospitality and support. [\[99\]]{} G.E. Andrews, R. Askey, R. Roy, [*Special functions*]{}, Cambridge University Press, 1999. N. Alon, K. Makarychev, Y. Makarychev, A. Naor, *Quadratic forms on graphs*, Invent. Math. [**163**]{} (2006), 499–522. N. Alon, A. Naor, *Approximating the cut-norm via Grothendieck’s inequality*, SIAM J. Comp. [**35**]{} (2006), 787–803. A. Ben-Tal, A. Nemirovski, [*On tractable approximations of uncertain linear matrix inequalities affected by interval uncertainty*]{}, SIAM J. Optim. [**12**]{} (2002), 811–833. J. Briët, H. Buhrman, B. Toner, [*A generalized Grothendieck inequality and entanglement in XOR games*]{}, preprint, January 2009. M.X. Goemans, D.P. Williamson, [*Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming*]{}, J. ACM [**42**]{} (1995), 1115–1145. A. Grothendieck, *Résumé de la théorie métrique des produits tensoriels topologiques*, Bol. Soc. Mat. São Paulo [**8**]{} (1953), 1–79. G.J.O. Jameson, *Summing and nuclear norms in Banach space theory*, Cambridge University Press, 1987. G. Karakostas, *A better approximation ratio for the vertex cover problem*, pp. 1043–1050 in [*Proceedings of the 32nd International Colloquium on Automata, Languages and Programming*]{}, Springer, 2005. S. Khot, *On the power of unique 2-prover 1-round games*, pp.
767–775 in [*Proceedings of the 34th Annual ACM Symposium on Theory of Computing*]{}, 2002. S. Khot, G. Kindler, E. Mossel, R. O’Donnell, [*Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?*]{}, SIAM J. Comput. [**37**]{} (2007), 319–357. S. Khot, A. Naor, *Approximate kernel clustering*, pp. 561–570 in [*Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science*]{}, IEEE Computer Society, 2008. S. Khot, A. Naor, *Sharp kernel clustering algorithms and their associated Grothendieck inequalities*, preprint, June 2009. S. Khot, O. Regev, *Vertex cover might be hard to approximate to within $2-\varepsilon$*, pp. 379–386 in [*Proceedings of the 18th Annual IEEE Conference on Computational Complexity*]{}, IEEE Computer Society, 2003. N. Linial, A. Shraibman, [*Lower bounds in communication complexity based on factorization norms*]{}, pp. 699–708 in [*Proceedings of the 39th Annual ACM Symposium on Theory of Computing*]{}, 2007. S. Mahajan, H. Ramesh, [*Derandomizing approximation algorithms based on semidefinite programming*]{}, ... R.J. Muirhead, *Aspects of multivariate statistical theory*, John Wiley & Sons, 1982. Y.E. Nesterov, [*Semidefinite relaxation and nonconvex quadratic optimization*]{}, Optimization Methods and Software [**9**]{} (1998), 141–160. P. Raghavendra, [*Optimal algorithms and inapproximability results for every CSP?*]{}, pp. 245–254 in [*Proceedings of the 40th Annual ACM Symposium on Theory of Computing*]{}, 2008. P. Raghavendra, D. Steurer, [*Towards computing the Grothendieck constant*]{}, pp. 525–534 in [*ACM-SIAM Symposium on Discrete Algorithms*]{}, 2009. R.E. Rietz, [*A proof of the Grothendieck inequality*]{}, Israel J. Math. [**19**]{} (1974), 271–276. I.J. Schoenberg, [*Positive definite functions on spheres*]{}, Duke Math. J. [**9**]{} (1942), 96–108.
[^1]: The first author is supported by Vici grant 639.023.302 from the Netherlands Organization for Scientific Research (NWO), by the European Commission under the Integrated Project Qubit Applications (QAP) funded by the IST directorate as Contract Number 015848, and by the Dutch BSIK/BRICKS project. [^2]: The second author was partially supported by CAPES/Brazil under grant BEX 2421/04-6, and the research was carried out at the Centrum Wiskunde & Informatica, The Netherlands. [^3]: The third author is supported by Vidi grant 639.032.917 from the Netherlands Organization for Scientific Research (NWO).
--- abstract: 'We analyze the dynamics of the Learning-Without-Recall model with Gaussian priors in a dynamic social network. Agents seeking to learn the state of the world, the “truth", exchange signals about their current beliefs across a changing network and update them accordingly. The agents are assumed memoryless and rational, meaning that they Bayes-update their beliefs based on current states and signals, with no other information from the past. The other assumption is that each agent hears a noisy signal from the truth at a frequency bounded away from zero. Under these conditions, we show that the system reaches truthful consensus almost surely with a convergence rate that is polynomial in expectation. Somewhat paradoxically, high outdegree can slow down the learning process. The lower-bound assumption on the truth-hearing frequency is necessary: even infinitely frequent access to the truth offers no guarantee of truthful consensus in the limit.' author: - 'Chu Wang$^{1}$ and Bernard Chazelle$^{2}$[^1][^2]' title: '**Gaussian Learning-Without-Recall in a Dynamic Social Network** ' --- Introduction ============ People typically form opinions by updating their current beliefs and reasons in response to new signals from other sources (friends, colleagues, social media, newspapers, etc.) [@tahbaz2009learning; @acemoglu2011opinion; @golub2010naive]. Suppose there were an information source that made a noisy version of the “truth" available to agents connected through a social network. Under which conditions would the agents reach consensus about their beliefs? What would ensure truthful consensus (meaning that the consensus coincided with the truth)? How long would it take for the process to converge? To address these questions requires agreeing on a formal model of distributed learning.
Fully rational agents update their beliefs by assuming a prior and using Bayes’ rule to integrate all past information available to them [@acemoglu2011bayesian; @mueller2013general; @lobel2015preferences; @mossel2011agreement; @banerjee1992simple; @bala1998learning]. Full rationality is intractable in practice [@molavi2015foundations; @rahimian2016learning], so much effort has been devoted to developing computationally effective mechanisms, including non- (or partially) Bayesian methods [@jadbabaie2012non; @molavi2015foundations; @golub2010naive; @golub2012homophily; @jadbabaie2013information]. Much of this line of work can be traced back to the seminal work of DeGroot [@degroot1974reaching] on linear opinion pooling. This paper is no exception. Specifically, it follows the [*Bayesian-Without-Recall*]{} (BWR) model recently proposed by Rahimian and Jadbabaie in [@rahimian2016learning]; see also [@rahimian2015learning; @rahimian2015log; @rahimian2016naive]. The agents are assumed to be memoryless and rational: this means that they use Bayesian updates based on current beliefs and signals with no other information from the past. The process is local in that agents can collect information only from their neighbors in a directed graph. In this work, the graph is allowed to change at each time step. The BWR model seeks to capture the benefits of rational behavior while keeping both the computation and the information stored to a minimum [@rahimian2016naive]. A distinctive feature of our work is that the social network need [*not*]{} be fixed once and for all. The ability to modify the communication channels over time reflects the inherently changing nature of social networks as well as the reality that our contacts do not all speak to us at once. Thus even if the underlying network is fixed over long timescales, the model allows for agents to be influenced by selected subsets of their neighbors. 
Dynamic networks are common occurrences in opinion dynamics [@hegselmann2002opinion; @mohajer2012convergence; @chazelle2015; @chazelle2015diffusive] but, to our knowledge, somewhat new in the context of social learning. Our working model in this paper posits a Gaussian setting: the likelihoods and initial priors of the agents are normal distributions. During the learning process, signals are generated as noisy measurements of agents’ beliefs and the noise is assumed normal and unbiased. Thus all beliefs remain Gaussian at all times [@rahimian2016learning; @box2011bayesian]. Our main result is that, under the assumption that each agent hears a noisy signal from the truth at a frequency bounded away from zero, the system reaches truthful consensus almost surely with a convergence rate polynomial in expectation. Specifically, we show that, as long as each agent receives a signal from the truth at least once every $1/\gamma$ steps, the convergence rate is $O(t^{-\gamma/2d})$, where $d$ is the maximum node outdegree. Somewhat paradoxically, high outdegree can slow down learning. The reason is that signals from peer agents are imperfect conveyors of the truth and can, on occasion, contaminate the network with erroneous information; this finding is in line with a similar phenomenon uncovered by Harel et al. [@harel2014speed], in which a social learning system with two Bayesian agents is found to be hindered by increased interaction between the agents. We note that our lower-bound assumption on the truth-hearing frequency is necessary: even infinitely frequent access to the truth is not enough to achieve truthful consensus in the limit. **Further background.** Researchers have conducted empirical evaluations of both Bayesian and non-Bayesian models [@grimm2014experiment; @chandrasekhar2015testing; @das2014modeling; @bakshy2012role]. In [@mossel2010efficient], Mossel et al.
analyzed a Bayesian learning system in which each agent gets signals from the truth only once at the beginning and then interacts with other agents using Gaussian estimators. In [@moscarini1998social], Moscarini et al. considered social learning in a model where the truth is not fixed but is, instead, supplied by a Markov chain. In the different but related realm of iterated learning, agents learn from ancestors and teach descendants. The goal is to pass on the truth through generations while seeking to prevent information loss [@griffiths2007language; @smith2009iterated]. **Organization.** Section \[sec:model\] introduces the model and the basic formulas for single-step belief updates. Section \[sec:mean\] investigates the dynamics of the beliefs in expectation and derives the polynomial upper bound on the convergence rate under the assumption that each agent hears a signal from the truth at a frequency bounded away from zero. We demonstrate the necessity of this assumption in Section \[sec:real\] and prove that the convergence occurs almost surely. Preliminaries {#sec:model} ============= The Model --------- We choose the real line $\mathbb{R}$ as the state space and we denote the agents by $1,2,\ldots, n$; for convenience, we add an extra agent, labeled $0$, whose belief is a fixed number, unknown to others, called the [*truth*]{}. At time $t=0,1,\ldots$, the belief of agent $i$ is a probability distribution over the state space $\mathbb{R}$, which is denoted by $\mu_{t,i}$. We assume that the initial belief $\mu_{0,i}$ of agent $i$ is Gaussian: $\mu_{0,i}\sim \mathcal{N}(x_{0,i},\sigma_{0,i}^2)$. Without loss of generality, we assume the truth is a constant (single-point distribution: $x_{t,0}=0$; $\sigma_{t,0}=0$ for all $t$) and the standard deviation is the same for all other agents, i.e., $\sigma_{0,i}=\sigma_0>0$ for $i>0$.
The interactions between agents are modeled by an infinite sequence $(G_t)_{t\geq 0}$, where each $G_t$ is a directed graph over the node set $\{0,\ldots, n\}$. An edge pointing from $i$ to $j$ in $G_t$ indicates that $i$ receives data from $j$ at time $t$. Typically, the sequence of graphs is specified ahead of time or it is chosen randomly: the only condition that matters is that it should be independent of the randomness used in the learning process; specifically, taking expectations and variances of the random variables that govern the dynamics will assume a fixed graph sequence (possibly random). Because agent $0$ holds the truth, no edge points away from it. The adjacency matrix of $G_t$ is denoted by $A_t$: it is an $(n+1)\times(n+1)$ matrix whose first row is $(1,0,\dots,0)$. Information Transfer -------------------- At time $t\geq 0$, each agent $i>0$ samples a state $\theta_{t,i}\in {\mathbb R}$ consistent with her own belief: $\theta_{t,i}\sim\mu_{t,i}$. A noisy measurement $a_{t,i}=\theta_{t,i}+{\varepsilon}_{t,i}$ is then sent to each agent $j$ such that $(A_t)_{ji}=1$. All the noise terms ${\varepsilon}_{t,i}$ are sampled [*iid*]{} from $\mathcal{N}(0,\sigma^2)$. An equivalent formulation is to say that the likelihood function $l(a|\theta)$ is drawn from $\mathcal{N}(\theta,\sigma^2)$. In our setting, agent $i$ sends the same data to all of her neighbors; this is done for notational convenience and the same results would still hold if we were to resample independently for each neighbor. Except for the omission of explicit utilities and actions, our setting is easily identified as a variant of the BWR model [@rahimian2016learning]. Updating Beliefs ---------------- A single-step update for agent $i>0$ consists of setting $\mu_{t+1,i}$ as the posterior ${\mathbb P}[\mu_{t,i} | d]\propto {\mathbb P}[d| \mu_{t,i} ]{\mathbb P}[\mu_{t,i}]$, where $d$ is the data from the neighbors of $i$ received at time $t$. 
Plugging in the corresponding Gaussians gives us the classical update rules from Bayesian inference [@box2011bayesian]. Updated beliefs remain Gaussian so we can use the notation $\mu_{t,i}\sim {\mathcal N}(x_{t,i}, \tau_{t,i}^{-1})$, where $\tau_{t,i}$ denotes the precision $\sigma_{t,i}^{-2}$. Writing $\tau= \sigma^{-2}$ and letting $d_{t,i}$ denote the outdegree of $i$ in $G_t$, for any $i>0$ and $t\geq 0$, $$\label{eq:single} \begin{cases} \begin{split} x_{t+1,i} &=(\tau_{t,i} x_{t,i}+\tau a_1+\dots+\tau a_{d_{t,i}})/(\tau_{t,i}+ d_{t,i} \tau) ; \\ \tau_{t+1,i}&= \tau_{t,i} + d_{t,i}\tau, \end{split} \end{cases}$$ where $a_1, \ldots, a_{d_{t,i}}$ are the signals received by agent $i$ from its neighbors at time $t$. Expressing the Dynamics in Matrix Form -------------------------------------- Let $D_t$ and $P_t$ denote the $(n+1)$-by-$(n+1)$ diagonal matrices $\text{diag} (d_{t,i})$ and $(\tau_0/\tau)I+ \sum_{k=0}^{t-1}D_k$, respectively, where $I$ is the identity matrix and the sum is $0$ for $t=0$. It follows from \[eq:single\] that $\mu_{t,i}\sim \mathcal{N}(x_{t,i}, (\tau P_t)_{ii}^{-1})$ for $i>0$. Regrouping the means in vector form, $\bm{x}_t \! :=(x_{t,0},\dots,x_{t,n})^T$, where $x_{t,0}=0$ and $x_{0,1},\ldots, x_{0,n}$ are given as inputs, we have $$\label{eq:noisy} \bm{x}_{t+1}=\left( P_t+D_t\right)^{-1}\left( P_t\bm{x}_t+A_t\left(\bm{x}_t+\bm{u}_t+\bm{{\varepsilon}}_t\right)\right),$$ where $\bm{u}_t$ is such that $u_{t,0}\sim {\mathcal N}(0,0)$ and, for $i>0$, $u_{t,i}\sim {\mathcal N}(0, (\tau (P_t)_{ii})^{-1})$; and $\bm{{\varepsilon}}_t$ is such that ${\varepsilon}_{t,0}\sim {\mathcal N}(0,0)$ and, for $i>0$, ${\varepsilon}_{t,i}\sim {\mathcal N}(0, 1/\tau)$. We refer to the vectors $\bm{x}_t$ and $\bm{y}_t \! := \mathbb{E}\, \bm{x}_t$ as the [*mean process*]{} and the [*expected mean process*]{}, respectively.
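A single step of \[eq:single\] is just a precision-weighted average of the current mean and the incoming signals; the following minimal sketch (the numbers in the example are arbitrary) illustrates it:

```python
def update_belief(x, tau_i, signals, tau):
    """One Gaussian Bayes update: prior N(x, 1/tau_i), each signal treated
    as a noisy observation of precision tau. Returns (new mean, new precision)."""
    d = len(signals)
    x_new = (tau_i * x + tau * sum(signals)) / (tau_i + d * tau)
    return x_new, tau_i + d * tau

# Equal prior and signal precision: one signal at 2.0 pulls the mean halfway.
print(update_belief(0.0, 1.0, [2.0], 1.0))   # (1.0, 2.0)
```

Note that the precision grows additively with the degree, which is exactly why the variances vanish whenever the degrees are nonzero infinitely often.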
Taking expectations on both sides of \[eq:noisy\] with respect to the random vectors $\bm{u}_t$ and $\bm{{\varepsilon}}_t$ yields the update rule for the expected mean process: $\bm{y}_{0}= \bm{x}_{0}$ and, for $t>0$, $$\label{eq:mean} \bm{y}_{t+1}=\left( P_t+D_t\right)^{-1}\left( P_t+A_t\right)\bm{y}_t.$$ A key observation is that $\left( P_t+D_t\right)^{-1}\left( P_t+A_t\right)$ is a stochastic matrix, so the expected mean process $\bm{y}_t$ forms a diffusive influence system [@chazelle2015diffusive]: the vector evolves by taking convex combinations of its own coordinates. What makes the analysis different from standard multiagent agreement systems is that the weights vary over time. In fact, some weights typically tend to 0, which violates one of the cardinal assumptions used in the analysis of averaging systems [@chazelle2015diffusive; @moreau05]. This leads us to the use of arguments, such as fourth-order moment bounds, that are not commonly encountered in this area. Our Results ----------- The belief vector $\bm{\mu}_t$ is Gaussian with mean $\bm{x}_t$ and covariance matrix $\Sigma_t$ formed by zeroing out the top-left element of $(\tau P_t)^{-1}$. We say that the system reaches [*truthful consensus*]{} if both the mean process $\bm{x}_t$ and the covariance matrix tend to zero as $t$ goes to infinity. This indicates that all the agents’ beliefs share a common mean equal to the truth and the “error bars" vanish over time. In view of \[eq:single\], the covariance matrix indeed tends to $0$ as long as the degrees are nonzero infinitely often, a trivial condition. Establishing truthful consensus, therefore, boils down to studying the mean process $\bm{x}_t$. We do this in two parts: first, we show that the expected mean process converges to the truth; then we prove that fluctuations around it eventually vanish almost surely.[^3] [*Truth-hearing assumption*]{}: Given any interval of length $\kappa \!
:= \lfloor 1/\gamma \rfloor$, every agent $i>0$ has an edge $(i,0)$ in $G_t$ for at least one value of $t$ in that interval. \[MainTheorem\] Under the truth-hearing assumption, the system reaches truthful consensus with a convergence rate bounded by $O(t^{-\gamma/2d})$, where $d$ is the maximum outdegree over all the networks. We prove the theorem in the next two sections. It will follow directly from Lemmas \[lemma:y\] and \[lemma:x\] below. The convergence rate can be improved to the order of $t^{-(1-{\varepsilon})\gamma /d}$, for arbitrarily small ${\varepsilon}>0$. The inverse dependency on $\gamma$ is not surprising: the more access to the truth the stronger the attraction to it. On the other hand, it might seem counterintuitive that a larger outdegree should slow down convergence. This illustrates the risk of groupthink. It pays to follow the crowds when the crowds are right. When they are not, however, this distracts from the lonely voice that happens to be right. How essential is the truth-hearing assumption? We show that it is necessary. Simply having access to the truth infinitely often is not enough to achieve truthful consensus. Useful Matrix Inequalities {#sec:matrix} -------------------------- We highlight certain matrix inequalities to be used throughout. We use the standard element-wise notation $R\le S$ to indicate that $R_{ij}\le S_{ij}$ for all $i,j$. The infinity norm $\|R\|_\infty \! =\max_{i}\sum_{j}|r_{ij}|$ is submultiplicative: $\|RS\|_\infty \! \le\|R\|_\infty\|S\|_\infty$, for any matching rectangular matrices. On the other hand, the max-norm $\|R\|_\text{max} \! :=\max_{i,j}|r_{ij}|$ is not, but it is transpose-invariant and also satisfies: $\|RS\|_\text{max}\le\|R\|_\infty\|S\|_\text{max}$. It follows that $$\label{MatrixIneq} \begin{split} \|RSR^T\|_\text{max} \! &\le \|R\|_\infty\|SR^T\|_\text{max}=\|R\|_\infty\|RS^T\|_\text{max}\\ &\le \|R\|_\infty^2\|S^T\|_\text{max}=\|R\|_\infty^2\|S\|_\text{max} . 
\end{split}$$ The Expected Mean Process Dynamics {#sec:mean} ================================== We analyze the convergence of the mean process in expectation. The expected mean $\bm{y}_t=\mathbb{E}\, \bm{x}_t$ evolves through an averaging process entirely determined by the initial value $\bm{y}_0= (0, x_{0,1},\ldots, x_{0,n})^T$ and the graph sequence $G_t$. Intuitively, if an agent communicates repeatedly with a holder of the truth, the weight of the latter should accumulate and increasingly influence the belief of the agent in question. Our goal in this section is to prove the following result: \[lemma:y\] Under the truth-hearing assumption, the expected mean process $\bm{y}_t$ converges to the truth asymptotically. If, at each step, no agent receives information from more than $d$ agents, then the convergence rate is bounded by $C t^{-\gamma/(2d)}$, where $C$ is a constant that depends on $\bm{x}_0, \gamma, d, \sigma_0/\sigma$. [*Proof.*]{} We define $B_t$ as the matrix formed by removing the first row and the first column from the stochastic matrix $P_{t+1}^{-1}\left(P_t+A_t\right)$. If we write $\bm{y}_t$ as $(0, \bm{z}_t)$ then, by , $$\label{eq:mean_matrix} \begin{pmatrix} 0\\ \bm{z}_{t+1} \end{pmatrix}= \begin{pmatrix} 1&\bm{0}\\ \bm{\alpha}_t&B_t \end{pmatrix} \begin{pmatrix} 0\\ \bm{z}_{t} \end{pmatrix},$$ where $\bm{\alpha}_{t,i}=(P_{t+1}^{-1})_{ii}$ if there is an edge $(i,0)$ at time $t$ and $\bm{\alpha}_{t,i}=0$ otherwise. This further simplifies to $$\label{eq:By} \bm{z}_{t+1}=B_t\bm{z}_t.$$ Let $\bm{1}$ be the all-one column vector of length $n$.
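The update  and the lifted recursion $\bm{z}_{t+1}=B_t\bm{z}_t$ can be sanity-checked numerically. A minimal sketch (the random graphs, initial precisions $\tau_0/\tau=1$, and a truth-hearing interval of $\kappa=1$ are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                    # learning agents 1..n; agent 0 holds the truth
N = n + 1
P = np.eye(N)            # assumed initial precisions tau_0/tau = 1

z = np.array([2.0, 1.0, -1.0, 3.0])  # z_0: expected means of agents 1..n
for t in range(200):
    A = (rng.random((N, N)) < 0.4).astype(float)  # random listening edges
    np.fill_diagonal(A, 0)
    A[0, :] = 0          # the truth agent listens to no one
    A[1:, 0] = 1         # every agent hears the truth (kappa = 1)
    D = np.diag(A.sum(axis=1))
    W = np.linalg.inv(P + D) @ (P + A)        # the update matrix of eq. (mean)
    assert np.allclose(W.sum(axis=1), 1.0)    # rows are convex combinations
    alpha, B = W[1:, 0], W[1:, 1:]            # truth weights and lifted block
    assert np.allclose(alpha + B @ np.ones(n), 1.0)
    z = B @ z            # z_{t+1} = B_t z_t
    P = P + D            # P_{t+1} = P_t + D_t

print(np.abs(z).max())   # expected means drift toward the truth 0
```

The contraction is slow because the truth weights $\bm{\alpha}_{t,i}=1/(P_{t+1})_{ii}$ decay like $1/t$, which is exactly the difficulty the lemma below has to handle.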
Since $P_{t+1}^{-1}\left(P_t+A_t\right)$ is stochastic, $$\label{eq:alpha_B} \bm{\alpha}_t+B_t\bm{1}=\bm{1}.$$ In matrix terms, the truth-hearing assumption means that, for any $t\geq 0$, $$\label{eq:alpha} \bm{\alpha}_{t}+\bm{\alpha}_{t+1}+\dots+\bm{\alpha}_{t+\kappa -1} \geq Q^{-1}_{t+ \kappa} {\mathbf 1},$$ where $Q_t$ is the matrix derived from $P_t$ by removing the first row and the first column; the inequality relies on the fact that $P_t$ is monotonically nondecreasing. For any $t>s\ge 0$, we define the product matrix $B_{t:s}$ as $$B_{t:s}:=B_{t-1}B_{t-2}\dots B_{s},$$ with $B_{t:t}= I$. By , for any $t>s\geq 0$, $$\label{ztBt} \bm{z}_t=B_{t:s} \, \bm{z}_s.$$ To bound the infinity norm of $B_{t:0}$, we observe that, for any $0\le l<\kappa-1$, the $i$-th diagonal element of $B_{s+\kappa:s+l+1}$ is lower-bounded by $$\begin{aligned} \label{eq:B_diagonal} && \prod_{j=l+1}^{\kappa-1}(B_{s+j})_{ii} = \prod_{j=l+1}^{\kappa-1} \frac{(P_{s+j} + A_{s+j})_{ii}}{(P_{s+j+1})_{ii}} \\ &\ge&\prod_{j=l+1}^{\kappa-1} \frac{(P_{s+j})_{ii}}{(P_{s+j+1})_{ii}} = \frac{(P_{s+l+1})_{ii}}{ (P_{s+\kappa})_{ii} } \ge \frac{(P_{s})_{ii}}{(P_{s+\kappa})_{ii}}. \nonumber\end{aligned}$$ The inequalities follow from the nonnegativity of the entries and the monotonicity of $(P_t)_{ii}$. Note that  also holds for $l=\kappa-1$ since $(B_{s+\kappa:s+\kappa})_{ii}=1$. Since $P_{t+1}^{-1}\left(P_t+A_t\right)$ is stochastic, the row-sum of $B_t$ does not exceed 1; therefore, by repeatedly premultiplying both sides of  by $B_{s+1}, B_{s+2},\dots$, we obtain: $$\label{B1} B_{s+\kappa:s}\bm{1} \leq \bm{1}-\sum_{l=0}^{\kappa-1} B_{s+\kappa:s+l+1}\bm{\alpha}_{s+l}.$$ Noting that $\|B_t\|_\infty=\|B_t \bm{1}\|_\infty$ for any $t$, as $B_t$ is non-negative, we combine , , and  together to derive: $$\label{eq:B1} \|B_{s+\kappa:s}\|_\infty\le1-\min_{i>0}\frac{(P_{s})_{ii}}{(P_{s+\kappa})_{ii}^2}.$$ Let $d \!
:= \max_{t\ge 0} \max _{1\le i\le n} d_{t,i}$ denote the maximum outdegree in all the networks, and define $\delta=\min\{\tau_0/\tau, 1\}$. For any $i>0$ and $s\ge \kappa$, $$\label{eq:ps} \frac{s\delta}{\kappa}\le (P_s)_{ii}\le ds+ \frac{\tau_0}{\tau} ;$$ hence, $$\label{eq:ps2} \max_i (P_{s+\kappa})_{ii} \le d(s+\kappa)+ \frac{\tau_0}{\tau}.$$ It follows that $$\frac{(P_{s+\kappa})_{ii}- (P_s)_{ii}}{(P_{s+\kappa})_{ii}} =\frac{\sum_{l=0}^{\kappa-1}d_{s+l, i}}{(P_{s+\kappa})_{ii}} \le\frac{d\kappa^2\delta^{-1}}{s+\kappa}.$$ Thus, we have $$\label{eq:poverp} \begin{split} \min_{i>0}\frac{(P_{s})_{ii}}{(P_{s+\kappa})_{ii}} &= 1- \max_{i>0}\frac{(P_{s+\kappa})_{ii}-(P_{s})_{ii}} {(P_{s+\kappa})_{ii} }\\ &\ge 1- \frac{d \kappa^2 \delta^{-1}}{s+\kappa}. \end{split}$$ We can replace the upper bound of  by $$1- \frac{1}{\max_{i>0} (P_{s+\kappa})_{ii}} \min_{i>0}\frac{(P_{s})_{ii}}{(P_{s+\kappa})_{ii}} \, ,$$ which, together with  and , gives us $$\label{BskIneq} \begin{split} \|B_{s+\kappa:s}\|_\infty &\le 1- \frac{1}{d(s+\kappa)+\tau_0/\tau} \left(1-\frac{d\kappa^2\delta^{-1}}{s+\kappa}\right) \\ &\le 1- \frac{1}{2d\kappa(m+2)}. \end{split}$$ The latter inequality holds as long as $s=m\kappa >0$ and $$m\ge m^* \! := \frac{2d\kappa}{\delta} + \frac{\tau_0}{d\kappa \tau} .$$ It follows that, for $m_0\ge m^*$, $$\label{eq:B} \begin{split} \|B_{(m_0+m)\kappa:m_0 \kappa}\|_\infty &\le \prod_{j=2}^{m+1} \left( 1-\frac{1}{2d \kappa (m_0+j)} \right) \\ &\le \exp\left\{ -\frac{1}{2d \kappa}\sum_{j=2}^{m+1}\frac{1}{m_0+j} \right\}.
\end{split}$$ The matrices $B_t$ are sub-stochastic so that $$\|B_t \, \bm{z}\|_\infty\leq \|B_t\|_\infty \|\bm{z}\|_\infty\leq \|\bm{z}\|_\infty.$$ By , for any $t\ge (m_0+m)\kappa$, $$\bm{z}_t = B_{t: (m_0+m)\kappa}B_{(m_0+m)\kappa:m_0 \kappa}\, \bm{z}_{m_0\kappa},$$ so that, by using standard bounds for the harmonic series, $\ln(k+1)<1+\frac{1}{2}+\dots+\frac{1}{k}\leq 1+ \ln k$, we find that $$\begin{split} \|\bm{z}_t\|_\infty&\leq \|B_{(m_0+m)\kappa:m_0 \kappa}\, \bm{z}_{m_0\kappa}\|_\infty \\ &\le \|B_{(m_0+m)\kappa:m_0 \kappa} \|_\infty\|\bm{z}_{0}\|_\infty\\ &\le C t^{-1/(2d\kappa)}, \end{split}$$ where $C>0$ depends on $\bm{z}_0, \kappa, d, \tau_0/\tau$. We note that the convergence rate can be improved to the order of $t^{-(1-{\varepsilon})\gamma /d}$, for arbitrarily small ${\varepsilon}>0$, by working a little harder with . $\Box$ The Mean Process Dynamics {#sec:real} ========================= Recall that $\mu_{t,i}\sim {\mathcal N}(x_{t,i}, \tau_{t,i}^{-1})$, where $\tau_{t,i}$ denotes the precision $\sigma_{t,i}^{-2}$. A key observation about the updating rule in  is that the precision $\tau_{t,i}$ is entirely determined by the graph sequence $G_t$ and is independent of the actual dynamics. Adding to this the connectivity property implied by the truth-hearing assumption, we find immediately that $\tau_{t,i} \rightarrow\infty$ for any agent $i$. This ensures that the covariance matrix $\Sigma_t$ tends to $0$ as $t$ goes to infinity, which satisfies the second criterion for truthful consensus. The first criterion requires that the mean process $\bm{x}_t$ should converge to the truth $\bm{0}$. Take the vector $\bm{x}_t-\bm{y}_t$ and remove the first coordinate $(\bm{x}_t-\bm{y}_t)_0$ to form the vector $\bm{\Delta}_t\in {\mathbb R}^n$.
Under the truth-hearing assumption, we have seen that $\bm{y}_t \rightarrow \bm{0}$ (Lemma \[lemma:y\]), so it suffices to prove the following: \[lemma:x\] Under the truth-hearing assumption, the deviation $\bm{\Delta}_t$ vanishes almost surely. [*Proof.*]{} We use a fourth-moment argument. The justification for the high order is technical: it is necessary to make a certain “deviation power” series converge. By , $\bm{x}_t$ is a linear combination of independent Gaussian random vectors $\bm{u}_s$ and $\bm{{\varepsilon}}_s$ for $0\le s\le t-1$, and thus $\bm{x}_t$ itself is a Gaussian random vector. Therefore $\bm{\Delta}_t$ is also Gaussian and its mean is zero. From Markov’s inequality, for any $c>0$, $$\label{eq:markov} \sum_{t\ge 0}\mathbb{P}[|\Delta_{t,i}|\geq c]\le \sum_{t\ge 0} \frac{\mathbb{E}\, \Delta_{t,i}^4}{c^4}.$$ If we can show that the right-hand side of  is finite for any $c>0$, then, by the Borel-Cantelli lemma, with probability one, the event $|\Delta_{t,i}|\geq c$ occurs only a finite number of times, and so $\Delta_{t,i}$ goes to zero almost surely. Therefore, we only need to analyze the order of the fourth moment $\mathbb{E} \, \Delta_{t,i}^4$. By subtracting  from , we have: $$\label{eq:z} \bm{\Delta}_{t+1}=B_t\bm{\Delta}_t+M_t\bm{v}_t,$$ where $\bm{v}_t:=\bm{u}_t+\bm{{\varepsilon}}_t$ and $M_t:=P_{t+1}^{-1}A_t$; actually, for dimensions to match, we remove the top coordinate of $\bm{v}_t$ and the first row and first column of $M_t$ (see the previous section for the definition of $B_t$). Transforming the previous identity into a telescoping sum, it follows from $\bm{\Delta}_0=\bm{x}_0-\bm{y}_0=\bm{0}$ and the definition $B_{t:s}=B_{t-1}B_{t-2}\dots B_s$ that $$\label{eq:z2} \bm{\Delta}_{t}=\sum_{s=0}^{t-1}B_{t:s+1}M_s\bm{v}_s=\sum_{s=0}^{t-1}R_{t,s}\bm{v}_s,$$ where $R_{t,s}:=B_{t:s+1}M_s$. We denote by $C_1, C_2, \dots$ suitably large constants (possibly depending on $\kappa, d, n, \tau, \tau_0$).
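The telescoped identity $\bm{\Delta}_{t}=\sum_{s<t}B_{t:s+1}M_s\bm{v}_s$ can be verified mechanically; a quick sketch (the matrices below are generic random stand-ins for $B_t$, $M_t$, $\bm{v}_t$, not the actual system matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 6
B = [rng.random((n, n)) for _ in range(T)]      # stand-ins for B_t
M = [rng.random((n, n)) for _ in range(T)]      # stand-ins for M_t
v = [rng.standard_normal(n) for _ in range(T)]  # stand-ins for v_t

# direct recursion: Delta_{t+1} = B_t Delta_t + M_t v_t, with Delta_0 = 0
delta = np.zeros(n)
for t in range(T):
    delta = B[t] @ delta + M[t] @ v[t]

def B_prod(t, s):
    # B_{t:s} = B_{t-1} ... B_s, with B_{t:t} = I
    out = np.eye(n)
    for k in range(s, t):
        out = B[k] @ out
    return out

# telescoped form: Delta_T = sum_{s<T} B_{T:s+1} M_s v_s
telescoped = sum(B_prod(T, s + 1) @ M[s] @ v[s] for s in range(T))
assert np.allclose(delta, telescoped)
```

The identity holds for arbitrary matrices, which is why the proof can bound $\bm{\Delta}_t$ purely through norm estimates on $R_{t,s}=B_{t:s+1}M_s$.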
By , $\|M_s\|_{\infty}\le C_1/(s+1)$ and, by , for sufficiently large $s$, $$\|B_{t:s+1}\|_{\infty} \le C_2 (s+1)^{\beta}(t+1)^{-\beta},$$ where $\beta=1/(2d\kappa) <1$. Combining the above inequalities, we obtain the following estimate of $R_{t,s}$: $$\label{eq:Q} \|R_{t,s}\|_{\infty}\le C_3(s+1)^{-1+\beta}(t+1)^{-\beta}.$$ In the remainder of the proof, the power of a vector is understood element-wise. We use the fact that $\bm{v}_s$ and $\bm{v}_{s'}$ are independent if $s\neq s'$ and that the expectation of an odd power of an unbiased Gaussian is always zero. By Cauchy-Schwarz and Jensen’s inequalities, $$\begin{aligned} \label{eq:Ez4} && \hspace{-1cm}\mathbb{E}\, \bm{\Delta}_{t}^4=\mathbb{E}\left(\sum_{s=0}^{t-1}R_{t,s}\bm{v}_s\right)^4\nonumber\\ &=&\sum_{s=0}^{t-1}\mathbb{E}(R_{t,s} \bm{v}_s)^4+\sum_{0\le s\neq s'<t} 3\, \mathbb{E}(R_{t,s} \bm{v}_s)^2\mathbb{E}(R_{t,s'} \bm{v}_{s'})^2\nonumber\\ &\le& \sum_{s=0}^{t-1}\mathbb{E}(R_{t,s} \bm{v}_s)^4+ 3\left(\sum_{s=0}^{t-1} \mathbb{E}(R_{t,s} \bm{v}_s)^2\right)^2\nonumber\\ &\le& \sum_{s=0}^{t-1}\mathbb{E}(R_{t,s} \bm{v}_s)^4+ 3t \, \sum_{s=0}^{t-1} \mathbb{E}^2(R_{t,s} \bm{v}_s)^2 \nonumber\\ &\le&(3t+1)\sum_{s=0}^{t-1}\mathbb{E}(R_{t,s} \bm{v}_s)^4.\end{aligned}$$ Notice that since the variance of $\bm{v}_t= (v_{t,1},\ldots, v_{t,n})^T$ is nonincreasing, there exists a constant $C_4$ such that $\mathbb{E}\, v_{t,i}^4 \le C_4$.
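For zero-mean Gaussians these moment bounds can be made fully explicit: a combination $\sum_j c_j v_{s,j}$ of independent standard normals is $\mathcal{N}(0,\|c\|_2^2)$, whose fourth moment is $3\|c\|_2^4$, and this never exceeds $\|c\|_1^4\max_k \mathbb{E}\,v_{s,k}^4$. A quick exact check (the coefficient row and the unit noise variance are illustrative assumptions):

```python
import numpy as np

c = np.array([0.5, -0.3, 0.2])   # an illustrative row of R_{t,s}

# E (c . v)^4 for v_j iid N(0,1): the combination is N(0, ||c||_2^2),
# and a centered Gaussian with variance s^2 has fourth moment 3 s^4
lhs = 3.0 * (c @ c) ** 2

# the bound used in the proof: ||c||_1^4 * max_k E v_k^4, with E v^4 = 3
rhs = np.abs(c).sum() ** 4 * 3.0

assert lhs <= rhs
```

No sampling is needed: both sides are closed-form Gaussian moments, so the comparison is exact.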
By Jensen’s inequality and the fact that the variables $v_{t,i}$ are independent for different values of $i$, we have, for any $i,j,k,l$, $$|\mathbb{E}\, v_{t,i} v_{t,j} v_{t,k} v_{t,l}| \leq \max_k \mathbb{E}\, v_{t,k}^4.$$ By direct calculation, it then follows that $$\begin{aligned} \label{eq:4moment} && \hspace{-2.5cm} \max_i \mathbb{E}(R_{t,s} \bm{v}_s)_i^4 = \max_i \mathbb{E} \Bigl(\sum_{j=1}^n (R_{t,s})_{ij} v_{s,j}\Bigr)^4\nonumber\\ &\le& \max_i \Bigl(\sum_{j=1}^n (R_{t,s})_{ij}\Bigr)^4 \max_k \mathbb{E}\, v_{s,k}^4 \nonumber\\ &=& \|R_{t,s}\|^4_\infty \max_k \mathbb{E}\, v_{s,k}^4 \nonumber\\ &\le& C_5(s+1)^{-4+4\beta}(t+1)^{-4\beta}.\end{aligned}$$ Summing  over $0\le s\le t-1$, we conclude from  that $\mathbb{E}\, \bm{\Delta}_{t}^4\le C_6 t^{-2}$, and thus $$\sum_{t\ge 0} \mathbb{E}\, \bm{\Delta}_{t}^4\le C_6 \sum_{t\ge 1}t^{-2}\le C_7.$$ By the Borel-Cantelli lemma, it follows that $\bm{\Delta}_t$ vanishes almost surely. $\Box$ Theorem \[MainTheorem\] follows directly from Lemmas \[lemma:y\] and \[lemma:x\]. $\Box$ We now show why the truth-hearing assumption is necessary. We describe a sequence of graphs $G_t$ that allows every agent infinite access to the truth and yet does not lead to truthful consensus. For this, it suffices to ensure that the expected mean process $\bm{y}_t$ does not converge to the truth. Consider a system with two learning agents with priors $\mu_{0,1}$ and $\mu_{0,2}$ from the same distribution $\mathcal{N}(2,1)$. We have $x_{0,1}=x_{0,2}= y_{0,1}=y_{0,2}= 2$ and, as usual, the truth is assumed to be 0; the noise variance is $\sigma^2 = 1$. The graph sequence is defined as follows: set $t_1=0$; for $k=1,2,\ldots$, agent 1 links to the truth agent at time $t_k$ and to agent 2 at times $t_k+1,\ldots, s_k-1$; then at time $s_k$, agent 2 links to the truth agent, and then to agent 1 at times $s_k+1,\ldots, t_{k+1}-1$.
The time points $s_k$ and $t_k$ are defined recursively to ensure that $$\label{yLBs} y_{s_k,1}\geq 1+2^{-2k+1} \hspace{.5cm} \text{and} \hspace{.5cm} y_{t_k,2} \geq 1+2^{-2k}.$$ In this way, the expected mean processes of the two agents alternate while possibly sliding down toward 1 but never lower. The existence of these time points can be proved by induction. Since $y_{0,2}=2$, the inequality $y_{t_k,2} \geq 1+2^{-2k}$ holds for $k=1$; assume now that it holds for some $k\ge 1$. The key to the proof is that, by , as agent 1 repeatedly links to agent 2, she is pulled arbitrarily close to it. Indeed, the transition rule gives us $$y_{t+1,1} = \frac{ (P_t)_{11} }{ (P_{t+1})_{11} }\, y_{t,1} + \frac{1}{ (P_{t+1})_{11} }\, y_{t,2},$$ where $(P_{t+1})_{11}= (P_t)_{11}+1$, which implies that $y_{t,1}$ can be brought arbitrarily close to $y_{t,2}$ while the latter does not move: this follows from the fact that any product of the form $\prod_{t=t_a+1}^{t_b}\frac{t}{t+1}$ tends to $0$ as $t_b$ grows.[^4] It follows that a suitably increasing sequence of $s_k,t_k$ ensures the two conditions . The beliefs of the two agents do not converge to the truth even though they link to the truth agent infinitely often. For the recursive relation , we use techniques similar to those in the proof of Lemma \[lemma:y\] to prove $\|\Sigma_t\| =o(\frac{1}{t})$. We will repeatedly use the properties of matrix infinity and max norms introduced in Section \[sec:matrix\], and the fact that $\|B_t\|_\infty\le 1$. Notice that $\|M_{mK+k}\|_{\infty}=O(1/m)$ for any $0\le k<K$; it follows that $\|M_{t}H_tM_{t}^T\|_\infty \le C_1/m^2$. By recursive substitution of  for $t=mK,mK+1,\dots,(m+1)K-1$, we have $$\begin{aligned} \|\Sigma_{(m+1)K}\| &\le& \|B_{(m+1)K:mK}\|_\infty^2\|\Sigma_{mK}\|+\frac{C_1}{m^2}\\ &\le&\left(1-\frac{C_2}{m}\right)\|\Sigma_{mK}\|+\frac{C_1}{m^2},\end{aligned}$$ where the second inequality comes from .
Let $a_m:=\|\Sigma_{mK}\|$, then we have the following recursive inequality: $$a_{m+1}\le\left(1-\frac{C_2}{m}\right)a_m+\frac{C_1}{m^2}.$$ Subtracting $\frac{1}{m+1}\frac{C_1}{C_2}$ from both sides and rearranging, we have $$a_{m+1}-\frac{1}{m+1}\frac{C_1}{C_2}\le\left(1-\frac{C_2}{m}\right)\left(a_{m}-\frac{1}{m}\frac{C_1}{C_2}\right),$$ $$\begin{aligned} \mu'(\theta) &\propto& l(d_1,d_2,\dots,d_k|\theta)\mu(\theta)\\ &\propto& \exp \left(-\frac{\tau}{2}\sum_{i=1}^k(d_i-\theta)^2\right)\exp\left(-\frac{\tau_x}{2}(\theta-x)^2\right)\\ &\propto& \exp\left(-\frac{\tau'}{2}(\theta-x')^2\right),\end{aligned}$$ [^1]: $^{1}$Nokia Bell Labs, 600-700 Mountain Avenue, Murray Hill, New Jersey 07974, [chu.wang@nokia.com ]{} [^2]: $^{2}$Department of Computer Science, Princeton University, 35 Olden Street, Princeton, New Jersey 08540, [chazelle]{}@[cs.princeton.edu ]{} [^3]: The Kullback-Leibler divergence [@jadbabaie2012non] is not suitable here because the estimator is Gaussian, hence continuous, whereas the truth is a single-point distribution. [^4]: We note that the construction shares a family resemblance with one used by Moreau [@moreau05] to show the non-consensual dynamics of certain multiagent averaging systems. The difference here is that the weights of the averaging change at each step by increasing the agent’s self-confidence.
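The two-agent counterexample can also be run numerically. A minimal sketch (the geometric phase schedule and the initial precision of 3 are illustrative assumptions; the argument above only requires the $s_k, t_k$ to grow suitably fast):

```python
# Two learning agents alternate: one truth link, then a long stretch of
# mutual following. Precision-weighted averaging as in the transition rule.
y = [2.0, 2.0]          # expected means of agents 1 and 2
p = [3.0, 3.0]          # assumed initial precisions (illustrative)

def link(i, target_mean):
    # agent i averages toward the target with weight 1/(p_i + 1)
    y[i] = (p[i] * y[i] + target_mean) / (p[i] + 1)
    p[i] += 1

for k in range(8):
    length = 4 ** (k + 1)        # phases lengthen geometrically
    link(0, 0.0)                 # agent 1 hears the truth once...
    for _ in range(length):
        link(0, y[1])            # ...then follows agent 2 for a long stretch
    link(1, 0.0)                 # agent 2 hears the truth once...
    for _ in range(length):
        link(1, y[0])            # ...then follows agent 1

print(y)   # both expected means plateau well above the truth 0
```

Each later truth link carries weight $1/(p+1)$ with $p$ already large, so the total pull toward 0 is summable: with this schedule both means settle near 1.5 despite infinitely many (here, eight) truth links.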
--- bibliography: - 'all\_data.bib' --- [Bradford Snios$^{1,*}$, William R. Dunn$^{1,2,*}$, Carey M. Lisse$^{3}$, Graziella Branduardi-Raymont$^{2}$, Konrad Dennerl$^{4}$, Anil Bhardwaj$^{5}$, G. Randall Gladstone$^{6}$, Susan Nulsen$^{1}$, Dennis Bodewits$^{7}$, Caitriona M. Jackman$^{8}$, Juli[á]{}n D. Alvarado-G[ó]{}mez$^{1}$, Emma J. Bunce$^{9}$, Michael R. Combi$^{10}$, Thomas E. Cravens$^{11}$, Renata S. Cumbee$^{12}$, Jeremy J. Drake$^{1}$, Ronald F. Elsner$^{13}$, Denis Grodent$^{14}$, Jae Sub Hong$^{1}$, Vasili Kharchenko$^{1,15}$, Ralph P. Kraft$^{1}$, Joan P. Marler$^{16}$, Sofia P. Moschou$^{1}$, Patrick D. Mullen$^{17}$, Scott J. Wolk$^{1}$, Zhonghua Yao$^{14}$ ]{} [$1$. Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA, USA\ $2$. University College London, London, UK\ $3$. Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA\ $4$. Max-Planck-Institut für extraterrestrische Physik, Garching, Germany\ $5$. Physical Research Laboratory, Ahmedabad, India\ $6$. Southwest Research Institute, San Antonio, TX, USA\ $7$. Auburn University, Auburn, AL\ $8$. University of Southampton, Southampton, UK\ $9$. University of Leicester, Leicester, UK\ $10$. University of Michigan, Ann Arbor, MI, USA\ $11$. University of Kansas, Lawrence, KS, USA\ $12$. NASA Goddard Space Flight Center, Greenbelt, MD, USA\ $13$. NASA Marshall Space Flight Center, Huntsville, AL, USA\ $14$. Universit[é]{} de Li[è]{}ge, Li[è]{}ge, Belgium\ $15$. University of Connecticut, Storrs, CT, USA\ $16$. Clemson University, Clemson, SC, USA\ $17$. University of Illinois Urbana-Champaign, Champaign, IL, USA ]{} *X-ray observatories contribute fundamental advances in Solar System studies by probing Sun-object interactions, developing planet and satellite surface composition maps, probing global magnetospheric dynamics, and tracking astrochemical reactions. 
Despite these crucial results, the technological limitations of current X-ray instruments hinder the overall scope and impact for broader scientific application of X-ray observations both now and in the coming decade. Implementation of modern advances in X-ray optics will provide improvements in effective area, spatial resolution, and spectral resolution for future instruments. These improvements will revolutionize Solar System studies in the following ways:* - Investigate early Solar System elemental and molecular composition via comet emissions, including rapid outflow events at the snow line - Provide elemental composition maps of the surfaces of satellites throughout the solar system with clear implications for astrobiology at Europa - Revolutionise magnetospheric studies of high energy transport and global dynamics of the space environments around planets - Study effects of solar X-ray activity on planet atmosphere evolution with broad ramifications for the evolution of exoplanet atmospheres and their star-planet relationships These milestones will usher in a truly transformative era of Solar System science through the study of X-ray emission. Solar System Objects as X-ray Sources ===================================== Despite being an area of astronomical research for millennia, the study of Solar System objects has undergone spectacular growth in the past two decades. Recent improvements in satellite technologies coupled with reduced flight costs have generated a swell in interplanetary missions designed for [*in situ*]{} observations of Solar System objects. Together with frequent, high-resolution observations using Earth-based telescopes and satellites, Solar System objects are monitored at unprecedented spatial and temporal precisions for astronomical systems. The Solar System truly is a perfect laboratory to study astronomical objects and leverage those insights to understand processes in the more distant universe.
Solar System X-ray studies have provided a fundamental understanding of high energy environments in our local universe. Solar System objects emit X-rays through a variety of mechanisms, such as: charge-exchange emissions between ions and neutral particles, fluorescence emission induced by solar X-rays, bremsstrahlung emissions from auroral activity in magnetospheres, and scattering of solar X-rays. These mechanisms have been utilized to study surface and atmospheric compositions [@Wargelin2004; @Dennerl2008], magnetospheric and auroral dynamics [@Elsner2005; @Bhardwaj2007b; @BranduardiRaymont2008], and energy and mass transport [@Gladstone2002; @Bodewits2007; @Bhardwaj2007; @Snios2016]. X-ray observations have provided key insights into: planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn), satellites (Moon, Io, Europa, Ganymede), comets, asteroids (Eros, Itokawa), and space environments (planetary radiation belts, satellite plasma tori, boundary layers such as Earth’s magnetopause) [@Bhardwaj2014 and references therein]. This broad range of emission mechanisms, coupled with the diversity of the physical conditions of the emitting objects, enables Solar System X-ray studies to provide an irreplaceable window on a wide array of astrophysical bodies and environments. In looking towards the future of Solar System sciences, interplanetary probe and lander missions (such as those from the [*Discovery*]{}, [*New Frontiers*]{}, and [*Solar System Exploration*]{} programs) will provide invaluable single-point space-time measurements that will radically change our perspective of the local universe. However, it is often challenging to interpret these measurements in a global context. The broader impact for scientific application will come from integrating [*in situ*]{} measurements with observations from Earth-based telescopes and satellites. Moreover, simultaneous, remote X-ray instruments are increasingly used to complement [*in situ*]{} observations.
For example, comparisons of remote X-ray observations conjugate with [*Juno*]{} connect [*in situ*]{} measurements to the global current systems and particle acceleration processes at giant planets (Gladstone et al., in prep; Rymer et al., in prep). This need for complementary X-ray data will only continue to grow with the increase of probe and lander missions in the coming decades. Beyond the benefits for Solar System science, high-energy observations of planetary atmospheres are vital for exoplanet studies. Elevated stellar X-ray activity may damage atmospheres by removing bulk gas, depleting ozone, and dissociating water [@Segura2010]. Alternatively, increases in X-ray activity may beneficially stimulate exotic chemical reactions required for complex molecular compounds, such as amino acids, to form [@Glassgold2012; @Cleeves2015; @Cleeves2017]. Planet atmosphere evolution models rely heavily on the high-precision and sampling rates of Solar System observations. The study of these X-ray-induced interactions is especially valuable for exoplanetary systems with K- and M-class host stars where X-ray flares are frequent and intense. X-ray emissions produced through the rapid rotation of Jupiter’s strong magnetic field and plasma injection from Io also provide a spatially-resolved analogue for exoplanets and other rapidly-rotating magnetospheres such as brown dwarfs and pulsars. While X-ray observations are invaluable for both Solar System and exoplanetary sciences, current X-ray instruments (e.g. [[*Chandra*]{}]{}, [*XMM*]{}) suffer from low effective areas at soft X-ray energies ($\sim$150cm$^{2}$) with either limited spatial ($\sim$0.5”) or spectral ($\sim$1 E/$\rm \Delta$E) resolutions.
High-energy observations of the Solar System therefore need long exposure times, requiring either prohibitively long observing campaigns or extensive modeling of the systems in question in order to probe fundamental physics and processes that occur on timescales shorter than hours. In scenarios where high spatial resolution is required, options are even further limited as only one instrument, [[*Chandra*]{}]{}, is presently capable of satisfying such constraints. These restrictions ultimately limit the overall scope and impact for broader scientific application. Incorporating modern X-ray optics will improve effective areas ($>$1000cm$^{2}$) and spatial resolutions ($\sim$0.1”), which will allow us to answer fundamental open questions in Solar System science. In this paper, we highlight several key scientific advancements in Solar System studies that will be possible through utilizing state-of-the-art X-ray instrumentation and technologies. Comets as Tracers of the Early Solar System =========================================== The study of comets has been pivotal in understanding the chemical formation and evolution of the Solar System. Comets are volatile-rich planetesimals created during the initial period of planetary formation. Although the majority of comets currently reside in the outermost regions of the Solar System, orbital perturbations cause a small fraction of these objects to enter into the inner Solar System ($<$ 3 AU) where ices from the comet’s surface start to sublimate and are ejected [@Meech2004]. This outgassing creates a large cloud of dust and gas that surrounds the comet, known as a coma, as well as generates the comet’s iconic tails.
The deposition of dust and ice from comets in the inner Solar System, including direct impacts by comets, has been speculated as a potential delivery mechanism of volatile molecules, such as water, to terrestrial planets where such materials are theorized to be lost during initial stages of planet formation [@Anders1989b; @Chyba1992; @Marty2017]. In studying comets, we therefore obtain detailed information regarding early-stage chemical and dust compositions and the transport of such chemicals throughout the Solar System. While widely known as optically bright sources, the past two decades of research have shown comets to also be X-ray bright. The first evidence for comet X-ray emission was found from C/1996 B2 (Hyakutake) by [@Lisse1996], and X-rays have presently been observed from over 30 comets. As a comet approaches perihelion, it generates X-rays from charge-exchange interactions between solar wind ions and neutral gas in the coma [@Cravens1997; @Wegmann2004; @Bodewits2007]. Coherent scattering of solar X-rays by comet dust and ice particles contributes significantly to the total emission intensity at energies greater than 1 keV, providing an additional tracer of the comet’s composition [@Snios2014; @Snios2018]. Cometary X-ray emissions have been used to determine the velocity, composition, and freeze-in temperature of the solar wind [@Kharchenko2003; @Bodewits2004; @Snios2016], to identify and map plasma interaction structures [@Wegmann2005], and to determine the distribution and composition of the neutral gas [@Lisse2005; @Mullen2017]. ![image](ison.png){width="69.00000%"} Limitations of current X-ray instruments restrict observations to near-Earth comets and to the brightest emission periods during perihelion approach. Such restrictions allow for only 1-2 observations per comet at a rate of 0.5 comets per year. Effective area increases to 2 m$^{2}$ at 1 keV will allow future X-ray instruments to observe an average of 4 comets per year.
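To put the quoted areas in perspective, a back-of-envelope illustration (not a calculation from this paper): photon counts scale roughly as effective area times exposure and fall off as the square of the geocentric distance, so the proposed area increase buys either much shorter exposures or substantially more distant targets:

```python
# Quoted effective areas: ~150 cm^2 today, 2 m^2 proposed (in cm^2 below).
a_now, a_future = 150.0, 2.0e4

gain = a_future / a_now          # count-rate gain at fixed exposure, ~133x
exposure_cut = 1.0 / gain        # same counts in under 1% of the exposure time
distance_reach = gain ** 0.5     # ~11.5x larger distance at fixed counts,
                                 # assuming counts ~ A_eff * t / distance^2
print(round(gain), round(distance_reach, 1))
```

Real comet X-ray brightness also depends on gas production and solar wind conditions, so this inverse-square sketch is only an order-of-magnitude guide.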
Comet observations will also be viable out to distances of $\sim$2.5 AU, where the outgassing may be driven by CO and CO$_{2}$, allowing for multiple observations throughout perihelion approach and exit. Such dramatic increases in available targets and sampling rates will for the first time allow statistical analyses of cometary X-rays as a function of the comet’s composition, outflow rates, origin, orbit, and chemical evolution through its orbit(s). The resulting spatial and temporal statistics from comet observations will additionally provide vital diagnostics for laboratory-based astrochemistry and cross-section studies which presently lack high-precision results for direct comparison [@Ali2016; @Bodewits2019 see the white paper by Betancourt-Martinez et al. for further information]. Continued analysis of X-ray emissions from comets is vital for understanding the chemical composition of the early Solar System, probing the evolution and transportation of complex molecular compounds potentially crucial for life, and applying these chemical results for a broader astrophysical use. Surface Composition of Planets and Satellites ============================================= Lunar X-rays were first remotely mapped on the Apollo 15 mission [@Adler1972], and through fluorescence line emissions, the locations of oxygen, magnesium, aluminum, and silicon were identified across the sunlit side of the Moon [@Schmitt1991; @Wargelin2004; @Narendranath2011]. Currently, the highest surface resolution is 20 km from the Moon-orbiting [*Selenological and Engineering Explorer*]{} [[*SELENE*]{}; @Yokota2009]. Using a spatial resolution better than 0.5", future Earth-orbiting X-ray instruments will obtain a 2 km surface resolution for lunar surface abundances. Such improvements will allow accurate extrapolation from the remote map to the narrow field-of-view and high resolution maps of 1-10 km regions performed from lunar orbiters and landers (e.g.
[*Clementine*]{}, [*Lunar Reconnaissance Orbiter*]{}). Deeper exposures will also allow us to identify abundances for exotic elements, such as gold and sulfur, that are not observable with current instruments. Mapping of the Lunar surface will be vital for future Moon-based mission objectives. Studies of fluorescence emissions from the surface of Mercury have revolutionised our understanding of the composition of the planet [@Weider2015]. Such studies have been possible through [*in situ*]{} missions such as [*Messenger*]{}, and in the future [*BepiColombo*]{}. While Mercury is likely to be too close to the Sun for X-ray observations by an Earth orbiting observatory, the surface composition of rocky and icy bodies beyond Earth can be studied remotely through fluorescence processes. In the outer Solar System, Jupiter’s moons are constantly bombarded by high energy particles from Jupiter’s magnetosphere. These collisions trigger characteristic X-ray fluorescence lines from elements, which has allowed [[*Chandra*]{}]{}, by virtue of its spatial and spectral resolution, to identify oxygen and sulphur in the surfaces of Io, Europa, and Ganymede [@Elsner2005b Nulsen et al., in prep]. For Europa in particular, the recent detection of plumes from the sub-surface ocean means that X-ray fluorescence studies of the surface have key implications for identification of ocean compositions. The satellite’s spectra also hint at a variety of other emission lines (e.g. sodium), but current sensitivity is insufficient for a clear detection. New instruments with improved sensitivity will map these trace elements, providing key abundances that are critical for understanding the salt content of the sub-surface ocean. These types of observations are currently not possible without the [[*Chandra*]{}]{}-like spatial resolution required to distinguish the moons from the background.
Improvements in sensitivity may make these studies possible, and extend them to all the Galilean moons at Jupiter and Enceladus at Saturn. Planetary Magnetospheres and Aurorae ==================================== ![a) Overlaid [[*Chandra*]{}]{} (green dots) and [*HST*]{} (orange) polar projection of Jupiter’s North Pole [@BranduardiRaymont2008]. Large green dots indicate hard X-ray bremsstrahlung emissions from precipitating electrons on current systems outwards from the planet. Small green dots indicate X-rays from Jupiter’s returning currents, through which precipitating high energy ions charge exchange with hydrogen in Jupiter’s atmosphere. b) A UV auroral image showing hydrogen excitation emission from electron collisions in the atmosphere [@Clarke2004]. While the UV auroral emissions have permitted a broad range of studies of Jupiter’s global dynamics through the upward current system, poor sensitivity on previous X-ray instruments has limited interpretation of the partner returning current system. Advances in X-ray sensitivity will allow X-ray studies to provide comparable scientific impact to the UV observations. ](xray_aurora.png "fig:"){width="41.50000%"} ![a) Overlaid [[*Chandra*]{}]{} (green dots) and [*HST*]{} (orange) polar projection of Jupiter’s North Pole [@BranduardiRaymont2008]. Large green dots indicate hard X-ray bremsstrahlung emissions from precipitating electrons on current systems outwards from the planet. Small green dots indicate X-rays from Jupiter’s returning currents, through which precipitating high energy ions charge exchange with hydrogen in Jupiter’s atmosphere. b) A UV auroral image showing hydrogen excitation emission from electron collisions in the atmosphere [@Clarke2004]. While the UV auroral emissions have permitted a broad range of studies of Jupiter’s global dynamics through the upward current system, poor sensitivity on previous X-ray instruments has limited interpretation of the partner returning current system. 
Advances in X-ray sensitivity will allow X-ray studies to provide comparable scientific impact to the UV observations. ](uv_aurora.jpg "fig:"){width="52.00000%"} This year marks the 40th anniversary of the discovery that planets other than Earth generate X-rays, beginning with Jupiter’s bright and dynamic X-ray aurorae in 1979 [@Metzger1983]. Since this novel finding, X-ray observations by the [*Einstein*]{}, [*ROSAT*]{}, [[*Chandra*]{}]{}, [*XMM-Newton*]{}, [*Suzaku*]{}, and [*NuSTAR*]{} X-ray observatories have all provided deep insights into the planetary properties of Jupiter, its moons, its surrounding space environment, and its interaction with the upstream solar wind. Jupiter provides a critical local analogue for more distant gas giants and rapidly-rotating magnetospheres such as exoplanets, brown dwarfs and pulsars, for which there are no [*in situ*]{} measurements with which to enrich understanding of remote observations. [[*Chandra*]{}]{}’s higher spatial resolution has been fundamental in understanding the nature of these processes. Within one year of launch, [[*Chandra*]{}]{} revealed that the majority of Jupiter’s X-ray aurorae were localised in a region at Jupiter’s poles [@Gladstone2002 see also Figure 2a] in which the planet acts as the largest natural particle accelerator in the Solar System [@Barbosa1990], accelerating ions to tens of MeV energies. [[*Chandra*]{}]{}’s spatial resolution has further revealed a variety of fundamental properties of Jupiter, showing that Jupiter’s poles sometimes pulse with regularity and can operate independently of one another [@Gladstone2002; @Dunn2017] and helping to reveal how the giant planets respond to space weather events [@Dunn2016]. The ion precipitations that predominantly produce Jupiter’s X-ray emission reveal the dynamics of Jupiter’s return current [@Cravens2003], which can only be remotely studied through X-ray observations.
While the X-rays uniquely permit studies of the return currents, the current system outwards from the planet can be probed by both UV and X-ray observations [@BranduardiRaymont2008 Figure 2a]. Over the last two decades, UV observations of Jupiter’s outward current system (Figure 2b) have produced hundreds of findings and profoundly changed our understanding of giant planets (e.g. review in @Grodent2015). It is not yet possible to leverage the X-rays in the same manner, due to the low instrument sensitivities and consequent low statistics. For instance, Figure 2a shows several hours of [[*Chandra*]{}]{} observations which attained a few hundred photons. Improvements in sensitivity will provide $\sim10^4$ counts for a similar duration. This transformation, in combination with high angular resolution, will identify the auroral morphology of the return currents on the few-minute timescales on which they are known to vary [@Dunn2017; @Jackman2018]. This will allow X-ray observations to probe the return currents, as UV observations have done for outward currents, offering revolutionary new insights into the global dynamics of rapidly-rotating magnetospheres and giant planets. Beyond the Jovian system, X-ray observations of Saturn have revealed atmospheric properties [@Ness2004; @BranduardiRaymont2010] and fluorescent emissions from the rings, identifying elemental composition [@Bhardwaj2005]. However, despite Saturn’s comparable size and rotation-rate to Jupiter, these studies have failed to detect X-ray aurora at the planet. This raises the question of whether Jupiter and Saturn are fundamentally different in their particle acceleration or whether it is simply that Saturn’s X-ray aurorae are too dim to detect with current instrument effective areas. Future X-ray instruments will determine whether X-ray aurorae are ubiquitous across gas giants or dependent upon specific properties of each system.
A higher sensitivity instrument will also open up observations of the ice giants, Uranus and Neptune, to identify the high energy environments around these planets. An instrument with heightened sensitivity and a comparable spatial resolution to [[*Chandra*]{}]{} will revolutionise the study of planetary magnetospheres, providing a new window into the high-energy regimes of these planets. Summary ======= The Solar System is a perfect laboratory to study astronomical objects with remarkable spatial and temporal precision and to leverage those insights to understand the more distant universe. X-ray instruments are fundamental to such studies, providing insights into Sun-object interactions, developing surface composition maps, probing small-scale and global magnetospheric dynamics, and tracking local astrochemical reactions. Although current X-ray instruments are still a vital tool to augment planetary science in the next decade, their limitations mean that a giant leap in this science can only be achieved with a major step up in observatory capabilities. Implementing modern X-ray optics in future instruments, including improvements in effective area, spatial resolution, and spectral resolution, will remove these strict limitations and foster a truly transformative era of Solar System science through the study of X-ray emission. References {#references .unnumbered} ========== [2]{}
--- abstract: 'Inversion formulas have been found converting between *Stirling*, *tanh* and *Lah* numbers. *Tanh* and *Lah* polynomials, analogous to the Stirling polynomials, have been defined and their basic properties established. New identities for Stirling and tangent numbers and polynomials have been derived from the general inverse relations. In the second part of the paper, it has been shown that if shifted-gamma probability densities and negative binomial distributions are *matched* by equating their first three semi-invariants (cumulants), then the cumulants of the two distributions are related by a pair of reciprocal linear combinations equivalent to the inversion formulas established in the first part.' address: 'University of Udine, Department of Mathematics and Computer Science, Via delle Scienze 206. 33100-Udine, Italy.' author: - Giacomo Della Riccia title: 'Inversions relating Stirling, Tanh, Lah Numbers and an application to Mathematical Statistics' --- Introduction ============ The usual form of an *inversion formula* is $$g(n)=\sum_ia(n,i)f(i)\leftrightarrow f(n)=\sum_iA(n,i)g(i),$$ where $\left\{a(n,m), A(n,m)\right\}$ is a pair of *inverse numbers* and $\left\{f(n)\right\}$, $\left\{g(n)\right\}$ are numerical sequences. An extensive study of inverse relations and related topics can be found in [@Riorb]. In this paper we will consider inversion formulas of a more general form: $$g(n,m)=\sum_ia(n,i)f(i,m)\leftrightarrow f(n,m)=\sum_iA(n,i)g(i,m),$$ with *double-sequences* $\left\{f(n,m)\right\}$, $\left\{g(n,m)\right\}$ and the corresponding *number inverses* $\left\{a(n,m)\right\}$, $\left\{A(n,m)\right\}$; in other words, we are interested in inverse relations converting between *arrays*. Of course, from the general formula, one can get inverse relations for *sequences* by fixing one of the two indices.
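The defining property of an inverse pair, $\sum_ia(n,i)A(i,m)=[m=n]$, is easy to check by machine. The following Python sketch (our illustration, not part of the paper) verifies it for the classical inverse pair of signed Stirling numbers of the first kind and Stirling numbers of the second kind, and then uses the pair to invert a sample sequence:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, m):
    """Signed Stirling numbers of the first kind: s(n,m) = s(n-1,m-1) - (n-1)s(n-1,m)."""
    if n == 0 or m == 0:
        return 1 if n == m == 0 else 0
    return s1(n - 1, m - 1) - (n - 1) * s1(n - 1, m)

@lru_cache(maxsize=None)
def s2(n, m):
    """Stirling numbers of the second kind: S(n,m) = S(n-1,m-1) + m S(n-1,m)."""
    if n == 0 or m == 0:
        return 1 if n == m == 0 else 0
    return s2(n - 1, m - 1) + m * s2(n - 1, m)

N = 8
# Orthogonality: sum_i s1(n,i) s2(i,m) = [m = n]
for n in range(N):
    for m in range(N):
        assert sum(s1(n, i) * s2(i, m) for i in range(N)) == (1 if n == m else 0)

# Sequence inversion: g(n) = sum_i S(n,i) f(i)  =>  f(n) = sum_i s(n,i) g(i)
f = [3, 1, 4, 1, 5, 9, 2, 6]
g = [sum(s2(n, i) * f[i] for i in range(N)) for n in range(N)]
f_back = [sum(s1(n, i) * g[i] for i in range(N)) for n in range(N)]
assert f_back == f
```

The same pattern applies verbatim to the array inversions studied below, with the inner index of the pair playing the role of $i$.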
As we shall see, the case $m=1$ is of particular interest because, in general, it is associated with identities involving important classical numbers. We used this approach to find relations connecting Stirling $\left\{{n \brack m}, {n \brace m}\right\}$, $\left\{\mbox{arctan}\ t(n,m), \mbox{tangent}\ T(n,m)\right\}$ and Lah $L(n,m)$ numbers. Then, in analogy to the Stirling polynomials $\sigma_k(x)$, we introduced *tanh* $\delta_k(x)$ and Lah $\lambda_k(x)$ polynomials and derived connecting relations. Most of the identities we obtained seem to be original. Finally, we discussed the above results in the light of a problem in Mathematical Statistics dealing with semi-invariants (cumulants) of *shifted-gamma* probability densities $g(\vartheta;\ a,b,c)=\Gamma(\vartheta+c;\ a,b)$ and *negative binomial distributions* $nb(\varpi; r,\lambda)$. We showed that if $g(\vartheta;\ a,b,c)$ and $nb(\varpi; r,\lambda)$ are *matched* by equating their first three cumulants, then the cumulants $\gamma(n)$ and $\eta(n)$ of the two distributions are related by reciprocal linear combinations equivalent to the *array* inversion formulas established previously. Relations between Stirling, tanh and Lah numbers ================================================ We will use Stirling ${n\brack m}$, ${n\brace m}$ [@Kn], arctan $t(n,m)$ and tangent $T(n,m)$ ([@Comt], see pp. 258-260), and Lah $L(n,m)$ ([@Riora], see pp.
43-44) numbers, but with scale factors and appropriate notations: $$\begin{aligned} &&\theta(n,m)=(-1)^{(n-m)/2}t(n,m),\quad \Theta(n,m)=(-1)^{(n-m)/2}T(n,m);\\ && \overline{{n\brack m}}=(-2)^{n-m}{n\brack m},\quad \overline{{n\brace m}}=2^{n-m}{n\brace m};\\ &&\overline{l}(n,m)=(-1)^{n-m}\overline{L}(n,m)=(-1)^nL(n,m)=\frac{n!}{m!}{{n-1}\choose{m-1}}.\end{aligned}$$ The orthogonal relations satisfied by $\left\{\theta(n,m), \Theta(n,m)\right\}$ and $\left\{\overline{{n\brack m}}, \overline{{n\brace m}}\right\}$ are of the form $\sum_{i}a(n,i)A(i,m)=\sum_{i}A(n,i)a(i,m)=[m=n]$ and it is easily verified by direct calculation that these relations are also valid for $\left\{\overline{l}(n,m), \overline{L}(n,m)\right\}$. Recursions and egfs are simply obtained by introducing scales in the ordinary relations. Since the egfs of $\theta(n,m)$ and $\Theta(n,m)$ are of particular interest, we give the following explicit derivation. The known egfs [@Comt] are $$t_m(v)=\sum_{n}t(n,m)\frac{v^n}{n!} =\frac{1}{m!}\arctan^m v,\quad T_m(v)=\sum_{n}T(n,m)\frac{v^n}{n!}=\frac{1}{m!}\tan^m v.$$ Since $t_m(v)$ and $T_m(v)$, as functions of $v$, have the same parity as $m$, $t(n,m)=T(n,m)=0$ when $n-m$ is *odd*.
Thus, with $n-m$ even, putting $v=\imath u$, $\imath^2=-1$, $\arctan(\imath u)=\imath\arg\tanh u$ and $\tan(\imath u)=\imath\tanh u$, we get $$\begin{aligned} \label{egf} &\theta_m(u)=\sum_{n}\imath^{n-m}t(n,m)\frac{u^n}{n!} =\sum_{n}(-1)^{\frac{n-m}{2}}t(n,m)\frac{u^n}{n!}=\frac{1}{m!}\arg\tanh^m u\\ &=\frac{1}{m!}\left(\frac{1}{2}\ln \frac{1+u}{1-u}\right)^m =\frac{1}{m!}\left(\sum_{j}\theta_{2j+1}\frac{u^{2j+1}}{(2j+1)!}\right)^m;\ \theta_{2j+1} =(-1)^{j}t_{2j+1}=(2j)!;\nonumber\\ &\Theta_m(u)=\sum_{n}\imath^{n-m}T(n,m)\frac{u^n}{n!} =\sum_{n}(-1)^{\frac{n-m}{2}}T(n,m)\frac{u^n}{n!}=\frac{1}{m!}\tanh^m u\nonumber\\ &=\frac{1}{m!}\left(\sum_{j}\Theta_{2j+1}\frac{u^{2j+1}}{(2j+1)!}\right)^m;\ \Theta_{2j+1}=(-1)^{j}T_{2j+1} =\frac{4^{j+1}(4^{j+1}-1)}{2j+2}B_{2j+2}.\nonumber\end{aligned}$$ Thus, the scale factors $(-1)^{(n-m)/2}$ turn the *arctan* $t(n,m)$ and *tangent* $T(n,m)$ numbers into *arctanh* $\theta(n,m)$ and *tanh* $\Theta(n,m)$ numbers. For brevity, the pairs $\{\overline{{n\brack m}}, \overline{{n\brace m}}\}$, $\{\theta(n,m), \Theta(n,m)\}$ and $\{\overline{l}(n,m), \overline{L}(n,m)\}$ will be called *Stirling*, *tanh* and *Lah* numbers, respectively. Basic properties of the scaled numbers are listed in Table 1.
-------------------------------------------------------------------- $\mbox{Recurrence relations}$ $\overline{{{n+1}\brack m}}=\overline{{n\brack{ m-1}}}-2n\overline{{n\brack m}},\quad \overline{{{n+1}\brace m}}=\overline{{n\brace{ m-1}}}+2m\overline{{n\brace m}};$ $\overline{{0\brack m}}=\overline{{0\brace m}}=[m=0],\quad \overline{{n\brack 0}}=\overline{{n\brace 0}}=[n=0].$ $\theta(n+1,m)=\theta(n,m-1)+n(n-1)\theta(n-1,m),$ $\Theta(n+1,m)=\Theta(n,m-1)-m(m+1)\Theta(n,m+1);$ $\theta(0,m)=\Theta(0,m)=[m=0];\ \theta(n,0)=\Theta(n,0)=[n=0];\ \theta(1,0)=\Theta(1,0) =0.$ $\overline{l}(n+1,m)=(n+m)\overline{l}(n,m)+\overline{l}(n,m-1),$ $\overline{L}(n+1,m)=-(n+m)\overline{L}(n,m)+\overline{L}(n,m-1);$ $\overline{l}(n,0)=\overline{L}(n,0)\equiv0;\quad (-1)^m\overline{l}(0,m)=\overline{L}(0,m)=-\frac{1}{m!}+[m=0].$ $\mbox{Duality laws and Orthogonal relations}$ $\overline{{n\brack m}}=(-1)^{n-m}\overline{{-m\brace {-n}}};\ \theta(n,m)=\Theta(-m,-n);\ \overline{l}(n,m) =(-1)^{n-m}\overline{L}(-m,-n).$ $\sum_{i}a(n,i)A(i,m)=\sum_{i}A(n,i)a(i,m)=[m=n].$ $\mbox{Exponential generating functions}$ $\sum_{n}\overline{{n\brack m}}\frac{u^n}{n!} =\frac{1}{m!}\left(\frac{1}{2}\ln (1+2u)\right)^m,\quad \sum_{n}\overline{{n\brace m}}\frac{u^n}{n!} =\frac{1}{m!}\left(\frac{e^{2u}-1}{2}\right)^m$ $\theta_m(u)=\sum_{n}\theta(n,m)\frac{u^n}{n!} =\frac{1}{m!}\arg\tanh^m u=\frac{1}{m!}\left(\frac{1}{2}\ln \frac{1+u}{1-u}\right)^m$ $\Theta_m(u)=\sum_{n}\Theta(n,m)\frac{u^n}{n!}=\frac{1}{m!}\tanh^m u$ $\sum_{n}\overline{l}(n,m)\frac{u^n}{n!} =\frac{1}{m!}\left(\frac{u}{1-u}\right)^m,\quad \sum_{n}\overline{L}(n,m)\frac{u^n}{n!} =\frac{1}{m!}\left(\frac{u}{1+u}\right)^m.$ -------------------------------------------------------------------- : Basic properties of scaled Stirling, tanh and Lah numbers \[conv\] Numbers in each pair, *Stirling*, *tanh* and *Lah*, convert between numbers in the other two pairs. 
Let $\tanh u=\frac{v}{1+v}$, $v=\frac{e^{2u}-1}{2}$, then from the egfs listed in Table 1 we get: $$\begin{array}{l}\label{sequence} \sum_{n}\Theta(n,m)\frac{u^n}{n!}=\frac{1}{m!}\tanh^m u =\frac{1}{m!}\left(\frac{v}{1+v}\right)^m=\sum_i\overline{L}(i,m)\frac{v^i}{i!}\\ =\sum_i\overline{L}(i,m)\frac{1}{i!}\left(\frac{e^{2u}-1}{2}\right)^i= \sum_i\overline{L}(i,m)\sum_n\overline{{n\brace i}}\frac{u^n}{n!}\\ =\sum_n\left(\sum_i\overline{L}(i,m)\overline{{n\brace i}}\right)\frac{u^n}{n!}. \end{array}$$ These equations imply $\Theta(n,m)=\sum_{i=m}^n\overline{L}(i,m)\overline{{n\brace i}}$ and, by dualities and inversions (Table 1), $\theta(n,m)=\sum_{i=m}^n\overline{l}(i,m)\overline{{n\brack i}}$, $\overline{{n\brack m}}=\sum_i\overline{L}(n,i)\theta(i,m)$ and $\overline{{n\brace m}}=\sum_i\overline{l}(n,i)\Theta(i,m)$. Hence, Lah numbers convert between Stirling and tanh numbers. In Table 2 we listed the identities that are derived by use of inversions and/or dualities given in Table 1. Since in the third and fourth inversion formulas Stirling numbers convert between Lah and tanh numbers, and in the fifth and sixth tanh numbers convert between Lah and Stirling numbers, the proof is complete. The basic structure connecting tanh and Stirling numbers is the following. \[coeff\] Tanh numbers are finite sums of multiples of Stirling numbers, and inversely $$\begin{array}{l} \theta(n,n-k)=\sum_{i=0}^ki!{{n}\choose i}{{n-1}\choose i}\overline{{{n-i}\brack {n-k}}},\\ \overline{{n\brack {n-k}}}=\sum_{i=0}^k(-1)^ii!{{n}\choose i}{{n-1}\choose i}\theta(n-i,n-k),\\ \Theta(n+k,n)=\sum_{i=0}^k(-1)^{k-i}i!{{n+i}\choose{i}} {{n+i-1}\choose{i}}\overline{{{n+k}\brace{n+i}}},\\ \overline{{n+k\brace n}}=\sum_{i=0}^k(-1)^{k-i}i!{{n+i}\choose{i}}{{n+i-1}\choose{i}}\Theta(n+k,n+i). \end{array}$$ These relations are obtained from the first two inverse pairs in Table 2 with $k=n-m$, $i$ replaced by $n-i$ and the use of Lah numbers explicit expressions.
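The conversion $\Theta(n,m)=\sum_{i=m}^n\overline{L}(i,m)\overline{{n\brace i}}$ can also be confirmed numerically from the recurrences of Table 1 and the explicit Lah formula. A small Python check (ours, for illustration only):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def Theta(n, m):
    """tanh numbers via Table 1: Theta(n+1,m) = Theta(n,m-1) - m(m+1)Theta(n,m+1)."""
    if n == 0:
        return 1 if m == 0 else 0
    if m <= 0 or m > n:
        return 0
    return Theta(n - 1, m - 1) - m * (m + 1) * Theta(n - 1, m + 1)

@lru_cache(maxsize=None)
def S2bar(n, m):
    """Scaled Stirling numbers of the second kind, 2^(n-m) {n brace m}."""
    if n == 0 or m == 0:
        return 1 if n == m == 0 else 0
    return S2bar(n - 1, m - 1) + 2 * m * S2bar(n - 1, m)

def Lbar(n, m):
    """Scaled Lah numbers, (-1)^(n-m) * n!/m! * C(n-1, m-1)."""
    return (-1) ** (n - m) * factorial(n) // factorial(m) * comb(n - 1, m - 1)

# Theta(n,m) = sum_{i=m}^{n} Lbar(i,m) * S2bar(n,i)
for n in range(1, 10):
    for m in range(1, n + 1):
        assert Theta(n, m) == sum(Lbar(i, m) * S2bar(n, i) for i in range(m, n + 1))
```

For instance, $\Theta(3,1)=\overline{L}(1,1)\cdot4+\overline{L}(2,1)\cdot6+\overline{L}(3,1)\cdot1=4-12+6=-2$, the $u^3/3!$ coefficient of $\tanh u$.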
------------------------------------------------------------------------------------------ $\overline{{n\brack m}}=(-2)^{n-m}{n\brack m},\quad \overline{{n\brace m}}=2^{n-m}{n\brace m}$ $\theta(n,m)=(-1)^{(n-m)/2}t(n,m), \quad \Theta(n,m)=(-1)^{(n-m)/2}T(n,m)$ $\overline{l}(n,m)=(-1)^nL(n,m),\quad \overline{L}(n,m) =(-1)^mL(n,m)$ $\Theta(n,m)=\sum_i\overline{L}(i,m)\overline{{n\brace i}}\quad \leftrightarrow\quad \overline{{n\brace m}}=\sum_i\overline{l}(i,m)\Theta(n,i)$ $\theta(n,m)=\sum_i\overline{l}(n,i)\overline{{i\brack m}}\quad \leftrightarrow\quad \overline{{n\brack m}}=\sum_i\overline{L}(n,i)\theta(i,m)$ $\overline{L}(n,m) =\sum_i\overline{{n\brack i}}\Theta(i,m)\quad \leftrightarrow\quad \Theta(n,m)=\sum_i\overline{{n\brace i}}\overline{L}(i,m)$ $\overline{l}(n,m)=\sum_i\overline{{i\brace m}}\theta(n,i)\quad \leftrightarrow\quad \theta(n,m)=\sum_i\overline{{i\brack m}}\overline{l}(n,i)$ $\overline{l}(n,m)=\sum_i\theta(n,i)\overline{{i\brace m}}\quad \leftrightarrow\quad \overline{{n\brace m}}=\sum_i\Theta(n,i)\overline{l}(i,m)$ $\overline{L}(n,m) =\sum_i\Theta(i,m)\overline{{n\brack i}}\quad \leftrightarrow\quad \overline{{n\brack m}}=\sum_i\theta(i,m)\overline{L}(n,i)$ ------------------------------------------------------------------------------------------ : Conversions between Stirling, tanh, and Lah numbers As we know ([@Knu], see p. 418), Stirling numbers $\overline{{x\brack {x-k}}}$, $\overline{{x+k\brace x}}$ can be viewed as polynomials in $x$. Thus, Corollary (\[coeff\]) implies that $\theta(x,x-k)$, $\Theta(x+k,x)$ can also be treated as polynomials; these have the following properties. \[poly1\] If $k=2j\geq0$, then $\theta(x,x-k)$, $\Theta(x+k,x)$ are polynomials in $x$ having degree $\frac{3k}{2}=3j$ and leading coefficient $\frac{1}{3^j\times j!}$, $(-1)^j\frac{1}{3^j\times j!}$, respectively. The proof is by induction on $k$ applied to the tanh number recurrence relations (Table 1), written in the $k=n-m$ notation.
Details of the proof are omitted because they are the same as those used by Gessel and Stanley ([@Gess], see p. 25) in their study of the structure of Stirling numbers. The first few cases are the following. $$\begin{aligned} &\theta(x,x-2)=-\Theta(x,x-2)=\frac{3!}{3\times1!}{x\choose 3}\\ &\theta(x,x-4)=\frac{6!}{3^2\times2!}{{x+1}\choose 6}-2^4{x\choose 5};\quad \Theta(x,x-4)=\frac{6!}{3^2\times2!}{{x+1}\choose 6}-2^3\times3{x\choose 5}.\end{aligned}$$ As pointed out in the Introduction, the general inverse relations in Table 2 yield interesting results in the case of $m=1$, essentially because $$\begin{aligned} &\overline{{n\brack 1}}=(-2)^{n-1}(n-1)!;\quad \overline{{n\brace 1}}=2^{n-1}.\\ &\theta(2n+1,1)=(2n)!;\quad \Theta(2n+1,1)=(-1)^nT_{2n+1};\quad \theta(2n,1)=\Theta(2n,1)\equiv0.\end{aligned}$$ The first general pair of inverse relations gives Stirling number identities: $$\label{nids} \begin{array}{c} \sum_{i=1}^{n}(-1)^{i-1}i!\overline{{{n}\brace{i}}}= \begin{cases} 0,\ \mbox{even}\ n\\ \Theta(n,1),\ \mbox{odd}\ n\\ \end{cases} \end{array} \leftrightarrow\quad 2^{n-1}=\sum_{i=1}^n i!\Theta(n,i).$$ The identity when $n$ is even was given for the first time by Lengyel ([@Len], see p. 7), whereas the identity for $n$ odd is new. The third inverse pair yields: $$\label{nidp} 2^{n-1}=\sum_{i=1}^n\Theta(n,i)i!\quad \leftrightarrow\quad n!=\sum_{i=1}^{n}\theta(n,i)2^{i-1},$$ showing that tanh numbers convert between factorials and powers of 2. From the fourth we get an inverse pair: $$\theta(n,1) =\sum_{i=1}^n\overline{l}(n,i)(-2)^{i-1}(i-1)!\leftrightarrow (-2)^{n-1}(n-1)! =\sum_{i=1}^n\overline{L}(n,i)\theta(i,1),$$ extending an inversion formula $n!=\sum_{i=1}^n\overline{L}(n,i)2^{i-1}i!\leftrightarrow 2^{n-1}n!=\sum_{i=1}^n\overline{l}(n,i)i!$ given by Lah ([@Lah], see p. 207).
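Relation (\[nidp\]) lends itself to a quick machine check. The following Python sketch (ours) builds $\theta(n,m)$ and $\Theta(n,m)$ from the Table 1 recurrences and verifies that tanh numbers indeed convert between factorials and powers of 2:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def theta(n, m):
    """arctanh numbers: theta(n+1,m) = theta(n,m-1) + n(n-1) theta(n-1,m) (Table 1)."""
    if n == 0:
        return 1 if m == 0 else 0
    if m <= 0 or m > n:
        return 0
    return theta(n - 1, m - 1) + (n - 1) * (n - 2) * theta(n - 2, m)

@lru_cache(maxsize=None)
def Theta(n, m):
    """tanh numbers: Theta(n+1,m) = Theta(n,m-1) - m(m+1) Theta(n,m+1) (Table 1)."""
    if n == 0:
        return 1 if m == 0 else 0
    if m <= 0 or m > n:
        return 0
    return Theta(n - 1, m - 1) - m * (m + 1) * Theta(n - 1, m + 1)

for n in range(1, 13):
    assert factorial(n) == sum(theta(n, i) * 2 ** (i - 1) for i in range(1, n + 1))
    assert 2 ** (n - 1) == sum(Theta(n, i) * factorial(i) for i in range(1, n + 1))
    assert theta(2 * n + 1, 1) == factorial(2 * n)   # theta(2j+1,1) = (2j)!
```

For example, $3!=\theta(3,1)\cdot1+\theta(3,3)\cdot4=2+4$ and $2^2=\Theta(3,1)\cdot1!+\Theta(3,3)\cdot3!=-2+6$.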
Finally, the fifth and the sixth pairs disclose original identities involving two of the three families of Stirling, tanh and Lah numbers: $$\begin{aligned} &\Theta(n,1)=\sum_{i=1}^n\overline{{{n}\brace{i}}}(-1)^{i-1}i!\quad \leftrightarrow\quad (-1)^{n-1}n!=\sum_{i=1}^n \overline{{{n}\brack{i}}}\Theta(i,1)\\ &(-1)^{n-1}n!=\sum_{i=1}^n\Theta(i,1)\overline{{{n}\brack{i}}}\ \leftrightarrow\ (-2)^{n-1}(n-1)!= \sum_{i=1}^n\theta(i,1)\overline{L}(n,i).\end{aligned}$$ Stirling, tanh and Lah polynomials ================================== Since $\theta(x,x-k)=\sum_{i=0}^ki!{{x}\choose i}{{x-1}\choose i}\overline{{{x-i}\brack {x-k}}}$ and $\overline{l}(x,x-k)=k!{{x}\choose{k}}{{x-1}\choose{k}}$ vanish for $x=0,1,\ldots,k$, *tanh polynomials* $\delta_k(x)$ and *Lah polynomials* $\lambda_k(x)$ can be defined by rules similar to $\overline{\sigma}_k(x)=\left.\overline{{x\brack {x-k}}}\right/x^{\underline{k+1}}$ [@Kn] used for Stirling polynomials. For clarity, we recall that $x^{\underline{k+1}}=x(x-1)\cdots(x-k)=(-1)^{k+1}(k-x)^{\underline{k+1}}$.
\[stp1\] $$\begin{aligned} &\delta_k(x)=\frac{\theta(x,x-k)}{x^{\underline{k+1}}}\ \sim\ \delta_k(k-x)=(-1)^{k+1}\frac{\Theta(x,x-k)}{x^{\underline{k+1}}};\\ &\lambda_k(x)=\frac{\overline{l}(x,x-k)}{x^{\underline{k+1}}}\ \sim\ \lambda_k(k-x)=-\frac{\overline{L}(x,x-k)}{x^{\underline{k+1}}}=(-1)^{k+1}\lambda_k(x).\end{aligned}$$ In the case of integers $n,m$, the above definitions assume the form $$\label{stp2} \theta(n,m)=\frac{n!}{(m-1)!}\delta_{n-m}(n);\quad \Theta(n,m)=-\frac{n!}{(m-1)!}\delta_{n-m}(-m).$$ Corollary (\[coeff\]) with $x$ instead of $n$ and a factor $x^{\underline{k+1}}$ divided out, yields: \[coeff3\] Tanh polynomials $\delta_k(x)$ are finite sums of multiples of Stirling polynomials $\overline{\sigma}_k(x)$, and inversely $$\begin{aligned} &\delta_k(x)=\sum_{i=0}^k{{x-1}\choose {i}}\overline{\sigma}_{k-i}(x-i)\ \sim\ \delta_k(k-x)=\sum_{i=0}^k{{k-x-1}\choose {k-i}}\overline{\sigma}_i(i-x);\\ &\overline{\sigma}_k(x)=\sum_{i=0}^k{{x-1}\choose {i}}\delta_{k-i}(x-i)\ \sim\ \overline{\sigma}_k(k-x)=\sum_{i=0}^k{{k-x-1}\choose {k-i}}\delta_i(i-x).\end{aligned}$$ The generating functions of $x\delta_k(x)$ and $x\delta_{k}(k+x)$ are: $$\begin{aligned} &\sum_{k}x\delta_k(x)u^{k}=\left(u\coth u\right)^x=\left(\sum_{j}2^{2j}\frac{B_{2j}}{(2j)!}u^{2j}\right)^x,\ B_{2j}\ \emph{Bernoulli numbers};\\ &\sum_{k}x\delta_{k}(k+x)u^{k}=\left(\frac{1}{u}\arg\tanh u\right)^x=\left(\frac{1}{2u}\ln \frac{1+u}{1-u}\right)^{x}=\left(\sum_{j\geq0}\frac{1}{2j+1}u^{2j}\right)^{x}.\end{aligned}$$ The egf of $\theta(n+k,n)$ (\[egf\]) and definitions (\[stp2\]) yield $$\begin{aligned} &&\sum_{k}\theta(n+k,n)\frac{u^{n+k}}{(n+k)!} =\sum_{k}\frac{(n+k)!}{(n-1)!}\delta_k(n+k)\frac{u^{n+k}}{(n+k)!}=\frac{1}{n!}\arg\tanh^n u,\\ &&\sum_{k}x\delta_{k}(k+x)u^{k}=\left(\frac{1}{u}\arg\tanh u\right)^x\end{aligned}$$ where, on the basis of the “polynomial” argument, we replaced $n$ with $x$. 
Using the egf of $\Theta(n+k,n)$ and proceeding as above, we obtain $\sum_{k}x\delta_k(x)u^{k}=\left(u\coth u\right)^x$. \[poly2\] Tanh polynomials $\delta_{k}(x)$ satisfy the recurrence relation $$\begin{aligned} &(x+1)\delta_{k}(x+1)=(x-k)\delta_{k}(x)+(x-1)\delta_{k-2}(x-1);\quad x\delta_0(x)\equiv1.\\ &\delta_{k}(x),\ k=2j>0,\ \emph{has degree}\ k/2-1\ \emph{and}\ \delta_{k}(x)\equiv0,\ k=2j+1.\end{aligned}$$ The recurrence relation is obtained by dividing out a common factor $x^{\underline{k}}$ in the recurrence relation of $\theta(n,m)$ (Table 1), written with $n=x$ and $m=x+1-k$. (Compare with $(x+1)\sigma_k(x+1)=(x-k)\sigma_k(x)+x\sigma_{k-1}(x)$, ([@Kn] Exercise 18, Chapter 6)). That $\delta_k(x)$, $k=2j>0$, has degree $k/2-1$ follows from Proposition (\[poly1\]), and $\delta_{k}(x)\equiv0$ for $k=2j+1$ because $\theta(x,x-k)=\Theta(x+k,x)\equiv0$ for $k=2j+1$. (Recall that $\overline{\sigma}_k(x),\ k>0$, has degree $k-1$, [@Kn]). The first few cases are the following: $$\begin{aligned} &\delta_{k}(1)=2^{k}\frac{B_{k}}{k!}+[k=1];\quad k\delta_k(0) =2^k(2^k-2)\frac{B_k}{k!},\quad k>0;\\ &\delta_{k}(-1)=-\frac{\Theta_{k+1}}{(k+1)!};\quad \delta_{2j}(2j+1)=\frac{1}{2j+1}.\\ &\delta_2(x)\equiv\frac{1}{3};\ \delta_4(x) =\frac{1!}{3^2\times 2!}{{x-1}\choose 1}-\frac{1}{3^2\times5};\ \delta_6(x)=\frac{2!}{3^3\times3!}{x\choose 2}-\frac{2^3}{3^4\times5}{{x-1}\choose 1}+\frac{2}{3^3\times7\times5}.\end{aligned}$$ A companion expression of (\[nids\]), derived from Proposition (\[coeff3\]), is $$\begin{array}{c} \sum_{i=1}^{k}{{x-1}\choose {i}}\overline{\sigma}_{k-i}(x-i)= \begin{cases} 0,\quad \mbox{odd}\ k\\ \delta_k(x),\quad \mbox{even}\ k\\ \end{cases} \end{array};\quad \overline{\sigma}_k(x)=\sum_{i=1}^{k}{{x-1}\choose {i}}\delta_{k-i}(x-i).$$ $\sum_{i=1}^{2j+1}{{x-1}\choose {i}}\overline{\sigma}_{2j+1-i}(x-i)=0$ is a new Stirling polynomial identity. The properties of $x\lambda_k(x)$ follow at once from $x\lambda_k(x)={x\choose k}$.
Polynomials $x\lambda_k(x)={x\choose k}$ have degree $k$. Their generating function and recurrence relation are $$\begin{array}{l} \sum_{k\geq0}x\lambda_k(x)u^{k}=(1+u)^x,\quad \sum_{k\geq0}x\lambda_{k}(k+x)u^{k}=\frac{1}{(1-u)^x},\\ (x+1)\lambda_k(x+1)={{x+1}\choose k}={{x}\choose {k-1}}+{{x}\choose {k}}=x\left[\lambda_{k-1}(x)+\lambda_k(x)\right],\quad x\lambda_0(x)\equiv1. \end{array}$$ An application to a problem in Mathematical Statistics ====================================================== We now show that the above results have an application in a problem of Mathematical Statistics dealing with semi-invariants (cumulants) of *shifted-gamma* densities $g(\vartheta;\ a,b,c)=\Gamma(\vartheta+c;\ a,b)$ and *negative binomial distributions* $nb(\varpi; r,\lambda)$. For the sake of completeness, we first recall some standard definitions. $$\begin{aligned} &\begin{array}{l} g(\vartheta;\ a,b,c)=\Gamma(\vartheta+c;\ a,b)=\begin{cases} \frac{1}{b^a\Gamma(a)}(\vartheta+c)^{a-1} \exp[-\frac{(\vartheta+c)}{b}],&\text{$\vartheta>-c$},\\ 0&\text{otherwise} \end{cases}\\ \end{array}\\ &nb(\varpi;\ r,\lambda)=\frac{\Gamma(r+\varpi)}{\Gamma(r)\Gamma(\varpi+1)} \left(\frac{1}{1+\lambda}\right)^r\left(\frac{\lambda}{1+\lambda}\right)^{\varpi},\quad \lambda\geq0\end{aligned}$$ From the *moment* egfs $M_{sg}(t)=e^{-ct}\left/(1-bt)^a\right.$, $M_{nb}(t)=1\left/[1-\lambda(e^t-1)]^r\right.$ and the *cumulant* egfs $\ln M_{sg}(t)$, $\ln M_{nb}(t)$: $$\begin{array}{l} \ln M_{sg}(t)=\sum_{n>0}\gamma(n)\frac{t^n}{n!}=(-c+ab)t+\sum_{n>1}a\frac{b^n}{n}t^n,\\ \ln M_{nb}(t)=\sum_{n>0}\eta(n)\frac{t^n}{n!} =r\sum_{m>0}\frac{\lambda^m}{m}(e^t-1)^m\\ =\sum_{n>0}\left[\sum_{m>0}r(m-1)!{n\brace m}\lambda^m\right]\frac{t^n}{n!}\ \left[\mbox{with}\ (e^t-1)^m=m!\sum_{n\geq0}{n\brace m}\frac{t^n}{n!}\right], \end{array}$$ we get the $n$th cumulants $\gamma(n)$, $\eta(n)$ of $g(\vartheta;\ a,b,c)$, $nb(\varpi;\ r,\lambda)$: $$\begin{aligned} &\gamma(n)= \begin{cases}\label{c1} -c+ab, &
\text{$n=1$},\\ (n-1)!\ ab^n, & \text{$n>1$}; \end{cases}\\ &\eta(n)=\sum_{m=1}^nr(m-1)!2^{m-n}\overline{{n\brace m}}\lambda^{m},\quad n>0.\label{c2}\end{aligned}$$ Two distributions $g(\vartheta;\ a,b,c)$ and $nb(\varpi;\ r,\lambda)$ are said to be *matched* if their first three cumulants are equal. By equating the first three cumulants $$\begin{array}{l} 1st\ cumulant\ (\mbox{mean})=\ \mu=-c+ab=r\lambda\\ 2nd\ cumulant\ (\mbox{variance})=\sigma^2=ab^2=r\lambda(1+\lambda)\\ 3rd\ cumulant=\gamma(3)=2!\ ab^3=\eta(3)=2!\ r\lambda(1+\lambda)(1/2+\lambda) \end{array}$$ we get matching conditions $$\label{match} \begin{array}{lll} a=r\frac{\lambda(1+\lambda)}{(1/2+\lambda)^2}, & b=1/2+\lambda, & c=\frac{ab}{1+2b}=\frac{r\lambda}{1+2\lambda};\\ r=\frac{ab^2}{b^2-1/4}, & \lambda=b-1/2. \end{array}$$ For convenience, we will use scaled cumulants $\overline{\eta(n)}=\frac{2^n}{r}\eta(n)$ and $\overline{\gamma(n)}=\frac{2^n}{r}\gamma(n)$. \[le2\] In a matched pair $\left\{g(\vartheta;\ a,b,c), nb(\varpi;\ r,\lambda)\right\}$, cumulants $\overline{\gamma}(n)$ and $\overline{\eta}(n)$ are polynomials in $\lambda$ having degree $n$ $$\begin{aligned} &\overline{\gamma}(n)=\sum_{m=1}^n\gamma(n,m)\lambda^{m},\quad n>0,\quad \overline{\eta}(n) =\sum_{m=1}^n\eta(n,m)\lambda^{m};\nonumber\\ &\gamma(n,m)=2^m(m-1)!\left[\overline{l}(n-1,m-1) +2m\overline{l}(n-1,m)\right],\label{q}\\ &\eta(n,m)=2^{m}(m-1)!\overline{{n\brace m}}.\label{c3}\end{aligned}$$ The lemma holds for $\overline{\eta}(n)$ as we already know (\[c2\]). 
It is also true for $\overline{\gamma}(n)$ because if $\left\{a,b,c\right\}$ are replaced by the matching values (\[match\]), then from (\[c1\]): $$\begin{aligned} &\begin{array}{l} \overline{\gamma}(n)=\begin{cases} 2\lambda,\quad n=1 & \\ 2^n(n-1)!\lambda(1+\lambda)(1/2+\lambda)^{n-2}=\sum_{m=1}^n\gamma(n,m)\lambda^m , & \text{$n>1$}, \end{cases}\\ \end{array}\\ &\lambda(1+\lambda)(1/2+\lambda)^{n-2}=\sum_{m=1}^n\left[{{n-2}\choose {n-m}}\frac{1}{2^{n-m}} +{{n-2}\choose {n-m-1}}\frac{1}{2^{n-m-1}}\right]\lambda^m\\ &\mbox{and}\ \gamma(n,m)=2^m(m-1)!\left[\overline{l}(n-1,m-1) +2m\overline{l}(n-1,m)\right],\quad n\geq1.\end{aligned}$$ \[th2\] In a matched pair $\left\{g(\vartheta;\ a,b,c), nb(\varpi;\ r,\lambda)\right\}$, cumulants $\overline{\gamma}(n)$ and $\overline{\eta}(n)$ of the two distributions are related by reciprocal linear combinations $$\begin{aligned} &&\overline{\gamma}(n+1)=\sum_{i=0}^n\theta(n,i)\overline{\eta}(i+1)\quad \leftrightarrow\quad \overline{\eta}(n+1) =\sum_{i=0}^n\Theta(n,i)\overline{\gamma}(i+1),\quad n\geq0,\\ &&\emph{where}\ \theta(n,m)=(-1)^{(n-m)/2}t(n,m)\ \emph{and}\ \Theta(n,m)=(-1)^{(n-m)/2}T(n,m).\end{aligned}$$ Since cumulants are polynomials in $\lambda$, a solution $\theta(n,m)$ and $\Theta(n,m)$ exists if like powers of $\lambda$ on both sides are equal: $$\begin{aligned} &&\eta(n+1,m+1)\equiv[\lambda^{m+1}]\sum_i\Theta(n,i)\overline{\gamma}(i+1) =\sum_i\Theta(n,i)\gamma(i+1,m+1),\label{sys1}\\ &&\gamma(n+1,m+1)\equiv[\lambda^{m+1}]\sum_i\theta(n,i)\overline{\eta}(i+1) =\sum_i\theta(n,i)\eta(i+1,m+1).\nonumber\end{aligned}$$ From (\[c3\]), the Stirling recurrence relation, and (\[q\]), we get $$\label{cum} \begin{array}{l} \eta(n+1,m+1)=2^{m+1}m!\overline{{{n+1}\brace {m+1}}}=2^{m+1}m!\left[\overline{{n\brace{ m}}}+2(m+1)\overline{{n\brace {m+1}}}\right],\\ \gamma(i+1,m+1)=2^{m+1}m!\left[\overline{l}(i,m) +2(m+1)\overline{l}(i,m+1)\right].
\end{array}$$ Substituting (\[cum\]) into (\[sys1\]), rearranging the terms and dividing by $2^{m}m!$, we obtain a recursion in $m$ which automatically terminates after $n-m$ recursive steps $$\begin{aligned} &\overline{{n\brace m}}-\sum_{i=m}^{n}\Theta(n,i) \overline{l}(i,m)=2(m+1)\left[\overline{{n\brace {m+1}}}-\sum_{i=m+1}^{n}\Theta(n,i)\overline{l}(i,m+1)\right]\\ &=2^2(m+2)(m+1)\left[\overline{{n\brace {m+2}}}-\sum_{i=m+1}^{n}\Theta(n,i)\overline{l}(i,m+2)\right]=\cdots=0.\end{aligned}$$ Thus $\overline{{n\brace m}}-\sum_i\Theta(n,i)\overline{l}(i,m)=0$, and $\Theta(n,m)=(-1)^{(n-m)/2}T(n,m)$ from Theorem (\[conv\]). Since $\overline{\gamma}(n+1)=\sum_{i=0}^n\theta(n,i)\overline{\eta}(i+1)$ is the reciprocal of $\overline{\eta}(n+1) =\sum_{i=0}^n\Theta(n,i)\overline{\gamma}(i+1)$, $\theta(n,m)$ and $\Theta(n,m)$ are necessarily *number inverses*, therefore $\theta(n,m)=(-1)^{(n-m)/2}t(n,m)$. This completes the proof. Conversely, $$\eta(n+1,m+1) =\sum_i\Theta(n,i)\gamma(i+1,m+1)\leftrightarrow\gamma(n+1,m+1)=\sum_i\theta(n,i)\eta(i+1,m+1)$$ implies Theorem (\[conv\]) and the inversion relations in Table 2, hence, *the problem in Theorem *(\[th2\])* of converting between cumulants is equivalent to the problem in Theorem *(\[conv\])* of converting between Stirling, tanh and Lah numbers*. Finally, let us show that the inverse pair (\[nidp\]): $n!=\sum_{i=0}^n\theta(n,i)2^{i-1}\ \leftrightarrow\ 2^{n-1}=\sum_{i=0}^n\Theta(n,i)i!,$ corresponds to an interesting particular case of Theorem (\[th2\]). In fact, let $\lambda\rightarrow0$ while $r\lambda=\alpha$ remains constant, then, as we know, the negative binomial distribution $nb(\varpi;r,\lambda)$ tends towards the Poisson distribution $p(\varpi; \alpha)$ with cumulants all equal to $\alpha$.
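More generally, the reciprocal relations of Theorem (\[th2\]) can be checked with exact rational arithmetic. The following Python sketch (our illustration; the test point $\lambda=1/3$, $r=1$ is arbitrary) applies the matching conditions (\[match\]) and verifies $\overline{\eta}(n+1)=\sum_i\Theta(n,i)\overline{\gamma}(i+1)$ for small $n$:

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def S2(n, m):
    """Stirling numbers of the second kind."""
    if n == 0 or m == 0:
        return 1 if n == m == 0 else 0
    return S2(n - 1, m - 1) + m * S2(n - 1, m)

@lru_cache(maxsize=None)
def Theta(n, m):
    """tanh numbers from the Table 1 recurrence."""
    if n == 0:
        return 1 if m == 0 else 0
    if m <= 0 or m > n:
        return 0
    return Theta(n - 1, m - 1) - m * (m + 1) * Theta(n - 1, m + 1)

lam, r = Fraction(1, 3), 1                              # arbitrary rational test point
a = r * lam * (1 + lam) / (Fraction(1, 2) + lam) ** 2   # matching conditions (match)
b = Fraction(1, 2) + lam
c = r * lam / (1 + 2 * lam)

def gamma(n):
    """Cumulants of the matched shifted-gamma density, from (c1)."""
    return -c + a * b if n == 1 else factorial(n - 1) * a * b ** n

def eta(n):
    """Cumulants of the negative binomial distribution, from (c2)."""
    return sum(r * factorial(m - 1) * S2(n, m) * lam ** m for m in range(1, n + 1))

gbar = lambda n: 2 ** n * gamma(n) / r   # scaled cumulants
ebar = lambda n: 2 ** n * eta(n) / r

assert gamma(1) == eta(1) and gamma(2) == eta(2) and gamma(3) == eta(3)  # matched
for n in range(9):
    assert ebar(n + 1) == sum(Theta(n, i) * gbar(i + 1) for i in range(n + 1))
```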
The matched shifted-gamma density tends towards $g(\vartheta;4\alpha,\frac{1}{2},\alpha)$, since, according to the matching conditions (\[match\]), $a\rightarrow 4\alpha,\ b\rightarrow\frac{1}{2},\ c\rightarrow\alpha$, and its cumulants are obtained from (\[c1\]). Using the cumulant limiting values, one verifies that the reciprocal relations in Theorem (\[th2\]) reduce to (\[nidp\]). Main results and Conclusion =========================== From general inverse relations converting between Stirling, tanh and Lah numbers, we obtained a number of new identities by fixing $m=1$ in the double-sequences involved. The same approach was used to study connections between $\sigma_k(x)$ Stirling, $\delta_k(x)$ tanh and $\lambda_k(x)$ Lah polynomials. Finally, we showed that the cumulants of a shifted-gamma probability density and a negative binomial distribution can be related by reciprocal linear combinations (Theorem \[th2\]) which turned out to be an instance of the tanh number inversion formulas; hence this problem and our problem on number arrays (Theorem \[conv\]) are equivalent. [1]{} L. Comtet, *Advanced combinatorics*, Reidel, Dordrecht, 1974. I. Gessel and R. P. Stanley, *Stirling polynomials*, Journal of Combinatorial Theory **A24** (1978), 24–33. R. L. Graham, D. E. Knuth, and O. Patashnik, *Concrete mathematics: A foundation for computer science*, 1990 ed., Addison-Wesley, 1989. D. E. Knuth, *Two notes on notation*, Amer. Math. Monthly **99** (1992), no. 5, 403–422. I. Lah, *Eine neue art von zahlen, ihre eigenschaften und anwendung in der mathematischen statistik*, Mitteilungsbl. Math. Statist. **7** (1955), 203–216. T. Lengyel, *On some properties of the series $\sum_{k=0}^{\infty}k^nx^k$ and the Stirling numbers of the second kind*, Discrete Math. **150** (1996), no. 1-3, 281–292, Selected papers in honour of Paul Erdös on the occasion of his 80th birthday. J.
Riordan, *Combinatorial identities*, 1979 ed., Krieger, 1968. [to3em]{}, *An introduction to combinatorial theory*, Princeton University Press, 1980.
--- abstract: 'In this report, I explain the secret sharing problem. Based on the definition of the problem, two classical methods are introduced: Blakley’s secret sharing scheme and Shamir’s secret sharing scheme. We focus on the details of the first one, since it is the topic of this work. Blakley’s method can be applied to distribute a key among different parties and to reconstruct the key from the individual shares. However, this method is not space efficient. We also simulate a scenario in which a key is spread among several users and then reconstructed, using a Matlab graphical user interface.' address: 'School of Informatics, Computing, and Cyber Systems, Department of Mathematics & Statistics, Northern Arizona University, Flagstaff, AZ 86001' author: - Alireza Shamsoshoara bibliography: - 'sample.bib' title: 'Overview of Blakley’s Secret Sharing Scheme' --- Introduction ============ I introduce the secret sharing problem through a classical example from [@liu1968introduction]. Assume that eleven scientists are working on a very confidential project. Their purpose is to lock the documents and results of the project in a safe box in such a way that the safe cannot be opened unless six or more scientists are present in the lab. So the question is how the head of the group can distribute a common key among the scientists so that no one can recover the original key from his/her individual share, but a qualified combination of scientists can retrieve the key and open the safe box. In another scenario, assume that Coca-Cola executives want to have access to the “Secret Formula" in case of an emergency. They can access the “Secret Formula" only with one of these three possible combinations: i) six directors, ii) three vice presidents, or iii) one president.
So the question is how one trusted person can distribute the “Secret Formula" among these parties so that those combinations can reconstruct the formula in case of an emergency. As a third case, consider the Rivest–Shamir–Adleman (RSA) scheme [@rivest1978method]. RSA is mainly used in cryptosystems for secure transmission between two entities in a network. In this mechanism, everyone has a private key and a public key. The private key is secret and is only used for decrypting messages received by its owner, while the public key is used for encrypting messages. This method of encryption and decryption is called asymmetric, since the encryption and decryption (public and private) keys are different. The method is based on choosing two different large prime numbers, whose product forms part of the public key. Anyone can utilize the public key to encrypt a message, but the values of those two large prime numbers must be kept secret from everyone. With currently published methods, if the public key is large enough, only someone with knowledge of the prime factors can feasibly decode the message [@rivest1978method]. Breaking RSA encryption is known as the RSA problem; whether it is as difficult as the factoring problem remains an open question. So another application of the secret sharing scheme arises if one user wants to entrust others with his/her private key: how should that key be shared? Is it enough to share it with just one person, or is it better to share the private key with multiple persons in such a way that no one can predict the key from his/her own share? In all three scenarios there is one common objective: is it possible to store data among multiple semi-trusted people/nodes in a manner that does not violate any of the three cornerstones of information security? Figure \[fig:Fig1\] demonstrates these three important criteria. The answer to this question is yes.
Schemes such as Blakley’s [@blakley1979safeguarding] and Shamir’s [@shamir1979share] are popular answers to these challenges and scenarios. ![Three main criteria in secret sharing scheme[]{data-label="fig:Fig1"}](Figures/MAT690-FIG1.jpg){width=".8\columnwidth"} Definition ---------- In this subsection, we define the secret sharing scheme based on Blakley’s and Shamir’s secret sharing schemes. In addition to these two schemes, there is another method based on the Chinese Remainder Theorem (CRT), introduced by Asmuth and Bloom in 1983 [@asmuth1983modular]. A secret sharing scheme is any method for distributing a secret among a group of individuals **(shareholders)**, each of which is allocated some information **(share)** related to the secret. The following two properties are common to all three methods: - The secret can only be reconstructed when a sufficient number of shares are combined together. - Individual shares are of no use on their own. These schemes are also called $(t, n)$ threshold schemes, since the secret is distributed among $n$ participants and only $t$ or more participants can recover the secret. The goal of these schemes is to divide a secret $S$ into $n$ shares $S_0, \dotsm, S_{n-1}$ such that: 1. knowledge of $t$ or more shares makes $S$ easily computable; 2. knowledge of $t - 1$ or fewer shares leaves $S$ completely undetermined; where: - $n$ and $t$ are positive integers. - $n$ is the number of shares generated. - $t$ is the number of shares needed to recover the original secret. - $t$ is less than or equal to $n$. Now, investigating secrecy, integrity, and availability, we can summarize the results as follows: - **Secrecy:** If an eavesdropper wants to learn the secret, it needs to corrupt at least $t$ shareholders and collect their shares. - **Integrity:** If an adversary wants to destroy or alter the secret, it needs to corrupt at least $n - t + 1$ shareholders.
- **Availability:** If the threshold value ($t$) is known and fixed, then the availability of the secret increases as $n$ increases.\ If the number of shareholders $n$ increases, then secrecy and integrity are enhanced provided $t$ increases as well. Blakley’s Secret Sharing Scheme =============================== In this scheme, the dealer, who knows the secret, spreads the key or secret among $n$ members. Each member's piece is called a share. The key is revealed only if $t$ or more of the members combine their shares. Blakley utilized a geometric approach to solve this problem. He assumed that the secret is a point in a $t$-dimensional space; $t$ hyperplanes in this space intersecting at that point reconstruct the secret key. The coefficients of $n$ different hyperplanes constitute the corresponding $n$ shares. This scheme is a linear threshold scheme like Shamir’s scheme. In Blakley’s approach, the secret and the shares can be summarized as a linear system $Cx = y$, where the matrix $C$ and the vector $y$ correspond to the hyperplane equations. Blakley’s method [@blakley1979safeguarding] utilizes principles of geometry to share the secret. According to this method, the secret key is a point in a $t$-dimensional space, namely the intersection point of all the hyperplanes. Affine hyperplanes in this space represent the $n$ shares. The Blakley secret sharing scheme can be represented as a linear system $Cx$ mod $p = y$, where the general full-rank matrix $C$ is the critical data in this approach. In the following, we explain the two main operations (distributing and reassembling) of this approach. First, consider Fig. \[fig:hyperplane\], which demonstrates three different hyperplanes in 3-dimensional space. Each plane represents one shareholder, based on the definition.
![Three different hyperplanes (shareholders) [@FileSecr71online][]{data-label="fig:hyperplane"}](Figures/hyperplane.jpg){width=".5\columnwidth"} ![Intersection of different hyperplanes [@FileSecr71online][]{data-label="fig:hyperplanes"}](Figures/hyperplanes.jpg){width="\columnwidth"} As demonstrated in Fig. \[fig:hyperplanes\], the intersection of two hyperplanes defines a line and the intersection of three hyperplanes defines a point in 3-dimensional space. The main idea is that two nonparallel lines in the same plane intersect at exactly one point, three nonparallel planes in space intersect at exactly one point and, in general, any $t$ nonparallel $(t - 1)$-dimensional hyperplanes intersect at a specific point. The secret may be encoded as any single coordinate of the intersection point. If the key were encoded using all the coordinates, then an insider would gain information about the key, because he knows the point must lie on his plane. If the insider can access information about the other shares too, then the system is not safe anymore. In the following, I itemize the process of distributing the secret shares among the shareholders. This task is done by the dealer. 1. Pick a prime number ($p$). 2. Encode your secret information as the coordinate $x_0$. 3. Pick random values $y_0$ and $z_0$ in $\textrm{mod}$ $p$. 4. Use the previous values to form your intersection point $Q(x_0, y_0, z_0)$. 5. Choose values for $a$ and $b$ in $\textrm{mod}$ $p$ and find the value of $c$ for your hyperplane such that: $$\begin{aligned} \label{eq:1} c \equiv z_0 - ax_0 - by_0 \ (\textrm{mod}\ p) \end{aligned}$$ 6. Based on the values of $a$, $b$, and $c$, define your hyperplane: $$\begin{aligned} \label{eq:2} z \equiv ax + by + c \ (\textrm{mod}\ p) \end{aligned}$$ Now every shareholder has a specific hyperplane.
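The dealer's steps can be sketched in a few lines of Python (an illustrative sketch, not the author's Matlab implementation; the function and variable names are my own):

```python
import random

def make_shares(secret, n, p=73):
    """Dealer side of Blakley's scheme in 3D: hide `secret` as the
    x-coordinate of a point mod p, then hand each of the n shareholders
    one plane z = a*x + b*y + c (mod p) passing through that point."""
    x0 = secret % p
    y0, z0 = random.randrange(p), random.randrange(p)   # random y0, z0
    shares = []
    for _ in range(n):
        a, b = random.randrange(p), random.randrange(p)
        c = (z0 - a * x0 - b * y0) % p                  # equation (1)
        shares.append((a, b, c))                        # one hyperplane
    return (x0, y0, z0), shares

point, shares = make_shares(42, 5)
# sanity check: every plane passes through the secret point
for a, b, c in shares:
    assert (a * point[0] + b * point[1] + c) % 73 == point[2]
```

Each share is a full hyperplane (three coefficients), which already hints at why the scheme is not space efficient.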
For instance, making hyperplanes for five shareholders results in: $$\begin{aligned} \label{eq:3} &a_1x + b_1y - z \equiv -c_1 \ (\textrm{mod} p) \\ \nonumber&a_2x + b_2y - z \equiv -c_2 \ (\textrm{mod} p) \\ \nonumber&a_3x + b_3y - z \equiv -c_3 \ (\textrm{mod} p) \\ \nonumber&a_4x + b_4y - z \equiv -c_4 \ (\textrm{mod} p) \\ \nonumber&a_5x + b_5y - z \equiv -c_5 \ (\textrm{mod} p)\end{aligned}$$ which can be summarized as: $$\begin{aligned} \label{eq:4} &a_ix + b_iy - z \equiv -c_i \ (\textrm{mod} p) \qquad 1 \leq i \leq 5\end{aligned}$$ Hence, choosing any three of the five shareholders suffices to find the secret key, which is $x_0$ in this specific example: solving equation \[eq:5\] yields the value of the secret key. As long as the determinant of the matrix is nonzero $\textrm{mod} \ p$, the matrix can be inverted and the secret can be found. $$\begin{aligned} \label{eq:5} \left(\begin{array}{ccc} a_1 & b_1 & -1\\ a_2 & b_2 & -1\\ a_3 & b_3 & -1\end{array} \right) \left(\begin{array}{c} x_0\\ y_0\\ z_0\end{array} \right) \equiv \left(\begin{array}{c} -c_1\\ -c_2\\ -c_3\end{array} \right) \ \textrm{mod} \ (p)\end{aligned}$$ Subsection \[subsec:example\] gives one example of this matrix representation and these two main operations. The main drawbacks of this scheme are: - Blakley’s scheme lacks actual implementations. In [@blakley1994linear], Blakley et al. only presented a guideline to outline a matrix of linear systems for perfect secrecy, and no actual matrix was indicated. - This approach is not space efficient. - Each shareholder knows that the point (the secret) lies in his share (hyperplane). As a consequence, the scheme is not perfect. Perhaps because of these reasons, Shamir’s scheme is more popular than Blakley’s approach. In the following, I summarize some details of recent works related to this scheme.
Recently, researchers have started to use Blakley’s geometry-based secret sharing approach in the area of secret image sharing [@chen2008geometry; @tso2008sharing]. Ulutas et al. [@ulutas2009improvements] introduced an improved scheme for secret image sharing, which combines Blakley’s secret sharing method with another method to share the secret and create meaningful shares. Moreover, Bozkurt et al. [@bozkurt2008threshold] presented the first threshold RSA signature approach using Blakley’s secret sharing scheme. Example of Blakley’s Secret Sharing Scheme {#subsec:example} ------------------------------------------ As an example of the equations in the previous section, consider the following case. Assume that there are five different hyperplanes, and that $p$ is equal to 73, which is prime. $$\begin{aligned} \label{eq:6} &z \equiv 4x \ + 19y + 68 \ (\textrm{mod} 73) \\ \nonumber&z \equiv 52x + 27y + 10 \ (\textrm{mod} 73) \\ \nonumber&z \equiv 36x + 65y + 18 \ (\textrm{mod} 73) \\ \nonumber&z \equiv 57x + 12y + 16 \ (\textrm{mod} 73) \\ \nonumber&z \equiv 34x + 19y + 49 \ (\textrm{mod} 73)\end{aligned}$$ Solving equation \[eq:7\] recovers the secret key, $42$. $$\begin{aligned} \label{eq:7} \left(\begin{array}{ccc} 4 & 19 & -1\\ 52 & 27 & -1\\ 36 & 65 & -1\end{array} \right) \left(\begin{array}{c} x_0\\ y_0\\ z_0\end{array} \right) \equiv \left(\begin{array}{c} -68\\ -10\\ -18\end{array} \right) \ \textrm{mod} \ (73)\end{aligned}$$ The solution is $(x_0, y_0, z_0) = (42, 29, 57)$, which means the secret key is $x_0 = 42$. Using Graphical User Interface (GUI) in Matlab to find hyperplanes and solutions ================================================================================ In this section, I designed a program that lets users assign the value of the initial prime number, the number of shareholders, the number of selected shareholders, and the private key. A user can define these values in the application.
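The reconstruction in this example can be reproduced with a short modular Gaussian elimination (an illustrative sketch, not the paper's Matlab/`gflineq` code; it assumes the coefficient matrix is invertible mod $p$):

```python
def solve_mod_p(A, y, p):
    """Solve A x = y (mod p) for prime p by Gauss-Jordan elimination,
    using Fermat's little theorem for modular inverses."""
    n = len(A)
    # build the augmented matrix, reduced mod p
    M = [[A[i][j] % p for j in range(n)] + [y[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]          # swap in a nonzero pivot
        inv = pow(M[col][col], p - 2, p)         # pivot^(p-2) = pivot^(-1) mod p
        M[col] = [v * inv % p for v in M[col]]   # normalize pivot row
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

# the three planes of the system above: a_i*x + b_i*y - z = -c_i (mod 73)
A = [[4, 19, -1], [52, 27, -1], [36, 65, -1]]
y = [-68, -10, -18]
print(solve_mod_p(A, y, 73))   # → [42, 29, 57]; the secret is x0 = 42
```

Any three of the five planes would do, as long as the resulting $3\times 3$ matrix has nonzero determinant mod 73.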
Then, based on Algorithm \[algorithm\], the program calculates the coefficient matrix and the vector of constant values. To solve the linear equation, the $gflineq$ command is used in $\textrm{mod} \ p$. $\textit{prime number} \gets \text{user input}$ $\textit{secret key} \gets \text{user input}$ $\textit{number of shareholders} \gets \text{user input}$ $\textit{number of selected shareholders} \gets \text{user input}$ *loop*: $\textit{random point} \gets \text{random integers between 0 and P - 1}$ $\textit{random coefficients} \gets \text{random integers between 0 and P - 1}$ $\textit{C} \gets \text{mod(z - ax - by, P)}$ $\textit{Matrix} \gets \text{[a, b, mod(-1, P)]}$ **goto** *loop*. $\textit{selected random} \gets \text{random permutation(shareholders, selected)}$ $\textit{solution} \gets \text{solution of the linear equation over the Galois field for the selected rows}$ After running the program, every output shows one solution point, which includes the secret key. Each run is timestamped to measure the effect of the prime number on the computation time. The running time appears to grow exponentially as the initial prime number gets large. Fig. \[fig:simulation\] demonstrates this behavior for the first 100 prime numbers with 5 shareholders and 3 selected shares. ![Time simulation for the Blakley’s secret sharing scheme[]{data-label="fig:simulation"}](Figures/simulation.jpg){width="\columnwidth"} Conclusion ========== In this article, I reviewed secret sharing schemes, specifically Blakley’s approach. I then illustrated the problem with an example and a graphical user interface in Matlab. We observed that choosing large prime numbers makes the algorithm take more time to solve the linear equation.
--- abstract: 'Modelling the substitution of nucleotides along a phylogenetic tree is usually done by a hidden Markov process. This allows one to define a distribution of characters at the leaves of the trees, and one might be able to obtain polynomial relationships among the probabilities of different characters. The study of these polynomials and of the geometry of the algebraic varieties they define can be used to reconstruct phylogenetic trees. However, not all points in these algebraic varieties have biological sense. In this paper, we explore the extent to which adding semialgebraic conditions, arising from the restriction to parameters with statistical meaning, can improve existing methods of phylogenetic reconstruction. To this end, our aim is to compute the distance of data points to algebraic varieties and to the stochastic part of these varieties. Computing these distances involves optimization by nonlinear programming algorithms. We use analytical methods to find some of these distances for quartet trees evolving under the Kimura 3-parameter or the Jukes-Cantor models. Numerical algebraic geometry and computational algebra also play a fundamental role in this paper.' author: - Marta Casanellas - 'Jesús Fernández-Sánchez' - 'Marina Garrote-López' title: Distance to the stochastic part of phylogenetic varieties --- Introduction ============ In the new century, algebraic tools have started to be successfully applied to some problems of phylogenetic reconstruction (see for example [@allmandegnanrhodes2013; @chifmankubatko2015; @allmankubatkorhodes]). The main goal of phylogenetic reconstruction is to estimate the *phylogenetic tree* that best explains the evolution of living species using solely information from their genome. To this end, one usually considers evolutionary models of molecular substitution and assumes that DNA sequences evolve according to these models by a Markov process on a tree.
Some of the most used models are *nucleotide substitution models* (e.g. the Kimura 3-parameter [@kimura1981] or Jukes-Cantor [@JC69] models), which are specified by a $4\times4$ transition matrix associated to each edge of the tree and a distribution of nucleotides at the root. Then, the distribution of possible nucleotide sequences at the leaves of the tree (representing the living species) can be computed as an algebraic expression in terms of the parameters of the model (the entries of the substitution matrices and the distribution at the root). This allows the use of algebraic tools for phylogenetic reconstruction purposes. When reconstructing the *tree topology* (i.e., the shape of the tree taking into account the names of the species at the leaves), the main tools that have been used come either from rank conditions on matrices arising from a certain rearrangement of the distribution of nucleotides at the leaves [@SVDquartets; @chifmankubatko2015; @casfer2016], or from phylogenetic invariants [@lake1987; @casanellas2007]. These tools use the fact that the set of possible distributions satisfies certain *algebraic* constraints, but do not specifically use the condition that one is dealing with discrete *distributions* that arise from *stochastic* matrices at the edges of the tree (i.e. with nonnegative entries and rows summing to one). These extra conditions lead to *semi-algebraic* constraints, which have been specified for certain models in [@AllmanSemialg] (for the general Markov model), in [@matsen2009] (for the Kimura 3-parameter model) and in [@zwierniksmith; @klaere2012] (for the 2-state case, with $2\times 2$ transition matrices). Combining algebraic and semi-algebraic conditions to develop a tool for reconstructing the tree topology is not an easy task and, as far as we are aware, both tools have only been used together in [@Kosta2019] for the simple case of 2 states.
As a starting point for topology reconstruction problems, it is natural to use trees on four species (called 1, 2, 3, 4 for example). In this case, there are three possible (unrooted and fully resolved) phylogenetic trees, $12|34$, $13|24$, and $14|23$ (see Figure \[Fig:3\_top\]). Then a distribution of nucleotides for this set of species is a vector $P\in {\mathbb{R}}^{4^4}$ whose entries are non-negative and sum to one. The set of distributions arising from a Markov process on any of these trees $T$ (for a given substitution model) defines an algebraic variety $\mathcal{V}_T$ (see [Section \[sec:PhyloVars\]]{}). The three *phylogenetic varieties* $\mathcal{V}_{12|34}$, $\mathcal{V}_{13|24}$, $\mathcal{V}_{14|23}$ are different, and the topology reconstruction problem for a given distribution $P\in {\mathbb{R}}^{4^4}$ is, briefly, deciding which of these three varieties $P$ is closest to (for a certain distance, or for another specified optimization problem such as likelihood estimation). The algebraic tools related to the rank conditions mentioned above attempt to estimate these Euclidean distances, for example. ![The three unrooted (fully resolved) phylogenetic trees on $4$ leaves: $12|34$ (left), $13|24$ (middle) and $14|23$ (right)[]{data-label="Fig:3_top"}](top4.png){width="12cm"} If we assume that $P$ should be close to a distribution that has arisen from stochastic parameters on one of these trees, then one should consider only the *stochastic part* of these varieties, $\mathcal{V}_{12|34}^+$, $\mathcal{V}_{13|24}^+$, $\mathcal{V}_{14|23}^+$ (which we call the *stochastic phylogenetic varieties*). The main questions that motivated the study presented here are: Could semi-algebraic tools add some insight to the already existing algebraic tools? Do semi-algebraic conditions support the same tree $T$ whose algebraic variety $\mathcal{V}_T$ is closest to the data point?
In terms of the Euclidean distance and trees on four species, we ask the following explicit question: *Question 1:* If $P\in {\mathbb{R}}^{4^4}$ is a distribution satisfying $d(P,\mathcal{V}_{12|34}) < \min\{d(P,\mathcal{V}_{13|24}),d(P,\mathcal{V}_{14|23})\}$, would it be possible that $d(P,\mathcal{V}_{12|34}^+) > \min\{d(P,\mathcal{V}_{13|24}^+),d(P,\mathcal{V}_{14|23}^+)\}$? We address this problem for special cases of interest in phylogenetics: short branches at the external edges (see section 4) and long branch attraction (in section 6). The length of a branch in a phylogenetic tree is understood as the expected number of substitutions of nucleotides per site along the corresponding edge; both cases, short and long branches, usually lead to confusing results in phylogenetic reconstruction (particularly in relation to the long branch attraction problem, see section 6). In the first case we are able to deal with the Kimura 3-parameter model, while in the second case we have to restrict to the simpler Jukes-Cantor (JC69) model. The reason for this restriction is that the computations get more involved in the second case and we have to use computational algebra techniques (for which it is crucial to decrease the number of variables of the problem). To this end, in section 5 we introduce an algorithm that computes the distance of a point to the stochastic phylogenetic varieties in the JC69 case; this algorithm makes explicit use of the Euclidean distance degree [@draisma2015] of the phylogenetic varieties. We find that in the first framework (short external branches), restricting to the stochastic part does not make any difference, that is, Question 1 has a negative answer in this case (see Theorem \[thm:seb\]). However, in the long branch attraction framework, considering the stochastic part of phylogenetic varieties might be of interest, especially if the data points are close to the intersection of the three varieties, see Theorem \[thm\_lba\].
In particular, the answer to Question 1 is positive for data close to the long branch attraction problem under the JC69 model. In section 7 we provide results on simulated data that support these findings and also show a positive answer to Question 1 for balanced trees. Summing up, incorporating the semi-algebraic conditions to the problem of phylogenetic reconstruction seems important when the data are close to the intersection of the three phylogenetic varieties. This is the case where phylogenetic reconstruction methods tend to confuse the trees. On the contrary, on data points which are far from the intersection (in the short branches case of section 4 for example), it does not seem necessary to incorporate these semi-algebraic tools. This is the reason why incorporating these tools into phylogenetic reconstruction methods might be extremely difficult. The organization of the paper is as follows. In section 2, we introduce the concepts on nucleotide substitution models and phylogenetic varieties that we will use later on. Then in section 3 we prove some technical results regarding the closest stochastic matrix to a given matrix. In section 4 we consider the case of short external branches for the Kimura 3-parameter model and obtain the results analytically. In section 5 we introduce the computational approach that we use in order to compute the distance to the stochastic phylogenetic varieties. The results for the long branch attraction case are expanded in section 6 and in section 7 we provide results on simulated data that illustrate our findings. The Appendix collects all technical proofs needed in section 6. Preliminaries ============= Phylogenetic varieties {#sec:PhyloVars} ---------------------- We refer the reader to the work [@AllmanRhodeschapter4] of E. A. Allman and J. A. Rhodes for a good general overview of phylogenetic algebraic geometry. Here we briefly introduce the basic concepts that will be needed later. 
Let $T$ be a phylogenetic tree with its leaves labelled by $\{1,2,3,4\}$ (i.e. $T$ is a tree as a graph whose interior nodes have degree 3 and whose leaves, of degree 1, are in correspondence with $\{1,2,3,4\}$), see Figure \[Fig:3\_top\]. Using the notation introduced in Figure \[Fig:3\_top\], $T$ belongs to the set ${\mathcal{T}}=\{12|34, 13|24, 14|23\}$. When the tree $T$ needs to be considered as rooted, we will choose an internal vertex $r$ as the root. Suppose the Markovian evolutionary process on that tree follows a nucleotide substitution model ${\mathcal{M}}$: associate a random variable taking values on $\Sigma:=\{\tt A,\tt C,\tt G,\tt T \}$ at each node of the tree, and consider as parameters the distribution $\pi = (\pi_{\tt A}, \pi_{\tt C}, \pi_{\tt G}, \pi_{\tt T})$ at the root, $\sum_i\pi_i = 1$, and a $4\times 4$ transition matrix $M_e$ at each edge $e$ of $T$. The transition matrices are *stochastic* (or *Markov*) matrices, that is, all its entries are nonnegative and its rows sum up to $1$. A vector is *stochastic* if all its coordinates are nonnegative and sum up to $1$. If $T\in {\mathcal{T}}$ and $S$ is the set of stochastic parameters described above, we denote by $\psi_T$ the following (parametrization) map: $$\begin{aligned} \psi_T: S\subset[0,1]^{\ell} &\rightarrow {\mathbb{R}}^{4^4}\\ \{\pi,\{M_e\}_{e\in E(T)}\} &\mapsto P=(p_{\tt AAAA}, p_{\tt AAAC}, \ldots, p_{\tt TTTG}, p_{\tt TTTT})\end{aligned}$$ which maps each set of parameters of the model $\{\pi,\{M_e\}_{e\in E(T)}\}\in S$ to the joint distribution of characters at the leaves of $T$ given by the hidden Markov process on $T$ governed by these parameters. The entries $p_{x_1,\ldots,x_4}$ of the joint distribution can be expressed in terms of the entries of the substitution matrices. 
For example, for the tree $12|34$ rooted at the leftmost internal vertex, with transition matrices as in Figure \[Fig:tree\], we have $$ p_{x_1,x_2,x_3,x_4}=\sum_{x_r,x_s\in \Sigma}\pi_{x_r} M_1(x_r,x_1)M_2(x_r,x_2)M_5(x_r,x_s)M_3(x_s,x_3)M_4(x_s,x_4) $$ ![Tree $12|34$ with transition matrices $M_1$, $M_2$, $M_3$, $M_4$ and $M_5$[]{data-label="Fig:tree"}](T1234.png){width="6cm"} We write ${\mathcal{V}}_T^+$ for the image of $\psi_T$, that is, the space of all the distributions arising from stochastic parameters, $${\mathcal{V}}_T^+=\{P \in{\mathcal{V}}_T\ |\ P = \psi_T(s) \mbox{ and } s\in S\}.$$ We call this set the *stochastic phylogenetic variety.* Since $\psi_T$ is a polynomial map, it can be extended to ${\mathbb{R}}^{\ell}$ (that is, we can consider not only nonnegative entries in $\pi$ and $M_e$). Define the *phylogenetic variety* associated with $T$ as the smallest algebraic variety containing $\psi_T({\mathbb{R}}^{\ell})$, $${\mathcal{V}}_T = \overline{\psi_T({\mathbb{R}}^{\ell})}.$$ This variety contains all joint distributions that arise from the model ${\mathcal{M}}$ on the tree $T$ and some additional points in the closure of that set. Thus, not every point in these varieties has biological sense. Unless noted otherwise, we will always assume that the rows of the matrices $M_e$ sum up to $1$, even if some entries are negative (as in the extension of the map $\psi_T$ just defined). Kimura and Jukes-Cantor models ------------------------------ In this paper we focus on phylogenetic $4$-leaf trees evolving under the *Jukes-Cantor* ($JC69$ for short, see [@JC69]) and the *$3$-parameter Kimura* ($K81$ for short, [@kimura1981]) models. The JC69 model is a highly structured model that assumes equal mutation probabilities, while the K81 model takes into account the classification of nucleotides as purines/pyrimidines and the probabilities of substitution between and within these groups.
Both models assume the uniform distribution at the root, $\pi = (\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4})$. A $4\times 4$ matrix $M$ is a *K81 matrix* if it is of the form $$\label{eq:K81} M= \left( \begin{array}{cccc} a & b & c & d \\ b & a & d & c \\ c & d & a & b \\ d & c & b & a \\ \end{array} \right),$$ for some $a,b,c,d\in {\mathbb{R}}$ summing to 1, $a+b+c+d = 1$. If $b=c=d$, then we say that $M$ is a *JC69 matrix*. Note that these matrices only have an interpretation as transition matrices of a Markov process if all their entries are nonnegative; in this case we talk about *stochastic K81 matrices* or *stochastic JC69 matrices*. \[lema:eigenK81\]*([@ARbook])* If $M$ is a $K81$ matrix as in (\[eq:K81\]), then it diagonalizes with eigenvalues $m_{\tt A} = a+b+c+d=1$, $m_{\tt C} = a+b-c-d$, $m_{\tt G} = a-b+c-d$ and $m_{\tt T} = a-b-c+d$ and respective eigenvectors $\bar{\tt A}=(1,1,1,1)^t$, $\bar{\tt C}=(1,1,-1,-1)^t$, $\bar{\tt G}=(1,-1,1,-1)^t$ and $\bar{\tt T}=(1,-1,-1,1)^t$. In particular, the eigenvalues of a $JC69$ matrix are $m_{\tt A} = 1$ and $m_{\tt C} = m_{\tt G} = m_{\tt T} = 1-4b$. Fourier coordinates ------------------- Let $M$ be a $K81$ matrix and write $m_{\tt A}, m_{\tt C}, m_{\tt G}, m_{\tt T}$ and $\bar{\tt A}, \bar{\tt C}, \bar{\tt G}, \bar{\tt T}$ for the eigenvalues and eigenvectors of $M$, respectively. The basis of eigenvectors will be denoted by $\bar{\Sigma}=\{\bar{\tt A}, \bar{\tt C}, \bar{\tt G}, \bar{\tt T}\}$ and is called the *Fourier basis*. Because of Lemma \[lema:eigenK81\], we have $$\begin{aligned} \bar{M} = H^{-1}\cdot M \cdot H,\end{aligned}$$ where $\bar{M} = diag(m_{\tt A}, m_{\tt C}, m_{\tt G}, m_{\tt T})$ and $$H=\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right)$$ is the matrix of change of basis from $\bar{\Sigma}$ to $\Sigma$. Notice that $H^{-1} = \dfrac{1}{4}H^{t} = \dfrac{1}{4}H$.
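The diagonalization in Lemma \[lema:eigenK81\] is easy to check numerically. The following sketch (with arbitrary illustrative values for $a,b,c,d$) verifies that conjugating a K81 matrix by the Hadamard-type matrix $H$ yields the stated eigenvalues:

```python
import numpy as np

# the change-of-basis matrix H; its columns are the Fourier eigenvectors
H = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=float)

a, b, c, d = 0.7, 0.1, 0.15, 0.05          # arbitrary values with a+b+c+d = 1
M = np.array([[a, b, c, d],
              [b, a, d, c],
              [c, d, a, b],
              [d, c, b, a]])

# H^{-1} = H/4 (H is a Hadamard matrix), so H^{-1} M H should be diagonal
assert np.allclose((H / 4) @ H, np.eye(4))
D = (H / 4) @ M @ H
eig = [1, a + b - c - d, a - b + c - d, a - b - c + d]
assert np.allclose(D, np.diag(eig))
```

Setting $b=c=d$ collapses the last three eigenvalues to $1-4b$, the JC69 case of the lemma.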
The vectors $P = (p_{\tt AAAA}, p_{\tt AAAC}, \ldots, p_{\tt TTTG}, p_{\tt TTTT})\in {\mathbb{R}}^{4^4}$ considered in section 2.1 can be thought of as $4\times 4\times 4\times 4$ tensor in $\otimes^4{\mathbb{R}}^4$: if we call $\Sigma=\{\tt A,C,G,T\}$ the standard basis of ${\mathbb{R}}^4$, then the components $p_{x_1x_2x_3x_4}$ of $P$ are its coordinates in the natural basis in $\otimes^4{\mathbb{R}}^4$ induced by $\Sigma$. This motivates the following definition. Given a tensor $P$ in ${\mathbb{R}}^4\otimes{\mathbb{R}}^4\otimes{\mathbb{R}}^4\otimes{\mathbb{R}}^4$, we will denote by $(p_{\tt AAAA}, p_{\tt AAAC}, \ldots, p_{\tt TTTT})^t$ the coordinates of $P$ in the basis $\{\tt A\otimes A\otimes A\otimes A, \tt A\otimes A\otimes A\otimes C, \ldots, T\otimes T\otimes T\otimes T\}$ induced by $\Sigma$. Similarly, we will write $(\bar{p}_{\tt AAAA}, \bar{p}_{\tt AAAC}, \ldots, \bar{p}_{\tt TTTG}, \bar{p}_{\tt TTTT})^t$ for the coordinates of $P$ in the basis $\{\tt \bar{A}\otimes \bar{A}\otimes \bar{A}\otimes \bar{A}, \ldots, \bar{T}\otimes \bar{T}\otimes \bar{T}\otimes \bar{T}\}$ induced by the Fourier basis $\bar{\Sigma}$. \[rk:FourierOrthogonal\] Note that the Fourier basis is orthogonal and all the vectors have the same norm. Thus, the Euclidean distance between tensors can be computed using the Fourier coordinates (up to a positive scalar). If one considers the following bijection between $\Sigma$ and the group $G=({\mathbb{Z}}/2{\mathbb{Z}}\times {\mathbb{Z}}/2{\mathbb{Z}},+)$, $$\begin{array}{ccc} \Sigma=\{\tt A,C,G,T\} & \longleftrightarrow & {\mathbb{Z}}/2{\mathbb{Z}}\times {\mathbb{Z}}/2{\mathbb{Z}}\\ {\tt A} & \mapsto & (0,0) \\ {\tt C} & \mapsto & (0,1) \\ {\tt G} & \mapsto & (1,0) \\ {\tt T} & \mapsto & (1,1) \\ \end{array},$$ then the previous change of coordinates can be understood as the discrete Fourier transform on $G^4$. The following result states that the polynomial parametrization $\psi_T$ becomes monomial after this change of coordinates. 
*([@evans1993])* Let $P=\psi_T(\pi,M_1, M_2, M_3, M_4, M_5)$ where $T$ is the tree topology $12|34$ and $M_i$ are $K81$ matrices. If $(m_{\tt A}^i, m_{\tt C}^i, m_{\tt G}^i, m_{\tt T}^i)$ are the eigenvalues of $M_i$, then the Fourier coordinates of $P$ are $$ \bar{p}_{x_1x_2x_3x_4} = \begin{cases} \dfrac{1}{4^4}m_{x_1}^1m_{x_2}^2m_{x_1+x_2}^5m_{x_3}^3m_{x_4}^4 &\text{if $x_1 + x_2 = x_3+x_4$,}\\ 0 &\text{otherwise,} \end{cases}$$ where the sum of elements in $\Sigma$ is given by the bijection $\Sigma \leftrightarrow {\mathbb{Z}}/2{\mathbb{Z}}\times {\mathbb{Z}}/2{\mathbb{Z}}$ introduced above. The closest stochastic matrix ============================= Throughout this section, we will use the following notation. We write $H^{n-1}$ for the hyperplane $\{x_1+\ldots+x_n = 1\}$. Given a point $v\in {\mathbb{R}}^n$, we denote by $\operatorname{proj}_{H}(v)$ and $\operatorname{proj}_{\bigtriangleup}(v)$ the orthogonal projections of $v$ onto $H^{n-1}$ and the standard $(n-1)$-dimensional simplex $\bigtriangleup^{n-1}$, respectively. For any matrix $M\in{\mathcal{M}_n(\mathbb{R})}$ we denote by $\widehat{M}$ its closest stochastic matrix in the Frobenius norm: $$\label{stoc_mat} \widehat{M} = \operatorname*{arg\,min}_{\substack{\sum_j X_{ij} = 1\ \forall i,\\ X_{ij} \geq 0\ \forall (i,j)}} \parallel M-X\parallel_F.$$ Similarly, for any vector $v$ we write $\widehat{v}$ for its closest stochastic vector. The problem of finding the nearest stochastic matrix is equivalent to finding the orthogonal projection (in Euclidean norm) of every row of the matrix onto the standard simplex [@Simplex] $$\begin{aligned} \bigtriangleup^{n-1}=\{(x_1,\ldots,x_n) \mid \sum_i x_i=1, x_i\geq 0\} \subset {\mathbb{R}}^n.\end{aligned}$$ The uniqueness of $\widehat{v}$, and consequently of $\widehat{M}$, is guaranteed since the objective function is strictly convex and the domain set is convex. The orthogonal projection onto the standard simplex has been widely studied and there exist several algorithms to compute it.
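One simple such algorithm is the classical sort-based projection; a minimal NumPy sketch follows (this is not the iterative scheme of [@Michelot:simplex] cited below, but any correct projection routine returns the same point):

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto {x : sum(x) = 1, x >= 0}."""
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]       # last valid index
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# A row summing to 1 with one negative entry:
w = proj_simplex(np.array([0.9, 0.03, -0.01, 0.08]))
# w ≈ (0.896666..., 0.026666..., 0, 0.076666...): the negative entry is set
# to zero and the deficit is redistributed among the remaining coordinates.
```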
We refer the reader to [@Michelot:simplex] for an algorithm that, given any vector $v\in{\mathbb{R}}^n$, produces a vector $x\in {\mathbb{R}}^n$ with $\sum_i x_i = 1$ and $x\geq0$ that minimizes $\parallel v-x \parallel_2.$\ In the following result we state some properties of this last projection that will be useful later. \[props\_projection\] Let $v=(v_1, \ldots, v_n)$ be a point in ${\mathbb{R}}^n$ and let $\widehat{v} = (\widehat{v}_1, \ldots, \widehat{v}_n)$ be its orthogonal projection onto $\bigtriangleup^{n-1}$, $\widehat{v} := \operatorname{proj}_{\bigtriangleup}(v)$. (i) $\operatorname{proj}_{\bigtriangleup}(v) = \operatorname{proj}_{\bigtriangleup}\big(\operatorname{proj}_{H}(v)\big)$. (ii) If $v\in H^{n-1}$ and $v_i\leq 0$ for some $i$, then $\widehat{v}_i = 0$. (iii) Let $w$ be a point obtained by a permutation of the coordinates of $v$, i.e. $w=Pv$ for some permutation matrix $P$. Then $\widehat{w} = P\widehat{v}$. (iv) If $v_i=v_j$ for some $i,j=1,\ldots,n$, then $\widehat{v}_i = \widehat{v}_j$. (v) The projection of $v$ is the vertex $p_i=(0, \ldots, 0, 1, 0, \ldots, 0)$, with the $1$ in position $i$, if and only if $v_i - v_j \geq 1$ $\forall j \neq i$. The proofs of items $(i)$ and $(ii)$ can be found in [@Michelot:simplex]. These two statements also suggest a method to compute the projection onto the standard simplex. (iii) This follows from the fact that $P$ is a permutation matrix and hence is an orthogonal matrix. (iv) It is a consequence of $(iii)$. (v) Using $(i)$ and $(ii)$ we can assume that $\sum_i v_i = 1$, i.e., $v$ belongs to the affine hyperplane $H^{n-1}.$ We will use $p_i$ to denote the vertex $(0, \ldots, 0, 1, 0, \ldots, 0)$ of $\bigtriangleup^{n-1}$, with the $1$ in position $i$, and $F_i$ to denote the facet containing every vertex but $p_i$. We write $w_i$ for the normal vector to $F_i$ contained in $H^{n-1}$.
A parametric expression for the linear subspace of dimension $n-2$ containing the facet $F_i$ is $$p_j + \sum_{k\neq i}\lambda_k \overrightarrow{p_jp_k} \quad \mbox{ for any } j\neq i,$$ where $\overrightarrow{p_jp_k}$ is the vector with a $1$ in position $j$, a $-1$ in position $k$ and $0$ elsewhere. The normal vector $w_i$ satisfies $w_i\perp\overrightarrow{p_jp_k}$ and $w_i\perp (1,\ldots,1)$. An easy computation shows that we can take $w_i := (1, \ldots, 1, 1-n, 1, \ldots, 1)$, with the entry $1-n$ in position $i$. Points that are projected onto a vertex $p_i$ of the simplex coincide with the points in a polyhedral convex cone $C_i$ with vertex $p_i$ and rays generated by normal vectors to the facets adjacent to it. In order to simplify notation, we choose $i = 1$, but the other cases are analogous. The facets adjacent to $p_1 = (1,0,\ldots,0)$ are $F_2,\ldots,F_n$. The $2$-dimensional faces of the cone $C_1$ are generated by $p_1$ and any two vectors of $w_2,\ldots,w_n$. For instance, the parametric expression of the face generated by $w_2$ and $w_3$ is $$\begin{aligned} \hspace{1cm} H^{n-1}_{2,3} &= p_1 + \lambda_2 w_2 + \lambda_3 w_3 = (1,0,\ldots, 0) + \lambda_2 (1, 1-n, 1, \ldots, 1) + \lambda_3 (1, 1, 1 - n, \ldots, 1)\\ &= (1 + \lambda_2 + \lambda_3, (1 - n)\lambda_2 + \lambda_3, \lambda_2 + (1 - n)\lambda_3,\lambda_2 + \lambda_3, \ldots, \lambda_2 + \lambda_3) \end{aligned}$$ with $\lambda_2\geq0$ and $\lambda_3\geq0$. A direct computation shows that these points satisfy $$x_1-x_2 \geq 1, \quad x_1-x_3 \geq 1 \quad \mbox{and} \quad x_1-x_i = 1 \quad \mbox{for } i=4,5,\ldots,n. $$ If we repeat this computation for every pair of faces, we conclude that the points that are projected to $p_1$ are precisely the ones satisfying $$\label{eq:conditions_vId} x_1-x_j \geq 1 \quad j=2,3,\ldots,n, $$ as we wanted to prove.
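Item $(v)$ just proved can be checked numerically with any routine for projecting onto the simplex; the following sketch uses the standard sort-based projection (an assumption of this sketch — any correct projection routine would do):

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the standard simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1), 0.0)

# v_1 - v_j >= 1 for every j != 1, so v projects onto the vertex p_1:
assert np.allclose(proj_simplex(np.array([1.4, 0.3, 0.1, -0.8])), [1, 0, 0, 0])

# Here v_1 - v_2 = 0.9 < 1, so the projection is not a vertex:
w = proj_simplex(np.array([1.2, 0.3, -0.2, -0.3]))
assert not np.allclose(w, [1, 0, 0, 0])
```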
If each row of a matrix $M$ is obtained by a permutation of its first row, the previous lemma shows that $\widehat{M}$ preserves the same identities between entries as the matrix $M$. Actually, it can be shown that if $M$ is a matrix in an equivariant model [@Draisma], not necessarily stochastic, then $\widehat{M}$ will remain in the same model. The following lemma is direct and the proof is left to the reader. \[lem:stochM\] Let $M$ be a $JC69$ matrix. Then $M$ is stochastic if and only if its eigenvalues are contained in $\left[-1/3,1\right].$ Let $M$ be a non-stochastic Jukes-Cantor matrix. Then $\widehat{M}$ is either the identity matrix or the matrix $$\begin{pmatrix} 0&1/3&1/3&1/3\\ 1/3&0&1/3&1/3\\ 1/3&1/3&0&1/3\\ 1/3&1/3&1/3&0\\ \end{pmatrix}.$$ Let $M$ be a JC69 matrix with diagonal entries equal to $a$ and off-diagonal entries equal to $b$. Then it is not stochastic if either $b<0$ or $a<0$. Let $v=(a,b,b,b)$ be the first row of $M$ and $\widehat{v}=(\widehat{a},\widehat{b},\widehat{b},\widehat{b})$ its projection onto the simplex $\bigtriangleup^3$ (Lemma \[props\_projection\] $(iv)$). The following reasoning is valid for each row because of Lemma \[props\_projection\] $(iii)$. If $b<0$ then, by Lemma \[props\_projection\] $(ii)$, $\widehat{b}$ equals zero and $\widehat{a}$ has to be equal to $1$ since the coordinates of $\widehat{v}$ sum to $1$. Therefore $\widehat{M}$ is the $4\times 4$ identity matrix. If $a < 0$ then $\widehat{a}=0$ and, since $3\widehat{b}=1$, $\widehat{b} = \dfrac{1}{3}$. Therefore $\widehat{M}$ is the matrix with $0$ on the diagonal and $\dfrac{1}{3}$ at the off-diagonal entries. For later use, we close this section by stating a characterization of those K81 matrices $M$ for which $\widehat{M}$ is a permutation matrix. \[lema:perm\_matrix\] Let $M$ be a $K81$ matrix and denote by $(a_1, a_2, a_3, a_4)$ its first row.
Then $\widehat{M}$ is a permutation matrix if and only if there is some $i\in\{1,\ldots,4\}$ such that $$\label{eq:permutationMatrix} a_i - a_j \geq 1 \mbox{ for all } j\neq i.$$ It is a consequence of Lemma \[props\_projection\] $(v)$. Short external branches ======================= In this section we will study evolutionary processes where mutations at the external edges are rare, so the probabilities of substitution of nucleotides in the corresponding transition matrices are small.\ Given $P\in{\mathbb{R}}^{4^n}$, let $P_T^+$ be a point in ${\mathcal{V}}_T^+$ that minimizes the distance to $P$, i.e. $$d(P,P_T^+)=d(P,{\mathcal{V}}_T^+).$$ Unless noted otherwise we will keep this notation. \[k3\_id\_prop\] Assume that $P=\psi_T(Id, Id, Id, Id, M_e)$ where $M_e$ is a non-stochastic $K81$ matrix and $T$ is any $4$-leaf tree. Then, (a) The point $P^+_T$ is equal to $\psi_T(Id, Id, Id, Id, \widehat{M}_e)$. Moreover, it is the point of the standard simplex $\Delta \subset {\mathbb{R}}^{4^ 4}$ that minimizes the distance to $P$. In particular, the point $P^+_T$ is unique. (b) If $T'\neq T$ is another tree in ${\mathcal{T}}$, then $d(P,{\mathcal{V}}^+_{T'}) \geq d(P,{\mathcal{V}}^+_{T})$. (c) The following are equivalent: (i) equality holds in (b); (ii) $P^+_T \in {\mathcal{V}}^+_T \cap {\mathcal{V}}^+_{T'}$; (iii) the matrix $\widehat{M}_e$ is a permutation matrix. We assume that $T = T_{12\mid 34}$, but the proof is analogous for the other trees. We define $\widehat{P} := proj_{\Delta}(P)$, that is, $\widehat{P}$ is the unique point of the standard simplex $\Delta$ (a convex set) that minimizes the distance to $P$.
First of all, we have that $$\begin{aligned} \label{aux_dist} d(P,{\mathcal{V}}_T^+) = \min_{Q\in{\mathcal{V}}_T^+}d(P,Q) \geq \min_{Q\in\bigtriangleup^{4^4-1}}d(P,Q) =d(P,\widehat{P}).\end{aligned}$$ This follows from the fact that ${\mathcal{V}}_T^+\subset \bigtriangleup^{4^4-1}$, since for all $Q\in{\mathcal{V}}_T^+$ the sum of its coordinates $\sum_iQ_i$ equals $1$. We now show that $\widehat{P}\in {\mathcal{V}}^+_T$. Since the transition matrices at the exterior edges of $T$ are the identity, the coordinates of $P$ are $$p_{ijkl} = \begin{cases} \dfrac{1}{4}(M_e)_{ik} &\text{if $i=j$ and $k=l$}\\ 0 &\text{otherwise.} \end{cases}$$ Since $M_e$ is a K81 matrix, the non-zero coordinates of $P$ only take $4$ different values. Moreover, because of Lemma \[props\_projection\] (ii) and (iii), we can write the coordinates of $\widehat{P}$ as $$\widehat{p}_{ijkl} = \begin{cases} b_{ik} &\text{if $i=j$ and $k=l$}\\ 0 &\text{otherwise.} \end{cases}$$ for some values $b_{ik}$ satisfying the identities of a K81 matrix. Since $\sum_{i,k} b_{ik}=1$, it follows that the matrix $$\begin{aligned} 4\left( \begin{array}{cccc} b_{11} & b_{12} & b_{13} & b_{14} \\ b_{21} & b_{22} & b_{23} & b_{24} \\ b_{31} & b_{32} & b_{33} & b_{34} \\ b_{41} & b_{42} & b_{43} & b_{44} \\ \end{array} \right)\end{aligned}$$ is a K81 stochastic matrix. Actually, this matrix is just $\widehat{M}_e$, and so $\widehat{P}=\psi_T(Id,Id,Id,Id,\widehat{M}_e)$. In particular, $\widehat{P}\in {\mathcal{V}}^+_T$. Since $P^+_T$ minimizes the distance from $P$ to the variety ${\mathcal{V}}^+_T$, we have $d(P,\widehat{P})\geq d(P,P^+_T)$. Because of (\[aux\_dist\]), the equality holds. Moreover, from the uniqueness of the point minimizing the distance to $\Delta$, it follows that $P^+_T=\widehat{P}$. This concludes the proof of (a). \(b) For any tree topology $T'$, we have that ${\mathcal{V}}^+_{T'}\subset \Delta$. It follows that $d(P,\widehat{P})\leq d(P,P^+_{T'})$.
Since $\widehat{P}=P^+_T$, we infer that $d(P,P^+_T)\leq d(P,P^+_{T'})$ for any $T'\neq T$. Now, we proceed to characterize when the equality holds in (b), which proves (c).\ (i) $\Leftrightarrow$ (ii). It is clear that if $P^+_T\in {\mathcal{V}}^+_{T'}$, then $d(P,{\mathcal{V}}^+_T)=d(P,P^+_T)\geq d(P,{\mathcal{V}}^+_{T'})$. Together with the inequality in (b), this proves that (ii) implies (i). Conversely, if the equality holds, then $d(P,P^+_{T'})=d(P,\Delta)$. Because of the uniqueness of the point that minimizes the distance to $\Delta$, it follows that $P^+_{T'}=\widehat{P}$, and we have already seen that $\widehat{P}\in {\mathcal{V}}^+_T$. Therefore, $P^+_T\in {\mathcal{V}}^+_T \cap {\mathcal{V}}^+_{T'}$. \(ii) $\Leftrightarrow$ (iii). It only remains to see that $P^+_{T'} = P^+_{T}$ (i.e. $P_T^+\in{\mathcal{V}}_{T}^+\cap{\mathcal{V}}_{T'}^+$) if and only if $\widehat{M}_e$ is a permutation matrix. If $\widehat{P} \in {\mathcal{V}}^+_{T'}$, then the rank of $flatt_{T'}(\widehat{P})$ is less than or equal to $4$ (see [@allman2003]). Because $\widehat{P}=\psi_T(Id,\ldots,Id,\widehat{M}_e)$, $flatt_{T'}(\widehat{P})$ is a diagonal matrix that contains the $16$ entries of $\widehat{M}_e$ multiplied by a constant (see [@allman2009]). Since $\widehat{M}_e$ is a stochastic $K81$ matrix, it has to be a permutation matrix. Conversely, if $\widehat{M}_e$ is a permutation matrix, then the corresponding point $\widehat{P}=\psi_T(Id,\ldots,Id,\widehat{M}_e)$ lies in every variety ${\mathcal{V}}^+_{T'}$. Note that $P_T^+$ coincides with $\psi_T(Id, Id, Id, Id, \widehat{M_e})$ but also with any tensor obtained by a label swapping of the parameters [@allman2004b]. \[thm:seb\] Let $M$ be a $K81$ non-stochastic matrix such that $\widehat{M}$ is not a permutation matrix (see Lemma \[lema:perm\_matrix\] for a characterization).
Let $P_0=\psi_T(Id, Id, Id, Id, M)$, $T'\neq T$ and let $P\in{\mathbb{R}}^{4^4}$ be a point such that $$\begin{aligned} d(P,P_0) < \dfrac{d(P_0,{\mathcal{V}}_{T'}^+) - d(P_0,{\mathcal{V}}_{T}^+)}{2} \end{aligned}$$ (this is satisfied if $P$ is close enough to $P_0$). Then $d(P, {\mathcal{V}}_T^+) < d(P, {\mathcal{V}}_{T'}^+).$ We first define the function $f(Q) = d(Q,{\mathcal{V}}_{T'}^+) - d(Q,{\mathcal{V}}_{T}^+)$. By hypothesis, $\widehat{M}$ is not a permutation matrix and, by Proposition \[k3\_id\_prop\], we have that $f(P_0) > 0$. We want to show that $f(P) > 0$ if $d(P,P_0)<f(P_0)/2$. Clearly, we are done if $f(P)\geq f(P_0)$, so we assume that $f(P)<f(P_0)$. From the triangle inequality we have $|d(P,\mathcal{W}) - d(P_0, \mathcal{W})| \leq d(P,P_0)$, for any variety $\mathcal{W}$. Then, we obtain $$\begin{aligned} \lvert f(P) - f(P_0) \rvert &= \lvert d(P, {\mathcal{V}}^+_{T'}) - d(P_0, {\mathcal{V}}^+_{T'}) - \left(d(P, {\mathcal{V}}^+_{T}) - d(P_0, {\mathcal{V}}^+_{T})\right) \rvert \leq \nonumber \\ &\leq \lvert d(P,{\mathcal{V}}^+_{T'}) - d(P_0, {\mathcal{V}}^+_{T'}) \rvert + \lvert d(P, {\mathcal{V}}^+_{T}) - d(P_0, {\mathcal{V}}^+_{T}) \rvert \nonumber \\ & \leq 2 \; d(P,P_0) < f(P_0).\end{aligned}$$ Therefore, $f(P)=(f(P)-f(P_0))+f(P_0)= -|f(P)-f(P_0)|+f(P_0) > 0$. This concludes the proof. The matrix $$\begin{aligned} M_e = \left( \begin{array}{cccc} 0.9 & 0.03 & -0.01 & 0.08 \\ 0.03 & 0.9 & 0.08 & -0.01 \\ -0.01 & 0.08 & 0.9 & 0.03 \\ 0.08 & -0.01 & 0.03 & 0.9 \\ \end{array} \right)\end{aligned}$$ satisfies the hypothesis of Theorem \[thm:seb\] and its nearest stochastic matrix is $$\begin{aligned} \widehat{M}_e = \left( \begin{array}{cccc} 0.89\overline{6} & 0.02\overline{6} & 0 & 0.07\overline{6} \\ 0.02\overline{6} & 0.89\overline{6} & 0.07\overline{6} & 0 \\ 0 & 0.07\overline{6} & 0.89\overline{6} & 0.02\overline{6} \\ 0.07\overline{6} & 0 & 0.02\overline{6} & 0.89\overline{6} \\ \end{array} \right).
\end{aligned}$$ Algorithm ========= Although in the last section we were able to answer our questions analytically, this approach seems infeasible when we want to tackle more general problems. In this section, in order to find the distance from a point to a stochastic phylogenetic variety we use numerical algebraic geometry. Our goal is to find all critical points of the distance function to a phylogenetic variety in the interior and at the boundary of the stochastic variety. Among the set of critical points we pick the one that minimizes the distance. Similar approaches, where computational and numerical algebraic geometry are applied to phylogenetic studies, can be found in the works [@NAG4LS] and [@Kosta2019]. Let $\delta_\mathcal{X}(x)$ denote the Euclidean distance of a point $x$ to an algebraic variety $\mathcal{X}$, $$\delta_\mathcal{X}(x) := d(x, \mathcal{X}).$$ If $\mathcal{X}_{sing}$ is the singular locus of $\mathcal{X}$, the number of critical points of $\delta_\mathcal{X}(x)$ in $\mathcal{X} \setminus \mathcal{X}_{sing}$ for a general $x$ is called the *Euclidean distance degree* (EDdegree for short) of the variety. The EDdegree was introduced in [@draisma2015] and it is currently an active field of research. According to Lemma $2.1$ of [@draisma2015], the number of critical points of $\delta_\mathcal{X}(x)$ in $\mathcal{X}\setminus \mathcal{X}_{sing}$ is finite and constant for general points $x$. In this section we assume the JC69 model and we parametrize each transition matrix by its eigenvalue different from $1$ (see Lemma \[lema:eigenK81\]). From now on, denote by $P=\varphi_{T}(x_1,\ldots, x_5)$ the parametrization in Fourier coordinates, where $x_i$ is the eigenvalue of the transition matrix $M_i$. We do not include the root distribution in this notation since for the $K81$ case it is always the uniform distribution. Recall that by Lemma \[lem:stochM\], $P\in{\mathcal{V}}_T^+$ if and only if $x_i\in\left[-1/3, 1\right]$, $i=1,\ldots,5$.
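For illustration, the correspondence between the eigenvalue and stochasticity can be checked directly; this sketch reconstructs a JC69 matrix from its eigenvalue $x$ (so $b=(1-x)/4$) and tests the bound of Lemma \[lem:stochM\]:

```python
import numpy as np

def jc69_from_eigenvalue(x):
    """JC69 matrix whose eigenvalue different from 1 equals x (b = (1-x)/4)."""
    b = (1 - x) / 4
    return np.full((4, 4), b) + (1 - 4 * b) * np.eye(4)

def is_stochastic(M, tol=1e-12):
    return bool(np.all(M >= -tol) and np.allclose(M.sum(axis=1), 1.0))

# Entries are nonnegative exactly when x lies in [-1/3, 1]:
assert is_stochastic(jc69_from_eigenvalue(0.5))
assert is_stochastic(jc69_from_eigenvalue(-1/3))
assert not is_stochastic(jc69_from_eigenvalue(1.2))    # off-diagonal b < 0
assert not is_stochastic(jc69_from_eigenvalue(-0.5))   # diagonal a < 0
```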
Given a point $P$, we denote by $f_T(x_1,\ldots,x_5)$ the square of the Euclidean distance function from the point $\varphi_{T}\left(x_1,\ldots,x_5 \right)$ to $P$: $$f_T(x_1,\ldots, x_5) = d(P,\varphi_{T}(x_1,\ldots,x_5))^2,$$ and by $$\mathcal{D}:=\left[-1/3,1\right]^5$$ the region of stochastic parameters. Under the Jukes-Cantor model, the singular points of the varieties ${\mathcal{V}}_T$ are those that are the image of some null parameter. In other words, $\varphi_{T}(x_1,\ldots,x_5)$ is a singular point of the variety if and only if $x_i = 0$ for some $i$ (see [@casfer2008] and [@casfermich2015] for details). Hence, we can compute the number of critical points of our function $f_T$ in the pre-image of the smooth part of the variety as the degree of the saturation ideal $I:(x_1\cdots x_5)^{\infty}$, where $I$ is generated by the partial derivatives of $f_T$. Using this and the package `Magma` [@magma] we obtain: If ${\mathcal{V}}_{{\mathcal{T}}}$ is the phylogenetic variety corresponding to a $4$-leaf tree evolving under the JC69 model, then the EDdegree of ${\mathcal{V}}_{{\mathcal{T}}}$ is $290$. For identifying the critical points of this constrained problem we use the first-order *Karush-Kuhn-Tucker (KKT) conditions for local minima*. Karush-Kuhn-Tucker conditions (KKT) {#KKT .unnumbered} ----------------------------------- If $f,g_i:{\mathbb{R}}^l \longrightarrow {\mathbb{R}}$ are $\mathcal{C^{\infty}}$ functions for $i=1,\dots, n$, we consider the following minimization problem: $$\begin{aligned} & \underset{x}{\text{minimize}} & & f(x) \\ & \text{subject to} & & g_i(x) \leq 0, \; i = 1, \ldots, n. \end{aligned}$$ If a point $x^*$ that satisfies $g_i(x^*)\leq0$ $\forall i=1,\ldots,n$ is a local optimum of the problem, then there exist some constants $\mu = (\mu_1,\ldots,\mu_n)$ (called *KKT multipliers*) such that $x^*$ and $\mu$ satisfy - $-\nabla f(x^*) = \sum_{i=1}^n \mu_i\nabla g_i(x^*),$ - $\mu_i\geq0$ $\forall i=1,\ldots,n$, - $\mu_ig_i(x^*)=0$ $\forall i=1,\ldots,n$.
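As a toy illustration of these conditions, consider a single variable with the same box constraints used later for the JC69 parameters (the quadratic objective is an arbitrary stand-in, not one of our distance functions):

```python
# Minimize f(x) = (x - 2)^2 over the box -1/3 <= x <= 1, written with the
# two constraints g1(x) = x - 1 <= 0 and g2(x) = -x - 1/3 <= 0.
def df(x):
    return 2 * (x - 2)

x_star = 1.0            # the unconstrained minimizer x = 2 is cut off by g1
mu1 = -df(x_star)       # stationarity: -f'(x*) = mu1 * g1'(x*) + mu2 * g2'(x*)
mu2 = 0.0               # g2 is inactive at x*, so complementarity gives mu2 = 0

assert mu1 >= 0 and mu2 >= 0                                   # dual feasibility
assert mu1 * (x_star - 1) == 0 and mu2 * (-x_star - 1/3) == 0  # complementarity
assert abs(-df(x_star) - (mu1 * 1 + mu2 * (-1))) < 1e-12       # stationarity
```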
According to these conditions the algorithm falls naturally into two parts. First of all we find the $290$ critical points of the objective function over all ${\mathbb{C}}^5$ and then we check the boundary of $\mathcal{D}$. To find the critical points at the boundary we restrict the function $f_T$ to all possible boundary subsets and find critical points there. Namely, for the Jukes-Cantor model we write $$\begin{aligned} g_{1,i}(x) := x_i -1 \leq 0 \qquad g_{2,i}(x) := -x_i - 1/3\leq 0\end{aligned}$$ for the inequalities defining the feasible region $\mathcal{D}$. Moreover, for each $i=1,\ldots,5$ and $l=1,2$, write $$\begin{aligned} S_{l,i}=\{x=(x_1,\ldots,x_5)\mid g_{l,i}=0\}.\end{aligned}$$ Then $x$ is at the boundary of $\mathcal{D}$ if it belongs to the subset $S := \left(\cap_{i\in\iota_1}S_{1,i}\right)\cap\left(\cap_{j\in\iota_2}S_{2,j}\right)$ for some disjoint subsets $\iota_1,\iota_2 \subseteq\{1,\ldots,5\}$. We use homotopy continuation methods to solve the different polynomial systems previously described. All computations have been done with the package `PHCpack.m2` ([@PHCpack] and [@PHCpackM2]), which turned out to be the only numerical package capable of finding these $290$ points of $I:(x_1\cdots x_5)^{\infty}.$ `Macaulay2` [@M2] has been used to implement the main core of the algorithm, while some preliminary computations were performed with `Magma` [@magma].
The whole code can be found at: $$\begin{aligned} \label{github_marina} \qquad \mbox{\url{https://github.com/marinagarrote/StochasticPhylogeneticVarieties}.}\end{aligned}$$ \[algo\] The main steps of the algorithm are the following: 1. Compute $f_T(x)$. 2. Compute $I :=\left( \dfrac{\partial f_T}{\partial x_1}, \dfrac{\partial f_T}{\partial x_2}, \dfrac{\partial f_T}{\partial x_3}, \dfrac{\partial f_T}{\partial x_4}, \dfrac{\partial f_T}{\partial x_5} \right)$. 3. Set $\mathcal{L} := \{\}$. 4. Compute $d := degree\big(I:(x_1\cdots x_5)^{\infty}\big)$. 5. Find the $d$ zero-dimensional solutions of $\nabla f_T=0$, together with the critical points of $f_T$ restricted to each boundary subset $S$, and store the feasible ones in $\mathcal{L}$. 6. Evaluate each $x\in\mathcal{L}$ into $f_T(x)$ and return the point $x^*$ with minimum $f_T(x^*)$. Long branch attraction ====================== *Long branch attraction* is one of the most difficult problems to cope with in phylogenetic inference. It is a phenomenon in phylogenetic reconstruction by which fast-evolving lineages are wrongly inferred to be closely related, regardless of their true evolutionary relationships. It can happen when a set of similar species contains some that are very different from the main set. Many reconstruction methods join together these outgroup species even though they are very different from each other. Quartet trees representing these events are characterized by having two non-sister long branches and two non-sister short branches. The length of a branch in a phylogenetic tree represents the expected number of elapsed mutations along that process and, for the K81 and JC69 models, can be computed as $-\log\big(\det(M)\big)/4$, where $M$ is the transition matrix associated to the edge. Therefore the branch length of an edge is determined by the eigenvalues of the corresponding transition matrix. In particular, for the JC69 model, the eigenvalue different from $1$ determines the branch length. Throughout this section we use the notation introduced in Section 5.
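For the JC69 model this formula has a closed form in the eigenvalue: the eigenvalues of $M$ are $(1,x,x,x)$, so $\det(M)=x^3$ and the branch length is $-\frac{3}{4}\log x$. A small sketch:

```python
import numpy as np

def jc69(x):
    """JC69 matrix with eigenvalue x different from 1."""
    b = (1 - x) / 4
    return np.full((4, 4), b) + (1 - 4 * b) * np.eye(4)

def branch_length(M):
    return -np.log(np.linalg.det(M)) / 4

x = 0.8
# Eigenvalues of jc69(x) are (1, x, x, x), so det = x^3 and the length
# reduces to -(3/4) * log(x).
assert np.isclose(branch_length(jc69(x)), -0.75 * np.log(x))
```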
Consider the tree of Figure \[lba\], with a non-stochastic matrix $M_e$ at the interior edge, a stochastic transition matrix $M$ at edges pointing to leaves $1$ and $3$, and the identity matrix at the remaining edges. Assume $M$ and $M_e$ are Jukes-Cantor matrices. Then, let $k$ (respectively $m$) be the eigenvalue of $M$ (resp. of $M_e$) different from 1. Since $M$ is stochastic, $k$ is in $\left[-1/3, 1\right]$ (see Lemma \[lem:stochM\]). We also assume $m>1$ since $M_e$ is not stochastic (the other possibility would be that $m<-1/3$, but this leads to a biologically unrealistic situation). Let $P:=\varphi_{12|34}\left(k, 1, k, 1, m\right)$ be the Fourier coordinates of the corresponding joint distribution. In this section we study the distance of $P$ to the stochastic phylogenetic varieties ${\mathcal{V}}_{12|34}^+$, ${\mathcal{V}}_{13|24}^+$, ${\mathcal{V}}_{14|23}^+$ to give an answer to Question 1. As observed in Remark \[rk:FourierOrthogonal\], we can use Fourier coordinates to compute distances. Given $P=\varphi_{12|34}\left(k, 1, k, 1, m\right)$ and $T\in \mathcal{T}$, we want to find its closest point in ${\mathcal{V}}_{T}^+$, so our goal is to find $(x_1,\ldots,x_5)\in\mathcal{D}$ such that $ d\left(P, {\mathcal{V}}_{T}^+\right) = d\left(P, \varphi_{T}\left(x_1, x_2, x_3, x_4, x_5\right)\right).$ ![Phylogenetic tree such that $P=\varphi_{T_{12\mid 34}}\left(k,1,k,1,m\right)$[]{data-label="lba"}](lba_tree_v3.png){width="4.5cm"} Therefore, using the notation of Section 5, finding the closest point to $P$ on the stochastic phylogenetic variety ${\mathcal{V}}_{T}^+$ can be translated into the following optimization problem: \[OPT\] $$\begin{aligned} & \underset{x}{\text{minimize}} & & f_T(x) := d\left(P, \varphi_{T}\left(x_1, x_2, x_3, x_4, x_5\right)\right)^2\\ & \text{subject to} & & g_{1,i}(x) \leq 0, \; i = 1, \ldots, 5, \\ & & & g_{2,i}(x) \leq 0, \; i = 1, \ldots, 5. \end{aligned}$$ where $g_{1,i}(x) = x_i -1$ and $g_{2,i}(x) = -x_i - \dfrac{1}{3}$. 
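A sketch of this setup, using the monomial Fourier parametrization of Section 2 specialized to JC69 (each edge $i$ contributes $m_{\tt A}=1$ and $m_{\tt C}=m_{\tt G}=m_{\tt T}=x_i$; the group sum on $\{{\tt A,C,G,T}\}\cong{\mathbb{Z}}/2{\mathbb{Z}}\times{\mathbb{Z}}/2{\mathbb{Z}}$ is the bitwise XOR of the indices $0,\ldots,3$); the tree $12|34$ and the parameter values are illustrative:

```python
import numpy as np
from itertools import product

def phi_12_34(x1, x2, x3, x4, x5):
    """Fourier coordinates of the JC69 parametrization for the tree 12|34:
    edge i contributes m_A = 1 and m_C = m_G = m_T = x_i, and the group
    sum on {A,C,G,T} ~ Z/2 x Z/2 is the XOR of the indices 0..3."""
    m = lambda x, i: 1.0 if i == 0 else x
    P = np.zeros((4, 4, 4, 4))
    for i, j, k, l in product(range(4), repeat=4):
        if i ^ j == k ^ l:
            P[i, j, k, l] = (m(x1, i) * m(x2, j) * m(x5, i ^ j)
                             * m(x3, k) * m(x4, l)) / 4**4
    return P

def f_T(x, P):
    # squared Euclidean distance, computed in Fourier coordinates
    # (legitimate up to a positive scalar by Remark rk:FourierOrthogonal)
    return float(np.sum((phi_12_34(*x) - P) ** 2))

k, m0 = 0.2, 1.5                       # illustrative parameter values
P = phi_12_34(k, 1, k, 1, m0)
assert f_T((k, 1, k, 1, m0), P) == 0.0
assert f_T((k, 1, k, 1, 1.0), P) > 0.0
```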
Local minimum ------------- An initial numerical approach suggests a candidate $x^*$ to be a minimum of this optimization problem when $T=12|34$. In Fourier coordinates, the Euclidean distance from $P$ to a point $\varphi_{12|34}\left(x_1, x_2, x_3, x_4, x_5\right)\in{\mathcal{V}}_{12|34}$ is given by the square root of the following function: $$\begin{aligned} f_{12|34}(x_1, x_2, x_3, x_4, x_5) :=\ &12\left(x_1x_2x_3x_4x_5 - k^2m\right)^2 + 9\left(x_1x_2x_3x_4 - k^2\right)^2 + 6\left(x_1x_2x_3x_5 - k^2m\right)^2 \\ &+ 6\left(x_1x_2x_4x_5 - km\right)^2 + 6\left(x_1x_3x_4x_5 - k^2m\right)^2 + 6\left(x_2x_3x_4x_5 - km\right)^2 \\ &+ 3\left(x_1x_3x_5 - k^2m\right)^2 + 3\left(x_2x_3x_5 - km\right)^2 + 3\left(x_1x_4x_5 - km\right)^2 \\ &+ 3\left(x_2x_4x_5 - m\right)^2 + 3\left(x_1x_2 - k\right)^2 + 3\left(x_3x_4 - k\right)^2.\end{aligned}$$ We define $x^*=(\kappa(k,m), 1, \kappa(k,m), 1, 1)$ where $\kappa(k,m)$ is the minimum between $1$ and the unique real solution $\tilde{x}(k,m)$ of $\dfrac{\partial f_{12|34}}{\partial x_1}(x_1,1,x_1,1,1) = 0$. A direct computation shows that, $$\label{kappa} \tilde{x}(k,m) = \dfrac{3k^2\left(3m + 1\right) - 4}{36\gamma(k,m)} + \gamma(k,m) \, ,$$ where $$\label{eq:gamma} \gamma(k,m) = \sqrt[3]{\dfrac{1}{24}k\left(3m + 1\right) + \dfrac{1}{216}\sqrt{\alpha(k,m)}}$$ and $$\begin{aligned} \label{eq:alpha} \alpha(k,m) =& -729k^6m^3 - 27k^6 + 108k^4 - 243\left(3k^6 - 4k^4 - 3k^2\right)m^2 - 63k^2 \\ & - 27\left(9k^6 - 24k^4 - 2k^2\right)m + 64.\nonumber\end{aligned}$$ The following proposition (proved in Appendix \[App:TecProofs\], Proposition \[propA1\]) claims that $\tilde{x}(k,m)$ is indeed a real number. The computations in this section and in the Appendix have been done with `SageMath` [@sagemath] version $8.6$. 
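The closed form for $\tilde{x}(k,m)$ can be verified numerically: a direct expansion of $f_{12|34}(x,1,x,1,1)$ and differentiation give, up to a factor $12$, the depressed cubic $12x^3 + \big(4-3k^2(3m+1)\big)x - k(3m+1)$, of which $\tilde{x}(k,m)$ is the Cardano root. The values $k=0.2$, $m=1.5$ below are arbitrary choices in the admissible range:

```python
import numpy as np

def x_tilde(k, m):
    """Closed-form critical point, using the alpha and gamma of the text;
    the real cube root is valid here since gamma^3 > 0 for these values."""
    alpha = (-729*k**6*m**3 - 27*k**6 + 108*k**4
             - 243*(3*k**6 - 4*k**4 - 3*k**2)*m**2 - 63*k**2
             - 27*(9*k**6 - 24*k**4 - 2*k**2)*m + 64)
    gamma = (k*(3*m + 1)/24 + np.sqrt(alpha)/216) ** (1/3)
    return (3*k**2*(3*m + 1) - 4) / (36*gamma) + gamma

k, m = 0.2, 1.5
t = x_tilde(k, m)
# Residual of the cubic at t: it should vanish up to rounding error.
residual = 12*t**3 + (4 - 3*k**2*(3*m + 1))*t - k*(3*m + 1)
assert abs(residual) < 1e-9
assert -1/3 < t < 1        # so kappa(k, m) = min(1, t) = t here
```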
\[kappa\_real\] $\tilde{x}(k,m)$ is a well-defined real number for all $k \in \left[-1/3,1\right]$ and $m\in\Omega := \left(1,\omega\right]$, where $\omega =\dfrac{4}{9} +\dfrac{11}{27\sqrt[3]{\dfrac{69+16\sqrt{3}}{243}}} +\sqrt[3]{\dfrac{69+16\sqrt{3}}{243}}\approx 1.734$. For $m>\omega$ there exist some values of $k\in [-1/3,1]$ where $\tilde{x}(k,m)$ is still well defined. Nevertheless, in the phylogenetic framework we are working in, it is enough to consider the domain $\Omega= \left(1,\omega\right]$. As the parameter of $x^*$ corresponding to the interior edge is $1$, $\varphi_{T}(x^*)$ belongs to the intersection of the three phylogenetic varieties ${\mathcal{V}}_{12\mid 34}\cap{\mathcal{V}}_{13\mid 24}\cap{\mathcal{V}}_{14\mid 23} $ (see also Lemma \[k3\_id\_prop\]); for that reason it is natural to ask whether $x^*=\big(\kappa(k,m), 1, \kappa(k,m), 1, 1\big)$ is also a local minimum of the optimization problem \[OPT\] for $T=13|24$ or $T=14|23$. \[thm:LocMin\] If $k\in [-1/3,1]$ and $m\in\Omega$, then $x^*=\big(\kappa(k,m), 1, \kappa(k,m), 1, 1\big)$ is a local minimum of the optimization problem \[OPT\] where $T$ is either $12|34$ or $13|24$. In order to prove that $x^*$ is a local minimum we first show that $x^*$ satisfies the Karush-Kuhn-Tucker (KKT) conditions defined in Section \[KKT\] for some KKT multipliers $\mu_{1,i}$, $\mu_{2,i}$, $i=1,\dots, 5$. Assume first that $\tilde{x}(k,m) < 1$. Then we observe that $\dfrac{\partial f_{12|34}}{\partial x_1}\bigg\rvert_{x^*} = \dfrac{\partial f_{12|34}}{\partial x_3}\bigg\rvert_{x^*} = 0$. Moreover we have that $g_{1,i}(\tilde{x}(k,m), 1, \tilde{x}(k,m), 1, 1)$ is $0$ for $i=2,4,5$, $g_{1,i}(\tilde{x}(k,m), 1, \tilde{x}(k,m), 1, 1) \neq 0$ for $i=1,3$ and $g_{2,i}(\tilde{x}(k,m), 1, \tilde{x}(k,m), 1, 1) \neq 0$ $\forall i$. Therefore, by $(iii)$ of the KKT conditions, we need to take $$\begin{aligned} \mu_{2,i} & = & 0, \mbox{ for } i=1,\ldots,5 \\ \mu_{1,i} & = & 0 \mbox{ for } i = 1,3.
\end{aligned}$$ Moreover, $\nabla g_{1,i}(x) = (0, \ldots, 1, \ldots, 0)^t$, with the $1$ in position $i$, for all $i$ and for every $x$. Therefore condition $(i)$, $$-\nabla f_{12|34}(x^*) = \mu_{1,2}\nabla g_{1,2}(x^*) + \mu_{1,4}\nabla g_{1,4}(x^*) + \mu_{1,5}\nabla g_{1,5}(x^*),$$ is equivalent to $$\left(0, \dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*}, 0, \left.\dfrac{\partial f_{12|34}}{\partial x_4}\right\rvert_{x^*}, \dfrac{\partial f_{12|34}}{\partial x_5}\bigg\rvert_{x^*}\right)^t = -(0, \mu_{1,2}, 0, \mu_{1,4}, \mu_{1,5})^t,$$ which implies that necessarily $$\begin{aligned} \mu_{1,2} & = & - \dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*} \\ \mu_{1,4} & = & -\left.\dfrac{\partial f_{12|34}}{\partial x_4}\right\rvert_{x^*} \\ \mu_{1,5}& = & -\dfrac{\partial f_{12|34}}{\partial x_5}\bigg\rvert_{x^*} \, . \end{aligned}$$ Because of condition $(ii)$, to conclude it is enough to show that these partial derivatives are negative. This is proven in part $a)$ of Lemma \[lema:dif2\_negativa\], Lemma \[lema:dif4\_negativa\] and Lemma \[lema:dif5\_negativa\] of the Appendix. Moreover, $x^*$ is a minimum because according to Lemma \[lema:minim\] $a)$ (see Appendix), $x^*$ is a minimum of the function $f_{12|34}$ restricted to the boundary $x_2=x_4=x_5=1$. If $\tilde{x}(k,m)$ is greater than $1$, by the KKT conditions and the same reasoning as before we need to prove that $\dfrac{\partial f_{12|34}}{\partial x_i}\bigg\rvert_{x^*}$ is negative for every $i$, since none of the partial derivatives of $f_{12|34}$ vanishes at $x^*$. This is proven in part $b)$ of Lemma \[lema:dif2\_negativa\], Lemma \[lema:dif4\_negativa\], Lemma \[lema:dif5\_negativa\] and Lemma \[lema:minim\] of the Appendix. Therefore $x^*$ is a local optimum.
The proof for the topology $13|24$ follows directly from the previous results since the function $f_{13|24}$ satisfies $\dfrac{\partial f_{13|24}}{\partial x_i}\bigg\rvert_{x^*} = \dfrac{\partial f_{12|34}}{\partial x_i}\bigg\rvert_{x^*}$ for $i = 2,4$ and $\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{x^*}$ is also negative by Lemma \[lema:dif5\_1324\_negativa\]. Global minimum {#GlobMin} -------------- \[globalP0\] Let $P_0:=\varphi_{T}\left(k_0, 1, k_0, 1, m_0\right)$. If $k_0\in [-1/3,1]$ and $m_0\in\Omega$, then $$d(P_0, {\mathcal{V}}_T^+) = d\big(P_0, \varphi_{T}\big(\tilde{x}(k_0,m_0), 1, \tilde{x}(k_0,m_0), 1, 1\big)\big).$$ We have tested the conjecture for $1000$ pairs of parameters $(k,m)$ randomly chosen on the region $\left(0, 1/4\right] \times \left(1, 3/2\right]$ in order to simulate points close to the LBA phenomenon. Every experiment has verified that the global minimum of the problem is the point $x^* = \big(\kappa(k,m), 1, \kappa(k,m), 1, 1\big)$, where $\kappa(k,m)$ is defined as in (\[kappa\]) and which was proved above to be a local minimum. A list of the tested parameters $k$ and $m$ can be found in (\[github\_marina\]). In the cases where the conjecture is satisfied, we have: \[thm\_lba\] Let $k_0\in [-1/3,1]$, $m_0\in\Omega$ and assume that $P_0:=\varphi_{T}\left(k_0, 1, k_0, 1, m_0\right)\in{\mathcal{V}}_T$ satisfies $d(P_0, {\mathcal{V}}_T^+) = d\big(P_0, \varphi_{T}\big(\tilde{x}(k_0,m_0), 1, \tilde{x}(k_0,m_0), 1, 1\big)\big)$ with $\tilde{x}(k_0,m_0) \neq 1$. Then, if $P$ is close enough to $P_0$ and $T'\neq T$ is another tree in ${\mathcal{T}}$, its closest point in ${\mathcal{V}}_{T}^+$ belongs also to ${\mathcal{V}}^+_{T'}$.
In particular, $$d(P,{\mathcal{V}}_T^+) \geq d(P,{\mathcal{V}}_{T'}^+).$$ Let $W_{P_0} := \left\{ \varphi_T(x) \ \bigg\rvert x\in \, \mathcal{D} \mbox{ and } \dfrac{\partial f_T}{\partial x_i}\bigg\rvert_{x} = 0\ \forall i \in \left[1, \ldots, 5 \right] \right\}$ be the image of the set of critical points of $f_T$ that satisfy the problem constraints. Write $\mathcal{B}_{x_5=1}{\mathcal{V}}_T$ for the set of border points $\varphi_T(x)\in{\mathcal{V}}_T^+$ with $x = (x_1,\ldots,x_4,1)$. If $P_0^+$ is the closest point in ${\mathcal{V}}_T^+$ to $P_0$, then $P_0^+\in \mathcal{B}_{x_5=1}{\mathcal{V}}_T\setminus W_{P_0}$ because the hypothesis $P_0^+ =\varphi_{T}\big(\tilde{x}(k_0,m_0), 1, \tilde{x}(k_0,m_0), 1, 1\big)$ implies that the partial derivatives are non-zero (see Lemmas \[lema:dif2\_negativa\], \[lema:dif4\_negativa\] and \[lema:dif5\_negativa\]). Define $g(P) := d(P, W_P) - d(P,\mathcal{B}_{x_5=1} {\mathcal{V}}_T)$. Therefore $g(P_0) > 0$ and $g(P)$ is also positive if $P$ is close enough to $P_0$. It follows that $d(P, W_P) > d(P,\mathcal{B}_{x_5=1} {\mathcal{V}}_T)$ and therefore $$\min_{Q\in{\mathcal{V}}_T^+} d(P,Q) = \min_{Q\in\mathcal{B}_{x_5=1} {\mathcal{V}}_T} d(P,Q).$$\ Finally, $ d(P,{\mathcal{V}}_{T'}^+) \leq \min_{Q\in\mathcal{B}_{x_5=1} {\mathcal{V}}_{T'}} d(P,Q)= \min_{Q\in\mathcal{B}_{x_5=1} {\mathcal{V}}_T} d(P,Q) = d(P,{\mathcal{V}}_T^+)$ since $\mathcal{B}_{x_5=1}{\mathcal{V}}_T =\mathcal{B}_{x_5=1} {\mathcal{V}}_{T'} \subset {\mathcal{V}}^+_T \cap {\mathcal{V}}^+_{T'}$. Study on simulated data {#sec:simulations} ======================= In this section we simulate points close to a given phylogenetic variety and we compute their distance to the stochastic part of this variety as well as to the other phylogenetic varieties (distinguishing also the stochastic part of the varieties). We do this in the setting of long branch attraction of the previous section and for balanced trees.
We cannot do this theoretically because, even if we have found a local minimum for the long branch attraction setting (Theorem \[thm:LocMin\]), we cannot guarantee that it is global, and also because we do not know the exact distance when the input does not lie on the variety. To do the computations of this section we use Algorithm \[algo\]. We consider a $4$-leaf tree $12|34$ with $JC69$ matrices. Suppose $k_a$ and $k_b$ are the eigenvalues of the matrices at the exterior edges and $M$ is a $JC69$ matrix at the interior edge, with eigenvalue $m$ taking values in the interval $\left[0.94,1.06\right]$ (see Figure \[Im:simul\_tree\]). These trees represent points in ${\mathcal{V}}_{12|34}$ that range from the stochastic part of the variety ${\mathcal{V}}_{12\mid 34}^+$ (that is, $m\leq1$) to the non-stochastic part ($m>1$). For each set of parameters we considered $100$ data points, each corresponding to the observation of $10000$ independent samples from the corresponding multinomial distribution $\varphi_T(k_a,k_b,k_a,k_b,m)$. ![Tree $12|34$ with distribution $P = \varphi(k_a, k_b, k_a, k_b, m)$[]{data-label="Im:simul_tree"}](simul_tree_v3.png){width="4.5cm"} For each data point $P$ generated as above and for each tree $T\in {\mathcal{T}}$, we have computed the distance of $P$ to the stochastic part of the variety ${\mathcal{V}}_T^+$, $d(P,{\mathcal{V}}_{T}^+)$, using Algorithm \[algo\], and we have also computed the distance to the complete variety, $d(P,{\mathcal{V}}_{T})$. These computations have been performed for the three tree topologies $12|34$, $13|24$ and $14|23$. For each set of parameters $k_a,k_b$ and $m$ we have plotted the average of each of these distances computed from the $100$ data points. In each graphic we have fixed $k_a$ and $k_b$ and let $m$ vary in the $x$-axis from $0.94$ to $1.06$; the $y$-axis represents the distance.
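The sampling step just described is straightforward to reproduce. Below is a minimal sketch in Python; the distribution `p_true` is a hypothetical low-dimensional stand-in (not the actual $\varphi_T(k_a,k_b,k_a,k_b,m)$, which lives in ${\mathbb{R}}^{256}$ and depends on the JC69 parametrization), and all helper names are ours:

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

def empirical_distribution(p, n=10000):
    """Observe n independent samples from the multinomial with probabilities p
    and return the vector of relative frequencies."""
    counts = [0] * len(p)
    for i in random.choices(range(len(p)), weights=p, k=n):
        counts[i] += 1
    return [c / n for c in counts]

def euclidean(p, q):
    """Euclidean distance between two vectors of the same length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# hypothetical 4-state distribution standing in for phi_T(k_a, k_b, k_a, k_b, m)
p_true = [0.4, 0.3, 0.2, 0.1]

# 100 simulated data points, as in the experiments; here we average their
# distance back to p_true (the paper instead projects onto each variety)
distances = [euclidean(empirical_distribution(p_true), p_true) for _ in range(100)]
average_distance = sum(distances) / len(distances)
```

In the actual experiments each empirical point is compared with the varieties via Algorithm \[algo\] rather than with the generating distribution directly; this sketch only illustrates the multinomial sampling and the averaging of Euclidean distances.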
The grey background part of the plots represents the region of data points sampled from the non-stochastic part of the variety, whereas the white part represents the stochastic part. ![Eigenvalues $k_a=0.37$ and $k_b=0.87$. On the left: distance to the phylogenetic varieties ${\mathcal{V}}_T$. On the right: distance to the stochastic part of the varieties ${\mathcal{V}}_T^+$.[]{data-label="Fig:PV_75"}](InkedPV_a75_2_LI.jpg){width="14cm"} ![Eigenvalues $k_a = k_b = 0.51$. On the left: distance to the phylogenetic varieties ${\mathcal{V}}_T$. On the right: distance to the stochastic part of the varieties ${\mathcal{V}}_T^+$.[]{data-label="Fig:PV_50"}](InkedPV_50_2_LI.jpg){width="14cm"} The first plot (Figure \[Fig:PV\_75\]) represents trees in the long branch attraction phenomenon and the second one (Figure \[Fig:PV\_50\]) represents balanced trees. In both cases we observe a similar behaviour. The distance to the variety ${\mathcal{V}}_{12|34}$ is in general smaller for all values of $m$ (except when we are really close to the intersection). But if we look at the distance to the stochastic variety, we see that when $m>1$ the distance to ${\mathcal{V}}_{12|34}^+$ becomes greater than the distance to the other stochastic varieties, which confirms the inequality of Theorem \[thm\_lba\]. However, for $m<1$ the distance to ${\mathcal{V}}_{12|34}^+$ is always the smallest. The different behaviour of the distances to ${\mathcal{V}}_{13|24}^+$ and ${\mathcal{V}}_{14|23}^+$ in the two plots is due to the shapes of the trees that we are considering. In the case of balanced trees we see that the distances to ${\mathcal{V}}_{13|24}^+$ and ${\mathcal{V}}_{14|23}^+$ are almost equal. Every simulation performed has shown that, when $m>1$, the closest point to $P$ in ${\mathcal{V}}_{12\mid 34}^+$, $P_{12\mid 34}^+$, belongs to the intersection of the varieties, i.e. $P_{12\mid 34}^+\in{\mathcal{V}}_{12|34}^+\cap{\mathcal{V}}_{13|24}^+\cap{\mathcal{V}}_{14|23}^+$.
However, this is not true when we compute the closest point in ${\mathcal{V}}_{T'}^+$ for $T' \neq 12|34$. In the case of long branch attraction (see Figure \[Fig:PV\_75\]) the closest point $P_{14|23}^+\in {\mathcal{V}}_{14\mid 23}^+$ to $P$ is always in the interior of the stochastic variety ${\mathcal{V}}_{14| 23}^+$, whereas for $T=13 |24$ the closest point to $P$ lies in the interior of ${\mathcal{V}}_{13|24}$ approximately half of the time. These simulations verify that if $P\in {\mathbb{R}}^{4^4}$ is a distribution satisfying $d(P,\mathcal{V}_{12|34}) <\min\{d(P,\mathcal{V}_{13|24}),d(P,\mathcal{V}_{14|23})\}$, it is possible that $d(P,\mathcal{V}_{12|34}^+) > \min\{d(P,\mathcal{V}_{13|24}^+),d(P,\mathcal{V}_{14|23}^+)\}$. This provides an affirmative answer to Question $1$ posed at the beginning of the paper. This suggests that considering the stochastic part of phylogenetic varieties and the semi-algebraic constraints needed to describe them may be an interesting strategy for phylogenetic reconstruction in the long branch attraction setting, and also for balanced trees. However, as has become evident throughout this paper, dealing with both algebraic and semi-algebraic conditions is not an easy task, and more work is needed in order to design practical methods for phylogenetic inference under more general evolutionary models than the ones used here. Computations ------------ The computations were performed on a machine with $10$ Dual Core Intel(R) Xeon(R) Silver $4114$ processors ($2.20$ GHz, $13.75$M cache) equipped with $256$ GB RAM running Ubuntu $18.04.2$. We have used `Macaulay2` version $1.3$ and `SageMath` version $8.6$. [10]{} Species tree inference by the star method and its generalizations. , 1 (2013), 50–61. . , 4 (2017), 620–636. Phylogenetic invariants of the general [M]{}arkov model of sequence mutation. (2003), 113–144. . Cambridge University Press, January 2004. ISBN 0-521-52586-1.
Quartets and parameter recovery for the general [M]{}arkov model of sequence mutation. (2004), 107–132. Phylogenetic invariants. In [*Reconstructing Evolution*]{}, O. Gascuel and M. A. Steel, Eds. Oxford University Press, 2007. The identifiability of covarion models in phylogenetics. (2009), 76–88. A semialgebraic description of the general [M]{}arkov model on phylogenetic trees. (12 2012). The [M]{}agma algebra system. [I]{}. [T]{}he user language. , 3-4 (1997), 235–265. Computational algebra and number theory (London, 1993). Performance of a new invariants method on homogeneous and nonhomogeneous quartet trees. (2007), 288–293. . , [3]{} ([2008]{}), [265–292]{}. . , [2]{} ([2015]{}), [203–225]{}. . , 23 (2014), 3317–3324. Identifiability of the unrooted species tree topology under the coalescent model with time-reversible substitution processes, site-specific rate variation, and invariable sites. (2015), 35–47. . Springer-Verlag, Berlin, Heidelberg, 2007. The [E]{}uclidean distance degree of an algebraic variety. (2015), 1–51. On the ideals of equivariant tree models. (2009), 619–644. Invariants of some probability models used in phylogenetic inference. (1993), 355–377. Invariant versus classical quartet inference when evolution is heterogeneous across sites and lineages. , 2 (2016), 280–291. Macaulay2, a software system for research in algebraic geometry. Available at <http://www.math.uiuc.edu/Macaulay2/>. Numerical algebraic geometry for model selection and its application to the life sciences. (10 2016). Interfacing with PHCpack. (01 2013), 20–25. Evolution of protein molecules. (1969), 21–132. Estimation of evolutionary distances between homologous nucleotide sequences. (1981), 1454–1458. An algebraic analysis of the two state [M]{}arkov model on tripod trees. , 1 (2012), 38–48. Maximum likelihood estimation of symmetric group-based models via numerical algebraic geometry. , 2 (2019), 337–360. Regularization algorithms for transition matrices. (2001), 23–40.
A rate-independent technique for analysis of nucleic acid sequences: evolutionary parsimony. (1987), 167–191. Fourier transform inequalities for phylogenetic trees. , 1 (2009), 89–95. A finite algorithm for finding the projection of a point onto the canonical simplex of ${\mathbb{R}}^n$. , 1 (July 1986), 195–200. . , 2019. . PHCpack: a general-purpose solver for polynomial systems by homotopy continuation. , 2 (1999), 251–276. Implicit inequality constraints in a binary tree model. (2011), 1276–1312. Technical proofs - local minimum {#App:TecProofs} ================================ Proof of Proposition \[kappa\_real\] ------------------------------------ \[propA1\] $\tilde{x}(k,m)$ is a real number for all $k \in I := \left[-1/3,1\right]$ and $m\in\Omega := \left(1,\omega\right]$, where $\omega =\dfrac{4}{9} +\dfrac{11}{27\sqrt[3]{\dfrac{69+16\sqrt{3}}{243}}} +\sqrt[3]{\dfrac{69+16\sqrt{3}}{243}}\approx 1.734$. To prove this result we need to verify that $\gamma(k,m) \neq 0$ and $\alpha(k,m)\geq 0$ for $k\in I,\ m\in\Omega$. Unless noted otherwise, we will assume $m>1$ throughout this argument. We first study when the denominator of $\kappa$ vanishes. $\gamma(k,m) = 0 $ if and only if $\gamma(k,m)^3= 9k\left(3m+1\right) + \sqrt{\alpha(k,m)} = 0$. This is equivalent to $$\label{y0} 9k\left(3m+1\right) = -\sqrt{\alpha(k,m)}.$$ Then $\alpha(k,m) - \big(9k\left(3m+1\right)\big)^2 = -\left(9k^2m+3k^2-4\right)^3 = 0$ if and only if $k = \pm\dfrac{2}{\sqrt{9m+3}}$. Only the negative solution for $k$ satisfies equation (\[y0\]). Note that $k = -\dfrac{2}{\sqrt{9m+3}}$ is always negative, and it is smaller than $-1/3$ if and only if $m < \dfrac{11}{3}$. Therefore, for all $k\in\left[-1/3, 1 \right]$ and $m < \omega < \dfrac{11}{3}$, $\kappa(k,m)$ is a real number, since ${\gamma(k,m)}$ is never zero. The following lemma finishes the proof. \[lema:alphaPos\] $\alpha(k,m)\geq 0 \ \forall k\in I,\ m\in\Omega$. Consider $\alpha_m(k) := \alpha(k,m)$ as a function of $k$.
$$\begin{aligned} \alpha_m(k) =& \underbrace{\left(-729m^3 -729m^2-243m-27\right)}_{a(m)}k^6 + \underbrace{\left(972m^2+648m+108\right)}_{b(m)}k^4 \\ &+ \underbrace{\left(729m^2+54m-63\right)}_{c(m)}k^2 + \underbrace{64}_{d} \end{aligned}$$ It is an even function of $k$ ($\alpha_m(k) = \alpha_m(-k)$) and it goes to minus infinity when $k$ goes to $\pm\infty$. Moreover, it is a polynomial of degree $6$ in $k$ with polynomials in $m$ as coefficients. This function has a local minimum at $k = 0$ since $$\begin{cases} \alpha_m(k) = a(m)k^6 + b(m)k^4 + c(m)k^2 + d \mbox{ and } \alpha_m(0) = d = 64 > 0, \\ \alpha'_m(k) = 6a(m)k^5 + 4b(m)k^3 + 2c(m)k \mbox{ and } \alpha'_m(0) = 0,\\ \alpha''_m(k) = 30a(m)k^4 + 12b(m)k^2 + 2c(m) \mbox{ and } \alpha''_m(0) = 2\left(729m^2+54m-63\right) > 0 \mbox{ for } m > 1. \end{cases}$$ Since $\alpha_m(k)$ is an even polynomial in $k$ with a positive local minimum at $k = 0$ and limit $-\infty$ when $k$ goes to $\pm\infty$, its number of real roots is even and at least two. Suppose $\alpha_m(k)$ had $4$ or $6$ real roots; then the number of local extrema of $\alpha_m(k)$ would be at least seven, but $\alpha'_m(k)$ has degree $5$, and therefore it has at most $5$ roots. Therefore $\alpha_m(k)$ has exactly $2$ real roots (one positive and one negative) and a minimum at $k = 0$. By Descartes' rule of signs we can count the number of positive (and negative) roots of $\alpha_m(k)$: Let $p(x)$ be a polynomial in one variable written in descending power order. Then the number of positive roots (counted with multiplicity) of $p(x)$ is either the number of sign changes between consecutive nonzero coefficients or less than it by an even number. For any $m>1$ we have $a(m) = -729m^3 -729m^2-243m-27 < 0$, $b(m) = 972m^2+648m+108 >0$, $c(m)=729m^2+54m-63>0$ and $d>0$; therefore $\alpha_m(k)$ has exactly one positive (and by symmetry one negative) root. We want to see now that this root does not belong to $I$ if $m\in\Omega$.
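The sign pattern used in this Descartes count is easy to double-check numerically. A sketch, assuming only the coefficient formulas of $\alpha_m(k)$ displayed above (the helper names are ours):

```python
def sign_changes(coeffs):
    """Sign changes between consecutive nonzero coefficients (Descartes' rule)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def alpha_m_coeffs(m):
    """Coefficients of alpha_m(k) in descending even powers k^6, k^4, k^2, k^0."""
    a = -729 * m**3 - 729 * m**2 - 243 * m - 27
    b = 972 * m**2 + 648 * m + 108
    c = 729 * m**2 + 54 * m - 63
    return [a, b, c, 64]

# for every m > 1 the pattern is (-, +, +, +): exactly one sign change in the
# even-power coefficients, hence exactly one positive root in k
for m in (1.0001, 1.2, 1.5, 1.734):
    assert sign_changes(alpha_m_coeffs(m)) == 1
```

At $m = 1$ the coefficients reduce to $(-1728, 1728, 720, 64)$, matching the polynomial $\alpha_{m_1}(k)$ used below.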
Define $m_1:=1$ and $m_2:= \omega\approx 1.734$. Then, $\alpha_{m_1}(k) = -1728k^6+1728k^4+720k^2+64$ is zero if and only if $k = \pm k_1 := \pm\dfrac{2\sqrt{3}}{3}\approx\pm1.154\not\in I$. Moreover, the roots of $\alpha_{m_2}(k) = 0$ are $\pm k_2 := \pm1$. ![In purple $\alpha_{m_1}(k)$, in green $\alpha_{m_2}(k)$](alpha.png){width="9cm"} It remains to see that for any $\hat{m}\in\Omega$, the positive root of $\alpha_{\hat{m}}(k) = 0$ belongs to $[k_2, k_1]$. To do so, first consider $\alpha(k,m)$ as a function of $m$, $$\begin{aligned} \alpha_k(m) =& \underbrace{\left(-729k^6\right)}_{a(k)}m^3 + \underbrace{243\left(-3k^6+4k^4+3k^2\right)}_{b(k)}m^2 + \underbrace{27\left(-9k^6+24k^4+2k^2\right)}_{c(k)}m \\ & \underbrace{-27k^6+108k^4-63k^2+64}_{d(k)}. \end{aligned}$$ This exhibits $\alpha_k(m)$ as a degree $3$ polynomial in $m$, which has a unique real root since its discriminant $$\begin{aligned} \mathcal{D}_{\alpha_k(m)} &= 18a(k)b(k)c(k)d(k) - 4b(k)^3d(k) + b(k)^2c(k)^2 - 4a(k)c(k)^3 - 27a(k)^2d(k)^2 \nonumber \\ &= -99179645184(k^6 + 3 k^8) \label{eq:discriminant} \end{aligned}$$ is negative for all $k$ different from zero. The region where this real root is positive can be determined by using Descartes' rule of signs. Let us study the sign of the coefficients: - $a(k) = - 729k^6 <0 \ \forall k\neq0.$ - $b(k) = 243\left(-3k^6+4k^4+3k^2\right) = 0$ if and only if $k^2(-3k^4+4k^2+3) = 0$. The real solutions of this polynomial are $k = 0$, $k = r_b := +\sqrt{\dfrac{2+\sqrt{13}}{3}}$ and $k = -r_b$. Evaluating $b(k)$ we see that $b(k) < 0$ for $|k| > r_b$ and $b(k) > 0$ for $-r_b < k < r_b$ and $k\neq0.$ - $c(k) = 27\left(-9k^6+24k^4+2k^2\right) = 0$ if and only if $k^2(-9k^4+24k^2+2) = 0$. The real solutions of $c(k)$ are $k = 0$, $k = r_c :=+\sqrt{\dfrac{4}{3}+\sqrt{2}}$ and $k = -r_c$. Again, evaluating the polynomial we see that $c(k) < 0$ for $|k| > r_c$ and $c(k) > 0$ for $-r_c < k < r_c$ and $k\neq0.$ - $d(k) = -27k^6+108k^4-63k^2+64$.
In this case the roots of $d(k)=0$ are $k = r_d := +\sqrt{\dfrac{4}{3} + \sqrt[3]{2 - \sqrt{3}} + \sqrt[3]{2 + \sqrt{3}}}$ and $k = -r_d$. Thus, $d(k) > 0$ when $-r_d<k<r_d$ and $k\neq0$, and $d(k)<0$ if $|k|>r_d$. ![Sign of the coefficients of $\alpha_k(m)$](discriminant_alpha_m.png){width="12cm"} For any $k\in [-r_d,r_d]$ there is only one sign change between consecutive coefficients, since $r_b<r_c<r_d$. Therefore the real root of $\alpha_k(m)$ is positive for any $-r_d < k < r_d$ different from zero, and negative if $|k|>r_d$. Since $r_d \approx 1.8786 > k_1$, $\alpha_k(m)$ always has one positive root for any $k\in[k_2, k_1]$.\ As a consequence of the following lemma, the positive root of $\alpha_{\hat{m}}(k)$ is in $[k_2, k_1]$ for any $\hat{m}\in[m_1, m_2]$. Therefore $\alpha(k,m)$ is never zero for any $k\in I$ and $m\in\Omega$ and, since it is a continuous function, its sign is constant on this domain. Evaluating the function we check that $\alpha(k,m)$ is positive on the defined region. Let $\textbf{k}:[m_1, m_2] \to [k_2, k_1]$ be the positive solution of $\alpha_m(k)=0$ (so that $\alpha(\textbf{k}(m), m)=0\ \forall m\in[m_1,m_2]$). Then $\textbf{k}(m)$ is continuous and injective. **Proof** As observed in the proof of Lemma \[lema:alphaPos\], $\alpha_m(k)$ has one and only one real positive root for any $m>1$. Therefore $\textbf{k}(m)$ is well defined, and it is continuous by the Implicit Function Theorem. Recall that $\textbf{k}(m_1) > \textbf{k}(m_2)$ but $m_1< m_2$. Then consider the following two cases: - $\textbf{k}(m)$ is strictly decreasing, and therefore injective. - $\textbf{k}(m)$ is not strictly decreasing. Since it is continuous and $\textbf{k}(m_1) > \textbf{k}(m_2)$, there exist some $m',m''\in[m_1,m_2]$ such that $\textbf{k}(m') = \textbf{k}(m'') =: \bar{k}$. Therefore $\alpha(\textbf{k}(m'),m') = \alpha(\textbf{k}(m''),m'') = 0$ and $\alpha(\bar{k},m') = \alpha(\bar{k},m'') = 0$. Moreover $\alpha_{\bar{k}}(m') = \alpha_{\bar{k}}(m'') = 0$.
This is not possible since, as noted in the proof of Lemma \[lema:alphaPos\], $\alpha_{\bar{k}}(m)$ has a unique real root. Proof of Theorem \[thm:LocMin\] ------------------------------- The proofs of the following lemmas will all be divided into two parts. In the first one we assume $\tilde{x}(k,m)<1$, and in the other one $\tilde{x}(k,m)$ is assumed to be greater than or equal to $1$. For that reason, we start by studying for which parameters $k$ and $m$ the value $\tilde{x}(k,m)$ equals $1$. \[lema:m\_est\] $\tilde{x}(k,m)$ equals $1$ if and only if $m = m^* :=\dfrac{-3k^2-k+16}{3k(3k+1)}$. Moreover, $\tilde{x}(k,m) > 1$ if and only if $m>m^*$. The ideas and arguments presented in the following proof will also be used in the remaining proofs of this section. They are based on basic concepts and results of *Elimination Theory*; a good general reference is Chapter $3$ of [@Cox:2007]. Consider the new variables $x$, $g$ and $a$, which allow us to make explicit the relations among $\tilde{x}(k,m)$, $\gamma(k,m)$ and $\alpha(k,m)$. Then $\tilde{x}(k,m) - 1$ is zero if and only if $(k,m)$ is a solution of the system of equations: $$\label{eq:defPolys} \begin{cases} p(x) := x-1 = 0\\ p_{\tilde{x}}(x,g,k,m) := 36xg - 36g^2 - 9k^2m + 3k^2 +4 = 0\\ p_{\gamma}(g,a,k,m) := 216g^3 - 9k\left(3m + 1\right) - a = 0\\ p_{\alpha}(a,k,m) := a^2 - \alpha(k,m) = 0 \end{cases}$$ The polynomials $p_{\tilde{x}}$, $p_{\gamma}$ and $p_{\alpha}$ encode the relations introduced in (\[kappa\]), (\[eq:gamma\]) and (\[eq:alpha\]), respectively. Define the ideal $\mathcal{I} := \left(p(x),p_{\tilde{x}}(x,g,k,m), p_{\gamma}(g,a,k,m), p_{\alpha}(a,k,m)\right)$ and compute the elimination ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$. According to Lemma $1$ and Theorem $3$ in Section $3.2$ of [@Cox:2007], the variety ${\mathcal{V}}(\mathcal{I}\cap{\mathbb{C}}[k,m])$ is the smallest algebraic variety containing the points $(k,m)$ that correspond to points in ${\mathcal{V}}(\mathcal{I})$.
However, this inclusion may be strict, and there are points $(k,m)\in{\mathcal{V}}(\mathcal{I}\cap{\mathbb{C}}[k,m])$ that do not extend to solutions of (\[eq:defPolys\]). In this case, the ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$ is generated by the polynomial: $$\begin{aligned} j\left(k,m\right) = \left(9k^2m+3k^2-4\right)^3\left(9k^2m+3k^2+3km+k-16\right). \end{aligned}$$ Studying the solutions of $j(k,m)=0$, one can see that the polynomials in (\[eq:defPolys\]) vanish simultaneously if and only if $m=m^*$. Moreover, by checking any point with $m>m^*$ we see that $\tilde{x}(k,m)>1$ if and only if $m>m^*$. \[lema:dif2\_negativa\] $\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*}$ is negative for all $(k,m)\in I\times\Omega$. The proof falls naturally into two cases. $a)$ Suppose $\tilde{x}<1$: By definition, $\kappa(k,m) = \tilde{x}(k,m)$ in this case. Therefore, $$\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*} = 54 \tilde{x}^4 +\left(-36k^2m - 18k^2 + 36\right)\tilde{x}^2 - \left(30km + 6k\right) \tilde{x}- 6m + 6$$ To prove that this function is negative we prove that it never vanishes on $I\times\Omega$ and that it is negative for a particular value in that region. $\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*}$ is zero if and only if the following polynomials vanish: $$\label{eq:dif_x2} \begin{cases} p(x,k,m) = 54x^4 +\left(-36k^2m - 18k^2 + 36\right)x^2 - \left(30km + 6k\right)x - 6m + 6,\\ p_{\tilde{x}}(x,g,k,m), \\ p_{\gamma}(g,a,k,m), \\ p_{\alpha}(a,k,m) \\ \end{cases}$$ where $p_{\tilde{x}}(x,g,k,m)$, $p_{\gamma}(g,a,k,m)$ and $p_{\alpha}(a,k,m)$ are defined as in (\[eq:defPolys\]).
We consider the ideal $\mathcal{I} =\left( p(k,m,x), p_{\tilde{x}}(k,m,x,g), p_{\gamma}(k,m,x,g,a), p_{\alpha}(k,m,a)\right)$ and we compute the elimination ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$ which turns out to be generated by exactly one polynomial, $$\begin{aligned} j\left(k,m\right) =& \left(m-1\right)\left(3k^2+1\right)\left(9k^2m+3k^2-4\right)^3\left(81k^6m^3-27k^6m^2-45k^6m-9k^6+39k^4m^3\right.\\ &\left. +547k^4m^2+469k^4m+97k^4-1312k^2m^2-1120k^2m-256k^2-768m^2\right) \end{aligned}$$ The polynomial $j(k,m)$ is zero if and only if at least one of these polynomial vanishes: $$\begin{aligned} j_1\left(k,m\right) =&\ m-1 \label{eq:j1}\\ j_2\left(k,m\right) =&\ 3k^2+1 \label{eq:j2}\\ j_3\left(k,m\right) =&\ 9k^2m+3k^2-4 \label{eq:j3}\\ j_4\left(k,m\right) =&\ 81k^6m^3-27k^6m^2-45k^6m-9k^6+39k^4m^3+547k^4m^2\nonumber \\ & +469k^4m+97k^4-1312k^2m^2-1120k^2m-256k^2-768m^2\nonumber \end{aligned}$$ The first polynomial is zero when $m=1$, but $1\not\in\Omega$. The second one has no real solutions in $k$. Note that $j_3\left(k,m\right)$ is zero when $k^\pm(m) = \pm\dfrac{2}{\sqrt{9m+3}}$. However $k^- \not\in I$ if $m\in\Omega$ (see part (i) of the proof of Lemma \[kappa\_real\]) and $k^+(m)$ does not generate a solution of (\[eq:dif\_x2\]). The case of $j_4$ is not that simple. 
Consider it as a function of $m$: $$\begin{aligned} j_{4,k}\left(m\right) =& \underbrace{\left(81k^6 + 39k^4\right)}_{a(k)}m^3 + \underbrace{\left(-27k^6 + 547k^4 - 1312k^2 - 768\right)}_{b(k)}m^2 +\\ &\underbrace{\left(-45k^6 + 469k^4 - 1120k^2\right)}_{c(k)}m + \underbrace{\left(-9k^6 + 97k^4 - 256k^2\right)}_{d(k)} \end{aligned}$$ The discriminant (see (\[eq:discriminant\])) of $ j_{4,k}\left(m\right)$ is $$\begin{aligned} \mathcal{D}_{j_{4,k}}(k) &= 49152 k^2 (64 + 115 k^2 - 38 k^4 + 3 k^6)^2 (-2304 + 1020 k^2 - 340 k^4 + 39 k^6)\\ &= -49152 k^2 (\sqrt{6} - k) (\sqrt{6} + k) (384 - 106 k^2 + 39 k^4) (64 + 115 k^2 - 38 k^4 + 3 k^6)^2 \\ \end{aligned}$$ The polynomial $\mathcal{D}_{j_{4,k}}(k)$ has three real roots, at $k=0$ and $k=\pm\sqrt{6}\approx\pm 2.449$. Since $\mathcal{D}_{j_{4,k}}(-1) = \mathcal{D}_{j_{4,k}}(1) = -1615457157120 < 0$, we conclude $\mathcal{D}_{j_{4,k}}(k) \leq 0 \ \forall k\in I$, and hence $j_{4,k}(m)$ has only one real root for each $k$ in this interval. Evaluating at $m=1$ and $m=2$ we get: - $j_{4,k}(1) = 384(3k^4-7k^2-2) = 0$ if and only if $k=\pm\sqrt{\dfrac{7+\sqrt{73}}{6}}\approx\pm 1.609$. Moreover $j_{4,k}(1) < 0 \ \forall k\in I$ since $j_{4}(0,1) = - 768 <0.$ - $j_{4,k}(2) = 441k^6 + 3535k^4 - 7744k^2 - 3072$ is zero for $k\approx\pm1.4399$. At $k=0$ we have $j_{4}(0,2) = - 3072 <0$, so $j_{4,k}(2)$ is also negative $\forall k\in I$. This forces the root of $j_{4,k}(m)$ not to lie in $\Omega$, and consequently ${\mathcal{V}}(\mathcal{I}\cap{\mathbb{C}}[k,m])\cap I\times\Omega=\emptyset.$ Since $\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*}$ is continuous and well defined on $I\times\Omega$, it may be concluded that it has the same sign on the whole domain.
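The evaluations used in this argument can be reproduced in exact integer arithmetic. A sketch, using the expression for $j_4$ and the first displayed form of its discriminant (the function names are ours):

```python
def j4(k, m):
    """The polynomial j_4(k, m) from the elimination ideal above."""
    return (81*k**6*m**3 - 27*k**6*m**2 - 45*k**6*m - 9*k**6
            + 39*k**4*m**3 + 547*k**4*m**2 + 469*k**4*m + 97*k**4
            - 1312*k**2*m**2 - 1120*k**2*m - 256*k**2 - 768*m**2)

def disc_j4(k):
    """Discriminant of j4 viewed as a cubic in m (first displayed form)."""
    return (49152 * k**2 * (64 + 115*k**2 - 38*k**4 + 3*k**6)**2
            * (-2304 + 1020*k**2 - 340*k**4 + 39*k**6))

# disc_j4(1) reproduces the value -1615457157120 quoted in the proof,
# and j4(0, 1) = -768, j4(0, 2) = -3072 match the constant terms used above
```

Evaluating `j4(k, 1)` at integer points also confirms the collapsed form $384(3k^4-7k^2-2)$, e.g. both give $6912$ at $k=2$.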
Evaluating at any point $(k,m)\in I\times\Omega$ we conclude that $\dfrac{\partial f}{\partial x_2}\big(x^*\left(k,m\right)\big)$ is negative on this region.\ $b)$ Suppose that $\tilde{x} \geq 1$, then we prove that $\dfrac{\partial f}{\partial x_2}\bigg\rvert_{x = \left(1,1,1,1,1\right)} < 0$: The function $\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{\textbf{1}} = -18k^2 - 6(6k^2 + 5k + 1)m - 6k + 96$ is negative if and only if $m > \dfrac{-3k^2-k+16}{6k^2+5k+1}$. Moreover $\dfrac{-3k^2-k+16}{6k^2+5k+1} < m^*$ for all $k<5/3$ and therefore $\dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{\textbf{1}}$ is negative for all $m>m^*$. \[lema:dif4\_negativa\] $\dfrac{\partial f_{12|34}}{\partial x_4}\bigg\rvert_{x^*}$ is negative for all $(k,m)\in I\times\Omega$. Computing the partial derivative and substituting we get $\dfrac{\partial f_{12|34}}{\partial x_4}\bigg\rvert_{x^*} = \dfrac{\partial f_{12|34}}{\partial x_2}\bigg\rvert_{x^*}$. This follows from the symmetry on $f_{12|34}$ and on $x^*$. Therefore, Lemma \[lema:dif4\_negativa\] is a consequence of Lemma \[lema:dif2\_negativa\]. \[lema:dif5\_negativa\] $\dfrac{\partial f_{12|34}}{\partial x_5}\bigg\rvert_{x^*}$ is negative for all $(k,m)\in I\times\Omega$. $a)$ Suppose $\tilde{x}<1$: $$\begin{aligned} \dfrac{\partial f_{12|34}}{\partial x_5}\bigg\rvert_{x^*} = -54\tilde{x}^4 + (-54k^2m + 36)\tilde{x}^2 - 36km\tilde{x} - 6m + 6. \end{aligned}$$ In this case consider the ideal $\mathcal{I} = \left( p(x,k,m),p_{\tilde{x}}(x,g,k,m), p_{\gamma}(x,g,a,k,m), p_{\alpha}(a,k,m) \right)$ where $p(x,k,m) = -54x^4 + (-54k^2m + 36)x^2 - 36kmx - 6m + 6$. The ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$ is generated by the polynomial, $$j'(k,m) = j_1(k,m)\cdot j_2(k,m)\cdot j_3(k,m)\cdot j_4'(k,m),$$ where $j_4'(k,m) = 81k^4m^3+(-27k^4-288k^2-256)m^2+(-45k^4-96k^2)m-9k^4$ and $j_1$, $j_2$ and $j_3$ are defined as in (\[eq:j1\]), (\[eq:j2\]) and (\[eq:j3\]). 
We only need to study the intersection of $j'_4$ with $I\times\Omega$. Taking $j'_4$ as a function of $m$ we compute its discriminant, $$\mathcal{D}_{j'_{4,k}}(k) = -442368 k^6 (2 + 3 k^2) (128 + 18 k^2 + 27 k^4)$$ which has only one real root, at $k=0$. Evaluating at $k=\pm1$ we get $\mathcal{D}_{j'_{4,k}}(-1) = \mathcal{D}_{j'_{4,k}}(1) = -382648320 < 0$. Therefore $\mathcal{D}_{j'_{4,k}}(k) \leq 0 \ \forall k\in I$ and $j'_{4,k}(m)$ has exactly one real root. If $k\in I$ this root is not in $\Omega$, since $j_4'(k,1)= -384k^2 - 256 <0 \ \forall k$, and $j_4'(k,2)= 441k^4 - 1344k^2 - 1024 <0 \ \forall k\in I.$ The same argument as before shows that $\dfrac{\partial f}{\partial x_5}\big(x^*(k,m)\big)$ is negative on our domain.\ $b)$ Suppose $\tilde{x} \geq 1$: The function $\dfrac{\partial f}{\partial x_5}\bigg\rvert_{\textbf{1}} = -6(9k^2 + 6k + 1)m + 96$ is negative if and only if $m > \dfrac{16}{9k^2+6k+1}$. The value $m^*$ defined in Lemma \[lema:m\_est\] is greater than or equal to $\dfrac{16}{9k^2+6k+1}$ for all $k\in\left[-1/3, 1\right]$. Hence $\dfrac{\partial f}{\partial x_5}\bigg\rvert_{\textbf{1}}$ is negative for all $m>m^*$. \[lema:dif5\_1324\_negativa\] $\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{x^*}$ is negative for all $(k,m)\in I\times\Omega$. $a)$ Assume $\tilde{x}<1$, then: $$\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{x^*} = 48\tilde{x}^4 + (-36k^2m-12k^2+48)\tilde{x}^2 + (-36km-12k)\tilde{x}$$ Let $p(x,k,m) = 48x^4 + (-36k^2m-12k^2+48)x^2 + (-36km-12k)x$ and\ $\mathcal{I}=\left(p(x,k,m),p_{\tilde{x}}(x,g,k,m), p_{\gamma}(g,a,k,m), p_{\alpha}(a,k,m)\right)$. In this case the ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$ is generated by the polynomial $$j(k,m) = k^4(m-1)(3m+1)^3(9k^2m+3k^2-4)^3.$$ This polynomial vanishes if and only if $m =1$, $m=-1/3$, $m=\dfrac{4-3k^2}{9k^2}$ or $k = 0$. The first two possible values of $m$ do not belong to $\Omega$. The third one does not satisfy $p(x,k,m)=0$.
It only remains to study the case $k = 0$, which lies in ${\mathcal{V}}(\mathcal{I})$ for all values of $m$. However, $\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{x^*}$ vanishes only at $k=0$ in $I\times \Omega$, and for random values of $k$ and $m$ in this region (with $k$ both positive and negative) we have checked that it is always negative. $b)$ Suppose $\tilde{x} \geq 1$: The function $\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{\textbf{1}} = -6(9k^2 + 6k + 1)m + 96$ is negative if and only if $m > \dfrac{16}{9k^2+6k+1}$. The value $m^*$ obtained in Lemma \[lema:m\_est\] is greater than or equal to $\dfrac{16}{9k^2+6k+1}$ for all $k\in\left[-1/3, 1\right]$. Thus $\dfrac{\partial f_{13|24}}{\partial x_5}\bigg\rvert_{\textbf{1}}$ is negative for all $m>m^*$. \[lema:minim\] Let $g(x_1, x_3) = f(x_1, 1, x_3, 1, 1)$. Then $\bar{x} = \left(\tilde{x}(k,m), \tilde{x}(k,m)\right)$ is a minimum of $g$. $a)$ Assume $\tilde{x} \leq 1$. The first derivatives of $g(x_1, x_3)$ vanish at the point $\bar{x} = \left(\kappa(k,m), \kappa(k,m)\right)$. The Hessian matrix of $g$ evaluated at $\bar{x}$ is $$\label{eq:proj} H = \left(\begin{array}{cc} 72 \tilde{x}(k,m)^2 + 24 & -54k^2m - 18k^2 + 144 \tilde{x}(k,m)^2 \\ -54k^2m - 18k^2 + 144 \tilde{x}(k,m)^2 & 72 \tilde{x}(k,m)^2 + 24 \\ \end{array}\right)$$ We need to prove that $H$ is a positive definite matrix or, equivalently, that its leading principal minors are positive for all $k\in I$ and $m\in\Omega$. The first one is clearly positive, since it is a sum of positive numbers. To prove that the determinant of $H$ is also positive we follow the same ideas as in Lemma \[lema:dif2\_negativa\].
Consider the ideal $\mathcal{I} = \big(\det(M),p_{\tilde{x}}(x,g,k,m), p_{\gamma}(g,a,k,m), p_{\alpha}(a,k,m)\big)$ where $$\det(M) = 36\left(-9(3k^2m + k^2 - 8x^2)^2 + 16(3{x}^2 + 1)^2\right).$$ The elimination ideal $\mathcal{I}\cap{\mathbb{C}}[k,m]$ is generated by the polynomial $j(k,m) = j_1(k,m)j_2(k,m)j_3(k,m)$ where $$\begin{aligned} j_1 = &\ 729k^6m^3+729k^6m^2+243k^6m+27k^6-972k^4m^2-648k^4m-108k^4+432k^2m \\ & +144k^2-64,\\ j_2 = &\ 27k^2m^2-126k^2m-45k^2-64 \text{ and }\\ j_3 = &\ 729k^6m^3+729k^6m^2+243k^6m+27k^6-972k^4m^2-648k^4m-108k^4-729k^2m^2 \\ & -54k^2m+63k^2-64. \end{aligned}$$ We are interested in the zeros of each polynomial $j_i$. The first one, $j_1(k,m)$, only vanishes if $m = \dfrac{4 -3k^2}{9k^2}$, but this value is not a zero of $\det(M)$. The polynomial $j_2(k,m)$ vanishes at $m = \dfrac{21k\pm8\sqrt{9k^2+3}}{9k}\not\in\Omega$ for any $k\in I$. Consider $j_{3,k}(m) = j_3(k,m)$ as a function of $m$. Its discriminant $\mathcal{D}(k) = -297538935552k^8 - 99179645184k^6$ is negative for all $k\neq 0$. Since $j_{3}(0,m) = -64$ for all $m$, the polynomial $j_{3,k}(m)$ has at most one real root $\forall k$. Moreover, $j_{3,k}(1)\leq 0$ and $j_{3,k}(\omega)\leq 0$ for all $k$, and hence $j_{3,k}$ is non-positive for all $m\in\Omega$. Therefore it can be deduced that $\det(M)$ has constant sign on the region $I\times\Omega$. Evaluating at any point of that region we can check that $\det(M) > 0$ for all $k\in I$ and $m\in\Omega$. Hence $H$ is a positive definite matrix for all $k\in I$ and $m\in\Omega$.\ $b)$ Assume $\tilde{x} \geq 1$. In this case, since we are on the boundary of the domain, by the KKT conditions we need to prove that both components of $\nabla g(1,1)$ are non-positive. The gradient $$\nabla g(1,1) = (-54k^2m - 18k^2 - 18km - 6k + 96, -54k^2m - 18k^2 - 18km - 6k + 96)$$ is zero if and only if $m = m^*$.
Moreover, for $m\geq m^*$, or equivalently for $\tilde{x}\geq 1$, the polynomial $-54k^2m - 18k^2 - 18km - 6k + 96$ is non-positive for all $k\in I$.
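The determinant identity behind $\det(M)$ in the proof of Lemma \[lema:minim\] holds for all $(k,m,x)$, not only at the critical point, and can be checked in exact rational arithmetic. A sketch (the sample point is arbitrary and ours):

```python
from fractions import Fraction as F

def hessian_entries(k, m, x):
    """Diagonal and off-diagonal entries of the 2x2 Hessian H from (eq:proj)."""
    h11 = 72 * x**2 + 24
    h12 = -54 * k**2 * m - 18 * k**2 + 144 * x**2
    return h11, h12

def det_M(k, m, x):
    """Determinant of H in the factored form used to build the elimination ideal."""
    return 36 * (-9 * (3 * k**2 * m + k**2 - 8 * x**2)**2 + 16 * (3 * x**2 + 1)**2)

# check the identity h11^2 - h12^2 == det_M exactly at a rational sample point
k, m, x = F(1, 2), F(3, 2), F(3, 4)
h11, h12 = hessian_entries(k, m, x)
assert h11**2 - h12**2 == det_M(k, m, x)
# at this point both leading principal minors are positive
assert h11 > 0 and det_M(k, m, x) > 0
```

Since $h_{11} = 24(3x^2+1)$ and $h_{12} = -18(3k^2m + k^2 - 8x^2)$, the factored form is just $h_{11}^2 - h_{12}^2$ with the common factor $36$ pulled out, which is why the check holds identically.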
--- abstract: 'This work proposes and analyzes the use of keystroke biometrics for content de-anonymization. Fake news have become a powerful tool to manipulate public opinion, especially during major events. In particular, the massive spread of fake news during the COVID-19 pandemic has forced governments and companies to fight against misinformation. In this context, the ability to link multiple accounts or profiles that spread such malicious content on the Internet while hiding in anonymity would enable proactive identification and blacklisting. Behavioral biometrics can be powerful tools in this fight. In this work, we have analyzed how the latest advances in keystroke biometric recognition can help to link behavioral typing patterns in experiments involving 100,000 users and more than 1 million typed sequences. Our proposed system is based on Recurrent Neural Networks adapted to the context of content de-anonymization. Given the challenge of linking the typed content of a target user within a pool of candidate profiles, our results show that keystroke recognition can be used to reduce the list of candidate profiles by more than 90%. In addition, when keystroke data is combined with auxiliary data (such as location), our system achieves a Rank-1 identification performance equal to 52.6% and 10.9% for a background candidate list composed of 1K and 100K profiles, respectively.' author: - bibliography: - 'submission\_example.bib' title: | Keystroke Biometrics in Response to\ Fake News Propagation in a Global Pandemic --- Introduction ============ In 2020, the COVID-19 pandemic is dominating worldwide media. In an overconnected world fueled by global panic, the propagation of fake news has achieved rates never seen before. Many of these fakes are based on ridiculous statements with scarce impact on a large percentage of society (e.g. drinking water kills the virus or cocaine cures the virus [@bbc; @nypost]).
However, other fake news are more sophisticated and are employed to modify public opinion, propagate panic, and destabilize governments. The usage of fake news to manipulate public opinion has become normal in recent years, especially when major events such as elections and referendums take place. But during the COVID-19 pandemic, the spread of massive quantities of fake news has forced social media platforms to act. Companies such as Facebook or Twitter are working hard to detect and reduce the spread of fake news and bot profiles. Facebook introduced, for example, a “context” option that provides background information for the sources of articles in its News Feed, and Twitter has professional fact checkers to identify false content [@newyorktimes; @forbes]. During the COVID-19 outbreak these fact checkers detected anonymous profiles publishing fake news that go directly against guidance from authoritative sources of global and local public health information, aiming to influence people to act against recommended guidance [@theguardian]. Data re-identification or de-anonymization is the practice of matching anonymous data with publicly available information, or auxiliary data, in order to discover the individual to whom the data belongs [@narayanan2008robust]. In the context of the fight against fake news, de-anonymization is useful to link multiple profiles belonging to the same user who is generating fake content. Once detected, these users can be blacklisted based on their profile, MAC address, IP address, or other account data. However, these safeguards can be circumvented by creating a new account, changing device, or using a Virtual Private Network (VPN). Biometric technologies such as keystroke dynamics can be used to mitigate this circumvention. Data de-anonymization can cover any type of data, from text to audio, image, or video, and is therefore a very challenging task [@shu2017fake; @agrawal2017multimodal; @suwajanakorn2017synthesizing].
Popular examples in this line are DeepFakes, which refer to deep learning based techniques able to create fake videos by swapping the face of a person with the face of another person [@2020_SurveyDeepFake_Tolosana]. In this work we focus on content that has been typed using a traditional keyboard (i.e. text). ![image](figures/block_diagram.pdf){width="\textwidth"} Keystroke biometric recognition enables the identification of users based on their typing behavior. During the last 15 years, the efforts of the keystroke biometrics scientific community have been mostly focused on verification scenarios with a limited number of users, typically fewer than several hundred. The architecture proposed in [@typenet2020], with experiments conducted on over $100$,$000$ users, opened new research opportunities and challenges. The results in a user verification scenario revealed the potential of scaling up keystroke recognition. However, the suitability of this biometric trait for a large-scale identification scenario remains unexplored in the literature. This work presents a feasibility study of content de-anonymization based on keystroke biometrics. To the best of our knowledge, this is the first work that analyzes keystroke identification for content de-anonymization. Our results suggest the potential of keystroke identification as a tool to improve the linkability between anonymous and verified profiles. The rest of the paper is organized as follows. Sec. \[state\] summarizes the state of the art in keystroke recognition. Sec. \[system\] defines the problem and presents the proposed system. Sec. \[data\] describes the experimental protocol, while in Sec. \[results\] we present the results. Sec. \[limitations\] discusses the limitations and privacy concerns. Finally, Sec. \[conclusions\] draws the conclusions.
Keystroke Biometrics:\ From fundamentals to the State of the Art {#state} ========================================= Keystroke biometric systems are commonly placed into two categories: *fixed-text*, where the keystroke sequence typed by the user is prefixed, such as a username or password, and *free-text*, where the keystroke sequence is arbitrary, such as writing an email or transcribing a sentence with typing errors. Free-text systems must therefore cope with different text content between training and testing. Biometric recognition systems can be applied to *verification* or *identification* tasks. Verification implies a 1:1 comparison to determine if the biometric sample belongs to the claimed identity. Identification implies 1:$N$ comparisons to determine the identity of the biometric sample from a pool of candidates. Biometric authentication algorithms based on keystroke dynamics for desktop and laptop keyboards have been predominantly studied for verification tasks in fixed-text scenarios, achieving accuracies higher than $95\%$ [@2016_IEEEAccess_KBOC_Aythami; @morales2014]. Approaches based on sample alignment (e.g. Dynamic Time Warping) [@2016_IEEEAccess_KBOC_Aythami], Manhattan distances [@Vinnie1], digraphs [@Bergadano], and statistical models (e.g. Hidden Markov Models) [@Ali] have achieved the best results in fixed-text verification. However, the performance of free-text algorithms is generally far from that reached in the fixed-text scenario, because the complexity and variability of free text entry contribute to intra-subject variations in behavior, challenging the ability to recognize users [@Sim]. Monrose and Rubin [@Monrose] proposed a free-text keystroke algorithm based on user profiling with the mean latency and standard deviation of digraphs, computing the Euclidean distance between each test sample and the reference profile.
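Monrose and Rubin's digraph profiling can be illustrated with a short sketch (function names are illustrative; for brevity the profile here keeps only the mean digraph latency, omitting the standard-deviation term of the original method):

```python
import math
from collections import defaultdict

def digraph_latencies(keys, press_times):
    """Map each digraph (pair of consecutive keys) to its press-to-press latencies."""
    lat = defaultdict(list)
    for i in range(len(keys) - 1):
        lat[(keys[i], keys[i + 1])].append(press_times[i + 1] - press_times[i])
    return lat

def build_profile(keys, press_times):
    """Reference profile: mean latency per digraph observed in the training text."""
    return {dg: sum(v) / len(v) for dg, v in digraph_latencies(keys, press_times).items()}

def distance(profile, keys, press_times):
    """Euclidean distance between a test sample and a profile over shared digraphs."""
    test = build_profile(keys, press_times)
    shared = profile.keys() & test.keys()
    if not shared:
        return float("inf")  # no common digraphs: nothing to compare
    return math.sqrt(sum((profile[dg] - test[dg]) ** 2 for dg in shared))
```

A test sample typed by the same user with the same rhythm lies closer to the profile than one typed at a different pace.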
Their correct classification rate worsened from $90\%$ to $23\%$ when they changed both users’ profiles and test samples from fixed-text to free-text. Gunetti and Picardi [@Gunetti] extended the previous algorithm to n-graphs. They calculated the duration of n-graphs common between training and testing and defined a distance function based on the duration and order. Their classification error of $7.33\%$ outperformed the previous state of the art. Recently, some algorithms based on statistical models have been shown to work very well with free-text, like the POHMM (Partially Observable Hidden Markov Model) [@Monaco]. The performance achieved with that approach in free-text is close to fixed-text, but it requires several hundred keystrokes and has only been evaluated on a database containing fewer than 100 users. The latest advances in deep learning and the availability of large scale databases have boosted the performance of free-text keystroke recognition biometrics only very recently. In [@typenet2020], a Deep Recurrent Neural Network architecture was presented with experiments on a database with $168$,$000$ users and $136$M keystrokes. Results obtained within a free-text verification scenario achieved error rates under $5\%$. Nevertheless, the performance of these algorithms in large scale identification scenarios remains unknown. This is one of the major contributions of this work. ![image](figures/fig_features.pdf){width="\textwidth"} Problem Statement and System Description {#system} ======================================== Problem Statement ----------------- Anonymous content is re-identified when multiple anonymous profiles are linked or associated to a verified profile. In our experiments, we assume that fake content was typed by an anonymous user who authored other verified content published on the same or a different platform. A *verified content* is defined in this work as content associated with a real identity (e.g.
personal social media account or digital profile certified by a third party). An *anonymous content* is defined as content published by a user who has not revealed his/her real identity (i.e. this content is usually associated with an alias or pseudonym). In this work, we assume that the timing sequences of the keyboard were captured during typing. This can occur when a user types content directly into a webpage. No special permissions are required to record the timestamps of keyboard input events generated on all major web browsers (e.g., Chrome, Firefox, Safari), and only the Tor Browser attempts to obfuscate typing behavior by lowering timestamp resolution [@torbrowser]. De-anonymization is achieved by comparing the typing characteristics of the anonymous and the verified contents using the biometric patterns associated with the keystroke dynamics derived from these timing sequences. Fig. \[Block\_Diagram\] presents the architecture of our proposed approach. The Anonymous typed content is first characterized according to the keystroke dynamics $\textbf{x}^\textrm{A}$ (A for Anonymous) associated with the sequences of time events $t$ and keycodes $k$. A Recurrent Neural Network is used to project the timing and keycode sequences into a feature space trained for keystroke verification. The generated feature vector $\textbf{f}(\textbf{x}^{\textrm{A}})$ characterizes the individual typing behavior of the anonymous subject. The feature vector $\textbf{f}(\textbf{x}^{\textrm{A}})$ is then matched with each of the $N$ feature vectors $\textbf{f}(\textbf{x}_i^{\textrm{V}})$ of a Verified content database $\mathfrak{B}$ (the “background”), composed of $N=\#\mathfrak{B}$ profiles. The result of the matching process is a ranked list with the $N$ profiles ordered by similarity to the anonymous subject.
Pre-processing and Keystroke Dynamics {#features} ------------------------------------- The raw data captured in each keystroke sequence is composed of a three-dimensional time series including: the keycodes, key press timestamps (corresponding to keydown events), and key release timestamps (corresponding to keyup events). In our experiments, timestamps were in UTC format with millisecond resolution (captured in a web browser), and the keycodes were integers between $0$ and $255$ according to the ASCII code. From this raw data, we extract $4$ temporal features popular in keystroke recognition (see Fig. \[features\_fig\] for details): (i) Hold Time ($t_{\textrm{H}}$): the elapsed time between press and release events; (ii) Inter-key Latency ($t_{\textrm{IL}}$): the elapsed time between releasing a key and pressing the next key; (iii) Press Latency ($t_{\textrm{PL}}$): the elapsed time between two consecutive press events; and (iv) Release Latency ($t_{\textrm{RL}}$): the elapsed time between two consecutive release events. These $4$ features are commonly used in both fixed-text and free-text keystroke systems [@Alsultan]. Finally, we include the keycodes as an additional feature. Let $L$ be the length of the keystroke sequence. The keycode and Hold Time features are calculated for each of the $L$ keys in the sequence, and latency features between consecutive keys ($t_{\textrm{IL}}$, $t_{\textrm{PL}}$, and $t_{\textrm{RL}}$) are calculated for the $L-1$ consecutive key pairs. This produces a time series with $L \times 2 + (L-1) \times 3$ values. All feature values are normalized before being provided as input to the model. Normalization is important so that the activation values of neurons in the input layer of the network do not saturate (i.e. all close to $1$). The keycodes are normalized between $0$ and $1$ by dividing each keycode by $255$, and the $4$ timing features are converted to seconds.
This scales most timing features between $0$ and $1$ as the average typing rate over the entire dataset is $5.1$ $\pm$ $2.1$ keys per second. Only latency features that occur either during very slow typing or long pauses exceed a value of $1$. Recurrent Neural Network Architecture {#architecture} ------------------------------------- We employ the Recurrent Neural Network (RNN) model proposed in [@typenet2020]. The model is composed of two Long Short-Term Memory (LSTM) layers of $128$ units. Between the LSTM layers there are batch normalization and dropout layers ($0.5$ drop rate) to avoid overfitting. Additionally, each LSTM layer has a $0.2$ recurrent dropout rate. The network was trained with more than $1$M keystroke sequences (over $50$M keystrokes) from $68$,$000$ different users (see Sec. \[protocol\] for details). The RNN was trained using a Siamese setup involving two inputs: two keystroke sequences from either the same or different users. During the training phase, the model learns the projections necessary to discriminate whether two keystroke sequences belong to the same user or not. The model acts as a feature extractor and outputs an embedding vector that contains the discriminating features (see [@typenet2020] for details). One constraint when training an RNN using standard backpropagation through time applied to a batch of sequences is that the number of elements in the time dimension (i.e. number of keystrokes) must be the same for all sequences. We fix the size of the time dimension to $M$. In order to train the model with sequences of different lengths $L$ within a single batch, we truncate the end of the input sequence when $L>M$ and zero pad at the end when $L<M$, in both cases to the fixed size $M$. Error gradients are not computed for zeroed elements, which do not contribute to the loss function in the iterative learning due to the Masking layer indicated in Fig. \[Block\_Diagram\].
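The preprocessing above can be sketched as follows (a minimal illustration, not the authors' code; here the latency features of the first key are zero-filled so that each of the $L$ rows has a fixed width of 5, which is one of several equivalent ways to arrange the $L \times 2 + (L-1) \times 3$ values):

```python
def extract_features(keycodes, press_ms, release_ms):
    """Per-key feature rows [keycode/255, t_H, t_IL, t_PL, t_RL], times in seconds.

    The first key has no previous key, so its three latency features are 0.
    """
    rows = []
    for i in range(len(keycodes)):
        t_h = (release_ms[i] - press_ms[i]) / 1000.0              # Hold Time
        if i == 0:
            t_il = t_pl = t_rl = 0.0
        else:
            t_il = (press_ms[i] - release_ms[i - 1]) / 1000.0     # Inter-key Latency
            t_pl = (press_ms[i] - press_ms[i - 1]) / 1000.0       # Press Latency
            t_rl = (release_ms[i] - release_ms[i - 1]) / 1000.0   # Release Latency
        rows.append([keycodes[i] / 255.0, t_h, t_il, t_pl, t_rl])
    return rows

def pad_or_truncate(rows, M, width=5):
    """Fix the time dimension to M: truncate the end if L > M, zero-pad if L < M."""
    rows = rows[:M]
    return rows + [[0.0] * width] * (M - len(rows))
```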
Finally, the output of the RNN model $\textbf{f}($**x**$)$ is an array of size $1 \times 128$ that we consider later as an embedding feature vector to identify anonymous content based on Euclidean distance. ![image](figures/fig_tsne.pdf){width="100.00000%"} Dataset and Experimental Protocol {#data} ================================= \[experimental\_protocol\] Dataset {#aalto} ------- All experiments were conducted with the Aalto University Dataset [@Dhakal], which comprises keystroke data collected from $168$,$000$ participants during a three-month time span. The acquisition task required subjects to memorize English sentences and then type them as quickly and accurately as they could. The English sentences were selected randomly from a set of $1$,$525$ examples taken from the Enron mobile email and Gigaword newswire corpora. The example sentences contained a minimum of $3$ words and a maximum of $70$ characters. Note that the sentences typed by the participants could contain more than $70$ characters as each participant could forget or add new characters when typing. For the data acquisition, the authors launched an online application that recorded the keystroke data from participants who visited their website and agreed to complete the acquisition task (i.e. the data was collected in an uncontrolled environment). Press (keydown) and release (keyup) event timings were recorded in the browser with millisecond resolution using the JavaScript function `Date.now`. All participants in the database completed $15$ sessions (i.e. one sentence per session) on either a physical desktop or laptop keyboard. The authors also reported demographic statistics: $72$% of the participants took a typing course, $218$ countries were involved, and $85$% of the participants had English as their native language. Experimental Protocol {#protocol} --------------------- The RNN was trained using the first $68$,$000$ users in the dataset according to the method proposed in [@typenet2020].
The size of the time dimension $M$ was fixed to $M=50$, which is short enough to consider small sentences. The remaining $100$,$000$ users were employed only to perform the evaluation of the de-anonymization system, so there is no data overlap between the two groups of users. Note that the model is unique for all the $100$,$000$ users in the evaluation set, and does not require specific training when a new target user is added. The $15$ sequences from the $100$,$000$ users in the database were divided into two groups that simulate a de-anonymization scenario: Verified ($10$ sequences) and Anonymous ($5$ sequences). We evaluated the de-anonymization accuracy by comparing the Anonymous set of samples $\textbf{x}_{j,l}^{\textrm{A}}$, with $l=1,...,5$ belonging to the user $j$ against the Background Verified set $\textbf{x}_{i,g}^{\textrm{V}}$, with $g=1,...,10$ belonging to all $100$,$000$ users. The distance was computed by averaging the Euclidean distances $||\cdot||$ between each Verified embedding vector $\textbf{f}(\textbf{x}_{i,g}^{\textrm{V}})$ and each Anonymous embedding vector $\textbf{f}(\textbf{x}_{j,l}^{\textrm{A}})$ as follows: $$\label{score} d_{i,j}= \frac{1}{10 \times 5}\sum_{g=1}^{10}\sum_{l=1}^{5} ||\textbf{f}(\textbf{x}_{i,g}^{\textrm{V}})-\textbf{f}(\textbf{x}_{j,l}^{\textrm{A}})||$$ We then re-identify an anonymous profile (i.e. Anonymous subject $j=J$ is the same Verified person $i=I$) as follows: $$\label{mindistance} I = \arg\min_i d_{i,J}$$ The results reported in the next section are computed in terms of the Cumulative Match Curve (CMC), which is a measure of $1$:$N$ identification system performance. The curves are calculated for each user and then averaged over all $100$,$000$ users. Rank-$1$ identification succeeds when $d_{I,J}<d_{i,J}$ for any $i \neq I$, while for Rank-$n$, instead of selecting a single Verified profile, we select the $n$ profiles with smallest distance $d_{i,J}$ and check whether profile $I$ is among them.
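The averaged-distance score and the arg-min rule above can be written as a pure-Python sketch (embeddings as plain lists; names are illustrative):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def score(verified, anonymous):
    """d_{i,j}: mean Euclidean distance over all Verified/Anonymous embedding pairs
    (10 x 5 = 50 pairs per comparison in the protocol above)."""
    total = sum(euclidean(fv, fa) for fv in verified for fa in anonymous)
    return total / (len(verified) * len(anonymous))

def identify(background, anonymous):
    """Return Verified profile indices ranked by increasing distance;
    element 0 is the Rank-1 (arg-min) identity."""
    d = [(score(v, anonymous), i) for i, v in enumerate(background)]
    return [i for _, i in sorted(d)]
```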
In forensic scenarios, it is traditional to use Rank-20, Rank-50, or Rank-100 in order to generate a short list of potential candidates that are finally identified manually using a bag of evidence. Experimental Results {#results} ==================== Discriminatory Potential of Keystroke Biometrics ------------------------------------------------ To ascertain the potential of the feature vectors generated by the keystroke model, we applied the popular data visualization algorithm t-SNE to the dataset. t-SNE is an algorithm to visualize high-dimensional data. This algorithm minimizes the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. Fig. \[tsne\] shows the projection of the keystroke embedding of three target users into a 2D space generated by the t-SNE algorithm. The t-SNE projection is unsupervised, but for interpretation purposes we have colored three groups:

- Background projections including embeddings of $30$,$000$ keystroke sequences from $2$,$000$ random users. These sequences serve to visualize the boundaries and shape of the feature space represented by t-SNE.

- Validated sequences ($10$ per user) typed by $3$ target users (each one in a different plot). These sequences model the target user’s typing behavior.

- Anonymous sequences ($5$ per user) typed by the $3$ target users. These sequences serve to re-identify the target users.

As we can see, the projections of the Validated and Anonymous keystroke embeddings from the target users are represented in close regions of the t-SNE space. Note that the t-SNE projection is not trained using the labels (i.e. identity) associated with each keystroke embedding. As a result, we observe that the personal typing patterns from the target users as represented by our deep network are discriminative enough in this large background set of $30$,$000$ different sequences.
These results illustrate the discriminatory information available in this biometric trait. ![CMC curves for different background sizes $N=\#\mathfrak{B}$. Pre-screening is based on the assumption that the location of the typist is available and can be used to reduce the number of candidates in the background set.[]{data-label="rank"}](figures/rank_curves.pdf){width="0.95\columnwidth"} De-anonymization Accuracy ------------------------- As a $1$:$N$ problem, the identification accuracy varies depending on the size of the background set $\mathfrak{B}$. In our experiments, the size of the background is equal to the number of verified profiles available for the comparison. Fig. \[rank\] and Table \[table\_acc\] present the de-anonymization performance for different background sizes. The results vary depending on the size of the background, with Rank-1 identification accuracy varying from $1.8\%$ for the largest background ($100$K) to $26.4\%$ for the smallest one ($1$K). These accuracies can be considered low for verification scenarios associated with user authentication. But in the context of profile de-anonymization, a $1.8\%$ accuracy among $100$,$000$ profiles means that $1$,$800$ profiles can be automatically de-anonymized using only the keystroke patterns of their owners. Rank-1 identification rate reveals the ability to unequivocally identify the anonymous target profile among all the verified profiles in the background set. Additionally, Rank-$n$ represents the achievable accuracy if we consider a ranked list of $n$ profiles from which the de-anonymization is then manually or automatically conducted based on additional evidence [@2018_INFFUS_MCSreview2_Fierrez]. In general, the results suggest that keystroke de-anonymization enables a $90\%$ size reduction of the candidate list while maintaining $100\%$ accuracy (see the CMC curves in Fig. \[rank\]).
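The CMC values discussed above can be computed from the per-query ranked lists as in this minimal sketch (illustrative, not the authors' evaluation code):

```python
def cmc(ranked_lists, true_ids):
    """Cumulative Match Curve: for each n, the fraction of queries whose
    true identity appears within the top-n candidates of its ranked list."""
    N = len(ranked_lists[0])          # background size (candidates per query)
    hits = [0] * N
    for ranking, true_id in zip(ranked_lists, true_ids):
        pos = ranking.index(true_id)  # 0-based rank of the true identity
        for n in range(pos, N):       # counted as a hit for every rank >= pos
            hits[n] += 1
    return [h / len(ranked_lists) for h in hits]
```

Rank-1 accuracy is the first element of the curve, and the curve is non-decreasing in $n$ by construction.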
The number of background profiles can be further reduced if auxiliary data is available to perform a pre-screening of the initial list of verified profiles (e.g. country, language). The Aalto University Dataset contains auxiliary data including age, country, gender, type of keyboard, and others. Fig. \[rank\] and Table \[table\_acc\] also show user identification accuracy over the entire background dataset with a pre-screening by country (i.e., contents generated in a country different to the country of the target user are removed from the background set). The results show that pre-screening based on a unique attribute is enough to largely improve the identification rate: Rank-1 identification with pre-screening ranges from $10.9\%$ to $52.6\%$, while Rank-100 ranges from $58.8\%$ to $99.9\%$. These results demonstrate the potential of keystroke dynamics for de-anonymization when auxiliary information is available. \[table\_acc\]

  ----------- ----------------- ----------------- -----------------
                  $N$ = 1K          $N$ = 10K         $N$ = 100K
  Rank-1       $26.4$ $(52.6)$   $8.1$ $(25.4)$    $1.8$ $(10.9)$
  Rank-50      $95.9$ $(99.5)$   $59.2$ $(79.7)$   $21.8$ $(48.4)$
  Rank-100     $98.9$ $(99.9)$   $73.1$ $(88.1)$   $31.3$ $(58.8)$
  Rank-1000    $100$ $(100)$     $98.9$ $(99.7)$   $74.2$ $(89.4)$
  Rank-5000    $-$               $99.9$ $(99.9)$   $96.1$ $(99.6)$
  ----------- ----------------- ----------------- -----------------

  : Identification accuracy (Rank-$n$ in %) for different background sizes $N$. In brackets, accuracy with pre-screening of the background dataset based on the location of the typist.

Limitations and Privacy Aspects {#limitations} =============================== The results presented in this work are encouraging. However, there are still some limitations regarding the application of this technology in the fight against fake news. The first one is that content must be typed. The spread of news by retweets or similar sharing mechanisms, which do not require use of the keyboard, is not detectable by this technology.
Second, bots are commonly employed for the propagation of fake content. It is not clear how the method proposed in this work would perform for synthetic behavior emulated by bots [@becaptcha]. Third, the identification performance decays for a large number of background profiles. Therefore, pre-screening is recommended to reduce the candidate list. On the other hand, biometric data is considered sensitive data in a number of regulations (e.g. paragraph 71, EU GDPR). Keystroke dynamics, as biometric data, must be processed according to appropriate technical and organizational methodologies. The proposed de-anonymization based on keystroke behaviors is a powerful tool that can help in the fight against misinformation. But at the same time, the misuse of this technology raises important concerns related to data protection and user privacy. In addition to the identification accuracy studied in the paper, a concrete application of the ideas developed here should also consider and evaluate a secure storage of the biometric templates, and other modules for privacy preservation [@bringer2013]. The balance is delicate in this case, as we should aim to de-anonymize problematic subjects while preserving the privacy rights and freedom of speech of the overall population at the same time. Careful consideration of such security and privacy aspects is out of the scope of the present paper and can be investigated elsewhere [@Kindt2013; @Campisi2013]. Conclusions =========== This work proposes keystroke biometric recognition for typed content de-anonymization. The fight against misinformation requires new tools, and the COVID-19 pandemic has shown the need to develop new technologies and policies to reduce the spread of fake content. Keystroke recognition can be used as a tool to link multiple profiles belonging to the same typist based on their typing behavior.
We have evaluated a system based on Recurrent Neural Networks in experiments involving $100$,$000$ users and more than $1$M keystroke sequences. Our results suggest the potential of this technology to link multiple texts typed by the same user by leveraging personal typing patterns. The performance achieved varies depending on the number of background profiles, with Rank-1 identification accuracy ranging from $10.9\%$ to $52.6\%$ and Rank-50 from $48.4\%$ to $99.5\%$ when auxiliary information is available. Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported by projects: PRIMA (MSCA-ITN-2019-860315), TRESPASS (MSCA-ITN-2019-860813), BIBECA (RTI2018-101248-B-I00 MINECO), BioGuard (Ayudas Fundación BBVA a Equipos de Investigación Científica 2017). A. Acien and R. Tolosana are supported by FPI and postdoctoral fellowships from the Spanish MINECO.
--- abstract: 'Using the 100-m Green Bank Telescope, we have conducted a cm-wavelength search for CO(1–0) line emission towards the high-redshift, far-infrared luminous object, HDF850.1 over the redshift interval 3.3 $\la z \la$ 5.4. Despite the wealth of existing multi-wavelength observations, and the recent identification of a galaxy counterpart in deep $K^\prime$ band (2.2 $\mu$m) imaging, an unambiguous spectroscopic redshift has not yet been obtained for this object. A far-infrared-to-radio wavelength photometric redshift technique, however, predicts a $\sim$90% probability that the redshift is in the range, 3.3 $\la z \la$ 5.4 (equivalent to an observed redshifted emission line frequency, 26.5 $\ga \nu_{\rm obs} \ga$ 18.0 GHz), making HDF850.1 a potential occupant of the ‘high-redshift tail’ of submm selected galaxies. We have also conducted a search for CO(2–1) line emission over the narrower redshift range, 3.9 $\la z \la$ 4.3. Although we do not detect any CO line emission in this object, our limits on the CO line luminosity are in broad agreement with the median value measured in the current sample of high-redshift, submm selected objects detected in CO line emission, but are not sufficient to fully test the validity of the photometric redshift technique.' author: - | \ $^{1}$ Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Apartado Postal 51 y 216, 72000 Puebla, Pue., Mexico\ $^{2}$ Harvard-Smithsonian Center for Astrophysics, Cambridge, MA, USA, 02138\ $^{3}$ Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural, Vancouver, Canada, V6T 1Z1\ $^{4}$ SUPA[^1] Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK\ $^{5}$ Institut d’Estudis Espacials de Catalunya, IEEC/CSIC, c/ Gran Capitan 2-4, 08034, Barcelona, Spain\ $^{6}$ Department of Physics & Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6396, USA\ date: 'Accepted ....... 
Received ..............; in original form ......' title: 'A broadband spectroscopic search for CO line emission in HDF850.1: the brightest submillimetre object in the *Hubble Deep Field* North' --- \[firstpage\] galaxies: starburst – galaxies: individual: HDF850.1 – radio lines: galaxies – cosmology: observations Introduction ============ In recent years, blank-field extragalactic surveys at submillimetre/millimetre (hereafter submm) wavelengths have revealed a population of dusty galaxies undergoing vigorous star formation in the young Universe (e.g. Smail, Ivison & Blain 1997; Hughes et al. 1998; Barger et al. 1998; Bertoldi et al. 2000; Scott et al. 2002; Borys et al. 2003; Greve et al. 2004; Laurent et al. 2005). Independent methods of redshift determination for these objects imply that the majority lie at z $\ga 2$ (Aretxaga et al. 2003, 2005; Chapman et al. 2003a, 2005). Thus their inferred star formation rates are in the range, 100 to 1000 M$_{\odot}$ yr$^{-1}$, and arguably require large reservoirs of molecular gas for fuelling such high levels of sustained activity. Measuring the total molecular gas mass contained within the interstellar medium (ISM) of a high-redshift submm galaxy (hereafter SMG) is most effectively accomplished through observations of redshifted molecular CO line emission. Over the past 15 years, observing CO line emission in high-redshift objects has become a powerful means of constraining the physical conditions of the gas within their molecular ISM (for an excellent review see Solomon & Vanden Bout 2005). The luminosity in the CO(1–0) line is the optimal estimator of the total molecular gas mass yet, for practical reasons, in gas-rich objects at high-redshift (z $\ga 2$), searches are normally first conducted for emission from ([*J$_{\rm upper}$*]{} $\ge$ 2) CO line transitions, and subsequent searches for CO(1–0) emission are carried out if the mm-wavelength searches for CO lines are successful. 
This approach can bias the sample of objects detected in CO line emission to those with hotter and denser gas. To date, CO line emission has been detected in 14 SMGs (Frayer et al. 1998, 1999; Neri et al. 2003; Genzel et al. 2003; Downes & Solomon 2003; Sheth et al. 2004; Kneib et al. 2005; Greve et al. 2005; Tacconi et al. 2006), while the CO(1–0) line has been detected in only one of these (Hainline et al. 2006). In addition to the faintness of molecular emission lines in high-redshift SMGs, searches have been hindered by the limited spectral bandwidth of current mm-wavelength facilities, generally covering $\sim$1700 km s$^{-1}$ at 3 mm, while the typical SMG CO linewidth is $\sim$800 km s$^{-1}$ FWHM. This narrow bandwidth can also be restrictive, as the redshifts derived from Ly$\alpha$ and CO emission lines in many high-redshift SMGs may be offset in velocity. In some SMGs this difference may be greater than 600 km s$^{-1}$ (e.g. Greve et al. 2005), due possibly to galactic outflows, or scattering of Ly$\alpha$ photons by dust. As CO emission-line frequencies are not expected to be biased with respect to the systemic redshift, broadband spectroscopic searches for CO line-emission should be a powerful means of obtaining redshifts for the SMG population. Educated searches for mm-to-cm wavelength molecular CO line emission in luminous, dusty galaxies without redshifts will become feasible in the near future as wideband spectrometers become available on large mm-to-cm wavelength telescopes, for example the 100-m Green Bank Telescope (GBT[^2]; Jewell & Prestage 2004), or the 50-m Large Millimeter Telescope (LMT[^3]). In order to obtain redshift estimates, and to guide the frequency tunings of these spectroscopic searches for molecular line emission from SMGs, some groups have developed photometric redshift techniques which exploit the far-infrared-to-radio wavelength correlation in star-forming galaxies (Helou et al. 
1985), or adopt template far-infrared spectral energy distributions (SEDs) based on nearby galaxies (Carilli & Yun 1999, 2000; Dunne, Clements & Eales 2000a; Rengarajan & Takeuchi 2001; Hughes et al. 2002; Aretxaga et al. 2003; Wiklind 2003; Hunt & Maiolino 2005). This technique has the potential to provide redshift estimates for large samples of SMGs with individual accuracies, $\delta z \sim \pm$0.3, when photometric flux measurements at three or more far-infrared-to-radio wavelengths are available (Aretxaga et al. 2005). The GBT is the only operational mm-to-cm wavelength telescope in the world with instruments that have both sufficient spectral line sensitivity and receiver bandwidth to conduct guided searches for CO line emission in SMGs at redshifts $z \ga 0.9$. This lower redshift limit is set by the current GBT frequency limit of 60 GHz and the CO(1–0) line rest frequency of 115.2712 GHz. Given this restriction, an excellent candidate for conducting a blind search for CO line emission is HDF850.1 (Hughes et al. 1998), one of the most well studied SMGs, and the brightest 850 $\mu$m source in the confusion limited JCMT/SCUBA survey of the northern *Hubble Deep Field*. Due partly to the extreme optical faintness ($K \simeq$ 23.5, $I- K > 5.2$) of the gravitationally lensed galaxy counterpart to HDF850.1 (Dunlop et al. 2004), the redshift of this object has proven elusive. A wealth of deep rest-frame far-infrared-to-radio wavelength observations of HDF850.1 provide the basis for a photometric redshift z$=$4.1$\pm$0.5 (Yun & Carilli 2002; Aretxaga et al. 2003). In principle, HDF850.1 presents an ideal target for the GBT with which to test the accuracy of our photometric redshift technique, and thus has motivated a GBT search for CO(1–0) and CO(2–1) line emission over the redshift interval, 3.3 $\la z \la$ 5.4. 
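The mapping between receiver band and redshift interval follows from $\nu_{\rm obs} = \nu_{\rm rest}/(1+z)$; a quick sketch for the CO(1–0) rest frequency and the K-band limits quoted above:

```python
CO10_REST_GHZ = 115.2712  # CO(1-0) rest frequency in GHz

def observed_ghz(nu_rest, z):
    """Observed frequency of a line emitted at rest frequency nu_rest from redshift z."""
    return nu_rest / (1.0 + z)

def redshift_window(nu_rest, band_lo, band_hi):
    """Redshift interval (z_lo, z_hi) over which the line falls inside
    the receiver band [band_lo, band_hi] in GHz."""
    return nu_rest / band_hi - 1.0, nu_rest / band_lo - 1.0
```

For the 18.0 to 26.5 GHz K-band window this recovers the 3.3 ≲ z ≲ 5.4 interval, and at the photometric redshift z = 4.1 the line would fall near 22.6 GHz, close to the band centre.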
We present the results of a GBT search for CO(1–0) line emission in HDF850.1 over the redshift interval, 3.3 $\la z \la$ 5.4, and a search for CO(2–1) line emission over the narrower redshift interval, 3.9 $\la z \la$ 4.3. Throughout this work, we adopt the following $\Lambda$-dominated cosmological parameters: $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda = 0.7$, $\Omega_m = 0.3$ (Spergel et al. 2003, 2006). Observations and Data Reduction =============================== The far-infrared-to-radio wavelength photometric redshift estimate of Aretxaga et al. (2003) implies an 86 to 90% probability that HDF850.1 has a redshift in the range, 3.3 $\le z \le$ 5.4. Over this redshift interval, the 115.2712 GHz CO(1–0) line is redshifted into the 18.0 to 26.5 GHz frequency window of the K-band receiver on the GBT. Motivated by this prediction, we have obtained a complete K-band spectrum of HDF850.1 in order to search for redshifted CO(1–0) line emission. Observations in nod mode were carried out with the GBT K-band receiver during October 2004 and May 2005. The position center adopted for the on-source beam was that of the mm-wavelength counterpart detected with the Plateau de Bure Interferometer (PdBI) by Downes et al. (1999). All of the K-band observations were conducted under reasonably dry conditions, with an average 22 GHz zenith opacity, $\tau_{\rm 22GHz} \sim 0.09$. The nearby radio source 3C295 was used for pointing purposes, as well as baseline and flux calibration throughout the K-band observations. To correct for slowly varying, large-scale spectral baseline features, the observations were made by alternately nodding two beams, separated by 178.8″, between the source and blank sky. The GBT spectrometer allows a maximum instantaneous frequency coverage of 800 MHz bandwidth in each of 4 independent quadrants. For the observations presented here, one pair of quadrants was used to measure a $\sim$1.5 GHz wide spectrum on the source, while the other pair measured blank sky in the off-beam. 
A total of 6 tunings (or sequences) were therefore used to cover the entire K-band window. Each spectral channel was 0.39 MHz wide so that the velocity resolution varied from $\sim$4.4 km s$^{-1}$ to $\sim$6.5 km s$^{-1}$ across the band. A total of 28.2 hours of integration time was devoted to the HDF850.1 K-band spectrum. The time spent on each $\sim$1.5 GHz tuning sequence was varied to compensate for the increased opacity towards the center of the band, due to the 22 GHz atmospheric water vapour line. The goal was to obtain a spectrum with uniform noise across the K-band. Overheads such as pointing, focusing, acquisition of baseline calibration spectra, and follow-up of potential CO line detections, amounted to an additional factor of 2 increase in the observing time. In December 2005, a search was also conducted for CO(2-1) emission using the GBT Q-band receiver (40 to 48 GHz), over the redshift interval, 3.91 $\la z \la$ 4.25. As in the case of the K-band observations, the nod observing mode was adopted, while the spectrometer was set up in wide bandwidth, low resolution mode. The velocity resolution varied from $\sim$2.5 km s$^{-1}$ to $\sim$2.7 km s$^{-1}$ across our Q-band spectrum. As there is a dearth of bright, compact calibration sources at these higher frequencies, the primary flux calibrator was 3C286, while the nearby quasar 1153+495 ($\sim$17$^\circ$ separation) was used for pointing. Obtaining quality spectra with the GBT Q-band receiver generally requires low wind speeds ($\le$3 m s$^{-1}$), and an extremely dry, stable atmosphere. Only on a single night in December 2005 were data obtained under such conditions, with a median $\tau_{\rm 44GHz} \sim 0.1$ and negligible wind speeds.
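The quoted channel widths translate into velocity resolution via $\Delta v = c\,\Delta\nu/\nu_{\rm obs}$; a quick arithmetic check of the $\sim$4.4 to $\sim$6.5 km s$^{-1}$ figures for the 0.39 MHz K-band channels (illustrative, not part of the original reduction):

```python
C_KMS = 299792.458   # speed of light [km/s]

def channel_velocity_width(dnu_mhz, nu_obs_ghz):
    """dv = c * dnu / nu for a channel of width dnu_mhz [MHz] observed
    at frequency nu_obs_ghz [GHz]."""
    return C_KMS * (dnu_mhz * 1e-3) / nu_obs_ghz

dv_top = channel_velocity_width(0.39, 26.5)      # ~4.4 km/s at 26.5 GHz
dv_bottom = channel_velocity_width(0.39, 18.0)   # ~6.5 km/s at 18.0 GHz
```

The same formula applied over the narrower Q-band tuning range reproduces the $\sim$2.5-2.7 km s$^{-1}$ resolution quoted for that spectrum.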
Despite acceptable Q-band weather conditions during this, and possibly one other observing shift, only the left polarization Q-band spectrum is included in our analysis, as the majority of the right polarization spectra suffer from a severe baseline ripple of unknown origin (see §\[sec:baselines\]). Both the K-band and Q-band spectra were reduced using the *gbtidl*[^4] data analysis package. For a series of consecutive scans, a co-added spectrum is produced following the standard procedures described by Vanden Bout, Solomon & Maddalena (2004), which are only outlined here for completeness. For simplicity, we will consider the spectrum in only one quadrant, and one polarization from each of the two beams in the discussion that follows. Let us refer to these spectra as $B_1(\nu)$ and $B_2(\nu)$, where one of the two beams is always pointed ‘ON’ the source, while the ‘OFF’ beam is observing blank sky separated from the source by 178.8$''$ in azimuth. A single scan is the time that one beam spends on-source before nodding to the off-source position, when the other beam is nodded onto the source. For our observations, a scan duration of 1 minute was adopted. This scan duration is chosen so as to minimize the frequency dependent variation in the sky brightness temperature between successive scans, while also spending less time on overheads such as nodding.
Assuming that $B_1(\nu)$ begins on-source at scan $i$, then a series of normalized spectra are produced by subtraction of the off-source scan from the on-source scan (note that from this point on, the $\nu$ dependence will not be made explicit):\
$\left (B_{\rm 1-ON}^{(i)}-B_{\rm 1-OFF}^{(i+1)} \right )/B_{\rm 1-OFF}^{(i+1)},$\
$\left (B_{\rm 2-ON}^{(i+1)}-B_{\rm 2-OFF}^{(i)} \right )/B_{\rm 2-OFF}^{(i)},$\
$\left (B_{\rm 1-ON}^{(i+2)}-B_{\rm 1-OFF}^{(i+3)} \right)/B_{\rm 1-OFF}^{(i+3)},$\
$\left (B_{\rm 2-ON}^{(i+3)}-B_{\rm 2-OFF}^{(i+2)} \right)/B_{\rm 2-OFF}^{(i+2)},$\
$\left (B_{\rm 1-ON}^{(i+4)}-B_{\rm 1-OFF}^{(i+5)} \right)/B_{\rm 1-OFF}^{(i+5)},$\
$\left (B_{\rm 2-ON}^{(i+5)}-B_{\rm 2-OFF}^{(i+4)} \right)/B_{\rm 2-OFF}^{(i+4)},$\
etc.\
It is important that each normalized spectrum be created from the ‘ON’-‘OFF’ scan pair that travels down the same signal path (i.e., the signal from a single beam and polarization), so that any path-dependent sources of baseline structure may be properly subtracted. An average of these normalized spectra is then calculated, with each spectrum weighted by the inverse square of the system temperature, $T_{\rm sys}^{-2}$. The units of this mean spectrum are then calibrated from \[K\] to \[Jy\] using the best-fit gain curves shown for the K-band observations in Figure \[fig:kband\_gain\]. The gain curves were derived from observations of objects with known (and non-variable) radio flux densities (3C295 for the K-band spectrum, and 3C286 for the Q-band spectrum). The left and right polarization spectra are analyzed separately so that any potential detection of CO line emission would have to be confirmed in both polarizations. A large fraction of the data ($\sim$53%) were considered unusable due to various forms of spectral baseline irregularities and contamination which could not be removed reliably during the data reduction process (see §\[sec:baselines\]).
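The per-pair normalization and $T_{\rm sys}^{-2}$ weighting described above can be sketched in a few lines. This is an illustrative stand-in for the actual *gbtidl* reduction; `coadd_nod_scans` and the toy spectra are hypothetical:

```python
import numpy as np

def coadd_nod_scans(on_scans, off_scans, tsys):
    """(ON - OFF)/OFF for each scan pair from the same signal path,
    then a T_sys^-2 weighted average (hypothetical helper, not gbtidl).

    on_scans, off_scans : arrays of shape (n_pairs, n_channels)
    tsys                : per-pair system temperatures [K]
    """
    norm = (on_scans - off_scans) / off_scans           # normalized spectra
    weights = 1.0 / np.asarray(tsys, dtype=float) ** 2  # T_sys^-2 weighting
    return np.average(norm, axis=0, weights=weights)

# Toy demonstration: flat 'sky' power plus a weak line in channel 3.
off = np.full((4, 8), 100.0)
on = off.copy()
on[:, 3] += 1.0                                         # 1% line signal
spec = coadd_nod_scans(on, off, tsys=[40.0, 42.0, 41.0, 43.0])
# spec is ~0 everywhere except channel 3, where it is 0.01
```

Dividing by the OFF spectrum removes the common bandpass shape, which is why each pair must come from a single beam and polarization.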
We attempted to use spectra of the bright pointing sources to correct the baseline shapes in our spectra (Vanden Bout, Solomon & Maddalena 2004); however, we found that this did not improve our results. After removal of the poor quality data, the total on-source integration time devoted to the final K-band spectra is 12.3 hours in the left polarization spectra, and 13.9 hours in the right polarization spectra (see Table \[tab:obstab\]).

  Seq.\#   $\nu_1 - \nu_2$ \[GHz\]   $t_{\rm int}$ (L/R) \[hrs\]   $\bar{\sigma}$ (L/R) \[mJy\]
  -------- ------------------------- ----------------------------- ------------------------------
  K1       $22.10-23.62$             2.7 / 3.1                     0.58 / 0.69
  K2       $23.54-25.06$             2.2 / 1.8                     0.51 / 0.61
  K3       $20.88-22.40$             2.1 / 3.1                     0.50 / 0.52
  K4       $19.44-20.96$             1.8 / 2.6                     0.60 / 0.46
  K5       $24.98-26.50$             1.5 / 1.2                     0.98 / 0.85
  K6       $18.00-19.52$             1.9 / 2.1                     0.77 / 0.72

  : Summary of K-band observations of HDF850.1. The total integration time, $t_{\rm int}$, is based on the data which contribute to the final, left (L) and right (R) polarization spectra. \[tab:obstab\]

Spectral Baselines {#sec:baselines}
------------------

A major obstacle faced by searches for faint, broad emission lines is the presence of variations in the spectral baseline shape, with a characteristic scale similar to the expected CO linewidths. These features are not uncommon in mm/cm wavelength spectra obtained with single-dish instruments, and can easily be mistaken for detections of weak, broad emission lines. There are a number of instrumental, as well as atmospheric, effects which may produce spectral baseline artefacts. Here, we summarize a few such artefacts which have been identified in our GBT (K and Q-band) spectra of HDF850.1:

- Weather variations over short timescales (on the order of the length of a single scan of 1 minute duration) may result in inaccurate subtraction of the atmospheric contribution to the system temperature across the band.
This is particularly problematic at Q-band frequencies, where clouds passing overhead may lead to rapid changes in the temperature of the atmosphere within a single beam.

- There may be interference from sources along the path of the analog signal, originating at the receiver on the telescope and travelling to the spectrometer backend in the GBT control room. An example of this is the 50 MHz ripple which appears in the left polarization K-band spectra taken during May 2005 (Figure \[fig:rip50\]), and is due to temperature variations in the equipment room where the spectrometer is located. These variations cause standing waves in the connectors which manifest themselves as ripples in the spectral baseline.

- Resonances in the receiver feeds may cause a loss of power at certain frequencies, leading to emission or absorption lines, sometimes referred to as ‘suck-outs’. When calculating $(\rm ON-OFF)/OFF$ from the on-source and off-source spectra, there will be a feature which is either in emission or absorption, depending on whether the power loss is in the on- or the off-beam. These ‘suck-outs’ are apparent in the K-band receiver temperature curve shown in Figure \[fig:kband\](a) (e.g. at frequencies of $\sim$22.6 GHz and $\sim$25.7 GHz), and may also be present in the Q-band receiver system (however, at present, no high-resolution receiver temperature data are available for this receiver).

- A high-frequency ripple of unknown origin severely affected the right polarization Q-band spectra taken in December 2005 (and may also be present in the left polarization at a lower amplitude).

Data affected by any of these artefacts were not included when creating the final spectra. Thus $\sim$53$\%$ of the original data were discarded.

Results and Discussion {#sec:results}
======================

We do not find any evidence for CO(1-0) line emission in our K-band spectra.
Figure \[fig:kband\](a) shows the final K-band spectra of HDF850.1 over the observed frequency range, $\nu_{\rm obs}$ = 18.0 to 26.5 GHz. Due to the presence of residual baseline features, $\sim$53$\%$ of the raw data are not included in the final, co-added spectra shown here. The most noticeable contaminants of our spectra are the receiver resonance features, appearing at various frequencies along the K-band receiver temperature spectra (top curve, Figure \[fig:kband\](a)). These resonances originate within the feed horns and their amplitudes depend strongly on the weather conditions. Although these features prevent us from placing any constraints on the presence of line emission over certain redshift intervals, we are still able to obtain line luminosity limits over much of the K-band window.

CO line luminosity limits
-------------------------

We calculate 3-$\sigma$ upper limits to the CO(1-0) and CO(2-1) line luminosities across the available K-band and Q-band spectra, respectively. We assume a range of CO line widths, $\Delta V_{\rm line}$ = 460, 780 and 1100 km s$^{-1}$, which represent the first, median and third quartiles of the linewidths in the first 12 SMGs detected in CO line emission (see Greve et al. 2005). The 3-$\sigma$ upper limit to the CO line integrated intensity is given by (e.g. Isaak, Chandler & Carilli 2004), $3\cdot \sqrt{ \Delta V_{\rm line}/\Delta V_{\rm channel}}\cdot \sigma_{\rm channel} \cdot \Delta V_{\rm channel}$ (in units of Jy km s$^{-1}$), where the velocity width of a channel, $ \Delta V_{\rm channel}$, and the r.m.s. per channel, $\sigma_{\rm channel}$, both vary across the combined spectra. The limits to the integrated line intensity are converted to 3-$\sigma$ upper limits to the CO line luminosity, $L^{\prime}_{\rm CO}$, following the expression given by Solomon, Downes & Radford (1992). These limits are shown in Figure \[fig:kband\](b) for the CO(1-0) line in the K-band spectra, and in Figure \[fig:qband\](b) for the CO(2-1) line in the Q-band spectrum.
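The limit calculation can be sketched end to end. The conversion uses the standard Solomon, Downes & Radford (1992) expression $L'_{\rm CO}=3.25\times10^{7}\,I_{\rm CO}\,\nu_{\rm obs}^{-2}\,D_{\rm L}^{2}\,(1+z)^{-3}$ (with $I_{\rm CO}$ in Jy km s$^{-1}$, $\nu_{\rm obs}$ in GHz, $D_{\rm L}$ in Mpc); the channel noise and width below are representative stand-ins, not the measured values:

```python
import math

def lum_dist_mpc(z, h0=70.0, om=0.3, ol=0.7, n=20000):
    """Luminosity distance [Mpc] for flat Lambda-CDM (midpoint rule)."""
    dz = z / n
    comoving = sum(dz / math.sqrt(om * (1.0 + (i + 0.5) * dz) ** 3 + ol)
                   for i in range(n))
    return (1.0 + z) * (299792.458 / h0) * comoving

def lco_limit_3sigma(sigma_chan_jy, dv_chan, dv_line, nu_obs_ghz, z):
    """3-sigma L'_CO upper limit [K km/s pc^2].

    Integrated intensity limit: 3*sqrt(dv_line/dv_chan)*sigma*dv_chan
    [Jy km/s], converted via Solomon, Downes & Radford (1992)."""
    intensity = 3.0 * math.sqrt(dv_line / dv_chan) * sigma_chan_jy * dv_chan
    dl = lum_dist_mpc(z)
    return 3.25e7 * intensity * dl ** 2 / (nu_obs_ghz ** 2 * (1.0 + z) ** 3)

# Representative (not measured) numbers: 0.6 mJy channels at 5 km/s
# resolution, a 780 km/s line, CO(1-0) observed at z = 4.1:
limit = lco_limit_3sigma(0.0006, 5.0, 780.0, 115.2712 / 5.1, 4.1)
# limit ~ 7e10 K km/s pc^2, inside the quoted (3.7-8.3)e10 range
```

The adopted cosmology matches the parameters stated in the introduction ($H_0=70$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$).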
For comparison, we also plot the median CO line luminosity measured in the first 12 SMGs detected in ([*J$_{\rm upper}$*]{} $\ge$ 2) CO line emission (see Greve et al. 2005). The fact that neither CO(1-0) nor CO(2-1) line emission is detected in HDF850.1 can be explained by three possible scenarios: *i)* the total molecular gas mass in this object is low, resulting in CO line emission that is weaker than our line luminosity limits; *ii)* the frequency of the emission line coincides with that of a receiver resonance feature; or *iii)* the redshift of this object is such that the emission frequency of the CO(1-0) line is outside the range accessible to the K-band receiver, a possibility which has a 10-14$\%$ probability according to the photometric redshift estimate of Aretxaga et al. (2003). Under the assumption that *i)* is the correct scenario, the limits to the CO(1-0) line luminosity can be used to estimate the limits on the total molecular gas mass, $M_{\rm H_2}$, in HDF850.1. Adopting the relationship: $M_{\rm H_2} = \alpha L'_{\rm CO(1-0)}$ ($\alpha \sim1$ M$_{\odot}$(K km s$^{-1}$ pc$^2)^{-1}$), appropriate for nearby ultraluminous infrared galaxies (Downes & Solomon 1998), we can estimate an upper limit to the molecular gas mass contained within HDF850.1, under the assumption that its redshift is in the range, 3.3 $\le z \le$ 5.4. The 3-$\sigma$ limit to the CO(1-0) line luminosity is in the range, $L'_{\rm CO} \la (3.7 - 8.3)\times 10^{10}$ K km s$^{-1}$ pc$^2$, depending on the assumed line width and redshift. After accounting for a lensing amplification factor of 3 (Dunlop et al. 2004), these limits on the CO line luminosity translate directly to a molecular gas mass, $M_{\rm H_2} \la (1.2 - 2.8)\times 10^{10}$ M$_{\odot}$. The lensing amplification factor has been calculated for the FIR/submm emission region, which we assume is co-spatial with the CO line emission region, resulting in an equal lensing factor.
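The gas-mass step above is a two-factor conversion: demagnify the observed $L'_{\rm CO}$ limit by the lensing factor, then apply the ULIRG-like $\alpha$. A short sketch of that arithmetic (the helper name is hypothetical):

```python
ALPHA_ULIRG = 1.0    # M_sun (K km/s pc^2)^-1 (Downes & Solomon 1998)
MAGNIFICATION = 3.0  # lensing amplification of HDF850.1 (Dunlop et al. 2004)

def gas_mass_limit_msun(lco_limit):
    """Intrinsic H2 mass limit: demagnify L'_CO, then apply alpha."""
    return ALPHA_ULIRG * lco_limit / MAGNIFICATION

m_lo = gas_mass_limit_msun(3.7e10)   # ~1.2e10 M_sun
m_hi = gas_mass_limit_msun(8.3e10)   # ~2.8e10 M_sun
```

This reproduces the quoted $(1.2 - 2.8)\times 10^{10}$ M$_{\odot}$ range from the $(3.7 - 8.3)\times 10^{10}$ K km s$^{-1}$ pc$^2$ luminosity limits.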
This assumption may not be valid, but without high angular resolution observations of both the FIR/submm and the CO emission line regions, it remains uncertain.

CO and Far-Infrared luminosities
--------------------------------

We can assess whether the CO line luminosity limits achieved here are sufficient to have detected CO line emission over the observed redshift interval, by considering the CO line luminosity ($L'_{\rm CO}$) predicted by the estimated far-infrared luminosity ($L_{\rm FIR}$) of HDF850.1 within the context of the locally observed $L_{\rm FIR} - L'_{\rm CO}$ relation. In nearby galaxies there exists a well-established correlation between far-infrared luminosity and CO line luminosity (e.g. Young & Scoville 1991), though it is unclear whether this relation truly arises from a direct dependence of star-formation rate (as traced by $L_{\rm FIR}$) on the total molecular gas mass (as traced by $L'_{\rm CO}$). Furthermore, this relationship appears to deviate from a power-law at high far-infrared luminosities ($L_{\rm FIR} \ga 10^{12}$ L$_{\odot}$), which are characteristic of the SMG population. Despite the uncertainties in this relation, we converted the estimated far-infrared luminosity of HDF850.1 to an expected CO(1-0) line luminosity, in order to determine if our limits on the CO(1-0) and CO(2-1) line luminosities are sufficiently sensitive for us to have confidently expected a CO detection. Following Neri et al. (2003) and Greve et al. (2005), we calculate the far-infrared luminosity for HDF850.1 according to, $L_{\rm FIR} \sim 1.9\times 10 ^{12}~S_{850}\rm [mJy]$ L$_{\odot}$ (Blain et al. 2002), under the assumption of a modified greybody with dust temperature $T_{\rm d} = 40$ K, and emissivity index $\beta= 1.5$, where $S_{850}$ is the observed 850 $\mu$m flux density. Although various measurements of the 850 $\mu$m flux density in HDF850.1 exist in the literature, the differences are not significant and we adopt the original value presented by Hughes et al.
(1998), $S_{850} = 7.0 \pm 0.5$ mJy. This leads to an estimated far-infrared luminosity, $L_{\rm FIR} \sim (13.3 \pm 1.0)m^{-1}\times 10 ^{12}$ L$_{\odot}$ (where $m$ is the magnification factor due to gravitational lensing, believed to be $\sim$3; Dunlop et al. 2004). In Figure \[fig:lcolfir\], we compare the estimated limits on the $L_{\rm FIR} - L'_{\rm CO}$ parameter space obtained here for HDF850.1, with the $L_{\rm FIR} - L'_{\rm CO}$ relation observed in other SMGs and various low-redshift galaxy samples. The ultraluminous infrared galaxy (ULIRG) sample observed in CO(1-0) by Solomon et al. (1997) is also included, along with the SLUGS sample of Dunne et al. (2000b) with CO(1-0) line luminosities taken from the literature (Sanders et al. 1985, 1986, 1991; Young et al. 1995; Casoli et al. 1996; Chini, Krügel & Lemke 1996; Maiolino et al. 1997; Solomon et al. 1997; Lavezzi & Dickey 1998). More recent measurements of the nuclear CO line emission in a subset of the SLUGS are presented in Yao et al. (2003). The SMG sample consists of those objects in which searches have been conducted for CO line emission (Frayer et al. 1998, 1999; Neri et al. 2003; Genzel et al. 2003; Sheth et al. 2004; Greve et al. 2005; Kneib et al. 2005; Tacconi et al. 2006), and the one object, SMM J13120+4242 at z=3.4, for which the CO(1-0) line has been detected (Hainline et al. 2006). The far-infrared luminosities of these objects are calculated following the same prescription as that adopted for HDF850.1, with submm/mm flux densities taken from the literature (Smail et al. 1997, 1998; Ivison et al. 1998; Barger, Cowie, & Sanders 1999; Dey et al. 1999; Cowie, Barger, & Kneib 2002; Scott et al. 2002; Chapman et al. 2003b, 2005; Greve et al. 2004). For the purpose of comparison with the nearby galaxies detected in CO(1-0), we follow Greve et al.
(2005) and assume that for the CO lines observed in SMGs, $ L'_{\rm CO(4-3)} / L'_{\rm CO(1-0)} = L'_{\rm CO(3-2)} / L'_{\rm CO(1-0)} = L'_{\rm CO(2-1)} / L'_{\rm CO(1-0)} = 1$, corresponding to optically-thick, thermalized CO emission. These data are plotted in Figure \[fig:lcolfir\], with appropriate corrections applied to both $L_{\rm FIR}$ and $L'_{\rm CO}$ to account for magnification by gravitational lensing in the 6 SMGs believed to be lensed (assuming co-spatial far-infrared and CO emission line regions). Also plotted in Figure \[fig:lcolfir\] is a fit to the $L_{\rm FIR} - L'_{\rm CO}$ relation derived from a combined sample of ULIRGs and SMGs by Greve et al. (2005) where $\log{L'_{\rm CO}} = (0.62\pm0.08)\log{L_{\rm FIR}} + (2.33\pm0.93)$. We find a similar relation when we fit to the luminosities in the combined low-redshift LIRGs/ULIRGs and SMG sample plotted here, $\log{L'_{\rm CO}} = (0.57\pm0.03)\log{L_{\rm FIR}} + (3.10\pm0.34)$. These fits may be used to compare the estimated far-infrared luminosity in HDF850.1 to the measured CO line luminosity limits. After correcting for amplification by gravitational lensing, the estimated far-infrared luminosity in HDF850.1 makes it one of the least intrinsically luminous SMGs that has been searched for CO line emission, as most in the sample are believed to be unlensed (see Greve et al. 2005). Adopting the estimate for $L_{\rm FIR}$ in HDF850.1, the fit to the $L_{\rm FIR} - L'_{\rm CO}$ relation by Greve et al. (2005) would predict, $L'_{\rm CO} = 2.9m^{-1}\times 10^{10}$ K km s$^{-1}$ pc$^2$, while the fit presented here would predict, $L'_{\rm CO} = 3.4m^{-1}\times 10^{10}$ K km s$^{-1}$ pc$^2$. These values are generally lower than the 3-$\sigma$ CO line luminosity limits achieved in our K-band and Q-band spectra. We are unable to draw any conclusions as to the validity of the photometric redshift technique applied to HDF850.1. 
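The arithmetic behind the quoted numbers is short enough to spell out (illustrative only; the lensing factor $m$ is left out, as in the apparent quantities above):

```python
import math

S_850 = 7.0              # observed 850um flux density [mJy] (Hughes et al. 1998)
L_FIR = 1.9e12 * S_850   # apparent L_FIR [L_sun] for T_d = 40 K, beta = 1.5
# -> 13.3e12 L_sun, i.e. (13.3/m)e12 intrinsic for magnification m ~ 3

# Greve et al. (2005) fit: log L'_CO = 0.62 log L_FIR + 2.33
lco_pred = 10.0 ** (0.62 * math.log10(L_FIR) + 2.33)
# -> ~2.9e10 K km/s pc^2 (apparent), as quoted in the text
```

The corresponding prediction from the fit derived in this work differs only through the slightly different (and rounded) fit coefficients.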
Although we have removed $\sim$53% of the original K-band data due to various spectral baseline irregularities, a further $\sqrt{2}$ decrease in the noise would still not be sufficient to obtain an $L'_{CO}$ limit that was significantly inconsistent with the expected CO content, given the uncertainties and the dispersion in the estimated gas masses of SMGs with CO detections. The CO(1-0) and CO(2-1) line luminosity limits presented here are not of sufficient depth to exclude the presence of CO line emission within the redshift interval, 3.3 $\la z \la$ 5.4.

Conclusions
===========

We present a broadband, GBT spectroscopic search for CO(1-0) and CO(2-1) line emission in the high-redshift SMG, HDF850.1, using the K-band (18.0 to 26.5 GHz) and Q-band (40.0 to 48.0 GHz) receivers. Although we do not detect any CO line emission in this object, our constraints on the CO line luminosity are approaching that predicted by the far-infrared luminosity within the context of the local $L_{\rm FIR} - L'_{\rm CO}$ relation. These GBT results are still consistent with HDF850.1 lying in the redshift interval, 3.3 $\la z \la$ 5.4, based on our previous rest-frame far-infrared-to-radio photometric redshift estimate. The GBT has recently been successful in detecting the CO(1-0) line in 3 quasar host galaxies; APM 08279+5255 at z=3.9, PSS J2322+1944 at z=4.1 and BR 1202-0725 at z=4.7 (Riechers et al. 2006), while Hainline et al. (2006) present a detection of CO(1-0) line emission in the SMG SMM J13120+4242 at z=3.4. These 4 objects detected in CO(1-0) line emission with the GBT were previously known to exhibit strong CO line emission, and therefore to contain large masses of warm molecular gas. With these prior CO detections, the redshifts for the molecular emission-line regions were constrained to $\la$100 km s$^{-1}$, enabling a more efficient use (i.e. a narrower frequency search) of their available GBT observing time.
Given the prior uncertainty in both the redshift and CO line intensity of HDF850.1, our experiment is quite distinct from the ‘tuned’ GBT observations described above. This is the first broad-bandwidth cm-wavelength search for CO line emission in a high-redshift object (guided by a radio-to-FIR photometric redshift) with no previous detection of molecular line emission or an optical redshift. Considering future possibilities, the gaseous medium within SMGs is expected, and perhaps already shown, to be warm and dense, and hence the higher-$J$ CO line transitions should be more intense than the [*J*]{}=1-0 transition (Weiss et al. 2005). At z$\sim$$2-4$ (typical of the SMG population) the ([*J*]{} $\ge$ 2) CO transitions are redshifted into the $\sim$$70- 310$ GHz atmospheric windows. Thus we are optimistic that CO line searches, using broadband mm-wavelength receivers on sensitive facilities such as the LMT, PdBI or CARMA, will be more successful in obtaining unambiguous spectroscopic redshifts for the optically obscured SMG population of starburst galaxies.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are very grateful to the entire Green Bank staff for their help and patience throughout the course of these observations. In particular, we would like to thank Carl Bignell, Ron Maddalena, Dana Balser, Karen O’Neil, Tony Minter, Frank Ghigo, Glen Langston, Brian Mason, Jay Lockman, Phil Jewell and Richard Prestage. J.W. would like to thank Paul Kondratko for informative discussions on GBT data reduction. J.W. thanks the Department of Astrophysics at INAOE for a graduate student scholarship and the SAO for the funding provided by a predoctoral student fellowship. D.H.H., J.W. and I.A. are supported by CONACYT grant 39953-F. This work is partially funded by CONACYT grant 39548-F. We thank the anonymous referee for helpful suggestions.

Aretxaga, I., Hughes, D. H., Chapin, E. L., Gaztañaga, E., Dunlop, J. S., & Ivison, R. J.
2003, MNRAS, 342, 759
Aretxaga, I., Hughes, D. H., & Dunlop, J. S. 2005, MNRAS, 358, 1240
Barger A.J., Cowie L.L., Sanders D.B., Fulton E., Tanigushi Y., Sato Y., Kawara K., Okuda H., 1998, Nat, 394, 248
Barger, A.J., Cowie, L.L., & Sanders, D.B. 1999, ApJ, 518, L5
Bertoldi F., Menten K. M., Kreysa E., Carilli C. L., Owen F., 2000, 24th meeting of the IAU, Joint Discussion 9, Manchester, England.
Blain, A. W., Smail, I., Ivison, R. J., Kneib, J.-P., & Frayer, D.T. 2002, Phys. Rep., 369, 111
Borys C., Chapman S.C., Halpern M., Scott D., 2003, MNRAS, 344, 385
Carilli C.L. & Yun M.S., 1999, ApJ, 513, L13
Carilli C.L. & Yun M.S., 2000, ApJ, 530, 618
Casoli F., Dickey J., Kazës I., Boselli A., Gavazzi G., Jore K., 1996, A&AS, 116, 193
Chapman S.C., et al., 2003a, Nature, 422, 695
Chapman, S.C. et al. 2003b, ApJ, 585, 57
Chapman S.C., et al., 2005, ApJ, 622, 772
Chini R., Krügel E., Lemke R., 1996, A&AS, 118, 47
Cowie, L.L., Barger, A.J., & Kneib, J.-P. 2002, AJ, 123, 2197
Dey, A., et al. 1999, ApJ, 519, 610
Downes, D. & Solomon P.M. 1998, ApJ, 507, 615
Downes D., et al., 1999, A&A, 347, 809
Dunlop J., et al., 2004, MNRAS, 350, 769
Dunne L., Clements D.L., Eales S.A., 2000a, MNRAS, 319, 813
Dunne L., Eales S.A., Edmunds M., Ivison R.J., Alexander P., Clements D.L., 2000b, MNRAS, 315, 115
Frayer, D.T. et al. 1998, ApJ, 506, L7
Frayer, D.T. et al. 1999, ApJ, 514, L13
Genzel, R. et al. 2003, ApJ, 584, 633
Greve T.R., Ivison R.J., Bertoldi F., Stevens J.A., Dunlop J.S., Lutz D., Carilli C.L., 2004, MNRAS, 354, 779
Greve T.R. et al. 2005, MNRAS, 359, 1165
Hainline, L.J., Blain, A.W., Greve, T.R., Chapman, S.C., Smail, I., & Ivison, R.J., 2006, ApJ, 650, 614
Helou, G., et al., 1985, ApJ, 298, L7
Hughes D.H., et al., 1998, Nat, 394, 241
Hughes D.H., et al., 2002, MNRAS, 335, 871
Hunt, L.K. & Maiolino, R., 2005, ApJ, 626, L15
Isaak, K., Chandler, C., & Carilli, C. 2004, MNRAS, 348, 1035
Ivison, R.J. et al. 1998, MNRAS, 298, 583
Jewell, P.R., & Prestage, R.M. 2004, Proc.
SPIE, 5489, 312
Kneib, J.-P. et al. 2005, A&A, 434, 819
Laurent, G.T. et al. 2005, ApJ, 623, 742
Lavezzi T. E., Dickey J. M., 1998, AJ, 115, 405
Maiolino R., Ruiz M., Rieke G. H., Papadopoulos P., 1997, ApJ, 485, 552
Neri R. et al. 2003, ApJ, 597, L113
Rengarajan T.N. & Takeuchi T.T., 2001, PASJ, 53, 433
Riechers, D.A. et al. 2006, ApJ, 650, 604
Sanders D. B., Mirabel I. F., 1985, ApJ, 298, L31
Sanders D. B., Young J. S., Soifer B. T., Schloerb F. P., Rice W. L., 1986, ApJ, 305, L45
Sanders D. B., Scoville N. Z., Soifer B. T., 1991, ApJ, 370, 158
Scott S. E. et al. 2002, MNRAS, 331, 817
Sheth, K., et al. 2004, ApJ, 614, L5
Smail I., Ivison R.J., Blain A.W., 1997, ApJ, 490, L5
Smail I., Ivison R.J., Blain A.W. & Kneib, J.-P. 1998, ApJ, 507, L21
Solomon P. M., Downes D., & Radford S. J. E., 1992, ApJ, 398, L29
Solomon P. M., Downes D., Radford S. J. E., Barrett J. W., 1997, ApJ, 478, 144
Solomon P. M., & Vanden Bout P. A., 2005, ARA&A, 43, 677
Spergel D. N., et al., 2003, ApJS, 148, 175
Spergel D. N., et al., 2006, *astro-ph/0603449*
Tacconi, L. et al. 2006, ApJ, 640, 228
Vanden Bout P. A., Solomon P. M. & Maddalena R.J., 2004, ApJ, 614, L97
Weiss A., Downes D., Walter F., Henkel C., 2005, A&A, 440, L45
Wiklind T., 2003, ApJ, 588, 736
Yao, L., Seaquist, E.R., Kuno, N., & Dunne, L. 2003, ApJ, 588, 771
Young J. S., Scoville N. Z., 1991, ARA&A, 29, 581
Young J. S. et al., 1995, ApJS, 98, 219
Yun M.S. & Carilli C.L., 2002, ApJ, 568, 88

[^1]: Scottish Universities Physics Alliance

[^2]: The Green Bank Telescope is a facility of the National Radio Astronomy Observatory, operated by Associated Universities, Inc. under a Cooperative Agreement with the National Science Foundation.

[^3]: http://www.lmtgtm.org

[^4]: http://gbtidl.sourceforge.net
--- abstract: 'We present results for the corrections of order ${\alpha}^{2}(Z{\alpha})^{5}$ to the Lamb shift. We compute all the contributing Feynman diagrams in dimensional regularization and a general covariant gauge using a mixture of analytical and numerical methods. We confirm results obtained by other groups and improve their precision. Values of the 32 “master integrals” for this and similar problems are provided.' author: - Matthew Dowling - 'Jorge Mondéjar' - 'Jan H. Piclum' - Andrzej Czarnecki title: 'Radiative-nonrecoil corrections of order ${\alpha}^{2}(Z{\alpha})^{5}$ to the Lamb shift' ---

Introduction
============

Recent developments in spectroscopy have led to very precise experimental values for the $1S$ Lamb shift and the Rydberg constant [@Berkeland:1995zz; @Weitz:1995zz; @Bourzeix:1996zz; @Udem:1997zz; @Schwob:1999zz], so that now the Lamb shift provides the best test of Quantum Electrodynamics for an atom. These achievements have spurred great theoretical efforts aimed at matching the current experimental accuracy (for a review of the present status and recent developments in the theory of light hydrogenic atoms, see [@Eides:2000xc]). The theoretical prediction is expressed in terms of three small parameters: $Z\alpha$ describing effects due to the binding of an electron to a nucleus of atomic number $Z$; $\alpha$ (frequently accompanied by $1/\pi$) from electron self-interactions; and the ratio of electron to nucleus masses. The Lamb shift is of the order $\alpha\left(Z\alpha\right)^{4}$; all corrections through the second order in the small parameters are known, as well as some of the third order [@Eides:2006hg]. Another source of corrections is the spatial distribution of the nuclear charge. Even for hydrogen, the experimental uncertainty in the measurement of the proton root mean square charge radius poses an obstacle for further theoretical progress.
Fortunately, measurements can be performed also with muonic hydrogen, whose spectrum is much more sensitive to the proton radius. A comparison of the theoretical prediction [@Pachucki:1996] and anticipated new measurements [@muonicH] is expected to soon improve the knowledge of this crucial parameter. In this paper we focus on the second-order radiative-nonrecoil contributions to the Lamb shift of order ${\alpha}^{2}(Z{\alpha})^{5}$. The total result for the corrections of this order was presented first in [@Pachucki:1994zz] and improved in [@Eides:1995gy; @Eides:1995ey]. Our full result is compatible with the previous ones and has better precision. When comparing contributions from individual diagrams with [@Eides:1995ey], however, we find small discrepancies in some cases. In Section \[eval\] we present the details of our approach, and in Section \[res\] we present our results. In Appendix \[app\] we show the results for the master integrals, and in Appendix \[appB\] new analytic results for two diagrams.

Evaluation {#eval}
==========

We consider an electron of mass $m$ orbiting a nucleus of mass $M$ and atomic number $Z$, where $Z$ is assumed to be of such a size that $Z{\alpha}$ is a reasonable expansion parameter. We are interested in corrections to the Lamb shift of order ${\alpha}^{2}(Z{\alpha})^{5}$ and leading order in $m/M$, given by $$\delta E=-|\psi_{n}(0)|^{2}\mathcal{M}^{(2,2,0)}(eN\to eN)\,,$$ where $|\psi_{n}(0)|^{2}=(Z{\alpha}\mu)^{3}/(\pi n^{3})$ is the squared modulus of the wave function of an $S$ bound state with principal quantum number $n$ ($\mu$ is the reduced mass of the system), and $\mathcal{M}^{(2,2,0)}(eN\to eN)$ is the momentum space representation of the amplitude of the interaction between the electron and the nucleus at orders ${\alpha}^{2}(Z{\alpha})^{2}$ and $(m/M)^{0}$. Both particles are considered to be at rest and on their mass shell [@Peskin].
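The quoted $|\psi_{n}(0)|^{2}=(Z\alpha\mu)^{3}/(\pi n^{3})$ can be checked symbolically with sympy's hydrogen wavefunctions, which work in Bohr-radius units; restoring units multiplies by $(\mu\alpha)^{3}$. A quick sketch (not part of the paper's calculation):

```python
from sympy import symbols, pi, simplify
from sympy.physics.hydrogen import R_nl

r, Z = symbols('r Z', positive=True)

# |psi_n00(0)|^2 = R_n0(0)^2 * |Y_00|^2 = R_n0(0)^2 / (4*pi).  In units of
# the (reduced-mass) Bohr radius this is Z^3/(pi n^3); multiplying by
# (mu*alpha)^3 restores (Z*alpha*mu)^3/(pi n^3).
for n in (1, 2, 3):
    psi0_sq = R_nl(n, 0, r, Z).subs(r, 0) ** 2 / (4 * pi)
    assert simplify(psi0_sq - Z**3 / (pi * n**3)) == 0
```

Only $S$ states contribute here because the $l>0$ radial wavefunctions vanish at the origin.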
The correction $\delta E$ is given by the sum of all the three-loop diagrams presented in Figs. \[vacuum\] and \[other\]. In these figures, the continuous line represents the electron, and the dashed line represents the interaction with the nucleus. The reason for this is that for our purposes this interaction can be replaced by an effective propagator. In all diagrams, the leading order in $m/M$ comes from the region where all the loop momenta scale like $m$. The part of the diagrams representing the interaction between the electron and the nucleus at order $(Z{\alpha})^{2}$ is given by the sum of the direct and crossed two-photon exchange shown in Fig. \[graphdelta\]. If $k$ and $N$ are the loop and nucleus momenta, respectively, and $k^{2}\sim m^{2}\ll N^{2}=M^{2}$, the sum of the nucleus propagators can be approximated at leading order by $$\begin{aligned} & & \frac{1}{(N+k)^{2}-M^{2}+i\epsilon} + \frac{1}{(N-k)^{2}-M^{2}+i\epsilon} {\nonumber}\label{delta}\\ & & \simeq\frac{1}{2N\cdot k+i\epsilon} - \frac{1}{2N\cdot k-i\epsilon}=-i\pi\delta(N\cdot k)\,.\end{aligned}$$ Since the nucleus is considered to be at rest, this gives us a $\delta(Mk^{0})$. Together with the propagators of the two photons, this constitutes the effective propagator. We used dimensional regularization, and renormalized our results using the on-shell renormalization scheme. For all the photon propagators in Figs. \[vacuum\] and \[other\] we used a general covariant $R_{\xi}$ gauge. The overall cancellation of the dependence on the gauge parameter in the final result provided us with a good check for our calculations. Since the gauge invariance of the sum of the subdiagrams in Fig. \[graphdelta\] is trivially and independently fulfilled, for the photonic part of the effective propagator we used the Feynman gauge. ![\[vacuum\]The different sets of vacuum polarization diagrams. 
Each set represents the drawn diagram plus all the possible permutations of its pieces.](Vacuum){width="\columnwidth"} ![\[other\]The diagrams involving a two-loop electron self-interaction and vertex corrections.](Other){width="\columnwidth"} ![\[graphdelta\]The sum of the direct and crossed diagrams is approximated by an effective propagator (the double line represents the propagator of the nucleus).](Delta){width="\columnwidth"} Since we are considering free asymptotic states with independent spins, the Dirac structures of the electron and the nucleus factorize, and we simplify them by inserting each of the structures between the spinors of the initial and final states and averaging over the spins of the initial states. We use the program `qgraf` [@Nogueira:1991ex] to generate all of the diagrams, and the packages `q2e` and `exp` [@Harlander:1997zb; @Seidensticker:1999bb] to express them as a series of vertices and propagators that can be read by the `FORM` [@Vermaseren:2000nd] package `MATAD 3` [@Steinhauser:2000ry]. Finally, `MATAD 3` is used to represent the diagrams in terms of a set of scalar integrals using custom-made routines. In this way, we come to represent the amplitude $\mathcal{M}$ in terms of about 18000 different scalar integrals. These integrals can be expressed in terms of a few master integrals by means of integration-by-parts (IBP) identities [@Chetyrkin:1981qh]. Using the so-called Laporta algorithm [@Laporta:1996mq; @Laporta:2001dd] as implemented in the `Mathematica` package `FIRE` [@Smirnov:2008iw], we find 32 master integrals.[^1] Since the program `FIRE` deals only with standard propagators, when using it we worked only with one of the nucleus propagators, instead of the Dirac delta of the effective propagator. That is, instead of working with $\delta(k^{0})$, we worked with $1/(2k\cdot N+i\epsilon)$, for example. 
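The IBP method mentioned above can be illustrated in its simplest setting, the one-loop Euclidean tadpole (a textbook example, not one of the 32 master integrals of this calculation):

```python
from sympy import symbols, gamma, gammasimp, simplify, pi

d, n, m2 = symbols('d n m2', positive=True)

# Euclidean one-loop tadpole: T(a) = Int d^d k / (k^2 + m2)^a
#                                  = pi^(d/2) * Gamma(a - d/2)/Gamma(a) * m2^(d/2 - a)
def T(a):
    return pi**(d/2) * gamma(a - d/2) / gamma(a) * m2**(d/2 - a)

# IBP identity: 0 = Int d^d k  d/dk_mu [ k_mu / (k^2 + m2)^n ]
#                 = d*T(n) - 2*n*( T(n) - m2*T(n+1) ),
# so any power reduces to the single master integral T(1):
lhs = T(n + 1)
rhs = (2*n - d) / (2 * n * m2) * T(n)
assert simplify(gammasimp(lhs / rhs) - 1) == 0
```

The Laporta algorithm automates exactly this kind of reduction for large systems of such identities, which is what `FIRE` performs here.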
Working with just one of the two nucleus propagators is enough for this purpose, as the IBP method is insensitive to the $i\epsilon$ prescription (recall from Eq. (\[delta\]) that in our approximation this is the only difference between the two propagators). Since each diagram in Figs. \[vacuum\] and \[other\] represents the difference of two integrals that differ only in the nucleus propagator, when applying the IBP method we can set to zero any resulting integral in which the propagator $1/(2k\cdot N+i\epsilon)$ disappears. This can be done because the same integral with the propagator $1/(2k\cdot N-i\epsilon)$ instead would give exactly the same contribution, so the difference between the two vanishes. Once the reduction to master integrals is complete, we can simply substitute back the delta function in place of the nucleus propagator. In order to calculate the master integrals, we turned the expressions in Appendix \[app\] into a representation in terms of Feynman parameters. The procedure we then followed in most cases was to use a Mellin-Barnes representation [@Smirnov:1999gc; @Tausk:1999vh] to break up sums of Feynman parameters raised to non-integer powers and transform the integrals into contour integrals of Gamma functions along the imaginary axis. In some cases, we were able to obtain analytic results. Otherwise, we used the `Mathematica` packages `MB` [@Czakon:2005rk] and `MBresolve` [@Smirnov:2009up] to perform a numerical calculation. For integrals $I_{9}$, $I_{10}$, $I_{14}$, $I_{15}$, $I_{27}$, $I_{28}$, $I_{31}$, and $I_{32}$ (cf. Appendix \[app\]), the Mellin-Barnes representation was too cumbersome for a numerical evaluation. In these cases we used the `Mathematica` package `FIESTA 1.2.1` [@Smirnov:2008py] with integrators from the `CUBA` library [@Hahn:2004fe] to perform numerical computations using sector decomposition [@Binoth:2000ps; @Heinrich:2008si]. Like `FIRE`, `FIESTA` can only process standard propagators as input.
This means that we had to use the momentum representation of the integrals with the nucleus propagators instead of the delta function. We calculated separately the integrals containing $1/(2N\cdot k+i\epsilon)$ and the ones containing $1/(-2N\cdot k+i\epsilon)$, and added the two results. We checked the method by computing with `FIESTA` some integrals we had already found with `MB` and `MBresolve`; the results always agreed. There was one case, integral $I_{32}$, where the `FIESTA` result for the integral with $1/(-2N\cdot k+i\epsilon)$ was numerically unstable. Fortunately, in this case we could find a representation in terms of Feynman parameters that we could compute directly using `CUBA`, without further treatment. This was possible because the integral is finite, and the representation was free of spurious divergences. We cross-checked this result using a beta version of `FIESTA 2` [@Smirnov:2009pb], which did not produce the instabilities we encountered in the former version. We performed an additional cross-check of our results by changing the basis of integrals. To do this, we took one of the integrals we computed with `FIESTA` and used the IBP method to express it in terms of a similar integral of our choice (the same as the original one, but with some propagator(s) raised to different powers) plus other master integrals we already knew. We then computed the new integral with `FIESTA` and checked whether the final result for the Lamb shift (or for individual diagrams) agreed with the calculation in the old basis. Since changing the basis modifies the coefficients of all the integrals involved in the change, the agreement of the results obtained with different bases is a very good cross-check of our calculations. This cross-check was performed for several integrals. In particular, we changed integrals $I_{19}$ and $I_{27}$, which are the ones limiting our precision, and integrals $I_{15}$ and $I_{32}$.
Since the last two integrals contain most of the propagators for integral types $F$ and $G$, the corresponding changes of basis affect the coefficients of most of the other integrals of the respective type.

Results {#res}
=======

Our final results for the separate contributions from the vacuum-polarization diagrams of Fig. \[vacuum\] and the diagrams $a$–$s$ of Fig. \[other\] are $$\begin{aligned} \delta E_{vac.} & = & \frac{{\alpha}^{2}(Z{\alpha})^{5}}{\pi n^{3}}{\left(}\frac{\mu}{m}{\right)}^{3}m\,[0.86281422(3)]\,,\label{vac}\\ \delta E_{a-s} & = & \frac{{\alpha}^{2}(Z{\alpha})^{5}}{\pi n^{3}}{\left(}\frac{\mu}{m}{\right)}^{3}m\,[-7.72381(4)]\,.\label{as}\end{aligned}$$ The most precise results to date for the vacuum polarization diagrams and for diagrams $a$–$s$ were published in [@Pachucki:1993zz] (cf. [@Eides:2000xc] for references to partial results) and [@Eides:1995ey], respectively. Our results are compatible with them and improve the precision by two orders of magnitude in the case of $\delta E_{vac.}$ and by a little over one order of magnitude for $\delta E_{a-s}$. The total result reads $$\delta E=\frac{{\alpha}^{2}(Z{\alpha})^{5}}{\pi n^{3}}{\left(}\frac{\mu}{m}{\right)}^{3}m\,[-6.86100(4)]\,,\label{total}$$ and the corresponding energy shifts for the $1S$ and the $2S$ states in hydrogen are $$\begin{aligned} \delta E_{1S} & = & -296.866(2)\,\text{kHz}\,,\\ \delta E_{2S} & = & -37.1082(3)\,\text{kHz}\,.\end{aligned}$$

  Set   This paper               Refs. [@Pachucki:1993zz; @Eides:1996uj]
  ----- ------------------------ -----------------------------------------
  I     $-0.07290996446926(4)$   $-0.0729098(3)$
  II    $0.61133839226\dots$     $0.61133839226\dots$
  III   $0.50814858506\dots$     $0.50814858506\dots$
  IV    $-0.12291623(3)$         $-0.122915(3)$
  V     $-23/378$                $-23/378$

  : \[tvac\] Comparison between our results for the different vacuum-polarization sets (in Fried-Yennie gauge) and those of [@Pachucki:1993zz; @Eides:1996uj].
Numbers ending in an ellipsis indicate an analytic result, which we show in Appendix \[appB\].

  Diagram   This paper                 Ref. [@Eides:1995ey]
  --------- -------------------------- ----------------------
  $a$       0                          0
  $b$       $2.955090809\dots$         $2.9551(1)$
  $c$       $-2.22312657\dots$         $-2.2231(1)$
  $d$       $-5.2381153272259(2)$      $-5.238023(56)$
  $e$       $5.0561650638185(4)$       $5.056278(81)$
  $f$       $6\ln2-207/40$             $-1.016145(21)$
  $g$       $6\ln2-147/80-\pi^{2}/4$   $-0.1460233(52)$
  $h$       $153/80$                   $153/80$
  $i$       $-5.51731(2)$              $-5.51683(34)$
  $j$       $-7.76838(1)$              $-7.76815(17)$
  $k$       $1.9597582447795(2)$       $1.959589(33)$
  $l$       $1.74834(4)$               $1.74815(38)$
  $m$       $1.87510512(6)$            $1.87540(17)$
  $n$       $-1.30570289(7)$           $-1.30584(18)$
  $o$       $-12.06904(9)$             $-12.06751(47)$
  $p$       $6.13815(1)$               $6.13776(25)$
  $q$       $-7.52425(2)$              $-7.52453(34)$
  $r$       $14.36962(7)$              $14.36733(44)$
  $s$       $-0.9304766935602(5)$      $-0.930268(72)$

  : \[tother\] Comparison between our results for diagrams $a$–$s$ (in Fried-Yennie gauge) and those of [@Eides:1995ey]. Numbers ending in an ellipsis indicate an analytic result, which we show in Appendix \[appB\].

Choosing the Fried-Yennie gauge [@Fried:1958zz; @Adkins:1993qm], we also compared the results from the different sets of vacuum polarization diagrams with those of [@Pachucki:1993zz; @Eides:1996uj], and the results from the individual diagrams $a$–$s$ with those of [@Eides:1995ey]. Our results for the vacuum polarization graphs and diagrams $a$–$s$ are presented in Tables \[tvac\] and \[tother\], respectively. All numbers in the tables are to be multiplied by the prefactor ${\alpha}^{2}(Z{\alpha})^{5}/(\pi n^{3})(\mu/m)^{3}m$ (note the difference in normalization in [@Pachucki:1993zz]). We found new analytic results for four diagrams. The results for diagrams $f$ and $g$ are given in Table \[tother\], while the results for diagrams $b$ and $c$, being too lengthy for the table, are presented in Appendix \[appB\]. For completeness, the known analytic results for sets II and III of the vacuum polarization diagrams are given in Appendix \[appB\] as well.
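As a quick numerical cross-check, the new closed forms quoted for diagrams $f$ and $g$ can be evaluated and compared with the purely numerical values of [@Eides:1995ey] (a minimal standard-library Python sketch; the reference values and $1\sigma$ errors are copied from Table \[tother\]):

```python
# Evaluate the closed forms for diagrams f and g from Table [tother]
# and compare with the numerical values of Eides and Shelyuto (third
# column); agreement within a few quoted standard deviations is expected.
import math

ln2 = math.log(2.0)
diagram_f = 6 * ln2 - 207 / 40                      # new analytic result
diagram_g = 6 * ln2 - 147 / 80 - math.pi ** 2 / 4   # new analytic result

ref_f, err_f = -1.016145, 2.1e-5    # -1.016145(21)
ref_g, err_g = -0.1460233, 5.2e-6   # -0.1460233(52)

assert abs(diagram_f - ref_f) < 3 * err_f
assert abs(diagram_g - ref_g) < 3 * err_g
```

Both closed forms reproduce the earlier numbers within roughly $1.5\sigma$.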
It should be mentioned that the errors of the results in Eqs. (\[vac\]), (\[as\]), and (\[total\]) are not obtained from the sum of the errors of the diagrams in Tables \[tvac\] and \[tother\]. Once we decompose the problem into the calculation of master integrals, the diagrams are no longer independent, as the same master integral contributes to several different diagrams. Thus, to find the error of our total result, we first sum all diagrams and then add the errors of the individual integrals in quadrature. We found discrepancies between our results for diagrams $a$–$s$ and those of [@Eides:1995ey]. Most of the central values in the second and third columns of Table \[tother\] lie between $1\sigma$ and $2\sigma$ apart, but in the case of diagrams $o$ and $s$ the difference is around $3\sigma$, and for diagrams $k$ and $r$ it reaches $5\sigma$ (we take as $\sigma$ the errors of the individual diagrams in the third column). We should stress again that our calculation is done using dimensional regularization, while the study of Ref. [@Eides:1995ey] was performed in four dimensions. Even though all the individual diagrams are finite, one can imagine situations where the two regularization methods give different partial results. However, we do not observe significant cancellations in the sum of the differences. Thus, it seems the differences are real although practically negligible; their sum amounts to $10^{-3}$, which is the error estimate in [@Eides:1995ey]. Hence our results agree within that error.

Summary
=======

We have applied particle theory methods to compute, in dimensional regularization and a general covariant gauge, the corrections of order ${\alpha}^{2}(Z{\alpha})^{5}$ to the Lamb shift. We have made use of IBP techniques to reduce the problem of computing all the necessary Feynman diagrams to the simpler problem of computing 32 scalar integrals.
Mellin-Barnes integral representations and sector decomposition have then allowed us to obtain analytic results for some of these integrals, and good numerical results for the rest. With this, we have been able to reproduce and improve the results from previous calculations. The techniques used here are quite general and can be applied to other multi-loop problems in atomic physics.

We are grateful to A.V. and V.A. Smirnov for providing us with a new version of `FIESTA` prior to publication. We thank M.I. Eides for useful comments. This work was supported by the Natural Sciences and Engineering Research Council of Canada. The work of J.H.P. was supported by the Alberta Ingenuity Foundation. The Feynman diagrams were drawn using `Axodraw` [@Vermaseren:1994je] and `Jaxodraw 2` [@Binosi:2008ig].

Results for the master integrals {#app}
================================

In Sec. \[eval\] we presented our method of calculating the corrections to the Lamb shift, which differs significantly from the methods used in previous calculations. One important difference is the reduction of diagrams to master integrals. Here we present our results for all master integrals. ![\[masters\] A graphic representation of the 32 master integrals. Solid and dashed lines represent massive and massless scalar propagators, respectively. The dotted double line denotes the delta function. A dot on a line signifies that the propagator is raised to a higher power. The external lines indicate the momentum $p$ of the electron flowing in and out of the diagram. The first two diagrams represent different integrals that differ only by a term in the numerator.](Masters){width="\columnwidth"} The set of master integrals is represented in Fig. \[masters\]. We have two different types of integrals.
In Euclidean space, they are defined as: $$\begin{aligned} & & F(\nu_{1},\nu_{2},\nu_{3},\nu_{4},\nu_{5},\nu_{6},\nu_{7},\nu_{8})=\frac{e^{3\gamma_{E}\epsilon}}{{\left(}\pi^{D/2}{\right)}^{3}}\int\frac{d^{D}k_{1}\, d^{D}k_{2}\, d^{D}k_{3}\;2\pi\,\delta(k_{2}^{0})}{(k_{1}^{2})^{\nu_{1}}\,(k_{2}^{2})^{\nu_{2}}\,(k_{3}^{2})^{\nu_{3}}\,[(k_{1}+p)^{2}+1]^{\nu_{4}}}{\nonumber}\\ & & \times\frac{1}{[(k_{1}+k_{2}+p)^{2}+1]^{\nu_{5}}\,[(k_{1}+k_{2}+k_{3}+p)^{2}+1]^{\nu_{6}}\,[(k_{2}+k_{3}+p)^{2}+1]^{\nu_{7}}\,[(k_{3}+p)^{2}+1]^{\nu_{8}}}\,,\\ & & G(\nu_{1},\nu_{2},\nu_{3},\nu_{4},\nu_{5},\nu_{6},\nu_{7})=\frac{e^{3\gamma_{E}\epsilon}}{{\left(}\pi^{D/2}{\right)}^{3}}\int\frac{d^{D}k_{1}\, d^{D}k_{2}\, d^{D}k_{3}\;2\pi\,\delta(k_{1}^{0})}{(k_{1}^{2})^{\nu_{1}}\,(k_{2}^{2})^{\nu_{2}}\,(k_{3}^{2})^{\nu_{3}}\,[(k_{1}+k_{2}+p)^{2}+1]^{\nu_{4}}}{\nonumber}\\ & & \times\frac{1}{[(k_{1}+k_{2}+k_{3}+p)^{2}+1]^{\nu_{5}}\,[(k_{2}+k_{3}+p)^{2}+1]^{\nu_{6}}\,[(k_{3}+p)^{2}+1]^{\nu_{7}}}\,,\end{aligned}$$ where $D=4-2\epsilon$, and $p=(i,\vec{0})$ is the momentum of the electron. The mass of the electron has been set equal to one for convenience and can be easily restored from the dimension of the integral. The factor $e^{3\gamma_{E}\epsilon}$, where $\gamma_{E}$ is the Euler-Mascheroni constant, has been introduced to suppress the dependence of the results on this constant. 
With these definitions, our results for the master integrals are: $$\begin{aligned} I_{1} & = & F(1,0,0,0,0,1,0,1)=2e^{3\gamma_{E}\epsilon}\,\frac{\Gamma(1-\epsilon)\Gamma^{2}\left(-\frac{3}{2}+2\epsilon\right)\Gamma\left(-\frac{1}{2}+\epsilon\right)\Gamma\left(-\frac{5}{2}+3\epsilon\right)}{\Gamma(-3+4\epsilon)}\,,\label{I1}\\ I_{2} & = & F(1,0,-1,0,0,1,0,1)=-2I_{1}\,,\label{I2}\\ I_{3} & = & F(0,0,0,0,1,1,1,0)=-96.174642407742494299(1)-3003.97283051743374945(1)\epsilon{\nonumber}\\ & & -16370.644886761701890(1)\epsilon^{2}-204040.09217878970569(1)\epsilon^{3}+\mathcal{O}(\epsilon^{4})\,,\label{I3}\\ I_{4} & = & F(-1,0,0,1,0,1,0,1)=128.23285654365665907(1)+4005.2971073565783326(1)\epsilon{\nonumber}\\ & & +21827.526515682269186(1)\epsilon^{2}+272053.45623838627426(1)\epsilon^{3}+\mathcal{O}(\epsilon^{4})\,,\label{I4}\\ I_{5} & = & F(0,-1,0,0,1,1,1,0)=213.37528929773859515(1)-1789.0076495990746772(1)\epsilon+\mathcal{O}(\epsilon^{2})\,,\label{I5}\\ I_{6} & = & F(1,0,1,0,0,1,0,0)=2\sqrt{\pi}e^{3\gamma_{E}\epsilon}\,\frac{\Gamma^{2}(1-\epsilon)\Gamma\left(-\frac{5}{2}+3\epsilon\right)\Gamma\left(-\frac{3}{2}+2\epsilon\right)\Gamma\left(\frac{9}{2}-5\epsilon\right)}{\Gamma(2-2\epsilon)\Gamma(3-3\epsilon)}\,,\\ I_{7} & = & F(1,0,0,0,1,0,0,1)=2\sqrt{\pi}e^{3\gamma_{E}\epsilon}\,\frac{\Gamma(-1+\epsilon)\Gamma\left(-\frac{3}{2}+2\epsilon\right)\Gamma\left(\frac{5}{2}-3\epsilon\right)\Gamma(-\frac{1}{2}+\epsilon)}{\Gamma(2-2\epsilon)}\,,\\ I_{8} & = & F(0,0,0,0,1,0,1,1)=2\sqrt{\pi}e^{3\gamma_{E}\epsilon}\,\frac{\Gamma(-1+\epsilon)\Gamma\left(-\frac{3}{2}+2\epsilon\right)\Gamma^{2}\left(-\frac{1}{2}+\epsilon\right)}{\Gamma(-1+2\epsilon)}\,,\\ I_{9} & = & F(0,1,0,0,1,1,1,1)=-\frac{8\pi^{2}}{\epsilon}-257.35053226188(1)-2952.9668342496406(4)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{10} & = & F(0,0,0,1,1,1,1,1)=-420.49901(1)+1860.837(4)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{11} & = & F(1,0,0,0,1,1,0,1)=2^{3-4\epsilon}\pi 
e^{3\gamma_{E}\epsilon}\,\frac{\Gamma{\left(}\epsilon-\frac{1}{2}{\right)}}{\cos{\left(}2\pi\epsilon{\right)}}\left[2\frac{\Gamma{\left(}\frac{5}{2}-3\epsilon{\right)}\Gamma{\left(}\epsilon{\right)}}{\Gamma{\left(}4-4\epsilon{\right)}}\,{}_{3}F_{2}{\left(}1,\frac{5}{2}-3\epsilon,\epsilon;\frac{3}{2},4-4\epsilon;1{\right)}\right.\nonumber \\ & & \left.-\sqrt{\pi}\frac{\Gamma{\left(}1-\epsilon{\right)}\Gamma{\left(}3\epsilon-\frac{3}{2}{\right)}}{\Gamma{\left(}\frac{5}{2}-2\epsilon{\right)}\Gamma{\left(}2\epsilon{\right)}}\,{}_{3}F_{2}{\left(}1,1-\epsilon,3\epsilon-\frac{3}{2};\frac{5}{2}-2\epsilon,2\epsilon;1{\right)}\right]\,,\label{I11}\\ I_{12} & = & F(1,1,0,0,1,1,0,1)=-\frac{4\pi^{2}}{\epsilon}-24\pi^{2}-\frac{4\pi^{4}}{3}+\pi^{2}{\left(}-116-\frac{59}{3}\pi^{2}+32\ln2+100\zeta(3){\right)}\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{13} & = & F(1,1,0,1,0,1,0,1)=-263.74028719521945979(1)+1741.1125810306205720(1)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{14} & = & F(1,0,0,0,1,1,1,1)=-362.8560(1)+\mathcal{O}(\epsilon)\,,\\ I_{15} & = & F(1,1,0,0,1,1,1,1)=36.969282(2)+\mathcal{O}(\epsilon)\,,\\ I_{16} & = & F(1,4,0,0,1,0,1,1)=\pi^{2}\left(-\frac{513}{128\epsilon}-\frac{11077}{768}-\frac{571}{16}\ln2+32\sqrt{5}\ln\left(\frac{1+\sqrt{5}}{2}\right)\right){\nonumber}\\ & & -1889.3810189605726842(1)\epsilon-2199.2559561980712031(1)\epsilon^{2}+\mathcal{O}(\epsilon^{3})\,,\\ I_{17} & = & F(1,0,0,0,2,0,2,1)=\frac{32\pi^{2}}{\sqrt{5}}\ln\left(\frac{1+\sqrt{5}}{2}\right)-683.43054120051764110(1)\epsilon{\nonumber}\\ & & +5647.2496334930969112(1)\epsilon^{2}+\mathcal{O}(\epsilon^{3})\,,\\ I_{18} & = & F(1,4,1,0,1,0,1,1)=\pi^{2}\left[\frac{343}{512\epsilon}+\frac{125257}{46080}-\frac{2}{3}\pi^{2}+\frac{169}{192}\ln2+16\ln^{2}2-48\ln^{2}{\left(}\frac{1+\sqrt{5}}{2}{\right)}\right]+\mathcal{O}(\epsilon)\,,\\ I_{19} & = & F(1,0,1,0,1,1,1,0)=-293.4480(2)+\mathcal{O}(\epsilon)\,,\\ I_{20} & = & 
F(1,0,0,1,1,0,1,1)=\pi^{2}\left[-\frac{2}{\epsilon}-2-\frac{8}{3}\pi^{2}+16\sqrt{5}\ln{\left(}\frac{1+\sqrt{5}}{2}{\right)}+32\ln^{2}{\left(}\frac{1+\sqrt{5}}{2}{\right)}\right]{\nonumber}\\ & & +1394.0754186124348755(1)\epsilon+\mathcal{O}(\epsilon^{2})\,,\label{I20}\\ I_{21} & = & F(1,1,1,0,0,1,0,1)=2\pi^{2}-\frac{4\pi^{4}}{3}+\pi^{2}{\left(}44-4\pi^{2}+80\zeta(3){\right)}\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{22} & = & F(0,0,0,1,1,0,1,1)=-\frac{128\pi^{2}}{3}-\pi^{2}\left(\frac{1792}{3}+\frac{256}{3}\pi-\frac{1024}{3}\ln2\right)\epsilon+\mathcal{O}(\epsilon^{2})\,,\label{I22}\\ I_{23} & = & F(1,0,1,0,1,0,1,0)\nonumber \\ & = & \frac{2\pi^{3}e^{3\gamma_{E}\epsilon}\,}{\Gamma^{2}{\left(}2-2\epsilon{\right)}\Gamma{\left(}\frac{3}{2}-\epsilon{\right)}}\left[2^{2\epsilon-2}\frac{\Gamma{\left(}1-\epsilon{\right)}\Gamma{\left(}\epsilon-\frac{1}{2}{\right)}}{\sin{\left(}\pi\epsilon{\right)}\cos{\left(}4\pi\epsilon{\right)}}{\left(}\frac{\Gamma{\left(}\frac{7}{2}-5\epsilon{\right)}\sin{\left(}\pi\epsilon{\right)}}{\Gamma{\left(}\frac{5}{2}-3\epsilon{\right)}\sin{\left(}2\pi\epsilon{\right)}\cos{\left(}3\pi\epsilon{\right)}}-\frac{\Gamma{\left(}3\epsilon-\frac{3}{2}{\right)}}{\Gamma{\left(}5\epsilon-\frac{5}{2}{\right)}\cos{\left(}2\pi\epsilon{\right)}}{\right)}\right.\nonumber \\ & & +\frac{\sqrt{\pi}\,\Gamma{\left(}2-2\epsilon{\right)}\Gamma{\left(}2\epsilon-\frac{1}{2}{\right)}}{\Gamma{\left(}2-\epsilon{\right)}\Gamma{\left(}3\epsilon-\frac{1}{2}{\right)}\sin{\left(}\pi\epsilon{\right)}\cos{\left(}\pi\epsilon{\right)}\cos{\left(}3\pi\epsilon{\right)}}\,{}_{3}F_{2}{\left(}1,2-2\epsilon,2\epsilon-\frac{1}{2};2-\epsilon,3\epsilon-\frac{1}{2};1{\right)}\nonumber \\ & & 
\left.-\frac{2^{1-2\epsilon}\pi\,\Gamma{\left(}\frac{5}{2}-3\epsilon{\right)}}{\Gamma{\left(}\frac{5}{2}-2\epsilon{\right)}\Gamma{\left(}\frac{1}{2}+\epsilon{\right)}\sin{\left(}2\pi\epsilon{\right)}\cos{\left(}\pi\epsilon{\right)}\cos{\left(}2\pi\epsilon{\right)}}\,{}_{3}F_{2}{\left(}1,\frac{5}{2}-3\epsilon,\epsilon;\frac{5}{2}-2\epsilon,2\epsilon;1{\right)}\right]\,,\\ I_{24} & = & G(0,1,2,1,0,1,0)=\frac{2\pi^{2}}{\epsilon}-162.745878930257(1)+640.681562239(2)\epsilon{\nonumber}\\ & & -9490.745115169417(3)\epsilon^{2}+\mathcal{O}(\epsilon^{3})\,,\\ I_{25} & = & G(2,1,1,1,0,1,0)=-\frac{4\pi^{2}}{\epsilon}-192.3546921335253(1)-2297.18352848038(1)\epsilon{\nonumber}\\ & & -10356.58582995624(1)\epsilon^{2}+\mathcal{O}(\epsilon^{3})\,,\\ I_{26} & = & G(1,1,1,1,0,1,0)=-\frac{4\pi^{2}}{\epsilon}-244.4995291143211(3)-2339.54007847666(2)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{27} & = & G(0,1,1,2,0,1,1)=136.8086023(2)-907.048(2)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{28} & = & G(0,1,1,1,1,1,0)=-280.62418(1)+734.494(1)\epsilon+\mathcal{O}(\epsilon^{2})\,,\\ I_{29} & = & G(1,1,1,0,1,1,1)=118.63826101784(1)+\mathcal{O}(\epsilon)\,,\\ I_{30} & = & G(0,1,1,0,1,1,0)=-10\sqrt{\pi}e^{3\gamma_{E}\epsilon}\,\frac{\Gamma\left(\epsilon\right)\Gamma^{2}{\left(}1-\epsilon{\right)}\Gamma\left(\frac{5}{2}-5\epsilon\right)\Gamma{\left(}-\frac{3}{2}+3\epsilon{\right)}}{\Gamma{\left(}2-2\epsilon{\right)}\Gamma\left(\frac{7}{2}-4\epsilon\right)}{\nonumber}\\ & & \times{}_{3}F_{2}\left(\frac{7}{2}-5\epsilon,\frac{3}{2}-\epsilon,-\frac{1}{2}+\epsilon;\frac{7}{2}-4\epsilon,\frac{1}{2}+\epsilon;1\right)\,,\\ I_{31} & = & G(1,1,1,1,1,1,0)=49.3616(1)+\mathcal{O}(\epsilon)\,,\\ I_{32} & = & G(1,1,1,1,1,1,1)=26.272804(6)+291.1097(1)\epsilon+\mathcal{O}(\epsilon^{2})\,,\end{aligned}$$ where $\zeta$ denotes Riemann’s zeta function, and ${}_{3}F_{2}$ is a generalized hypergeometric function. 
The latter can be expanded in $\epsilon$ with the help of the `Mathematica` package `HypExp 2` [@Huber:2007dx]. The relation between integrals $I_{1}$ and $I_{2}$ expressed in Eq. (\[I2\]) is not evident from their respective representations. It becomes clear when checking the cancellation of the gauge-parameter dependence in the sum of all diagrams. If the 32 integrals presented here formed an irreducible basis, the gauge dependence of the coefficient of each integral would vanish independently. This does not happen for the coefficients of integrals $I_{1}$ and $I_{2}$, which means that the two integrals are related. Demanding the cancellation of the gauge dependence yields Eq. (\[I2\]). We checked this relation by computing the analytic solution for $I_{2}$ explicitly. There also appears to be a relation between integrals $I_{3}$ and $I_{4}$, although the gauge dependence does not give us any hint in this case. By demanding the cancellation of poles in several diagrams, one can find the following relation between the first three terms of $I_{3}$ and $I_{4}$, $$I_{4}=-\frac{4}{3}I_{3} + \mathcal{O}(\epsilon^3)\,.$$ The relation, however, seems to be valid to all orders in the $\epsilon$ expansion. We checked it numerically up to order $\epsilon^{4}$, but we could not find an analytic proof of it. Integrals $I_{3}$–$I_{5}$, $I_{12}$, $I_{13}$, $I_{16}$, $I_{17}$, and $I_{20}$–$I_{22}$ can be represented as one-fold Mellin-Barnes integrals. We show only numerical results, with 20-digit precision, which is more than enough for our purposes. However, these integrals can easily be evaluated with a precision of 100 digits or more. With this kind of precision it is possible to find analytic results using the `PSLQ` algorithm [@pslq]. In this way, we determined the $\mathcal{O}(\epsilon)$ term of integrals $I_{12}$ and $I_{21}$.
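The order-by-order agreement of this relation can be verified directly from the 20-digit coefficients quoted in Eqs. (\[I3\]) and (\[I4\]); a minimal standard-library Python sketch (the numbers below are copied from those equations, each carrying a (1)-in-the-last-digit uncertainty):

```python
# Check the conjectured relation I_4 = -(4/3) I_3 order by order in epsilon,
# using the 20-digit expansion coefficients quoted in Eqs. (I3) and (I4).
# Decimal arithmetic preserves all quoted digits.
from decimal import Decimal, getcontext

getcontext().prec = 30

I3 = [Decimal("-96.174642407742494299"),
      Decimal("-3003.97283051743374945"),
      Decimal("-16370.644886761701890"),
      Decimal("-204040.09217878970569")]
I4 = [Decimal("128.23285654365665907"),
      Decimal("4005.2971073565783326"),
      Decimal("21827.526515682269186"),
      Decimal("272053.45623838627426")]

for c3, c4 in zip(I3, I4):
    # The residual must be compatible with the (1)-in-the-last-digit errors.
    assert abs(c4 + Decimal(4) / 3 * c3) <= Decimal("1e-14") * abs(c4)
```

At higher precision the same coefficients can be fed to an integer-relation algorithm such as `PSLQ` (available, e.g., as `mpmath.pslq`) to search for analytic forms; the snippet here is only meant to make the $-4/3$ pattern explicit.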
The analytic expression for the $\mathcal{O}(\epsilon^{0})$ term in $I_{20}$ was obtained using the analytic result for set II of vacuum-polarization diagrams presented in [@Eides:1996uj]. Likewise, the analytic expression for the $\mathcal{O}(\epsilon)$ term in $I_{22}$ was extracted from the analytic result for set III found in [@Pachucki:1993zz]. As mentioned above, we were able to calculate these integrals numerically to 100-digit precision and confirm the analytic expressions with `PSLQ`.

Analytic results {#appB}
================

Here we show the analytic results for diagrams $b$ and $c$ from Fig. \[other\]: $$\begin{aligned} \lefteqn{\text{Diagram }b =} && {\nonumber}\\ && \frac{111}{8} - \pi^{2}-9\ln2+24\ln^{2}2 + \frac{48}{\sqrt{5}}\ln{\left(}\frac{1+\sqrt{5}}{2}{\right)}{\nonumber}\\ && -72\ln^{2}{\left(}\frac{1+\sqrt{5}}{2}{\right)},\\ \lefteqn{\text{Diagram }c =} && {\nonumber}\\ && -\frac{352897}{27000} + \frac{31}{45}\pi^{2} - \frac{643}{225}\ln2 - \frac{248}{15}\ln^{2}2 {\nonumber}\\ && +\frac{104}{9\sqrt{5}}\ln{\left(}\frac{1+\sqrt{5}}{2}{\right)}+ \frac{248}{5}\ln^{2}{\left(}\frac{1+\sqrt{5}}{2}{\right)}.\end{aligned}$$ For completeness, we also give here the analytic results for sets II and III of the vacuum polarization diagrams, found in [@Eides:1996uj] and [@Pachucki:1993zz], respectively: $$\begin{aligned} \text{Set II} & = & \frac{67282}{6615} - \frac{2}{9}\pi^{2} + \frac{628}{63}\ln2 {\nonumber}\\ && -\frac{872}{63}\sqrt{5}\ln{\left(}\frac{1+\sqrt{5}}{2}{\right)}+ \frac{8}{3}\ln^{2}{\left(}\frac{1+\sqrt{5}}{2}{\right)}\,, {\nonumber}\\ \\ \text{Set III} & = & \frac{15647}{13230}-\frac{25}{63}\pi+\frac{52}{63}\ln2\,.\end{aligned}$$

[99]{} D. J. Berkeland, E. A. Hinds and M. G. Boshier, Phys. Rev. Lett. **75**, 2470 (1995). M. Weitz *et al.*, Phys. Rev. A **52**, 2664 (1995). S. Bourzeix *et al.*, Phys. Rev. Lett. **76**, 384 (1996). T. Udem, A. Huber, B. Gross, J. Reichert, M. Prevedelli, M. Weitz and T. W. H[ä]{}nsch, Phys. Rev. Lett.
**79**, 2646 (1997). C. Schwob *et al.*, Phys. Rev. Lett. **82**, 4960 (1999). M. I. Eides, H. Grotch and V. A. Shelyuto, Phys. Rept. **342**, 63 (2001) \[arXiv:hep-ph/0002158\]. K. Pachucki, Phys. Rev. A **63**, 042503 (2001); M. I. Eides and V. A. Shelyuto, Can. J. Phys. **85**, 509 (2007) \[arXiv:physics/0612244\]. K. Pachucki, Phys. Rev. A **53**, 2092 (1996). B. Lauss, Nucl. Phys. A **827**, 401C (2009) \[arXiv:0902.3231 \[nucl-ex\]\]; A. Antognini *et al*., AIP Conf. Proc. **796**, 253 (2005). K. Pachucki, Phys. Rev. Lett. **72**, 3154 (1994). M. I. Eides and V. A. Shelyuto, Pisma Zh. Eksp. Teor. Fiz. **61**, 465 (1995) \[JETP Lett. **61**, 478 (1995)\]. M. I. Eides and V. A. Shelyuto, Phys. Rev. A **52**, 954 (1995) \[arXiv:hep-ph/9501303\]. For a quantum field theoretical treatment of non-relativistic bound states see, for example, Sec. 5.3 of M. E. Peskin and D. V. Schroeder, [*An Introduction to Quantum Field Theory*]{} (Westview Press, Boulder, 1995). P. Nogueira, J. Comput. Phys. **105**, 279 (1993). R. Harlander, T. Seidensticker and M. Steinhauser, Phys. Lett. B **426**, 125 (1998) \[arXiv:hep-ph/9712228\]. T. Seidensticker, arXiv:hep-ph/9905298. J. A. M. Vermaseren, arXiv:math-ph/0010025. M. Steinhauser, Comput. Phys. Commun. **134**, 335 (2001) \[arXiv:hep-ph/0009029\]; URL: http://www-ttp.particle.uni-karlsruhe.de/\~ms/software.html. F. V. Tkachov, Phys. Lett. B **100**, 65 (1981); K. G. Chetyrkin and F. V. Tkachov, Nucl. Phys. B **192**, 159 (1981). S. Laporta and E. Remiddi, Phys. Lett. B **379**, 283 (1996) \[arXiv:hep-ph/9602417\]. S. Laporta, Int. J. Mod. Phys. A **15**, 5087 (2000) \[arXiv:hep-ph/0102033\]. A. V. Smirnov, JHEP **0810**, 107 (2008) \[arXiv:0807.3243 \[hep-ph\]\]. V. A. Smirnov, Phys. Lett. B **460**, 397 (1999) \[arXiv:hep-ph/9905323\]. J. B. Tausk, Phys. Lett. B **469**, 225 (1999) \[arXiv:hep-ph/9909506\]. M. Czakon, Comput. Phys. Commun. **175**, 559 (2006) \[arXiv:hep-ph/0511200\]. A. V. Smirnov and V. A. Smirnov, Eur. Phys. J.
C **62**, 445 (2009) \[arXiv:0901.0386 \[hep-ph\]\]. A. V. Smirnov and M. N. Tentyukov, Comput. Phys. Commun. **180**, 735 (2009) \[arXiv:0807.4129 \[hep-ph\]\]. T. Hahn, Comput. Phys. Commun. **168**, 78 (2005) \[arXiv:hep-ph/0404043\]. T. Binoth and G. Heinrich, Nucl. Phys. B **585**, 741 (2000) \[arXiv:hep-ph/0004013\]; Nucl. Phys. B **680**, 375 (2004) \[arXiv:hep-ph/0305234\]; Nucl. Phys. B **693**, 134 (2004) \[arXiv:hep-ph/0402265\]. G. Heinrich, Int. J. Mod. Phys. A **23**, 1457 (2008) \[arXiv:0803.4177 \[hep-ph\]\]. A. V. Smirnov, V. A. Smirnov and M. Tentyukov, arXiv:0912.0158 \[hep-ph\]. K. Pachucki, Phys. Rev. A **48**, 2609 (1993). H. M. Fried and D. R. Yennie, Phys. Rev. **112**, 1391 (1958). G. S. Adkins, Phys. Rev. D **47**, 3647 (1993). M. I. Eides, H. Grotch and V. A. Shelyuto, Phys. Rev. A **55**, 2447 (1997) \[arXiv:hep-ph/9610443\]. J. A. M. Vermaseren, Comput. Phys. Commun. **83**, 45 (1994). D. Binosi, J. Collins, C. Kaufhold and L. Theussl, Comput. Phys. Commun. **180**, 1709 (2009) \[arXiv:0811.4113 \[hep-ph\]\]. T. Huber and D. Ma[î]{}tre, Comput. Phys. Commun. **178**, 755 (2008) \[arXiv:0708.2443 \[hep-ph\]\]. H.R.P. Ferguson, D.H. Bailey and S. Arno, Math. Comput. **68**, 351 (1999). [^1]: Actually, it is possible to reduce the number of integrals further, at least to 31 and possibly to 30. We give more details about this in Appendix \[app\].
---
abstract: 'We study the damping of plasma waves in linear Josephson junction chains as well as in two capacitively coupled chains. In the parameter regime where the ground capacitance can be neglected, the theory of the antisymmetric mode in the double chain can be mapped onto the theory of a single chain. We consider two sources of relaxation: the scattering from quantum phase slips (QPS) and the interaction among plasmons related to the nonlinearity of the Josephson potential. The contribution to the relaxation rate $1/\tau$ from the nonlinearity scales with the fourth power of the frequency $\omega$, while the phase-slip contribution behaves as a power law with a non-universal exponent. In the parameter regime where the charging energy related to the junction capacitance is much smaller than the Josephson energy, the amplitude of QPS is strongly suppressed. This makes the relaxation mechanism related to QPS efficient only at very low frequencies. As a result, for chains that, in the infrared limit, are on the insulating side of the superconductor-insulator transition, the quality factor $\omega\tau$ shows a strongly non-monotonic dependence on frequency, as was observed in a recent experiment.'
author:
- 'M. Bard'
- 'I. V. Protopopov'
- 'A. D. Mirlin'
title: Decay of plasmonic waves in Josephson junction chains
---

Introduction
============

Josephson-junction (JJ) chains constitute an ideal playground to study a wealth of fascinating physical effects. Parameters of these systems can be engineered in a controllable way, leading to the emergence of various physical regimes. In chains with the charging energy dominating over the Josephson energy, the Coulomb blockade is observed[@HavilandDelsing96] and a thermally activated conductance is found[@ZimmerEtAl13] at low bias. Moreover, the critical voltage at which the conduction sets in is governed by the depinning physics [@VogtEtAl15; @CedergenEtAl17].
In the opposite limit, where the Josephson energy is the dominant energy scale, superconducting behavior in the current-voltage characteristics is observed [@ChowEtAl98; @ErguelEtAl13b]. Deep in the superconducting regime, plasmonic waves (small collective oscillations of the superconducting phase) are well-defined excitations above the classical superconducting ground state. The non-perturbative processes in which the phase difference across one of the junctions changes by $2\pi$—the so-called quantum phase slips[@BradleyDoniach84; @MatveevEtAl02; @PopEtAl10; @RastelliEtAl13; @ErguelEtAl13] (QPS)—are exponentially rare. Upon lowering the Josephson energy, QPS proliferate and eventually lead to the superconductor-insulator transition [@BradleyDoniach84; @ChoiEtAl98; @ChowEtAl98; @HavilandEtAl00; @KuoChen01; @MiyazakiEtAl02; @TakahideEtAl06; @RastelliEtAl13; @BardEtAl17; @CedergenEtAl17] (SIT) that occurs when the charging and Josephson energies are of the same order. Disorder plays an important role in JJ chains. The effect of disorder was discussed in the context of the persistent current in closed JJ rings in Ref. . More recently, the impact of various types of disorder on the SIT was studied in Ref. . Remarkably, the most common type of disorder, random offset charges, works to [*enhance*]{} superconducting correlations. The mechanism behind this effect is the loss of coherence of QPS due to a disorder-induced random phase; see also Ref. . In recent years, properties of JJ chains under microwave irradiation have attracted considerable interest. Microwave radiation leads to quantized current steps in the current-voltage characteristics that were argued to be promising for metrological applications [@GuichardHekking10]. Another interesting direction in this context is the field of circuit quantum electrodynamics, where novel regimes can be reached [@ZhangEtAl14; @MartinezEtAl18].
JJ chains can be further employed to provide a tunable ohmic environment [@RastelliPop18]. This environment is realized by two parallel chains that are coupled capacitively to each other, and inductively to transmission lines. A similar setup was used in Ref.  to probe the reflection coefficient of a JJ double chain under microwave irradiation. Two parallel chains are short-circuited at one end while being coupled at the other end to a dipole antenna that can excite antisymmetric plasma waves (i.e., those with opposite amplitudes in the two chains). The whole sample is placed in a metallic waveguide, which reduces the influence of external disturbances. Resonances corresponding to individual plasmonic modes at quantized momenta are clearly observed. This enables the reconstruction of the energy spectrum of the plasma waves. Because of finite damping, the resonances in the reflection coefficient acquire a finite width. By measuring the modulus and the phase of the reflection coefficient, the internal damping could be disentangled from external losses such as the leakiness of the waveguide or the damping of the transmission line. For chains with a large Josephson energy, the experimentally found quality factor (inverse linewidth multiplied by mode frequency) increased as the frequency of the microwave radiation was lowered. When the Josephson energy was reduced, the curves became flat and eventually showed a tendency to drop at low frequencies. This behavior was interpreted in Ref.  as a signature of the SIT. It was noted, however, that, in contrast to theoretical predictions, the observed behavior is controlled by the short-wavelength rather than the long-wavelength part of the Coulomb interaction in the chain. In particular, the “superconducting” behavior with the quality factor growing at low frequencies was observed in the range of parameters corresponding to the insulating phase of the chain.
The purpose of this work is to provide a theoretical understanding of the effects related to the internal damping of plasma waves in JJ chains. We study two models: (i) a single linear chain, and (ii) a double chain of JJs, as in the experiment of Ref. . It is shown that the effective theory for the antisymmetric mode of the double chain can be mapped onto a theory for a single chain if the capacitance to the ground can be neglected. We identify two sources that lead to the decay of plasmons: (i) the scattering of plasmons induced by QPS and (ii) the interaction of plasmon modes via “gradient” anharmonicities. We find the contribution to the relaxation rate of a plasma wave for both damping mechanisms. The “gradient” nonlinearities are always irrelevant in the renormalization-group sense, and the corresponding relaxation rate vanishes as $\omega^4$ at low frequencies. From the SIT point of view, this behavior can be viewed as “superconducting”. On the other hand, the contribution of QPS processes can show both “superconducting” and “insulating” trends depending on the parameters of the model. The QPS contribution is, however, strongly suppressed if the Josephson energy is much larger than the charging energy associated with the junction capacitance that controls the short-wavelength part of the Coulomb interaction. The combination of the two mechanisms (QPS and “gradient” anharmonicities) can thus lead to a change of the trend from “insulating” to “superconducting” at intermediate frequencies. This mimics a SIT in the intermediate frequency range, although the system is in fact deeply in the insulating phase from the point of view of its infrared behavior. The paper is structured as follows. In Sec. \[Sec:Model\] we introduce lattice models for a single JJ chain and for two capacitively coupled chains, and derive the effective low-energy field theory. Sec.
\[Sec:Relaxation\] discusses two mechanisms contributing to the finite lifetime of the plasmonic waves in JJ chains. The scattering off QPS is studied in Sec. \[Sec:phase\_slips\], and the decay due to interactions between plasmonic waves is analyzed in Sec. \[Sec:nonlinearities\]. We analyze the interplay of both mechanisms in Sec. \[Sec:interplay\]. In Sec. \[Sec:Summary\] we summarize the main results of the paper and compare them to experimental findings. Technical details can be found in the appendix.

Lattice models and low-energy theory {#Sec:Model}
====================================

In this work, we consider two closely related systems: a single linear chain of Josephson junctions depicted in Fig. \[Fig:schematic-system\]a and a device consisting of two capacitively coupled chains shown in Fig. \[Fig:schematic-system\]b. We are interested in their effective low-energy description. For a single chain of Josephson junctions with Coulomb interaction and disorder, Fig. \[Fig:schematic-system\]a, the field theory was constructed previously in Ref. . We briefly recall this derivation below and extend the theory by including the terms accounting for gradient nonlinearities. We then show that, up to numerical coefficients, the same effective description applies to the antisymmetric mode of the double JJ chain of Fig. \[Fig:schematic-system\]b, provided that $C_g\ll C_0$. ![Schematic representation of the two devices under consideration: a) A single chain of Josephson junctions. The capacitance to the ground is denoted by $C_0$, and the junction capacitance by $C_1$. The canonically conjugated variables are the superconducting phase $\theta_i$ and the number of Cooper pairs $\mathcal{N}_i$ of the $i$-th island. b) Two capacitively coupled chains. Here, the capacitance to the ground is denoted by $C_g$ and the interchain capacitance by $C_0$. The additional index ${\uparrow},{\downarrow}$ discriminates between the variables of the two chains.
\[Fig:schematic-system\]](Josephson_junction_chain.pdf "fig:")\ ![Schematic representation of the two devices under consideration: a) A single chain of Josephson junctions. The capacitance to the ground is denoted by $C_0$, and the junction capacitance by $C_1$. The canonically conjugated variables are the superconducting phase $\theta_i$ and the number of Cooper pairs $\mathcal{N}_i$ of the $i$-th island. b) Two capacitively coupled chains. Here, the capacitance to the ground is denoted by $C_g$ and the interchain capacitance by $C_0$. The additional index ${\uparrow},{\downarrow}$ discriminates between the variables of the two chains. \[Fig:schematic-system\]](double-chain.pdf "fig:") In the case of a single chain, we denote by $C_1$ and $C_0$ the junction capacitance and the capacitance to the ground, respectively. Tunnel barriers between the islands allow for hopping of Cooper pairs along the chain. The number of Cooper pairs $\mathcal{N}_{i}$ and the superconducting phase $\theta_i$ of the $i$-th island satisfy the canonical commutation relation, $[\mathcal{N}_i,\theta_{j}]=i\delta_{i,j}$. Besides the Josephson energy ${E_{\mathrm{J}}}$ that quantifies the hopping strength of Cooper pairs, there are the two charging energy scales $E_0=(2e)^2/C_0$ and $E_1=(2e)^2/C_1$, where $e$ denotes the elementary charge. The charging energy $E_1$ quantifies the strength of the Coulomb interaction at short scales, while the energy $E_0$ controls the Coulomb-interaction strength in the infrared and, in particular, determines the position of the SIT[@ChoiEtAl98; @BardEtAl17]. 
The lattice Hamiltonian for this system has the form $${\mathcal{H}}=\frac{E_1}{2}\sum_{i,j}s_{i,j}^{-1}\mathcal{N}_{i}\mathcal{N}_j+{E_{\mathrm{J}}}\sum_i [1-\cos(\theta_{i+1}-\theta_i)], \label{Eq:lattice-Hamiltonian-single-chain}$$ where $$s_{i,j}=\frac{C_{i,j}}{C_1}=\left(2+\frac{1}{\Lambda^2}\right)\delta_{i,j}-\delta_{i,j+1}-\delta_{i,j-1}$$ is the dimensionless capacitance matrix and $\Lambda=\sqrt{C_1/C_0}$ is the screening length for the 1D Coulomb interaction. In the low-energy limit, it is legitimate to replace the lattice Hamiltonian (\[Eq:lattice-Hamiltonian-single-chain\]) by an effective continuum model. The latter is conveniently written[@BardEtAl17] in terms of the field $\phi(x)$ related to the density of Cooper pairs ${\cal N}(x)$ by $\partial_x\phi(x)=-\pi {\cal N}(x)$. The action of the model reads[@BardEtAl17] $$S=S_0+S_{\mathrm{ps}}. \label{Eq:S_clean}$$ The quadratic part of the action (in the imaginary-time representation, with temperature $T$ and Matsubara frequencies $\omega_n$) $$S_0=\frac{1}{2\pi^2 u_0 K_0} T \sum_{\omega_n}\int\frac{{\mathrm{d}}q}{2\pi}\left[\omega_n^2+\epsilon^2(q)\right]|\phi(q,\omega_n)|^2 \label{Eq:S_0}$$ describes the plasma waves with the energy spectrum $$\epsilon(q)=\frac{\omega_p|q|}{\sqrt{q^2+\alpha/\Lambda^2}}, \quad \omega_p=\sqrt{{E_{\mathrm{J}}}E_1}. \label{Eq:dispersion}$$ To facilitate our future discussion of the effective theory for the double chain setup of Fig. \[Fig:schematic-system\]b, we have introduced here a numerical coefficient $\alpha$; in the present case of a single chain we have $\alpha\equiv 1$. The parameters $u_0$ and $K_0$ in Eq. (\[Eq:S\_0\]) are given by $$u_0=\sqrt{\frac{{E_{\mathrm{J}}}E_0}{\alpha}},\qquad K_0=\sqrt{\frac{{E_{\mathrm{J}}}}{\alpha\, E_0}}. \label{Eq:parameters}$$ Here $u_0$ is the velocity of low-energy plasmons with momentum $q\ll 1/\Lambda$ and $K_0$ is the corresponding Luttinger constant. 
Note that we measure all distances in units of the lattice spacing and set $\hbar = 1$, so that the velocity has the dimension of energy. The second ingredient in Eq. (\[Eq:S\_clean\]), $S_{\rm ps}$, describes QPS. In the absence of disorder it is given by[@BardEtAl17] $$S_{\mathrm{ps}}=yu_0\int {\mathrm{d}}x{\mathrm{d}}\tau \cos\left[2\phi(x,\tau)\right], \label{Eq:S_ps}$$ where $\tau$ is the imaginary time and $y$ is the (ultraviolet) value of the QPS amplitude, usually called the “fugacity”. This terminology is related to the fact that QPS can be considered as vortices in the Euclidean version of the $1+1$-dimensional quantum theory. The fugacity $y$ for phase slips is exponentially small in the regime ${E_{\mathrm{J}}}\gg \min(E_1,E_0)$ where the superconducting correlations are (at least locally) well developed, $$y\propto {\mathrm{e}}^{-\zeta K}\,, \qquad K=\sqrt{\frac{{E_{\mathrm{J}}}}{\alpha\, E_0}+\frac{{E_{\mathrm{J}}}}{\alpha^2\, E_1}} \,. \label{Eq:y}$$ Here $K$ plays the role of the Luttinger constant for the ultraviolet plasmons (with $q\sim 1$), and $\zeta$ is a numerical factor that depends on the screening length $\Lambda$ and also on details of the ultraviolet cutoff scheme. Estimates for $\zeta$ in several limiting cases can be found in Refs. . Among the various types of disorder that are present in experimental realizations of JJ chains, the strongest and most important one is that of random stray charges. Random stray charges $Q_i$ modify the kinetic energy term in the lattice Hamiltonian, Eq. (\[Eq:lattice-Hamiltonian-single-chain\]), according to $$\sum_{i,j}s_{i,j}^{-1}\mathcal{N}_{i}\mathcal{N}_j \ \ \longrightarrow\ \ \sum_{i,j}s_{i,j}^{-1}\left(\mathcal{N}_{i}-Q_i\right)\left(\mathcal{N}_j-Q_j\right).$$ The wave function of the system then accumulates an extra phase in the course of a QPS due to the Aharonov-Casher effect[@AharonovCasher1984; @MatveevEtAl02; @Pop2012].
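The dephasing of QPS by the random Aharonov-Casher phase can be made quantitative: for Gaussian white-noise charges the disorder average gives $\left<{\mathrm{e}}^{i[\mathcal{Q}(x)-\mathcal{Q}(x^{\prime})]}\right>_Q={\mathrm{e}}^{-D_Q|x-x^{\prime}|}$, the factor that suppresses QPS coherence. A small Monte-Carlo sketch on a lattice with spacing $1$; the values of $D_Q$ and of the distance are illustrative:

```python
import math
import random

# Disorder average over Gaussian stray charges:
# <exp(i[Q(x) - Q(x')])>_Q = exp(-D_Q |x - x'|).
# Monte-Carlo check on a lattice with spacing 1; parameters illustrative.
random.seed(1)
D_Q, dist, samples = 0.05, 12, 20000
sigma = math.sqrt(D_Q / (2 * math.pi**2))    # per-site std of Q_i
acc = 0.0
for _ in range(samples):
    # accumulated Aharonov-Casher phase difference between two points
    phase = 2 * math.pi * sum(random.gauss(0.0, sigma) for _ in range(dist))
    acc += math.cos(phase)
avg = acc / samples
assert abs(avg - math.exp(-D_Q * dist)) < 0.03
```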
Accordingly, the QPS action in the effective theory in the presence of charge disorder takes the form $$S_{\mathrm{ps}}=yu_0 \int {\mathrm{d}}x {\mathrm{d}}\tau \cos\left[2\phi(x,\tau)-\mathcal{Q}(x)\right], \label{Eq:S_ps_disordered}$$ where $$\mathcal{Q}(x)=2\pi \int_{-\infty}^{x} {\mathrm{d}}x^{\prime} \, Q(x^{\prime}).$$ For simplicity we assume Gaussian white noise disorder, $$\left<Q(x)\right>_Q=0, \quad \left<Q(x)Q(x^{\prime})\right>_Q=\frac{D_Q}{2\pi^2} \delta(x-x^{\prime}).$$ In the Hamiltonian language, the action corresponds to $${\mathcal{H}}={\mathcal{H}}_0+{\mathcal{H}}_{\mathrm{ps}}, \label{Eq:Hamiltonian}$$ where the quadratic part is of the form $${\mathcal{H}}_0=\frac{1}{2\pi^2}\int\frac{{\mathrm{d}}q}{2\pi}\left[\frac{\epsilon^2(q)}{u_0K_0}|\phi(q)|^2+u_0 K_0 q^2 |\pi\theta(q)|^2\right],$$ and the QPS contribution reads $${\mathcal{H}}_{\mathrm{ps}}=yu_0\int {\mathrm{d}}x \cos[2\phi(x)-\mathcal{Q}(x)]. \label{Eq:Hamiltonian_ps}$$ Equations (\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]) \[the latter one reduces to Eq. (\[Eq:S\_ps\]) in the clean case\] constitute the low-energy description of a JJ chain as derived in Ref. . Several remarks are in order here. First, in this work we will be interested in the physics at moderate wavelengths $q\lesssim 1/\Lambda$ and will approximate the dispersion relation (\[Eq:dispersion\]) by its expansion at small $q$: $$\begin{aligned} \epsilon(q)&\approx& u_0(1-q^2 l^2)|q|, \label{Eq:dispersion1} \\ l&=&\Lambda/\sqrt{2\alpha}. \label{Eq:l}\end{aligned}$$ Here, the length $l$ (that differs from $\Lambda$ only by a numerical coefficient) sets the scale for the bending of the plasmonic dispersion relation. Second, the only nonlinearity in our effective action at this stage is due to the QPS. In the ultimate infrared limit $q\ll 1/\Lambda$ (or, in fact, for all $q$ in the case of short-range charge-charge interaction, $\Lambda\lesssim 1$) the effective action Eqs. 
(\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\]) reduces to the standard sine-Gordon theory and describes the superconductor-insulator transition (SIT) that occurs [@ChoiEtAl98] at $K_0=2/\pi$. The SIT is driven by the QPS, and in this sense they constitute the most important anharmonicity in the system. Other non-harmonic terms are, however, also possible. For example, the expansion of the Josephson coupling in Eq. (\[Eq:lattice-Hamiltonian-single-chain\]) to the next-to-leading order provides the following contribution to the effective Hamiltonian: $${\mathcal{H}}_{\mathrm{nl}}=-\frac{{E_{\mathrm{J}}}}{\alpha^3\, 4!}\int {\mathrm{d}}x \left(\partial_x\theta\right)^4, \label{Eq:nonlinearity}$$ which is associated with the action $$S_{\mathrm{nl}}=-\frac{\alpha}{4!\pi^4 {E_{\mathrm{J}}}^3}\int {\mathrm{d}}x {\mathrm{d}}\tau \left(\partial_{\tau}\phi\right)^4. \label{Eq:nonlinearity-action}$$ In contrast to the QPS term (\[Eq:Hamiltonian\_ps\]), the nonlinearity (\[Eq:nonlinearity\]) and all other non-linear terms that can be added to the effective Hamiltonian are built out of the local charge ${\cal N}\propto \partial_x\phi$ and current ${\cal J}\propto \partial_x\theta$ densities. Therefore, they always contain high powers of gradients and are irrelevant in the renormalization-group sense. We will occasionally refer to anharmonicities of this type as “gradient” anharmonicities, to distinguish them from the QPS. The gradient anharmonicities are always unimportant at the lowest energies. Yet, as we will see in Sec. \[Sec:nonlinearities\], they can control the decay of plasmons at sufficiently high frequencies if the bare value of the QPS amplitude $y$ is small. We thus conclude that the effective action describing a chain of Josephson junctions is given by $$S=S_0+S_{\rm ps}+S_{\rm nl}. \label{Eq:SFinal}$$ Here, $S_0$ and $S_{\rm ps}$ are given by Eqs. (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]), respectively.
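The prefactor $-{E_{\mathrm{J}}}/4!$ in Eq. (\[Eq:nonlinearity\]) is simply the quartic Taylor coefficient of the Josephson potential, $1-\cos u=u^2/2-u^4/24+u^6/720-\dots$; a one-line numerical confirmation of this expansion:

```python
import math

# The quartic term of the Josephson potential expansion,
# 1 - cos(u) = u^2/2 - u^4/24 + u^6/720 - ...,
# supplies the -E_J/4! prefactor of Eq. (nonlinearity).
for u in (0.2, 0.1):
    residual = (1 - math.cos(u)) - u**2 / 2 + u**4 / 24
    # what is left over is the sextic term u^6/720
    assert abs(residual / u**6 - 1 / 720) < 1e-3
```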
As for the “gradient” anharmonicities represented by $S_{\rm nl}$, we will use Eq. (\[Eq:nonlinearity-action\]) as a specific form. We argue in Sec. \[Sec:nonlinearities\] that our main conclusions are insensitive to this particular choice. Let us now turn to the discussion of the double-chain setup of Fig. \[Fig:schematic-system\]b. In this case we denote by $C_g$ and $C_0$ the capacitance to the ground and the interchain capacitance, respectively. The lattice Hamiltonian for the double chain reads $$\begin{split} {\mathcal{H}}=\frac{E_1}{2}\sum_{i,j}\sum_{\sigma,\sigma^{\prime}={\uparrow},{\downarrow}}&[\mathcal{S}^{-1}]_{\sigma,\sigma^{\prime}}(i,j)\mathcal{N}_{i,\sigma}\mathcal{N}_{j,\sigma^{\prime}} \\ &+{E_{\mathrm{J}}}\sum_{i,\sigma}[1-\cos(\theta_{i+1,\sigma}-\theta_{i,\sigma})] , \end{split} \label{Eq:lattice-Hamiltonian-double-chain}$$ with $$\mathcal{S}(i,j)=\tilde{s}_{i,j} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} +\frac{C_0}{C_1} \delta_{i,j} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$ and $$\tilde{s}_{i,j}=(2+C_g/C_1)\delta_{i,j}-\delta_{i,j+1}-\delta_{i,j-1}.$$ Here, the indices ${\uparrow},{\downarrow}$ refer to the two chains. In the Gaussian approximation the spectrum of the Hamiltonian (\[Eq:lattice-Hamiltonian-double-chain\]) consists of two modes, symmetric and antisymmetric, that are analogous to the charge and spin modes in a spinful Luttinger liquid. In this work we are interested in the physics of the antisymmetric mode that can be excited in the system by coupling to a dipole antenna [@KuzminEtAl18]. To simplify the analysis, we assume further that $C_g\ll C_0$. It is not clear to us how well this assumption is satisfied in the experiments of Ref. ; we believe it to be, however, of minor importance for our results. Specifically, our analysis should remain applicable, up to modifications in numerical coefficients of order unity, also for $C_g\sim C_0$.
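The decoupling of the symmetric and antisymmetric modes can be seen already at the level of the on-site $2\times 2$ block of $\mathcal{S}(i,j)$: rotating to $(\mathcal{N}_{\uparrow}\pm\mathcal{N}_{\downarrow})/\sqrt{2}$ diagonalizes the interchain part, with the antisymmetric combination picking up twice the interchain capacitance. A minimal numerical check:

```python
import math

# Basis rotation N_pm = (N_up pm N_down)/sqrt(2) applied to the 2x2
# interchain block of the capacitance matrix S(i,j).  The symmetric mode
# is blind to the interchain capacitance (eigenvalue 0), while the
# antisymmetric mode sees it doubled (eigenvalue 2), i.e. an effective
# "ground" capacitance C_g + 2*C_0 -- consistent with the value
# alpha = 2 quoted in the text.
inter = [[1.0, -1.0], [-1.0, 1.0]]      # matrix multiplying C_0/C_1
r = 1.0 / math.sqrt(2.0)
basis = [[r, r], [r, -r]]               # rows: symmetric, antisymmetric
rotated = [[sum(basis[a][s] * inter[s][t] * basis[b][t]
                for s in range(2) for t in range(2))
            for b in range(2)] for a in range(2)]
assert abs(rotated[0][0]) < 1e-12       # symmetric: no interchain term
assert abs(rotated[0][1]) < 1e-12       # the two modes decouple
assert abs(rotated[1][1] - 2.0) < 1e-12 # antisymmetric: factor of 2
```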
The condition $C_g\lesssim C_0$ corresponds to sufficiently well coupled chains, with large splitting between the symmetric and antisymmetric modes[^1]. In full analogy to the spin-charge separation in quantum wires, the (low-momentum) velocity of the symmetric (“charge”) mode, $u_{\rm ch}=2\sqrt{e^2{E_{\mathrm{J}}}/C_g}$, greatly exceeds, under the assumption $C_g\ll C_0$, the velocity of the antisymmetric (“spin”) mode, $u_{\rm s}=\sqrt{2e^2 {E_{\mathrm{J}}}/C_0}$. This observation allows one to integrate out the charge mode and formulate the effective description of the system in terms of the antisymmetric mode alone. Details of this derivation are presented in Appendix \[App:Field\_theory\]. It turns out that, just as in the case of a single JJ chain, the effective theory is given by Eqs. (\[Eq:SFinal\]), (\[Eq:S\_0\]), (\[Eq:S\_ps\_disordered\]) and (\[Eq:nonlinearity-action\]), with the parameters given by Eqs. (\[Eq:dispersion1\]), (\[Eq:l\]) and (\[Eq:parameters\]). The only difference is the value of the numerical factor $\alpha$, which should now be set to $2$. Equations (\[Eq:SFinal\]), (\[Eq:S\_0\]), (\[Eq:S\_ps\_disordered\]) and (\[Eq:nonlinearity-action\]) constitute the starting point for our analysis of the decay of plasmonic excitations in the setups of Fig. \[Fig:schematic-system\]. We carry out this analysis in Sec. \[Sec:Relaxation\].

Relaxation of plasmonic waves {#Sec:Relaxation}
=============================

Plasmonic waves, which are long-wavelength excitations above the superconducting ground state, are subject to interactions. As a result, once excited by, e.g., a microwave, a plasma wave can decay into several plasmons of lower energy. The two anharmonic terms in the action (\[Eq:SFinal\]) provide two mechanisms for the decay of plasmons: interaction with QPS and “gradient” anharmonicities. We analyze these channels of plasmon decay one by one in Secs. \[Sec:phase\_slips\] and \[Sec:nonlinearities\], respectively.
The interplay of the two mechanisms is discussed in Sec. \[Sec:interplay\].

Relaxation due to phase slips {#Sec:phase_slips}
-----------------------------

We start with the discussion of the relaxation processes related to the scattering off QPS. Our analysis follows closely the one of Refs. . The curvature of the plasmonic spectrum, as quantified by the length $l$ in Eq. (\[Eq:dispersion1\]), is of minor importance here, and for the purpose of this section we approximate the plasmonic spectrum by $$\epsilon(q)=u_0|q|.$$ Correspondingly, the Gaussian action takes the form $$S_0=\frac{1}{2\pi^2 u_0 K_0} \int {\mathrm{d}}x {\mathrm{d}}\tau \left[u_0^2(\partial_x\phi)^2+(\partial_{\tau}\phi)^2\right]. \label{Eq:S_0-Luttinger-liquid}$$ A formal expansion of the QPS action, Eq. , in powers of $\phi$ shows that the plasmon can decay into an arbitrarily large number of low-energy plasmons. We will determine directly the sum of all those contributions. This decay rate can be conveniently calculated from the imaginary part of the self-energy of (the Fourier transform of) the correlation function $$G(\mathbf{r})=\left<\left<\phi(\mathbf{r})\phi(0)\right>_{S}\right>_{Q}, \label{Eq:Green-function}$$ where $\mathbf{r}=(x,\tau)$ and $\left<\cdot\right>_S$ denotes the average with respect to the full action, $S=S_0+S_{\mathrm{ps}}$. On the Gaussian level, the imaginary-time Green function reads, in Fourier space, $$G_0(\mathbf{q})=\frac{\pi^2 u_0 K_0}{\omega_n^2 + u_0^2 q^2}, \qquad {\mathbf{q}}=(q,\omega_n),$$ where $\omega_n$ is the Matsubara frequency.
With the help of the self-energy, the full Green function can be expressed as $$G({\mathbf{q}})=\frac{1}{G_0^{-1}({\mathbf{q}})-\Sigma({\mathbf{q}})}=\frac{\pi^2 u_0 K_0}{\omega_n^2+u_0^2 q^2 - \pi^2 u_0 K_0 \Sigma({\mathbf{q}})}.$$ The inverse lifetime of an excitation with energy $\omega$ is related to the imaginary part of the retarded self-energy on the mass shell: $$\frac{1}{\tau(\omega)}=\frac{\pi^2 K_0 u_0}{2\omega}{\operatorname{Im}}\Sigma^R(q=\omega/u_0,\omega). \label{Eq:inverse-lifetime}$$ In the following, we calculate the self-energy perturbatively in the QPS fugacity $y$. The self-energy in the Matsubara space-time, $\Sigma({\mathbf{r}})$, can be extracted from the perturbative expansion of the Green function, Eq. , $$G({\mathbf{r}})=G_0({\mathbf{r}})+\int {\mathrm{d}}^2 r_1 {\mathrm{d}}^2 r_2 G_0({\mathbf{r}}-{\mathbf{r}}_1)\Sigma({\mathbf{r}}_1-{\mathbf{r}}_2)G_0({\mathbf{r}}_2),$$ with the following result: $$\begin{split} \Sigma({\mathbf{r}})=2y^2 &u_0^2 \Bigl[ {\mathrm{e}}^{-2 C_0({\mathbf{r}})-D_Q|x|} \\ &-\delta({\mathbf{r}}) \int {\mathrm{d}}^2 r_0 {\mathrm{e}}^{-2C_0(r_0)-D_Q |x_0|} \Bigr]+\mathcal{O}(y^4). \label{Eq:self-energy} \end{split}$$ Here, the exponential contains the correlation function $$\begin{aligned} C_0({\mathbf{r}})&=\frac{2}{\beta N_x}\sum_{{\mathbf{q}}}(1-\cos {\mathbf{qr}})G_0({\mathbf{q}}) \\ &=\frac{\pi K_0}{2}\ln\left[\frac{u_0^2 \beta^2}{\pi^2}\sinh\left(z_{+}\right)\sinh\left(z_{-}\right)\right]\end{aligned}$$ with $$z_{\pm}=\frac{\pi}{u_0\beta}(x\pm i u_0 \tau),$$ $N_x\gg 1$ denotes the number of junctions, and $\beta$ is the inverse temperature. The result for the self-energy in the imaginary time $\tau$ should be analytically continued to real time $t$ and then Fourier-transformed. The $\tau$-dependence of the first term in Eq. 
(\[Eq:self-energy\]) is determined by the following Matsubara time-ordered correlation function $$\chi^T(x,\tau)={\mathrm{e}}^{-2C_0(x,\tau)}.$$ The retarded version of this correlation function can be obtained in the standard way [@GiamarchiBook]. We find $$\chi^R(x,t)=\frac{2\Theta(t)\Theta\left(u_0 t -|x|\right) \sin\left(\pi^2 K_0\right)\left(\frac{\pi}{\beta u_0}\right)^{2\pi K_0}}{\left|\sinh \frac{\pi}{u_0\beta}\left(x+u_0 t\right)\sinh \frac{\pi}{u_0\beta}\left(x- u_0 t\right)\right|^{\pi K_0}},$$ where $\Theta$ denotes the Heaviside step function. In order to extract the lifetime, we need to know the imaginary part of the self-energy in Fourier space. The second term on the RHS of Eq.  does not contribute to the imaginary part of $\Sigma$. The imaginary part of the self-energy in Fourier space can thus be obtained via $${\operatorname{Im}}\Sigma^R(q,\omega)=2u_0^2 y^2 {\operatorname{Im}}\!\! \int \!\!{\mathrm{d}}x {\mathrm{d}}t\, {\mathrm{e}}^{-i(q x- \omega t)}\chi^R(x,t){\mathrm{e}}^{-D_Q|x|}.$$ It is convenient to switch to the light-cone variables $z_{\pm}=\pi(u_0 t \pm x)/(u_0 \beta)$: $$\begin{split} {\operatorname{Im}}&\Sigma^R(q,\omega)=2u_0 y^2 \sin(\pi^2 K_0)\left(\frac{\pi}{u_0 \beta}\right)^{2\pi K_0-2} \\ &\times{\operatorname{Im}}\int_0^{\infty}{\mathrm{d}}z_{+}\int_0^{\infty}{\mathrm{d}}z_{-} \frac{\exp\{i\frac{\beta}{2\pi}(\omega-u_0 q)z_{+}\}}{\left(\sinh z_{+}\right)^{\pi K_0}} \\ &\times \frac{\exp\{i\frac{\beta}{2\pi}(\omega+u_0 q)z_{-}\}}{\left(\sinh z_{-}\right)^{\pi K_0}}\,{\mathrm{e}}^{-\frac{D_Q u_0 \beta}{2\pi}|z_{+}-z_{-}|}. \end{split} \label{Eq:Im-Sigma}$$ Equations (\[Eq:inverse-lifetime\]) and (\[Eq:Im-Sigma\]) give the decay rate of plasma waves due to QPS. They can be further simplified in various limiting cases that we analyze below.

### Clean case\[Sec:Clean-case\]

In the regime $D_Q \ll \mathrm{min}(q,T/u_0)$ we can set $D_Q=0$, and the integrations decouple.
Performing the integrations, we obtain $$\begin{split} {\operatorname{Im}}\Sigma^R(q,\omega)&=2u_0 y^2 \sin(\pi^2 K_0)\left(\frac{2\pi}{u_0 \beta}\right)^{2\pi K_0-2} \\ &\times {\operatorname{Im}}\Biggl\{ {\operatorname{B}}\left(1-\pi K_0,\frac{\pi K_0}{2}-i\frac{\beta }{4\pi}(\omega+u_0 q)\right) \\ & \times{\operatorname{B}}\left(1-\pi K_0,\frac{\pi K_0}{2}-i\frac{\beta}{4\pi}(\omega-u_0 q)\right) \Biggr\}, \end{split}$$ where $${\operatorname{B}}(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$ is the Euler Beta function. Making use of Eq. , we extract the relaxation rate, $$\frac{1}{\tau(\omega)}\sim u_0 y^2 \begin{cases} \left(\frac{2\pi T}{u_0}\right)^{2\pi K_0-3}, & \omega \ll T, \\ \frac{T}{u_0}\left(\frac{2\pi\omega T}{u_0^2}\right)^{\pi K_0 -2}, & \omega \gg T. \end{cases} \label{Eq:scaling-rate-clean}$$

![Scaling behavior of the inverse relaxation time of plasmonic waves due to the scattering from QPS in different regimes in the frequency-temperature plane. For each regime, the behavior of $1/(\tau u_0 y^2)$ is indicated.\[Fig:rate-QPS-processes\]](rate-QPS-processes.pdf){width="220pt"}

### Disordered case\[Sec:Disordered-case\]

In the limit of strong disorder, $D_Q \gg \mathrm{max}(q,T/u_0)$, the main contribution to the integrations in Eq.  comes from the region close to $z_{+}=z_{-}$. We find $$\begin{split} {\operatorname{Im}}\Sigma^{R}(q,\omega)\simeq 8 & u_0\frac{y^2}{D_Q}\sin(\pi^2 K_0) \left(\frac{2\pi}{u_0 \beta}\right)^{2\pi K_0 -1} \\ & \times {\operatorname{Im}}{\operatorname{B}}\left(1-2\pi K_0,\pi K_0 -i \frac{\beta \omega}{2\pi}\right) \end{split}$$ for the imaginary part of the self-energy, which is independent of momentum. This leads to the following scaling of the relaxation rate in the case of strong disorder, $D_Q \gg \mathrm{max}(q,T/u_0)$: $$\frac{1}{\tau(\omega)}\sim u_0\frac{y^2}{D_Q} \begin{cases} \left(\frac{2\pi T}{u_0}\right)^{2\pi K_0-2}, &\omega \ll T, \\ \left(\frac{\omega}{u_0}\right)^{2\pi K_0-2}, & \omega\gg T.
\end{cases} \label{Eq:scaling-rate-strongly-disordered}$$ For a moderate disorder strength, we need to consider two cases. If $q\ll D_Q \ll T/u_0$, the clean result given by the first line of Eq.  remains valid. For $T/u_0\ll D_Q \ll q$ the integration over $z_{-}$ in Eq. (\[Eq:Im-Sigma\]) is cut at the upper limit by $\pi T/u_0 q \ll 1$. We can further neglect $z_{-}$ in the exponential function related to $D_Q$. This leads to the following behavior of the relaxation rate: $$\frac{1}{\tau(\omega)}\sim u_0 y^2\ D_Q\left(\frac{D_Q\omega}{u_0}\right)^{\pi K_0 -2}, \quad T\ll u_0 D_Q \ll \omega. \label{Eq:inter}$$ Equations (\[Eq:scaling-rate-clean\]), (\[Eq:scaling-rate-strongly-disordered\]) and (\[Eq:inter\]) give the relaxation rate of plasma waves due to QPS in different regimes and constitute the main result of this section. They are summarized in Fig. \[Fig:rate-QPS-processes\]. The relaxation rate exhibits power-law scaling with frequency, temperature, and disorder strength. The corresponding exponents are non-universal and are determined by the value of the Luttinger parameter $K_0$. Deep in the superconducting phase of the JJ chain, $K_0\gg1$, the relaxation rate vanishes at low frequencies, while the opposite trend is predicted in the insulating phase with sufficiently small $K_0$.

Relaxation due to “gradient” nonlinearities\[Sec:nonlinearities\]
-----------------------------------------------------------------

Let us now turn to the analysis of “gradient” anharmonicities described by the term $S_{\rm nl}$ in the effective action (\[Eq:SFinal\]). They are irrelevant from the point of view of the renormalization group. However, at intermediate energy scales they contribute to the decay of the plasma waves on equal footing with QPS. We consider here the nonlinearity given by the action correction (\[Eq:nonlinearity-action\]), which arises as the quartic term in the expansion of the Josephson potential.
Perturbative treatment of the decay of plasmons caused by “gradient” anharmonicities was discussed in other contexts in Refs. . The perturbation theory turns out to be ill-defined in the case of a linear plasmonic spectrum, and in this section we use the dispersion relation (\[Eq:dispersion1\]), taking into account its finite curvature. In order to calculate the relevant matrix element for the relaxation process, it is convenient to express the superconducting phase $\theta$ through bosonic creation ($b_q^{\dagger}$) and annihilation ($b_q$) operators that obey the standard bosonic commutation relations. This decomposition is of the form[@GiamarchiBook] $$\theta(x)=i\sqrt{\frac{\pi}{2N_x}}\sum_{q\neq 0}\frac{{\operatorname{sign}}(q)}{\sqrt{|q|}}{\mathrm{e}}^{-a |q|/2}{\mathrm{e}}^{i q x} (b_q^{\dagger}-b_{-q}),$$ where $a$ is the ultraviolet cutoff that can be sent to zero in this calculation, and $N_x$ is the number of junctions per chain. Our analysis below largely follows the approach of Ref. . The relaxation rate can be calculated using the diagonal part of the linearized collision integral, $$\begin{split} \frac{1}{\tau(q_1)}=\frac{1}{2}\!\sum_{q_2,{q^{\prime}_{1}},{q^{\prime}_{2}}}\!\!W_{q_1,q_2}^{{q^{\prime}_{1}},{q^{\prime}_{2}}} \Bigl\{&N_B(\epsilon_{q_2})[1+N_B(\epsilon_{{q^{\prime}_{1}}})+N_B(\epsilon_{{q^{\prime}_{2}}})] \\ &-N_B(\epsilon_{{q^{\prime}_{1}}}) N_B(\epsilon_{{q^{\prime}_{2}}})\Bigr\}, \end{split}$$ where the transition probability is $$W_{q_1,q_2}^{q_1^{\prime},q_2^{\prime}}=2\pi |\langle 0| b_{q_2^{\prime}}b_{q_1^{\prime}}{\mathcal{H}}_{\mathrm{nl}}b_{q_1}^{\dagger}b_{q_2}^{\dagger}|0\rangle|^2\delta(E_i-E_f),$$ and $E_{i(f)}$ denotes the total energy of the initial (final) states.
The modulus of the matrix element is given by $$\begin{split} |\langle 0 |b_{{q^{\prime}_{2}}} b_{{q^{\prime}_{1}}} {\mathcal{H}}_{\mathrm{nl}} b_{q_1}^{\dagger}b_{q_2}^{\dagger}|0\rangle|=\frac{\pi^2{E_{\mathrm{J}}}}{4\alpha^3N_x}&\sqrt{|q_1q_2 {q^{\prime}_{1}}{q^{\prime}_{2}}|} \\ &\times\delta_{q_1+q_2,{q^{\prime}_{1}}+{q^{\prime}_{2}}}. \end{split} \label{Eq:matrix-element}$$ A right moving plasmon with momentum $q_1>0$ can relax via this nonlinearity by the scattering off a left moving thermal plasmon with momentum $q_2<0$ (see Fig. \[Fig:process\]). According to the conservation laws, the momentum of the left moving particle $$q_2=-\frac{3}{2}q_1{q^{\prime}_{1}}{q^{\prime}_{2}}l^2+\mathcal{O}(q_1^5 l^4)$$ is much smaller than the momentum $q_1$. ![Dominant relaxation process mediated by nonlinearities for a right moving plasmon with momentum $q_1$.\[Fig:process\]](process.pdf) With the help of the momentum conservation we can perform the sum over ${q^{\prime}_{2}}$. The delta function related to the energy conservation can be written in the form $$\delta(E_i-E_f)=\frac{2}{3u_0(q_1+q_2)|{q^{\prime}_{1,+}}-{q^{\prime}_{1,-}}|l^2}\delta({q^{\prime}_{1}}-{q^{\prime}_{1,+}}),$$ where $${q^{\prime}_{1,\pm}}\simeq \frac{q_1}{2}\pm\sqrt{\frac{q_1^2}{4}+\frac{2q_2}{3q_1 l^2}}. $$ Requiring ${q^{\prime}_{1,\pm}}$ to be real restricts the range of $q_2$ to the interval $$q_{\ast}<q_2<0, \qquad q_{\ast}=-\frac{3}{8}q_1^3 l^2.$$ In the continuum (long-chain) limit, the remaining summations over momenta transform into integrations, $N_x^{-2}\sum_{q_2,{q^{\prime}_{1}}}\to \int {\mathrm{d}}q_2 {\mathrm{d}}{q^{\prime}_{1}}/(2\pi)^2$. 
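The kinematics described above can be verified numerically: one solves the conservation laws with the curved dispersion (\[Eq:dispersion1\]) exactly and compares with the leading-order expression for $q_2$. A sketch with illustrative momenta and bending scale (units $u_0=1$):

```python
import math

# Check of the decay kinematics of Fig. [process]: solve the exact
# conservation laws for the curved dispersion, Eq. (dispersion1), and
# compare with the leading-order relation q2 = -(3/2) l^2 q1 q1' q2'.
# Momenta and the bending scale l are illustrative (u0 = 1 units).
u0, l = 1.0, 0.02
q1, q1p = 0.30, 0.20                 # incoming and one outgoing right-mover

def eps(q):
    return u0 * abs(q) * (1 - q * q * l * l)

def f(q2):
    """Energy mismatch at fixed total momentum."""
    q2p = q1 + q2 - q1p              # momentum conservation
    return eps(q1) + eps(q2) - eps(q1p) - eps(q2p)

lo, hi = -0.01, 0.0
assert f(lo) > 0 > f(hi)             # the root is bracketed
for _ in range(100):                 # bisection for the exact q2 < 0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
q2 = 0.5 * (lo + hi)
q2p = q1 + q2 - q1p
pred = -1.5 * l**2 * q1 * q1p * q2p  # leading-order expression
assert q2 < 0 < q2p                  # soft left-mover, hard right-movers
assert abs(q2 - pred) < 0.05 * abs(pred)
```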
Performing the integration over ${q^{\prime}_{1}}$, we find $$\begin{split} \frac{1}{\tau(q_1)}=&\frac{\pi^3 {E_{\mathrm{J}}}^2 q_1}{96\alpha^6}\int_{q_{\ast}}^0{\mathrm{d}}q_2 \frac{{q^{\prime}_{1,+}}{q^{\prime}_{1,-}}|q_2|}{u_0(q_1+q_2)|{q^{\prime}_{1,+}}-{q^{\prime}_{1,-}}|l^2} \\ &\times \Bigl\{N_B(\epsilon_{q_2})[1+N_B(\epsilon_{{q^{\prime}_{1,+}}})+N_B(\epsilon_{{q^{\prime}_{1,-}}})] \\ &\hspace{2.5cm}-N_B(\epsilon_{{q^{\prime}_{1,+}}})N_B(\epsilon_{{q^{\prime}_{1,-}}})\Bigr\}. \end{split} \label{Eq:final-integral}$$ The $q_2$-dependence in the denominator of the integrand can be neglected compared to $q_1$. We assume now that the energy of the particle with momentum $q_1$ is much larger than temperature but not too large such that $\beta u_0q_1^3l^2\ll 1$. In this case, the Bose function of the particle with momentum $q_2$ can be replaced by $1/\beta u_0 |q_2|$, and the Bose function related to the particle with momentum $q_{1,+}^{\prime}$ can be neglected. The Bose function related to the particle with momentum $q_{1,-}^{\prime}$ can be replaced by $1/\beta u_0{q^{\prime}_{1,-}}$ for $-3q_1^2 l^2/2\beta u_0<q_2<0$ and neglected for $q_{\ast}<q_2<-3q_1^2 l^2/2\beta u_0$. The main contribution to the integral in Eq. (\[Eq:final-integral\]) originates from the latter range of $q_2$. After performing the integration, we find (under the assumptions $\beta u_0 q_1 \gg 1$, $q_1^2 l^2 \ll 1$, and $\beta u_0 q_1^3 l^2 \ll 1$) the following behavior of the relaxation rate: $$\frac{1}{\tau(q_1)}\simeq \frac{\pi^3{E_{\mathrm{J}}}^2 Tq_1^4}{768\alpha^6u_0^2}=\frac{\pi^3}{768\alpha^5}\frac{{E_{\mathrm{J}}}}{E_0}Tq_1^4. \label{Eq:relaxation-rate-not-too-large-q}$$ Equation (\[Eq:relaxation-rate-not-too-large-q\]) constitutes the main result of this section. It predicts $\omega^4$ scaling of the relaxation rate of plasmons due to the “gradient” anharmonicity. 
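The $Tq_1^4$ scaling can be cross-checked by evaluating the collision integral (\[Eq:final-integral\]) numerically, with the overall prefactor dropped (units $u_0=\beta=1$) and within the assumed regime $\beta u_0 q_1\gg 1$, $\beta u_0 q_1^3 l^2\ll 1$. Doubling $q_1$ should then multiply the rate by approximately $2^4=16$; the residual deviation reflects the subleading terms neglected in the analytic estimate. A sketch with illustrative parameters:

```python
import math

# Direct numerical evaluation of the collision integral,
# Eq. (final-integral), up to the overall prefactor (units u0 = beta = 1).
# In the assumed regime the rate scales as T*q1^4, so doubling q1
# multiplies it by roughly 2**4 = 16.
u0, beta, l = 1.0, 1.0, 1e-3

def rate(q1, n=3000):
    qs = 0.375 * q1**3 * l**2        # |q_*| = (3/8) q1^3 l^2
    eps = lambda q: u0 * abs(q) * (1 - q * q * l * l)
    nB = lambda e: 1.0 / math.expm1(beta * e)
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n            # substitution t = 1 - s^2 removes
        q2 = -qs * (1 - s * s)       # the 1/sqrt(1-t) edge singularity
        root = math.sqrt(q1**2 / 4 + 2 * q2 / (3 * q1 * l**2))
        qp, qm = q1 / 2 + root, q1 / 2 - root          # q'_{1,+-}
        stat = (nB(eps(q2)) * (1 + nB(eps(qp)) + nB(eps(qm)))
                - nB(eps(qp)) * nB(eps(qm)))
        integrand = qp * qm * abs(q2) * stat \
            / (u0 * (q1 + q2) * 2 * root * l**2)
        total += integrand * qs * 2 * s / n            # dq2 = |q_*| 2s ds
    return q1 * total

ratio = rate(30.0) / rate(15.0)      # q1 doubled
assert 13.5 < ratio < 16.8           # close to 16, i.e. roughly q1^4
```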
The relaxation rate vanishes at low frequencies, reflecting the irrelevant character of the gradient anharmonicities. Before closing this section, let us discuss the universality of the result (\[Eq:relaxation-rate-not-too-large-q\]) with respect to the particular form of the Hamiltonian ${\mathcal{H}}_{\rm nl}$ given by Eq. (\[Eq:nonlinearity\]). On phenomenological grounds, various terms of the form $(\partial_x \phi)^n (\partial_x\theta)^m$ are allowed in the effective Hamiltonian. For $n+m>4$ such terms are less relevant than the $(\partial_x \theta)^4$ term considered here and, thus, contribute less to the lifetime of plasmons. On the other hand, a cubic-in-density interaction, $(\partial_x\phi)^3$, is more relevant in terms of the scaling dimension. However, the conservation of energy and momentum forbids the decay of a single plasmon into two particles. Correspondingly, a cubic non-linearity should be taken in second-order perturbation theory to produce a finite decay rate. The resulting process is again the one of Fig. \[Fig:process\] and leads to the same $\omega^4$ scaling of the relaxation rate[@Apostolov2013; @Lin2013; @ProtopopovEtAl14].

Interplay of QPS and “gradient” anharmonicities {#Sec:interplay}
-----------------------------------------------

In Secs. \[Sec:phase\_slips\] and \[Sec:nonlinearities\] we have analyzed the decay of plasmon excitations due to QPS and the “gradient” anharmonicities, respectively. While the relaxation rate due to “gradient” anharmonicities follows universal $\omega^4$ scaling, the QPS contribution is characterized by a non-universal exponent and reflects the SIT controlled by the value of $K_0$. Let us now discuss the interplay of the two relaxation channels. We assume for definiteness that the frequencies of interest are larger than temperature. It is convenient to characterize the strength of the plasmon decay by the dimensionless parameter $\omega\tau$.
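The competition of the two channels in $\omega\tau$ can be illustrated with a toy model in which all prefactors are set to unity and only the $\omega$ scalings at $\omega\gg T$ are retained: $1/\tau_{\rm ps}\propto y^2\omega^{\pi K_0-2}$ \[clean case, Eq. (\[Eq:scaling-rate-clean\])\] and $1/\tau_{\rm nl}\propto\omega^4$ \[Eq. (\[Eq:relaxation-rate-not-too-large-q\])\]. This is a qualitative sketch only, not a fit:

```python
import math

# Toy model of the interplay: keep only the omega scalings at fixed
# T << omega, 1/tau_ps ~ y^2 * w**(pi*K0 - 2) (clean QPS channel) and
# 1/tau_nl ~ w**4 (gradient anharmonicity); all prefactors set to one.
def quality(w, K0, y):
    return w / (y**2 * w**(math.pi * K0 - 2) + w**4)

ws = [10**(-4 + 4 * k / 400) for k in range(401)]    # w from 1e-4 to 1

# insulating regime (pi*K0 < 3) with a tiny QPS fugacity:
Q_ins = [quality(w, 0.3, 1e-3) for w in ws]
imax = Q_ins.index(max(Q_ins))
assert 0 < imax < len(ws) - 1        # non-monotonic: maximum at a crossover

# deep superconducting regime: monotonic quality factor
Q_sc = [quality(w, 2.0, 1e-3) for w in ws]
assert all(a > b for a, b in zip(Q_sc, Q_sc[1:]))
```

Shrinking the fugacity $y$ pushes the maximum of the toy quality factor to lower frequencies, in line with the exponential smallness of $y$ discussed in the text.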
This parameter is expected to be proportional to the quality factor studied in Ref. . Deep in the superconducting regime, $K_0\gg1$, the relaxation of plasmons is always dominated by the “gradient” anharmonicities and the quality factor $\omega\tau$ scales as $\omega^{-3}$, see Eqs. (\[Eq:scaling-rate-clean\]), (\[Eq:scaling-rate-strongly-disordered\]), (\[Eq:inter\]) and (\[Eq:relaxation-rate-not-too-large-q\]). Upon decreasing the Luttinger parameter $K_0$, the QPS effects become visible in the quality factor. Specifically, the QPS dominate the low-frequency behavior of the quality factor under the condition $\pi K_0 < 6$ (respectively, $\pi K_0< 3$), yielding its $\omega^{3-\pi K_0}$ (respectively, $\omega^{3-2\pi K_0}$) scaling in the case of weak (respectively, strong) disorder. Furthermore, as a result of QPS, for sufficiently small $K_0$ ($\pi K_0<3$ for weak and $\pi K_0<3/2$ for strong disorder), the quality factor goes down as the frequency decreases. The resulting frequency dependence of $\omega\tau$ will then be non-monotonic, with a maximum around the crossover frequency where the QPS set in, as illustrated in Fig. \[Fig:quality-factor\]. In the discussion of the overall frequency dependence of the quality factor, it is important to keep in mind the exponential smallness of the fugacity $y$, Eq. (\[Eq:y\]). Due to this fact, the frequency below which QPS dominate over “gradient” non-linearities is exponentially small for ${E_{\mathrm{J}}}\gg E_1$. As a result, even deep in the insulating regime, $K_0\ll 1$, the quality factor is dominated by the gradient non-linearities and thus [*grows*]{} upon lowering the frequency in a wide frequency range if ${E_{\mathrm{J}}}\gg E_1$. Only at exponentially small frequencies does this “superconducting” behavior cross over to a decrease of the quality factor, reflecting the insulating character of the system in the infrared limit. Our results compare well with the experimental findings of Ref.
, as we discuss in more detail in Sec. \[Sec:Summary\]. ![Schematic behavior of the quality factor as a function of the rescaled frequency $\omega/\omega_p$ in a double-log scale in the insulating regime, $E_0 \gg {E_{\mathrm{J}}}$. The arrows indicate the change under an increase of ${E_{\mathrm{J}}}$ (thin lines correspond to a larger value of ${E_{\mathrm{J}}}$). The frequency scale of the crossover between the regime of dominant relaxation due to QPS (red lines) and that of dominant relaxation due to the nonlinearity (blue lines) is exponentially small in the parameter $\sqrt{{E_{\mathrm{J}}}/E_1}$. A further increase of ${E_{\mathrm{J}}}$ into the superconducting regime leads to a monotonic dependence of the quality factor (not shown in the figure). The scaling of the QPS and the “gradient” anharmonicity contributions indicated near the corresponding lines is based on Eqs.  and , respectively, under the assumption $\omega \gg T$. For the QPS contribution, the formula corresponds to the clean case. In the disordered case, the scaling of the QPS contribution can be inferred from Eqs.  and ; this does not affect the qualitative appearance of the plot.[]{data-label="Fig:quality-factor"}](quality-factor.pdf) Summary and discussion\[Sec:Summary\] ===================================== We have studied the decay of plasmonic waves in JJ chains. Motivated by a recent experiment[@KuzminEtAl18], we have considered, besides a single one-dimensional chain, also a model of two capacitively coupled linear chains. It has been shown that in the parameter regime where the capacitance to ground ($C_g$) can be neglected, the theory for the antisymmetric mode in the double chain can be mapped onto a theory for a single chain. This mapping is possible because the symmetric mode acquires a large velocity due to the strong Coulomb interaction. Two sources for the relaxation of plasma waves have been considered.
First, the damping originating from the scattering generated by QPS leads to a relaxation rate that scales with frequency as a power law with a nonuniversal exponent that depends on the parameter $K_0=\sqrt{{E_{\mathrm{J}}}/E_0}$. The scaling behavior of the relaxation rate related to QPS in different parameter regimes is summarized in Fig. \[Fig:rate-QPS-processes\]. Since the QPS amplitude is exponentially small in the parameter $\sqrt{{E_{\mathrm{J}}}/E_1}$, the rate is very sensitive to this parameter. The second mechanism for the relaxation of plasma waves is their mutual interaction, mediated by other nonlinear terms. As an example, we have considered the lowest-order nonlinearity coming from the Josephson potential. This term leads to a relaxation rate that scales as the fourth power of frequency. The vanishing of the relaxation rate at low frequencies reflects the irrelevance of this term in the renormalization group sense. Nevertheless, for a small phase-slip amplitude (fugacity), the contribution originating from this nonlinearity may dominate over a wide range of frequencies. Comparing our findings to the experiment of Ref. , we find a very good qualitative agreement between our theory and experimental observations. All of the samples shown in Fig. 3b of Ref.  are nominally in the insulating regime. Specifically, values of the Luttinger constant $K_0$ that are extracted from the measured values of the impedance $Z$ (proportional to $1/K_0$) lead one to expect insulating behavior. However, the samples with a large ratio of ${E_{\mathrm{J}}}/E_1$ show an increase of the quality factor when lowering the frequency. This behavior suggests that the systems are in the superconducting regime. This apparent contradiction is resolved by noting that the crossover scale below which the QPS effects show up is exponentially small in the square root of ${E_{\mathrm{J}}}/E_1$.
As a result, the downturn of the quality factor indicating insulating behavior occurs below the lowest measured frequencies. For devices with a lower value of both $K_0$ and ${E_{\mathrm{J}}}/E_1$, the authors of Ref.  observe a flat behavior at intermediate frequencies with a tendency to drop at the lowest measured frequencies. This behavior is qualitatively consistent with our prediction for the frequency dependence of the quality factor that is dominated by QPS in the insulating regime at low frequencies. For a more quantitative comparison, the extension of the experimental measurement method to lower frequencies and the investigation of the temperature dependence would be beneficial. Let us discuss in more detail the experimental observations concerning the dependence of the quality factor on various input parameters. We consider first the more insulating chains. The authors of Ref.  point out a stronger sensitivity of the quality factor to the parameter ${E_{\mathrm{J}}}/E_1$ compared to the parameter $Z\propto 1/K_0$ for their weakest junctions (large $Z$ and low ${E_{\mathrm{J}}}/E_1$), an observation that is immediately understood within our theory. In these devices, the parameter $K_0$ is very small such that the exponent for the power law of the phase-slip contribution to the quality factor is only slightly modified when changing $K_0$. Even relatively large changes of the order of $20\%$ (as in the experiment) have only a small effect, since the value of $K_0$ is still small and modifies the exponent only weakly. On the other hand, the fugacity of QPS depends exponentially on the square root of ${E_{\mathrm{J}}}/E_1$, which explains the observed strong dependence of the quality factor on this parameter. Further, we compare the scaling predicted in our work to the experimental observations in low-impedance chains shown in Fig. S4 in the Supplementary Material of Ref. .
All these samples are characterized by a large ratio of ${E_{\mathrm{J}}}$ over $E_1$ such that the QPS effects should be negligibly small in the range of measured frequencies. Indeed, the curves show an increasing behavior when lowering the frequency. More specifically, the corresponding frequency scaling of the quality factor is consistent with the theoretical expectation $\omega^{-3}$ from the decay due to the nonlinearity. Discussing the dependence on other parameters, we notice that the charging energy $E_0$ experiences a particularly strong variation in the experiment (within a factor of $\sim 75$), while the variation of other device parameters is smaller. All experimental curves appear to collapse reasonably well when plotted as a function of the rescaled frequency $\omega/\omega_p$. On the other hand, our prediction shows a strong power-law dependence ($E_0^3$) on the charging energy $E_0$. We speculate that a different kind of nonlinearity may be responsible for this discrepancy. It might originate from some kind of nonlinear capacitance and result in a different prefactor in the frequency dependence of the quality factor that does not depend so strongly on the charging energy $E_0$. The identification and analysis of other types of nonlinearities constitute an interesting prospect for future research. Before closing this paper, we add two more comments on possible extensions of this work. First, we assumed that the Josephson and charging energies are constant along the whole chain. In principle, one can generalize the model by including their spatial fluctuations. This will make the Luttinger-liquid constant $K_0$ randomly space dependent, $K_0\to K_0(x)$, and result in a possibility of elastic backscattering of plasmons that gets stronger with increasing frequency [@GramadaRaikh97; @Fazio98]. In the experiment of Ref. , this disorder appears to be very weak, as can be inferred from regularly spaced resonances at higher frequencies.
One can imagine, however, chains with a stronger $K_0(x)$-type disorder. An investigation of the combined effect of such a disorder and interaction on plasmon spectroscopy is an interesting prospect for future research. Second, our analysis of the width of the plasmonic resonances, which relies on the golden rule, assumes a continuous spectrum. This is justified if the obtained rate is larger than the level spacing of the final states to which a plasmon decays. In particular, for the gradient-anharmonicity decay, these are [*three-particle*]{} states: the final states for a decay of a plasmon with momentum $q_1$ are characterized by three momenta $q_{1}^{\prime}$, $q_2$, and $q_2^{\prime}$, see Fig. \[Fig:process\]. The corresponding three-particle level spacing is much smaller than the single-particle level spacing in a long chain since it scales as $1/N_x^3$ with the length $N_x$. Thus, the analysis remains applicable despite the discrete *single-particle* spectrum. The situation changes in shorter chains where one might be able to reach a regime in which the golden-rule rate is smaller than the three-particle level spacing. In this case, effects of localization in the Fock space may become important. For a related discussion in the context of electronic levels in quantum dots, see Refs. . While preparing this paper for publication, we learnt about a related unpublished work [@HouzetGlazmanUnpublished]. *Note added in proof*. Recently, a preprint appeared in which a similar problem was addressed. The results of Ref.  for plasmon decay due to QPS are consistent with our findings; gradient anharmonicities were not considered there. We thank D. Abanin, L. Glazman, R. Kuzmin, V. E. Manucharyan, and A. Shnirman for fruitful discussions. This work was supported by the Russian Science Foundation under Grant No. 14-42-00044 and by the Deutsche Forschungsgemeinschaft. IP acknowledges support by the Swiss National Science Foundation.
Derivation of low-energy field theory {#App:Field_theory} ===================================== This appendix is devoted to the derivation of the low-energy field theory for the antisymmetric mode of the double-chain system. Our starting point is the lattice Hamiltonian . We denote by $E_g$, $E_0$ and $E_1$ the charging energies associated with the capacitances $C_g$, $C_0$ and $C_1$, respectively, with $E_i=(2e)^2/C_i$. The basic idea is that in the limit of a small capacitance $C_g$, the associated (large) charging energy $E_g$ suppresses the charge fluctuations in the symmetric mode (at least at long scales), leaving us with the antisymmetric mode as the only dynamical degree of freedom. This observation was previously employed in the literature to obtain the low-energy theory of the antisymmetric mode, see Refs. . Here we generalize the results of Refs.  to the case when the Coulomb interaction is long-ranged ($C_1\gg C_0$) and charge disorder is present in the system. We show that the effective theory takes the form of the sine-Gordon model, Eqs. (\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]), supplemented by a “gradient” non-linearity term, Eq. (\[Eq:nonlinearity-action\]). This goal can be achieved in two different ways. In Sec. \[Ap:FiledTheory\], we present a semi-quantitative derivation of our results from the field-theory description of the symmetric and antisymmetric modes in the double chain. A more microscopic analysis of the initial lattice model (leading to the same results) is carried out in Secs. \[App:Local\] and \[App:LongRange\] for the cases of short-range ($C_1=0$) and long-range ($C_1\gg C_0$) Coulomb interaction, respectively.
Heuristic derivation from the continuum field theory {#Ap:FiledTheory} ---------------------------------------------------- We start our discussion of the effective theory for the antisymmetric mode in the double chain from a heuristic derivation based on the field-theory description of the lattice model (\[Eq:lattice-Hamiltonian-double-chain\]). The latter is derived in full analogy to the case of a single JJ chain. To this end, we introduce two fields $\phi_{\uparrow}$ and $\phi_{\downarrow}$ related to the charges in the upper and lower chains via $\partial_x\phi_\sigma=-\pi{\cal N}_\sigma$, as well as their combinations $$\begin{pmatrix} \phi_s \\ \phi_a \end{pmatrix} =\frac12 \begin{pmatrix} 1& 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} \phi_{{\uparrow}} \\ \phi_{{\downarrow}} \end{pmatrix}. \label{Eq:rotation}$$ In terms of these fields, the quadratic part of the action corresponding to the lattice model (\[Eq:lattice-Hamiltonian-double-chain\]) reads $$\begin{split} S_0=\frac{1}{\pi^2}&\int\frac{{\mathrm{d}}q}{2\pi}\frac{{\mathrm{d}}\omega}{2\pi}\Biggl\{\left[\frac{(2e)^2 q^2}{C_g+C_1 q^2}+\frac{\omega^2}{{E_{\mathrm{J}}}}\right]|\phi_s({\mathbf{q}})|^2 \\ &+\left[\frac{(2e)^2 q^2}{2C_0+C_g+C_1 q^2}+\frac{\omega^2}{{E_{\mathrm{J}}}}\right]|\phi_a({\mathbf{q}})|^2\Biggr\}. \end{split} \label{Eq:S_0_App}$$ The QPS can be accounted for by $$S_{\mathrm{ps}}=yu_0\!\!\int\!\! {\mathrm{d}}x {\mathrm{d}}\tau \left\{\cos\left[2\phi_{{\uparrow}}\!+\!{\cal Q}_{{\uparrow}}(x)\right]+\cos\left[2\phi_{{\downarrow}}+{\cal Q}_{{\downarrow}}(x)\right]\right\}, \label{Sps}$$ where $$\mathcal{Q}_{\sigma}(x)=2\pi \int_{-\infty}^{x}{\mathrm{d}}x^{\prime} Q_{\sigma}(x^{\prime})$$ and $Q_{{\uparrow}({\downarrow})}(x)$ is the random charge in the upper (lower) chain. Note that in Eq. (\[Sps\]) we consider QPS as happening independently in the upper and lower chains. This is justified provided that $E_1\equiv(2e)^2/C_1 \ll E_g, E_0\equiv (2e)^2/C_0$.
The fugacity $y$ is then exponentially small in the parameter $\sqrt{{E_{\mathrm{J}}}/E_1}$. In the long wave-length limit, $q\ll \sqrt{C_g/C_1}\ll \sqrt{C_0/C_1}$, the quadratic action (\[Eq:S\_0\_App\]) reduces to the Luttinger-liquid form $$S_0=\sum_{\rho=s,a}\frac{1}{2\pi^2u_{0,\rho}K_{0,\rho}}\int {\mathrm{d}}x {\mathrm{d}}\tau [u_{0,\rho}^2 (\partial_x \phi_{\rho})^2+(\partial_{\tau}\phi_{\rho})^2],$$ with $$\begin{split} u_{0,s}&=\sqrt{{E_{\mathrm{J}}}E_g}, \\ K_{0,s}&=\frac12\sqrt{\frac{{E_{\mathrm{J}}}}{E_g}}, \end{split} \quad \begin{split} u_{0,a}&=\sqrt{{E_{\mathrm{J}}}E_0/2}, \\ K_{0,a}&=\sqrt{\frac{{E_{\mathrm{J}}}}{2E_0}}. \end{split}$$ Let us now consider the perturbative expansion of the partition function $Z$ in the fugacity $y$. The lowest non-vanishing correction arises in the second order and reads $$\begin{split} \delta Z&=\frac{y^2u_0^2}{4}\int {\mathrm{d}}^2 r_1 {\mathrm{d}}^2 r_2 \frac{1}{|{\mathbf{r}}_1-{\mathbf{r}}_2|^{2\pi K_{0,s}}} \\ &\times\Bigl<\cos[2(\phi_a({\mathbf{r}}_1)-\phi_a({\mathbf{r}}_2))+\mathcal{Q}_{{\uparrow}}(x_1)-\mathcal{Q}_{{\uparrow}}(x_2)] \\ &\hspace{0.4cm}+\cos[2(\phi_a({\mathbf{r}}_1)+\phi_a({\mathbf{r}}_2))+\mathcal{Q}_{{\uparrow}}(x_1)-\mathcal{Q}_{{\downarrow}}(x_2)] \\ &\hspace{0.4cm}+\cos[2(\phi_a({\mathbf{r}}_1)+\phi_a({\mathbf{r}}_2))-\mathcal{Q}_{{\downarrow}}(x_1)+\mathcal{Q}_{{\uparrow}}(x_2)] \\ &\hspace{0.4cm}+\cos[2(\phi_a({\mathbf{r}}_1)-\phi_a({\mathbf{r}}_2))-\mathcal{Q}_{{\downarrow}}(x_1)+\mathcal{Q}_{{\downarrow}}(x_2)]\Bigr>_{0,a}. \end{split} \label{Eq:DeltaZ1}$$ Here, ${\mathbf{r}}=(x, u_{0, a}\tau)$. In Eq. (\[Eq:DeltaZ1\]) we have performed explicit averaging over the symmetric mode $\phi_s$ but kept the correlation functions of $\phi_a$ in the unevaluated form. 
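As a small consistency check of the mode parameters above, note that both modes share the same product $u_{0,\rho}K_{0,\rho}={E_{\mathrm{J}}}/2$, while the charging energies enter only through the ratio $u_{0,\rho}/K_{0,\rho}$. The snippet below (with illustrative energy values) verifies these identities.

```python
import math

E_J, E_g, E_0 = 50.0, 0.2, 1.0  # illustrative energies, E_g << E_0 << E_J

# symmetric and antisymmetric mode parameters as given above
u_s, K_s = math.sqrt(E_J * E_g), 0.5 * math.sqrt(E_J / E_g)
u_a, K_a = math.sqrt(E_J * E_0 / 2), math.sqrt(E_J / (2 * E_0))

# both modes obey u*K = E_J/2: the Josephson energy is common to both,
# while the charging energies only enter the ratio u/K
print(u_s * K_s, u_a * K_a)  # both equal E_J/2 = 25.0
print(u_s / K_s, u_a / K_a)  # 2*E_g = 0.4 and E_0 = 1.0
```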
Introducing $${\cal Q}_{s}(x)=\mathcal{Q}_{{\uparrow}}(x)+\mathcal{Q}_{{\downarrow}}(x)\,, \qquad {\cal Q}_{a}(x)=\mathcal{Q}_{{\uparrow}}(x)-\mathcal{Q}_{{\downarrow}}(x),$$ we find $$\begin{gathered} \delta Z=y^2u_0^2 \int {\mathrm{d}}^2 r_1 {\mathrm{d}}^2 r_2 \frac{\cos[{\cal Q}_s(x_1)-{\cal Q}_s(x_2)]}{|{\mathbf{r}}_1-{\mathbf{r}}_2|^{2\pi K_{0,s}}} \times\\ \left<\cos[2\phi_a({\mathbf{r}}_1)+{\cal Q}_{a}(x_1)]\cos[2\phi_a({\mathbf{r}}_2)+{\cal Q}_{a}(x_2)]\right>_{0,a}. \label{Eq:correction_Z_App}\end{gathered}$$ Assuming that we are in the regime $K_{0,s}\ll 1$, we can approximate $|{\mathbf{r}}_1-{\mathbf{r}}_2|^{2\pi K_{0, s}}$ by unity. If the charge disorder is weak, we can further replace $\cos[{\cal Q}_s(x_1)-{\cal Q}_s(x_2)]$ by unity. In this case, the integrations over ${\mathbf{r}}_1$ and ${\mathbf{r}}_2$ decouple and we observe that the correction (\[Eq:correction\_Z\_App\]) can be viewed as resulting from the effective action \[cf. Eq. (\[Eq:S\_0\_App\]); we take into account that $C_g\ll C_0$\] $$\begin{aligned} S^{\rm eff}&=&S_0^{\rm eff}+S_{\mathrm{ps}}^{\mathrm{eff}},\\ S_0^{\rm eff}&=& \frac{1}{\pi^2}\int\frac{{\mathrm{d}}q}{2\pi}\frac{{\mathrm{d}}\omega}{2\pi} \left[\frac{(2e)^2 q^2}{2C_0+C_1 q^2}+\frac{\omega^2}{{E_{\mathrm{J}}}}\right]|\phi_a({\mathbf{q}})|^2,\nonumber\\ \label{Eq:S01} \\ S_{\mathrm{ps}}^{\mathrm{eff}}&=&\sqrt{2} y u_0 \int {\mathrm{d}}^2 r \cos[2\phi_a({\mathbf{r}})+\mathcal{Q}_a(x)],\end{aligned}$$ which (up to a redefinition of the fugacity $y$ by an unimportant numerical factor) reproduces Eqs. (\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]) of the main text with $\alpha=2$. If the charge disorder is strong, we expand the cosine in Eq. , $$\begin{split} \cos[{\cal Q}_s(x_1)-{\cal Q}_s(x_2)]&=\cos[{\cal Q}_s(x_1)]\cos[{\cal Q}_s(x_2)] \\ &\,\,+\sin[{\cal Q}_s(x_1)]\sin[{\cal Q}_s(x_2)]. \end{split} \label{Eq:expansion-cosine}$$ Both terms in Eq. , when substituted into Eq. 
, give equivalent contributions, if the disorder ${\cal Q}_s$ is strong. In the opposite limit of weak disorder (small ${\cal Q}_s$), the second term would be much smaller than the first one. Thus, keeping only the first term will always yield a correct result, up to a coefficient of order unity. Proceeding in this way, we again find an effective action for QPS that is of first order in $y$ \[cf. discussion of the weakly disordered case\], $$\label{S-ps-random-fugacity} S_{\mathrm{ps}}^{\mathrm{eff}}=\sqrt{2}y u_0 \int {\mathrm{d}}^2 r \cos[\mathcal{Q}_s(x)]\cos[2\phi_a({\mathbf{r}})+\mathcal{Q}_a(x)].$$ For strong charge disorder we find, besides the random phase, also a random amplitude of the QPS action. As shown in Ref. , the QPS action without a random amplitude, Eq. , automatically generates a QPS term with a random amplitude if the charge disorder is strong. The phase-slip action Eq.  hence adequately describes the effects of QPS on the antisymmetric mode in the double chain in the disordered case. Let us now discuss the “gradient” anharmonicity correction to the effective action of the antisymmetric mode. Taking into account the “gradient” anharmonicity arising from the quartic expansion of the Josephson coupling in each of the two chains one finds $$S_{\mathrm{nl}}=\!\frac{-1}{12\pi^4 {E_{\mathrm{J}}}^3}\!\!\int \!\!{\mathrm{d}}x\! \left[(\partial_{\tau} \phi_a)^4\!+\!(\partial_{\tau} \phi_s)^4\!+\!6(\partial_{\tau} \phi_s)^2(\partial_{\tau} \phi_a)^2\right]. \label{Eq:nonlinear-both-modes}$$ We now average (\[Eq:nonlinear-both-modes\]) over fluctuations of $\phi_s$. Omitting a trivial constant term arising from the first term in Eq. (\[Eq:nonlinear-both-modes\]) and a renormalization of the Josephson energy in Eq. (\[Eq:S01\]) by a numerical factor arising from the last term we get $$S_{\rm nl}^{\rm eff}=-\frac{1}{12\pi^4 {E_{\mathrm{J}}}^3}\int {\mathrm{d}}x {\mathrm{d}}\tau\,(\partial_{\tau} \phi_a)^4,$$ and reproduce Eq. 
(\[Eq:nonlinearity-action\]) with $\alpha=2$. Elimination of symmetric mode at the level of the lattice model: the case of local Coulomb interaction {#App:Local} ------------------------------------------------------------------------------------------------------ In this appendix we assume local Coulomb interaction ($C_1=0$) and derive the effective theory of the antisymmetric mode by integrating out the symmetric mode directly in the lattice model (\[Eq:lattice-Hamiltonian-double-chain\]). We closely follow here the derivation of the effective theory for a single chain outlined in appendix A of Ref. . The generalization of this derivation to the case $C_1\gg C_0$ will be presented in Sec. \[App:LongRange\]. We start by constructing the path-integral representation of the partition function for the system. To this end, we discretize the (imaginary) time $\tau \in [0,\beta)$ in $N_{\tau}$ steps with spacing $\Delta \tau$ (the precise value will be discussed later). For concreteness we assume periodic boundary conditions along the chains, with $N_x$ grains in each chain. In the following, $n$ and $i$ are the indices of the lattice point in $\tau$ and $x$ directions, respectively, and $\sigma={\uparrow},{\downarrow}$ discriminates between the two chains. At each vertex of the space-time lattice ($n$,$i$,$\sigma$), a resolution of unity of the form $$\begin{split} \mathds{1}=\sum_{\mathcal{N}_{{\uparrow}},\mathcal{N}_{{\downarrow}}}\int_{0}^{2\pi}\frac{{\mathrm{d}}\theta_{{\uparrow}}}{2\pi}\int_{0}^{2\pi}\frac{{\mathrm{d}}\theta_{{\downarrow}}}{2\pi}&\left|\mathcal{N}_{{\uparrow}},\mathcal{N}_{{\downarrow}}\right\rangle\left\langle\theta_{{\uparrow}},\theta_{{\downarrow}}\right| \\ &\times {\mathrm{e}}^{-i \theta_{{\uparrow}}\mathcal{N}_{{\uparrow}}}\,{\mathrm{e}}^{-i \theta_{{\downarrow}}\mathcal{N}_{{\downarrow}}} \end{split}$$ is inserted. 
This results in the action $$\begin{split} &S=-i\sum_{n,i,\sigma}\mathcal{N}_{i,\sigma}^n(\partial_{\tau}\theta)^n_{i,\sigma}+{E_{\mathrm{J}}}\Delta\tau \sum_{n,i,\sigma} (1-\cos[(\partial_x \theta)_{i,\sigma}^n]) \\ &\!+\frac{(2e)^2\Delta\tau}{2}\!\!\sum_{n,i,\sigma,\sigma^{\prime}}\!(C^{-1})_{\sigma,\sigma^{\prime}}\left(\mathcal{N}_{i,\sigma}^n-Q_{i, \sigma}\right)\left(\mathcal{N}_{i,\sigma^{\prime}}^n-Q_{i, \sigma^\prime}\right), \end{split}$$ where we have introduced the lattice derivatives $$(\partial_x\theta)_{i,\sigma}^n=\theta_{i+1,\sigma}^n-\theta_{i,\sigma}^n \,\, \mathrm{and} \,\, (\partial_{\tau}\theta)_{i,\sigma}^n=\theta_{i,\sigma}^{n+1}-\theta_{i,\sigma}^n;$$ by $Q_{i, \sigma}$ we denote the stray charges and the inverse capacitance matrix in the local case reads $$C^{-1}=\frac{1}{C_g(C_g+2C_0)} \begin{pmatrix} C_g+C_0 & C_0 \\ C_0 & C_g+C_0 \end{pmatrix}.$$ To perform the summation over the charge variables ${\cal N}^n_{i, \sigma}$, it is convenient to introduce the symmetric and antisymmetric combinations of charges and phases $$\begin{aligned} {\cal N}^n_{i, s}&=\frac{{\cal N}^n_{i, {\uparrow}}+ {\cal N}^n_{i, {\downarrow}}}{2}, &{\cal N}^n_{i, a}&=\frac{{\cal N}^n_{i, {\uparrow}}- {\cal N}^n_{i, {\downarrow}}}{2}, \label{Eq:Nsa}\\ {Q}_{i, s}&=\frac{Q_{i, {\uparrow}}+ Q_{i, {\downarrow}}}{2}, &Q_{i, a}&=\frac{Q_{i, {\uparrow}}- Q_{i, {\downarrow}}}{2} \label{Eq:Qrot},\\ \theta_{i,s}^n&=\frac{\theta_{i,{\uparrow}}^n+\theta_{i,{\downarrow}}^n}{2}, &\theta_{i,a}^n&=\theta_{i,{\uparrow}}^n-\theta_{i,{\downarrow}}^n. \label{Eq:thetaRot}\end{aligned}$$ According to (\[Eq:Nsa\]), the charges ${\cal N}^n_{i, s}$ and ${\cal N}^n_{i, a}$ are either both integer or both half-integer. Note also the absence of $1/2$ in the definition of $\theta_{i, a}^n$. 
The partition function reads now $$Z=\sum_{\{{\cal N}_{i,s}^n,{\cal N}_{i,a}^n\}}\int_{0}^{2\pi} \mathcal{D}\theta_{{\uparrow}}\mathcal{D}\theta_{{\downarrow}} {\mathrm{e}}^{-\sum_{i,n}S_i^n}, \label{Eq:partition_function_new_variables}$$ with $$\begin{split} S_i^n=&-2 i {\cal N}_{i,s}^n(\partial_{\tau}\theta)_{i,s}^n -i {\cal N}_{i,a}^n(\partial_{\tau}\theta)_{i,a}^n \\ &+(2e)^2\Delta \tau \Bigl[\frac{({\cal N}_{i,s}^n-Q_{i, s})^2}{C_g}+\frac{({\cal N}_{i,a}^n-Q_{i, a})^2}{2C_0+C_g}\Bigr] \\ &+{E_{\mathrm{J}}}\Delta\tau \sum_{\sigma}(1-\cos[(\partial_x\theta)_{i,\sigma}^n]). \end{split} \label{Eq:S1}$$ We observe that in the limit of a small capacitance $C_g$, $(2e)^2/C_g\gg {E_{\mathrm{J}}}, E_0$, the dynamics of the charges ${\cal N}_{i, s}$ is frozen out and their values are pinned to the background charges $Q_{i, s}$[^2] $${\cal N}_{i, s}^{n}=\frac12 \lfloor{2 Q_{i, s}} \rfloor$$ where $\lfloor{\cdot}\rfloor$ stands for the integer part. The first term in Eq. (\[Eq:S1\]) is then a total derivative and can be dropped due to periodic boundary conditions in the imaginary time. Moreover, it is easy to see that, upon the proper redefinition of the stray charges $Q_{i, a}$, one can regard the summation over ${\cal N}_{i, a}^n$ in the partition function as running over integers irrespective of the (integer or half integer) value of ${\cal N}_{i, s}$. We thus conclude that, with the charges ${\cal N}_{i, s}$ being frozen out, the dynamics of the system is governed by the action $$\begin{split} S=\sum_{n,i}\Bigl\{&-i {\cal N}_{i,a}^n(\partial_{\tau}\theta)_{i,a}^n+(2e)^2 \Delta \tau \frac{({\cal N}_{i,a}^n-Q_{i, a})^2}{C_g+2C_0} \\ &+2{E_{\mathrm{J}}}\Delta\tau \left(1-\cos[(\partial_x\theta)_{i,s}^n]\cos[(\partial_x\theta)_{i,a}^n/2]\right)\Bigr\}. \end{split} \label{Eq:S2}$$ The last step one needs to perform in order to derive from Eq. (\[Eq:S2\]) the effective action for the antisymmetric mode is the integration over the phases $\theta_{i, s}^n$. 
To this end, we assume open boundary conditions in the space direction and introduce new integration variables $$\tilde{\theta}_{i,s}^n=\theta_{i, s}^n-\theta_{i-1, s}^n, \qquad i\geq 2.$$ The relevant factor in the partition function then takes the form $$\begin{split} &\prod_{i=1}^{N_x-1} \prod_{n=1}^{N_{\tau}}\Bigl(\int_0^{2\pi} {\mathrm{d}}\tilde{\theta}_{i+1,s}^n \\ &\times \exp\left\{-2{E_{\mathrm{J}}}\Delta\tau (1-\cos[(\partial_x\theta)_{i,a}^n/2]\cos[\tilde{\theta}_{i+1,s}^n])\right\}\Bigr)\\ & \propto \exp\left\{-\Delta \tau \sum_{i=1}^{N_x-1} \sum_{n=1}^{N_{\tau}} g \left[(\partial_x\theta)_{i,a}^n\right]\right\}. \end{split}$$ Here we have dropped an irrelevant normalization factor, and the function $g(\gamma)$ can be expressed in terms of the modified Bessel function $\mathrm{I}_0$: $$g(\gamma)=-\frac{1}{\Delta\tau}\ln \mathrm{I}_0\left(2{E_{\mathrm{J}}}\Delta\tau\cos \frac\gamma2\right). \label{Eq:g}$$ The function $g(\gamma)$ is $2\pi$ periodic in its argument. Thus, we can regard the effective action of the antisymmetric mode, $$\begin{gathered} S=\sum_{n,i}\Bigl\{-i {\cal N}_{i,a}^n(\partial_{\tau}\theta)_{i,a}^n+(2e)^2 \Delta \tau \frac{({\cal N}_{i,a}^n-Q_{i, a})^2}{C_g+2C_0}\\+ \Delta\tau \, g[(\partial_x\theta)_{i,a}^n]\Bigr\},\end{gathered}$$ as describing a chain of JJs with the effective Josephson coupling given by $g(\partial_x \theta)$ and proceed in close analogy with Ref. . We develop the theory starting from the superconducting ground state. As we will see later, this means that we are in the limit ${E_{\mathrm{J}}}\Delta \tau \gg 1$. In this limit, the main contribution comes from the region close to $\partial_x\theta_a=0$ (mod $2\pi$).
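In this region, the large-${E_{\mathrm{J}}}\Delta\tau$ asymptotics $\ln \mathrm{I}_0(z)\simeq z$ reduces Eq. (\[Eq:g\]) to $g(\gamma)\simeq -2{E_{\mathrm{J}}}\cos(\gamma/2)$ up to an additive constant. A short numerical check (purely illustrative) confirms that the quartic Taylor coefficient of this effective coupling is $-{E_{\mathrm{J}}}/192$, anticipating Eq. (\[Eq:nonlinearity\_App\]) below:

```python
import math

E_J = 1.0

def g_asym(gamma):
    # large-E_J*dtau asymptotics of Eq. (g): ln I0(z) ~ z gives g ~ -2 E_J cos(gamma/2)
    return -2 * E_J * math.cos(gamma / 2)

# fourth derivative at gamma = 0 via a central five-point stencil
h = 0.05
d4 = (g_asym(-2 * h) - 4 * g_asym(-h) + 6 * g_asym(0)
      - 4 * g_asym(h) + g_asym(2 * h)) / h**4

quartic_coeff = d4 / 24  # Taylor coefficient of gamma^4
print(quartic_coeff)     # approximately -E_J/192 = -0.00520833...
```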
Thus, we can employ the Villain approximation that reads $$\exp\left[ -\Delta\tau \, g(\partial_x \theta_a)\right]\propto \sum_{h}{\mathrm{e}}^{-\frac{{E_{\mathrm{J}}}\Delta\tau}{4}(\partial_x\theta_a-2\pi h)^2}.$$ Fixing the time step $\Delta\tau$ to $$\Delta \tau=\sqrt{\frac{2}{{E_{\mathrm{J}}}E_0}}=\sqrt{\frac{2C_0}{(2e)^2{E_{\mathrm{J}}}}},$$ and following the derivation of the sine-Gordon theory discussed in Ref. , we find (skipping the index “a”) $$\begin{split} S=\frac{1}{2\pi^2 K_0}\int {\mathrm{d}}x{\mathrm{d}}\tau& [u_0^2(\partial_x\phi)^2+(\partial_{\tau}\phi)^2] \\ &+yu_0 \int {\mathrm{d}}x {\mathrm{d}}\tau \cos[2\phi(x,\tau)+{\cal Q}_a], \end{split} \label{S_local_App}$$ where $$K_0=\sqrt{\frac{{E_{\mathrm{J}}}}{2E_0}}\,, \qquad u_0=\sqrt{{E_{\mathrm{J}}}E_0/2}. \label{Eq:definition_K}$$ Equation (\[S\_local\_App\]) is equivalent to Eqs. (\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]) in the limit $\Lambda=\infty$. To complete our analysis we thus only need to extract the “gradient” anharmonicity term. It arises from the fourth order expansion of the effective Josephson coupling (\[Eq:g\]) and reads $${\mathcal{H}}_{\mathrm{nl}}=-\frac{{E_{\mathrm{J}}}}{192}\int {\mathrm{d}}x \left(\partial_x\theta_a\right)^4. \label{Eq:nonlinearity_App}$$ This result coincides with the “gradient” anharmonicity term stated in the main text, Eq. (\[Eq:nonlinearity\]), with $\alpha=2$. Elimination of symmetric mode at the level of the lattice model: the case of long-range Coulomb interaction {#App:LongRange} ----------------------------------------------------------------------------------------------------------- Let us now discuss the derivation of the effective theory in the case of the long-range Coulomb interaction, $C_0\ll C_1$. Throughout this section we take the limit of $C_g=0$. 
It is convenient to represent the partition function as a path integral over the phases $\theta_{i, \sigma}(\tau)$, $$Z=\int \prod_{i,\sigma} {\cal D}\theta_{i, \sigma}(\tau) e^{-S} \label{Eq:ZTheta}$$ with the action $$\begin{gathered} S=\int {\mathrm{d}}\tau \left\{\sum_{i, \sigma}\left[\frac{[(\partial_x\dot{\theta})_{i, \sigma}]^2}{2E_1} -{E_{\mathrm{J}}}\cos\left[(\partial_x \theta)_{i, \sigma}\right]\right.\right.\\+\left.i\dot{\theta}_{i, \sigma}Q_{i, \sigma}\Biggr]+\sum_{i}\frac{\left(\dot{\theta}_{i,{\uparrow}}-\dot{\theta}_{i, {\downarrow}}\right)^2}{2E_0}\right\}. \label{Eq:STheta}\end{gathered}$$ The action Eq. (\[Eq:STheta\]) is equivalent to the Hamiltonian (\[Eq:lattice-Hamiltonian-double-chain\]) in the limit $C_g=0$. The first term in the second line describes the effect of random stray charges. The quantization of the grain charges ${\cal N}_{i, \sigma}$ is reflected in the boundary condition along the imaginary time $$\theta_{i, \sigma}(\beta)=\theta_{i, \sigma}(0)+2\pi n_{i, \sigma} \ ,$$ where $\beta$ is the inverse temperature and $n_{i, \sigma}$ are integers. In the considered limit of $C_g=0$ the dependence of the action on the symmetric combination of phases, $\theta_{i, s}\equiv (\theta_{i, {\uparrow}}+\theta_{i, {\downarrow}})/2$, is through its spatial gradient only. We thus introduce $$\Theta_{i,s}=\frac{(\partial_x \theta)_{i,{\uparrow}}+(\partial_x \theta)_{i,{\downarrow}}}{2}, \qquad \theta_{i,a}=\theta_{i,{\uparrow}}-\theta_{i,{\downarrow}}$$ as new integration variables and find $$\begin{split} S&=\int {\mathrm{d}}\tau \sum_{i}\Biggl\{ \frac{\dot{\Theta}_{is}^2}{E_1} + \frac{\left[(\partial_x\dot{\theta})_{i, a}\right]^2}{4E_1} +2i\dot{\Theta}_{i, s}{\cal Q}_{i, s} \\ &+i\dot{\theta}_{i, a} Q_{i, a}-2{E_{\mathrm{J}}}\cos\left[\Theta_{i, s}\right]\cos\left[\frac{(\partial_x \theta)_{i, a}}{2}\right]+\frac{\dot{\theta}_{i,a}^2}{2E_0} \Biggr\}.
\end{split} \label{Eq:STheta1}$$ Here $${\cal Q}_{i, s}=\sum_{j<i}Q_{j, s}$$ and the symmetric and antisymmetric combinations of the stray charges, $Q_{i, s}$ and $Q_{i, a}$, are defined according to Eq. (\[Eq:Qrot\]). The boundary conditions in the time direction are given by $$\begin{aligned} \theta_{i, a}(\beta)&=&\theta_{i, a}(0)+2\pi n_{i, a} \ ,\\ \Theta_{i, s}(\beta)&=&\Theta_{i, s}(0)+2\pi n_{i, s}+\pi \delta_{i} \ ,\end{aligned}$$ where $n_{i, s (a)}$ are integer numbers and $$\delta_{i}=(n_{i+1, a}-n_{i, a})\mod 2.$$ We can now formally perform the functional integration over the symmetric mode. Indeed, the integrations at different spatial points decouple. It is then easy to see that the result of the integration over $\Theta_{i, s}(\tau)$ can be expressed as $$\begin{gathered} \int {\cal D} \Theta_{i, s}(\tau) \exp\left\{- \int {\mathrm{d}}\tau \left[\frac{\dot{\Theta}_{is}^2}{E_1} +2i\dot{\Theta}_{i, s}{\cal Q}_{i, s} \right.\right.\\\left.\left.- 2{E_{\mathrm{J}}}\cos\left[\Theta_{i, s}\right]\cos\left[\frac{(\partial_x \theta)_{i, a}}{2}\right]\right] \right\}\\={\operatorname{Tr}}U(\beta)\equiv e^{-\delta S[\partial_x \theta_{i, a}(\tau)]}, \label{Eq:deltaS}\end{gathered}$$ where $U(\tau)$ is the (imaginary-time) evolution operator defined by $$\frac{{\mathrm{d}}U}{{\mathrm{d}}\tau}=-{\cal H}[\theta_{i, a}(\tau)-\theta_{i+1, a}(\tau)]U(\tau), \label{Eq:U}$$ with the time dependent Hamiltonian $$\begin{gathered} \mathcal{H}=E_1 \left({\cal N}-2{\cal Q}_{i, s}-\frac{\delta_i}{2}\right)^2\\-2{E_{\mathrm{J}}}\cos\left(\Theta +\pi \delta_i\frac{\tau}{\beta}\right)\cos\frac{\theta_{i+1, a}(\tau)-\theta_{i, a}(\tau)}{2}. \label{Eq:HTheta}\end{gathered}$$ Here, ${\cal N}$ is the (integer-valued) momentum canonically conjugate to the coordinate $\Theta$. The contribution $\delta S[\partial_x \theta_{i, a}(\tau)]$ to the action of the antisymmetric mode defined by Eqs. 
(\[Eq:deltaS\]), (\[Eq:U\]) and (\[Eq:HTheta\]) is generally a complicated functional of the phase difference $\partial_x \theta_{i, a}(\tau)$. We are mainly interested, however, in the low-frequency modes of the field $\theta_{i, a}$ (with frequencies much less than the plasma frequency $\sqrt{E_1 {E_{\mathrm{J}}}}$). The adiabatic approximation can then be used for the computation of the evolution operator (\[Eq:U\]). Moreover, for $E_1\ll {E_{\mathrm{J}}}$ and low temperature, the dynamics of $\Theta$ can be determined simply by minimization of the potential energy in the Hamiltonian (\[Eq:HTheta\]). This leads to $$\delta S=-2{E_{\mathrm{J}}}\int {\mathrm{d}}\tau \left|\cos\frac{(\partial_x \theta)_{i, a}}{2} \right| . \label{Eq:DeltaS}$$ Equations (\[Eq:STheta1\]) and (\[Eq:DeltaS\]) give rise to the effective action for the antisymmetric mode $$\begin{gathered} S= \int {\mathrm{d}}\tau \sum_{i}\left\{ \frac{\left[(\partial_x\dot{\theta})_{i, a}\right]^2}{4E_1} -2{E_{\mathrm{J}}}\left|\cos\left[\frac{(\partial_x \theta)_{i, a}}{2}\right]\right| \right.\\\left. +i\dot{\theta}_{i, a} Q_{i, a}+\frac{\dot{\theta}_{i,a}^2}{2E_0} \right\}. \label{Eq:SThetaA}\end{gathered}$$ The subsequent derivation of the effective sine-Gordon theory then proceeds along the lines of Ref. and leads to Eqs. (\[Eq:S\_clean\]), (\[Eq:S\_0\]) and (\[Eq:S\_ps\_disordered\]) with $\alpha=2$. The fourth order expansion of the Josephson coupling in (\[Eq:SThetaA\]) gives rise to the “gradient” anharmonicity, Eq. (\[Eq:nonlinearity\]). Before closing this section, let us comment on the relation between the presented derivation and the field-theoretic derivation discussed in Sec. \[Ap:FiledTheory\] of the appendix. Both derivations lead to the effective sine-Gordon model for the antisymmetric mode. It was found in Sec. \[Ap:FiledTheory\] that the corresponding fugacity $y$ fluctuates in space, see Eq. (\[S-ps-random-fugacity\]). Such fluctuations are not seen in Eq. (\[Eq:SThetaA\]).
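As a quick consistency check of the "gradient" anharmonicity, the fourth-order expansion of the Josephson term $-2{E_{\mathrm{J}}}\cos[(\partial_x\theta)_{i,a}/2]$ in Eq. (\[Eq:SThetaA\]) can be verified symbolically. A minimal sketch (using sympy; the variable names are ours):

```python
import sympy as sp

x, EJ = sp.symbols('x E_J', positive=True)

# Josephson term of the effective action for the antisymmetric mode,
# with x standing for the phase gradient (d_x theta)_a, near x = 0
V = -2 * EJ * sp.cos(x / 2)

# Taylor expansion: -2*E_J + E_J*x**2/4 - E_J*x**4/192 + ...
series = sp.series(V, x, 0, 6).removeO()
quartic = series.coeff(x, 4)
print(quartic)  # -E_J/192, the "gradient" anharmonicity coefficient
```

The quartic coefficient $-{E_{\mathrm{J}}}/192$ reproduces the prefactor of Eq. (\[Eq:nonlinearity\_App\]), consistent with $\alpha=2$.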
We anticipate that a more accurate treatment of QPS based on Eqs. (\[Eq:U\]) and (\[Eq:HTheta\]) will produce a fugacity $y$ that depends on the configuration of the stray charges $Q_{i, s}$ and fluctuates in space. Furthermore, as shown in Ref. , the random amplitude of the QPS term is generated under the renormalization-group transformation. The results of both derivations are therefore equivalent.

References {#references .unnumbered}
==========

- Phys. Rev. B **54**, R6857, <https://doi.org/10.1103/PhysRevB.54.R6857>
- Phys. Rev. B **88**, 144506, <https://doi.org/10.1103/PhysRevB.88.144506>
- Phys. Rev. B **92**, 045435, <https://doi.org/10.1103/PhysRevB.92.045435>
- Phys. Rev. Lett. **119**, 167701, <https://doi.org/10.1103/PhysRevLett.119.167701>
- Phys. Rev. Lett. **81**, 204, <https://doi.org/10.1103/PhysRevLett.81.204>
- Phys. Rev. B **88**, 104501, <https://doi.org/10.1103/PhysRevB.88.104501>
- Phys. Rev. B **30**, 1138, <https://doi.org/10.1103/PhysRevB.30.1138>
- Phys. Rev. Lett. **89**, 096802, <https://doi.org/10.1103/PhysRevLett.89.096802>
- Nature Physics, <https://doi.org/10.1038/nphys1697>
- Phys. Rev. B **87**, 174513, <https://doi.org/10.1103/PhysRevB.87.174513>
- New J. Phys. **15**, 095014, <http://stacks.iop.org/1367-2630/15/i=9/a=095014>
- Phys. Rev. B **57**, R716, <https://doi.org/10.1103/PhysRevB.57.R716>
- <https://doi.org/10.1023/A:1004603814529>
- Phys. Rev. Lett. **87**, 186804, <https://doi.org/10.1103/PhysRevLett.87.186804>
- Phys. Rev. Lett. **89**, 197001, <https://doi.org/10.1103/PhysRevLett.89.197001>
- Phys. Rev. B **73**, 224503, <https://doi.org/10.1103/PhysRevB.73.224503>
- Phys. Rev. B **96**, 064514, <https://doi.org/10.1103/PhysRevB.96.064514>
- Phys. Rev. B **98**, 054513, <https://doi.org/10.1103/PhysRevB.98.054513>
- Phys. Rev. B **81**, 064508, <https://doi.org/10.1103/PhysRevB.81.064508>
- Sci. Rep. **4**, 4083, <https://doi.org/10.1038/srep04083>
- Phys. Rev. B **97**, 205429, <https://doi.org/10.1103/PhysRevB.97.205429>
- Phys. Rev. Lett. **53**, 319, <https://doi.org/10.1103/PhysRevLett.53.319>
- Phys. Rev. B **85**, 094503, <https://doi.org/10.1103/PhysRevB.85.094503>
- Phys. Rev. B **65**, 134410, <https://doi.org/10.1103/PhysRevB.65.134410>
- Phys. Rev. B **76**, 155108, <https://doi.org/10.1103/PhysRevB.76.155108>
- International Series of Monographs on Physics, No. 121 (monograph)
- Phys. Rev. B **88**, 045435, <https://doi.org/10.1103/PhysRevB.88.045435>
- Phys. Rev. Lett. **110**, 016401, <https://doi.org/10.1103/PhysRevLett.110.016401>
- Phys. Rev. B **90**, 125113, <https://doi.org/10.1103/PhysRevB.90.125113>
- Phys. Rev. B **55**, 7673, <https://doi.org/10.1103/PhysRevB.55.7673>
- Phys. Rev. Lett. **80**, 5611, <https://doi.org/10.1103/PhysRevLett.80.5611>
- Europhys. Lett. **25**, <https://doi.org/10.1209/0295-5075/25/8/008>
- Europhys. Lett. **28**, <https://doi.org/10.1209/0295-5075/28/2/007>
- Phys. Rev. Lett. **78**, 2803, <https://doi.org/10.1103/PhysRevLett.78.2803>
- Phys. Rev. B **56**, 13393, <https://doi.org/10.1103/PhysRevB.56.13393>
- Phys. Rev. Lett. **81**, 4240, <https://doi.org/10.1103/PhysRevLett.81.4240>
- J. Phys.: Condens. Matter **12**, <http://stacks.iop.org/0953-8984/12/i=6/a=316>

[^1]: The opposite case, $C_0\ll C_g$, corresponds to nearly decoupled chains, in which case the physics is more naturally described in terms of modes corresponding to individual chains rather than in terms of symmetric and antisymmetric modes. In the limit $C_0=0$ the decoupling is complete, and the analysis for a single chain applies, with $C_g$ playing the role of $C_0$.

[^2]: In the case of strong charge disorder we neglect here the rare sites in the chain where $2Q_{i, s}$ is half-integer.
--- abstract: 'The emphasis in this review of non-baryonic dark matter will be on experimental approaches to this fast-evolving field of astroparticle physics, especially the direct detection method. The current status of experimental techniques will be reviewed, and recent highlights as well as future plans will be introduced.' author: - | Y. Ramachers\ [*University of Oxford, Department of Physics,*]{}\ [*Nuclear and Particle Physics Lab., 1 Keble Road,*]{}\ [*Oxford OX1 3RH, U.K.*]{}\ [*present address: LNGS, INFN, 67010 Assergi, Italy*]{}\ [E-mail:yorck.ramachers@lngs.infn.it]{} title: 'Non-baryonic Dark Matter Searches' --- Introduction ============ The concept of dark matter in the universe is by now well established (see [@evidence] for a collection of recent reviews) but still poses a remarkable problem, thereby inspiring the creativity of astronomers and physicists. Nowadays there exist two separate dark matter problems: the baryonic and the non-baryonic dark matter problem (including a non-zero vacuum energy). That separation is nicely visualised in [@motiv], from which Fig. \[motivpic\] has been taken. The separation is founded on the well-established constraints on the allowed amount of baryonic matter in the universe from primordial nucleosynthesis (see [@olive] and references therein). Every hint from measurements, typically on large astronomical scales, of a mass abundance above the allowed one is actually evidence for non-baryonic dark matter. As more refined measurements from experimental cosmology are collected, for instance from distant supernova Ia searches, large-scale flows or the cosmic microwave background [@proc], the question of the existence of dark matter shifts to the question of the abundance and nature of dark matter, especially its non-baryonic part. That part in particular is one point where particle physics and astrophysics merged to form the field of astroparticle physics.
Plenty of candidates for non-baryonic dark matter have been proposed. Here it should be noted that the accumulating evidence for the existence of a finite vacuum energy, like a cosmological constant, does not render a non-baryonic matter contribution unnecessary. In fact, the numerous candidates which are classified as cold dark matter (non-relativistic thermal or non-thermal relics from the early universe) are still necessary ingredients for structure formation in the universe [@proc]. The favourite particle candidates for non-baryonic dark matter, in terms of experiments aiming at their detection, are the axion and the neutralino. Since there exist extensive reviews about particle candidates, and in particular about the axion and the neutralino, an introduction to these main candidates is beyond the scope of this review (see e.g. [@particle]). They can also be classified as WIMPs (weakly interacting massive particles), together with less prominent candidates (axinos, gravitinos, etc.). In fact, from the experimental point of view the term WIMP summarises almost all necessary requirements for a dark matter particle candidate. Any neutral, massive (between a few GeV and a few hundred TeV for thermal relics) and weakly interacting particle can represent a good candidate. Experiments aiming at the detection of particles with the above properties are described below [^1] (compare also the recent review [@morales]). Detection concepts ================== The experiments to reveal the nature and abundance of particle dark matter can be divided into two conceptually different approaches, direct and indirect detection. The physics underlying the direct detection technique is the elastic scattering of a WIMP off a nucleus of the detector. Therefore, the main observable is the energy deposited by the recoiling nucleus.
For indirect detection the WIMPs first have to be accumulated in a gravitational potential to increase their density and therefore their annihilation rate. The annihilation products, high-energy neutrinos, are then detected via their conversion to muons. Hence the signatures are muons originating from the centre of the earth or the sun, so-called upward-going muons from these well-defined sources. Additional observables for the direct detection technique which can serve to increase the signal-to-noise ratio are either recoil-specific or WIMP-specific. The recoil-specific observables are e.g. pulse shapes or the partitioning of the energy release into ionization and phonon signals or scintillation and phonon signals (see Fig. \[experimpic\] for a summary of applied techniques to reduce background). WIMP-specific observables result from the assumption of the existence of a WIMP halo around our galaxy, as motivated from galaxy rotation curves [@rot]. Such a halo would yield kinematic signatures for WIMP detection. The movement of the sun through the halo (excluding a conspiracy of a co-rotating halo) induces an annual modulation of WIMP rates in the detector, because the earth velocity would add to the mean kinetic energy of WIMPs impinging on the detector in summer and subtract in winter [@mods]. The asymmetry of the WIMP wind itself would also induce a diurnal modulation for a recoil-direction-sensitive detector [@spergel]. In order to rank the various background suppression mechanisms for direct detection one has to keep in mind that the powerful background discrimination via recoil-specific observables (factors of 100 and more have been reported, see e.g. [@cdmsref; @cresstref]) is systematically limited by neutron elastic scattering processes, since these also produce nuclear recoils. The WIMP-specific observables are limited as well. First of all, the distribution function of WIMPs in the halo, see below, is unknown.
Second, the annual modulation effect is small, of the order of a few percent. A recoil-direction-sensitive detector would exploit the much larger diurnal modulations, of the order of tens of percent, but detecting the tiny tracks from nuclear recoils is a formidable experimental task; see the DRIFT proposal below. A comparison of indirect and direct detection methods has been worked out either model-independently or for a specific candidate particle [@tao; @halzen; @sadoulet]. A general feature of such a comparison is that indirect searches are more sensitive than direct searches for large WIMP masses and spin-dependent interactions (see below) (see also [@price]). Therefore these two approaches are complementary. Additionally, if both techniques were to give a consistent signal, it would in principle be possible to obtain not only the approximate mass and elastic scattering cross section but also the annihilation cross section [@rich]. For more details about the indirect detection method, I refer to [@price] and references therein. Note that in the case of the neutralino as the dark matter particle candidate, another complementarity between direct detection and accelerator experiments has been shown [@baer] (for a discussion, see [@yr2]). Now for the direct detection technique, it is worth summarising the main experimental requirements. - Energy threshold: As low as possible, due to the quasi-exponentially decreasing signal shape as a function of recoil energy. The relevant energy region is generally below 100 keV. - Target mass: As high as possible, due to the low cross section for WIMP-nucleus elastic scattering. Direct WIMP detection means rare-event search, i.e. the already tested rates are of the order of one count/(day kg keV). - Background: As low as possible, especially the omnipresent neutron flux, due to the production of nuclear recoils mimicking WIMP events.
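To make these requirements concrete, the expected recoil spectrum can be sketched numerically. The snippet below uses the standard zero-threshold Maxwellian-halo form of the spectrum, $dR/dQ \propto \exp(-Q/E_{0}r)$ (cf. [@lewin]); all parameter values are illustrative choices, not taken from any particular experiment:

```python
import numpy as np

# Illustrative inputs: a 60 GeV WIMP on a germanium target
m_W = 60.0      # WIMP mass, GeV/c^2
m_N = 68.0      # target nucleus mass, GeV/c^2 (~0.93 GeV per nucleon, A ~ 73)
v0 = 220e3      # halo velocity dispersion parameter, m/s
c = 3.0e8       # speed of light, m/s

E0 = 0.5 * m_W * (v0 / c) ** 2 * 1e6   # mean WIMP kinetic energy, keV
r = 4 * m_W * m_N / (m_W + m_N) ** 2   # kinematic factor (equals 1 for m_W = m_N)

Q = np.linspace(0.0, 100.0, 1001)      # recoil energy, keV
shape = np.exp(-Q / (E0 * r))          # quasi-exponential dR/dQ shape

# Fraction of the signal surviving a 9 keV threshold (cf. the
# Heidelberg-Moscow threshold quoted below)
surviving = np.exp(-9.0 / (E0 * r))
print(f"E0*r = {E0 * r:.1f} keV, fraction above 9 keV = {surviving:.2f}")
```

The characteristic energy $E_{0}r$ of order 10 keV illustrates why a low threshold is decisive: a sizeable part of the spectrum sits below typical thresholds.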
The exact dependencies of the direct detection technique on physical parameters can be extracted from Eq. (\[eqn1\]): $$\frac{dR}{dQ}=N_{T}\,\frac{\rho_{0}\sigma_{0}}{2m_{W}\mu^{2}}\,F^{2}(Q)\int^{v_{\rm max}}_{v_{\rm min}}\frac{f(v)}{v}\,dv\ ;\qquad v_{\rm min}=\sqrt{\frac{Q\,m_{N}}{2\mu^{2}}} \label{eqn1}$$ where dR/dQ is the measured quantity, the energy spectrum of rates per unit energy and unit detector mass. The other parameters can be classified as either completely unknown, like the properties of the unknown WIMP, its mass $m_{W}$ and elastic scattering cross section $\sigma_{0}$, or estimated input from astrophysics, like the local halo density $\rho_{0}$ (0.3-0.7 GeV/cc), the escape velocity $v_{\rm max}$ ($\approx{}600$ km/s) from the galactic potential and the WIMP-halo distribution function f(v), often approximated by a Maxwellian distribution (see [@lewin] and references therein). The last set of values comprises the number of targets $N_{T}$, the target nucleus mass $m_{N}$, the reduced mass $\mu=(m_{W}m_{N})/(m_{W}+m_{N})$ and the detector threshold $E_{thr}$. The form factor $F^{2}(Q)$ parametrises the loss of coherence of the WIMP-nucleus interaction as the WIMP energy increases. It represents an input from nuclear physics and depends on the type of WIMP interaction considered, since the elastic scattering cross section has two distinct interaction channels, a scalar or spin-independent and an axial or spin-dependent channel (for details, see [@particle] and the discussion in [@lewin]). Hence, depending on the spin properties of the target nuclei, the appropriate form factor has to be used. Current Experiments =================== The direct detection of WIMPs has become a very dynamic field of research in recent years. There are about 20 experiments running or being prepared worldwide, and even more planned for the near future (for collections of contributions from most of these, see [@book1; @book2]). Various techniques and detector systems are applied. They can be classified by the applied detectors [@morales].
Here they are classified by their ability to discriminate nuclear recoils to some extent, i.e. to use recoil-specific observables (compare Fig. \[experimpic\]). This separation of experiments might give the impression that the most advanced experiments will use recoil discrimination techniques. Note, however, that this particular ability also adds complexity; there are in fact experiments which do not apply any recoil discrimination but nevertheless offer competitive prospects (see below). As shown in Fig. \[experimpic\], the key for all no-discrimination detectors, like germanium semiconductor detectors (HD-Moscow [@hdmo], HDMS [@hdmo], GENIUS [@genius]), the cryogenic bolometers (CUORE [@cuore], Tokyo [@tokyoref], CRESST [@cresstref], Rosebud [@rosebud]), the superheated superconducting grains detector (ORPHEUS [@orpheus]) or the NaI scintillator ELEGANT V [@elegant], is to use materials and shieldings for the setup that are as radio-pure as possible. Since these experiments have by now accumulated a large amount of experience in using clean materials, it had been thought that their sensitivity might have reached saturation and no further breakthroughs could be expected. As it turned out, this assumption is not true, at least for germanium detectors (see the GENIUS expectation in the next section). The lowest background, and therefore the best limit from raw data, is still obtained by the Heidelberg–Moscow experiment. It uses an enriched $^{76}$Ge detector of 2.758 kg active mass and reached a background near threshold of $5.7\times 10^{-2}$ counts/(day kg keV). Due to the rather high threshold of 9 keV, its limit for lower WIMP masses can be combined with another germanium experiment [@abriola] to give a combined Ge-diode limit (see Fig. \[rick\]) close to the currently best limits on spin-independent WIMP–nucleon interactions.
HDMS is a specialised WIMP-detection setup from the same collaboration, using a germanium well-type detector as active veto for a small inner germanium detector. After exchanging the inner natural germanium crystal for a $^{73}$Ge-enriched crystal, the prospects are to improve the existing limit by about a factor of 5-10, thereby challenging current limits. Other, more complex techniques used for raw-data experiments are cryogenic detectors. Two collaborations published first results recently, the LiF-crystal experiment in Tokyo [@tokyoref] and Rosebud [@rosebud] using sapphire (Al$_{2}$O$_{3}$) crystals. While the sapphire setup has not yet given competitive results, it gives important insight into background contributions for cryogenic experiments in general. The LiF experiment, although still operating at a shallow depth (15 m.w.e.), has now improved the limit for light WIMP masses below 5 GeV (see Fig. \[tokyo\]). Due to their target materials, both experiments are most sensitive to light WIMPs and the spin-dependent interaction channel. The Tokyo experiment will soon move to a deep underground laboratory and aims to remove identified background sources close to the detectors; the estimate is an improvement of the current limit by an order of magnitude. Similar considerations have been put forward for Rosebud. The pulse-shape discrimination technique for NaI-scintillator detectors (DAMA [@damaref], UKDMC [@ukdmc]) was the first discrimination technique applied and turned out to be quite effective with increasing energy. Still, the best limit on WIMP–nucleon cross sections comes from the DAMA collaboration (for the scalar channel above 40 GeV, compare Fig. \[rick\]). The calibration of this method by the production of nuclear recoils from neutrons showed that these pulses are significantly faster than electron recoil pulses from photons or electrons.
Recently, a class of pulses even faster than nuclear recoil pulses has emerged in two different experiments using NaI detectors (UKDMC and Modane [@gerbier]). These are considered to belong to an unknown background source and might even set a systematic sensitivity limit for this technique. However, this effect is still under investigation and might well be removed in the near future. A highlight of NaI detectors is not only their discrimination ability, and thereby their high sensitivity for WIMP detection, but also the possibility to set up high target masses, as in the DAMA experiment (87.3 kg active volume, see Fig. \[dama3pic\]). This puts the DAMA experiment in the unique position of being able to use even WIMP-specific observables, like the WIMP signature of an annual modulation. Since that effect is only of the order of a few percent, one has to collect sufficient statistics to filter out the tiny modulation [@modrestriction]. In fact, this collaboration announced evidence for the detection of that WIMP signature. As shown in Fig. \[dama3pic\], they see an annual modulation in their data consistent with the expectation for a WIMP at $$m_{W} = 59^{+17}_{-14}\;{\rm GeV}\quad{};\quad{}\xi \sigma_{\rm scalar}^{\rm nucleon} = 7.0^{+0.4}_{-1.2}\times 10^{-6}\;{\rm pb},$$ where $\xi{}\sigma_{\rm scalar}^{\rm nucleon}$ is the WIMP–nucleon scalar cross section normalised to the local halo density (0.3 GeV/cc). The consistency requirements are the proper kinematic modulation (phase, amplitude and period), single hits in detectors and the proper signal shape (maximum signal in the lowest energy bin and subsequent decrease). However, the excitement about this evidence is accompanied by similarly engaged criticism [@critic] inside the dark matter community. Fortunately, this is a matter of experiment, i.e. it will be possible in the near future to test the evidence by experiments rather than arguments.
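The annual-modulation analysis can be illustrated with a minimal mock fit: generate a rate with the expected one-year period and early-June phase, then extract the modulation amplitude with a least-squares cosine template. All numbers below are illustrative placeholders, not DAMA data:

```python
import numpy as np

T, t0 = 365.25, 152.0   # period (days) and phase (~June 2, day 152)
S0, Sm = 1.0, 0.02      # mean rate and few-percent modulation amplitude

t = np.linspace(0.0, 3 * T, 600)   # three annual cycles
rate = S0 + Sm * np.cos(2 * np.pi * (t - t0) / T)

# Least-squares fit of [constant, cosine template] to the measured rate
template = np.cos(2 * np.pi * (t - t0) / T)
A = np.column_stack([np.ones_like(t), template])
coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
S0_fit, Sm_fit = coef
print(S0_fit, Sm_fit)
```

With noisy data, the uncertainty on the fitted amplitude shrinks with exposure, which is why a large target mass and multi-year running are essential for resolving a few-percent effect.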
The first competitors of the DAMA experiment which are expected to be able to test this result in the near future are the cryogenic detector experiments CDMS and EDELWEISS [@edelweissref]. They use a combined readout of phonon signals and ionization signals from germanium (and silicon, in the case of CDMS) crystals. The key point of this readout scheme is that in germanium crystals the ionization efficiency of nuclear recoils is only about 25% of that of an electron recoil event (energy-dependent, see [@efficiency] and references therein). Both have so far suffered from incomplete charge collection for surface electron events, which can mimic nuclear recoil events. Recently the CDMS experiment eliminated this problem (see Fig. \[rick\] and [@rickref]), and it already tests the DAMA result below about $m_{W}=40$ GeV. The EDELWEISS collaboration is expected to follow that development soon and release a new, improved limit comparable to or even better than the CDMS result. Beyond this status report, it is worth mentioning mid-term projects which show the variety of detection techniques applied to reduce background. The CASPAR proposal [@caspar] uses small grains of CaF$_{2}$ scintillator (of the order of a few hundred nanometers in diameter) dissolved in an organic scintillator. Calcium or fluorine nuclear recoils would only produce a scintillation signal from their grain, whereas electron recoils would have a much larger range and would give signals from the organic scintillator as well, which can then be discriminated. Another discriminating detector concept, using ionisation and scintillation signal readout from liquid xenon (or two-phase xenon, gas and liquid), the ZEPLIN project [@zeplin], is still in its R&D phase, but first tests are very promising.
The superheated droplet detectors PICASSO [@picasso] and SIMPLE [@simple] also use the specific energy loss of nuclear recoils to discriminate against minimum ionising particles. They use a technique well known from neutron dosimetry, namely droplets of a slightly superheated refrigerant liquid embedded in a gel. The droplets then act like tiny bubble chambers, exploding due to a nuclear recoil event. By tuning the relevant parameters, pressure and temperature, the bubbles can be made insensitive to minimum ionising radiation, so that practically only recoils from fission processes and neutrons remain as background sources. Both experiments are currently being built up, and first results from PICASSO and SIMPLE have been reported recently. Future Plans ============ Although research in the field of direct detection evolves rapidly and more exciting results can be expected in the near future, as mentioned above, there are three proposed detection concepts which shall be reported separately in order to point out their exceptional prospects [^2]. The DRIFT experiment (see Fig. \[drift\]) [@driftref] represents the only recoil-direction-sensitive experiment operating in a test phase. It consists of a low-pressure TPC using a 20 torr Xe-CS$_{2}$ (50:50) gas mixture. The crucial point for such a device is that it must be able to detect reliably the tiny tracks from nuclear recoils, i.e. the design goal is to achieve better than 1 mm track resolution. In order to set up higher target masses despite the low-pressure gas, the idea is to abandon the magnetic field. That would in principle worsen the track resolution due to enhanced diffusion. Therefore the detection concept has been changed such that the TPC does not detect the drifted electrons but rather negative CS$_{2}^{-}$ ions, with considerably reduced diffusion. The prospects for this setup are very encouraging, due to the ability to measure the most decisive WIMP signature, the diurnal modulation due to the WIMP wind.
The plan is to operate a 20 m$^{3}$ TPC by the end of 2001. The GENIUS proposal [@genius] is exceptional in the sense that it is a detection concept which works without specialised background discrimination procedures, i.e. it will not use nuclear-recoil-specific observables. This traditional detection method, using germanium semiconductor detectors, therefore relies on the utilisation of extremely radio-pure materials around the detectors. The idea is to remove all materials close to the crystals which were so far necessary to cool the detectors, and instead operate them directly in liquid nitrogen, which has been measured to be a very pure environment. The necessity of using liquid nitrogen as shielding material scales the setup to a rather large size (a dewar of 12 m diameter and height). On the other hand, this also permits operating a large target mass (100 kg of natural germanium is planned for the first stage) in a common environment. The technical studies of operating 'naked' germanium crystals in liquid nitrogen have already been performed successfully. An estimate of the expected sensitivity, i.e. the background expected, can be seen in Fig. \[geniuspic\]. Most dangerous appears to be the cosmic activation of the crystals while they are produced and transported on the earth's surface, due to spallation reactions with cosmic rays (hadronic component). The cited background expectation of $3.1\times 10^{-2}$ counts/(y kg keV) below 100 keV would result in a WIMP sensitivity more than 3 orders of magnitude below current best limits, which is definitely an encouraging prospect. The CRESST phase II concept [@cresstref; @cresst2] consists of a combined signal readout from a scintillating crystal cooled to 12 mK. The light and phonon readout yields a very efficient discrimination mechanism, as can be seen in Fig. \[cresst\].
Test measurements using a non-optimised setup in a surface laboratory already give background suppression factors comparable to the ionization-phonon readout schemes, seemingly without the problem of surface effects. Several scintillating crystals have been tested (BaF$_{2}$, BGO, PbWO$_{4}$ and CaWO$_{4}$) and their light yields at cryogenic temperatures measured. The operation of the scintillator as a cryogenic calorimeter poses special problems for the light detection, since no light guide touching the crystal surface (matching refractive indices) is allowed, as that would distort the phonon signal. For light detection a second calorimeter, a thin sapphire crystal coated with silicon to improve light absorption, is placed next to one surface of the scintillator, and the other surfaces are surrounded by mirrors (compare Fig. \[cresst\]) [^3]. Apart from the discrimination abilities of the detector concept, there is an additional advantage. Several target materials or scintillators can be used in such a setup, which would provide a unique handle not only on the discrimination of the ambient neutron background, so far considered to be the ultimate limiting factor, but also on the extraction of the WIMP signal through its target-mass dependence (see Eq. \[eqn1\]). Moreover, the existing cryogenic setup in the Gran Sasso underground laboratory already has enough space to house some tens of kilograms of active mass, rendering a modulation-signature search possible. A first scintillation detector made from CaWO$_{4}$ is expected to be mounted underground at the beginning of 2000. Conclusion ========== Non-baryonic dark matter is by now a well-motivated concept from astronomy in the framework of a universe model containing cold dark matter. Several independent measurements from experimental cosmology indicate the necessity of a matter content above the baryonic matter allowed by primordial nucleosynthesis.
In addition, particle physics offers attractive candidates for cold dark matter, classified as WIMPs and initially motivated independently of cosmological reasoning (especially the neutralino as a necessary ingredient of supersymmetric theories). WIMP searches in the form of direct and indirect detection experiments are a very active field of research, also because of the attractive interdisciplinarity between astro-, particle and nuclear physics. A large variety of direct detection experiments, on which this review focused, currently produce results or will start in the near future. In addition, the first WIMP-detection evidence has been announced and will soon be tested by independent experiments. The benefit from this kind of research is twofold and worth recalling. One would learn about the supposed major part of matter in the universe and about beyond-standard-model physics by detection of non-baryonic dark matter. Acknowledgement {#acknowledgement .unnumbered} =============== The author thanks the following researchers for providing information about their experiments and valuable comments: R. Bernabei, G. Chardin, D.B. Cline, J. Collar, S. Cooper, H. Ejiri, M. Di Marco, J. Hellmig, H.V. Klapdor-Kleingrothaus, M. Lehner, L. Lessard, M. Minowa, K. Pretzl, B. Sadoulet, W. Seidel, N. Smith, N.J.C. Spooner, D. Tovey and HanGuo Wang. [99]{} M.S. Turner, ; astro-ph/9901168; astro-ph/9904051; M. Rowan-Robinson, astro-ph/9906277; J. Silk, astro-ph/9903402 S. Dodelson, E. Gates and M. Turner, [*Science*]{} [**274**]{} 69 (1996) K. A. Olive, astro-ph/9901231 N.A. Bahcall, J.P. Ostriker, S. Perlmutter and P.J. Steinhardt, ; astro-ph/9906463; S. Perlmutter, M.S. Turner and M. White, G. Jungman, M. Kamionkowski and K. Griest, ; L. Roszkowski, hep-ph/9903467; J. Ellis, astro-ph/9812211; A. Bottino and N. Fornengo, hep-ph/9904469; A.D. Dolgov, astro-ph/9910532 G. Raffelt, ; A. Kitagawa et al., hep-ph/9908445; S. Cebrian et al., A. Morales, astro-ph/9810341 C.
Firmani and V. Avila-Reese, in [@book2] p.367; A. Burkert and J.Silk, in [@book2] p.375; B. Fuchs, in [@book2] p.387 A. Drukier, K. Freese and D.N. Spergel, ; K. Freese, J. Frieman and A. Gould, D.N. Spergel, CDMS coll., CRESST coll., ; P. Meunier et al., physics/9906017 J. Rich and C. Tao, Preprint [*DAPNIA–SPP–95–01*]{} (1995) F. Halzen, astro-ph/9506304 M. Kamionkowski, K. Griest, G. Jungman and B. Sadoulet, L. Bergström, astro-ph/9902172 J. Rich, H. Baer and M. Brhlik, H.V. Klapdor-Kleingrothaus and Y. Ramachers, J.D. Lewin and P.F. Smith, N.J.C. Spooner and V. Kudryavtsev, eds., [*The identification of dark matter*]{} (World Scientific, Singapore, 1999) H.V. Klapdor-Kleingrothaus and L. Baudis, eds., [*Dark Matter in Astrophysics and Particle Physics*]{} (IOP Publ., Bristol, 1999) L. Baudis et al., L. Baudis et al., ; hep-ph/9910205 A. Alessandrello et al., in [@book2] p.785 W. Ootani et al., S. Cebrian et al., S. Casalbuoni et al., in [@book1] p.377 H. Ejiri et al., in [@book1] p.323 D. Abriola et al., R. Bernabei et al., ; ; hep-ph/9903501 P.F. Smith et al., ; V.A. Kudryavtsev et al., G. Gerbier et al., Y. Ramachers, M. Hirsch and H.V. Klapdor-Kleingrothaus, ; F. Hasenbalg, G. Gerbier et al., astro-ph/9710181; astro-ph/9902194 R. Gaitskell, Talk at TAUP99, Paris, France (1999) L. Berge et al., ; astro-ph/9801199 L. Baudis et al., N.J.C. Spooner et al., N.J.T. Smith et al., in [@book1] p.335 R. Gornea et al., in [@book2], p.772 J. Collar et al., in [@book1], p.383 C.J. Martoff, in [@book1] p.389; D.P. Snowden–Ifft, in [@book1] p.395; M.J. Lehner, in [@book2] p.767 M. Bravin et al., Talk at LTD8, Dalfsen, Netherlands, August 15-20 1999 Figure captions {#figure-captions .unnumbered} =============== Figure 1: Shown is a summary of astronomical results for the mean matter density in the universe combined with a conservative estimate from primordial nucleosynthesis as a function of the Hubble constant. 
The dark dividing line titled $\Omega_{B}$ in the middle gives the allowed amount of baryonic matter in the universe (the lower band gives the amount of visible matter). The gap between $\Omega_{B}$ and the observed matter density on large scales (summarised as everything above $\Omega_{0}=0.3$) represents the non-baryonic dark matter motivation (from ). Figure 2: Summary of existing and planned direct detection experiments, classified according to their ability to discriminate nuclear recoils. The numbers to the left indicate the applied background reduction technique, which is given in the legend below. Note the variety of methods, which hints at the diverse experimental techniques and detector concepts involved in this fast evolving field of research. Figure 3: Collection of spin-dependent cross section limits for several direct detection experiments as a function of the WIMP mass (from ). Note the improved limit below 5 GeV and the large gap between the limits and the expectations (for neutralinos) for this particular interaction channel. Figure 4: Short summary of the most intriguing result from the DAMA NaI experiment (from ). Note that the upgrade to 250 kg has been approved. Figure 5: Summary of current WIMP-nucleon cross section limits for spin-independent interactions from the CDMS collaboration . The best limit for WIMP masses above 40 GeV stems from the DAMA collaboration; below, CDMS now tests the DAMA evidence contour. The combined Ge-diode limit is shown as a dash-dotted line and the UKDMC NaI result as a dashed line (limited by the unknown fast pulse shape component, see text). Figure 6: Summary of the DRIFT experiment and schematic view of their TPC setup (for details, see text). Figure 7: Shown are the background expectations for GENIUS according to Monte-Carlo simulations . Contributions from liquid nitrogen, the holder system, external background and cosmic activation have been included.
Also shown are the sum spectrum and the contribution from the two-neutrino double beta decay of $^{76}$Ge in the natural germanium crystals. Figure 8: CRESST phase II setup and test measurement results (for details, see text). [^1]: That excludes the specialised experiments aiming at the detection of axions. For a collection of recent references on this topic, see [@axion]. [^2]: Admittedly, due to personal preferences. [^3]: New measurements use diffuse Teflon reflectors instead, resulting in a light yield that is a factor of two more efficient per unit area [@cresst2].
--- abstract: 'During 2004, four divisions of the American Physical Society commissioned a study of neutrino physics to take stock of where the field is at the moment and where it is going in the near and far future. Several working groups looked at various aspects of this vast field. The summary was published as a main report entitled “The Neutrino Matrix”, accompanied by short 50-page versions of the report of each working group. Theoretical research in this field has been quite extensive and touches many areas, and the short 50-page report [@short_report] provided only a brief summary and overview of a few of the important points. The theory discussion group felt that it might be of value to the community to publish the entire study as a white paper, and the result is the current article. After a brief overview of the present knowledge of neutrino masses and mixing and some popular ways to probe the new physics implied by recent data, the white paper summarizes what can be learned about physics beyond the Standard Model from the various proposed neutrino experiments. It also comments on the impact of the experiments on our understanding of the origin of the matter-antimatter asymmetry of the Universe and the basic nature of neutrino interactions, as well as the existence of possible additional neutrinos. Extensive references to original literature are provided.' author: - | [**R.N. Mohapatra$^{*}$**]{} ([*Group Leader*]{})\ [**S. Antusch$^1$, K.S. Babu$^2$, G. Barenboim$^3$, M.-C. Chen$^4$, S. Davidson$^5$,**]{}\ [**A. de Gouvêa$^6$, P. de Holanda$^{7}$, B. Dutta$^8$, Y. Grossman$^9$, A. Joshipura$^{10}$,**]{}\ [**B. Kayser$^{11}$, J. Kersten$^{12}$, Y.Y. Keum$^{13}$, S.F. King$^{14}$, P. Langacker$^{15}$,**]{}\ [**M. Lindner$^{16}$, W. Loinaz$^{17}$, I. Masina$^{18}$, I. Mocioiu$^{19}$, S. Mohanty$^{10}$,**]{}\ [**H. Murayama$^{20}$, S. Pascoli$^{21}$, S.T. Petcov$^{22,23}$, A. Pilaftsis$^{24}$, P. Ramond$^{25}$,**]{}\ [**M. Ratz$^{26}$, W. Rodejohann$^{16}$, R.
Shrock$^{27}$, T. Takeuchi$^{28}$, T. Underwood$^{5}$,**]{}\ [**L. Wolfenstein$^{29}$** ]{}\ $^{*}$ University of Maryland, College Park, MD 20742, USA\ $^1$ Universidad Autónoma de Madrid, 28049 Madrid, Spain and University of Southampton, Southampton SO17 1BJ, United Kingdom\ $^2$ Oklahoma State University, Stillwater, OK-74078, USA\ $^3$ University of Valencia, Valencia, Spain\ $^4$ Fermilab, Batavia, IL 60540\ $^5$ IPPP, University of Durham, Durham, DH1 3LE, Great Britain\ $^6$ Northwestern University, Evanston, Illinois 60208-3112, USA\ $^7$ Instituto de Física Gleb Wataghin, UNICAMP PO BOX 6165,\ CEP 13083-970, Campinas - SP, Brazil\ $^8$ University of Regina, Regina, Saskatchewan, Canada\ $^9$ SLAC, Stanford, CA-94305, USA\ $^{10}$ Physical Research Laboratory, Ahmedabad 380009, India\ $^{11}$ Fermilab, Batavia, Il-60617, USA\ $^{12}$ Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany\ $^{13}$ Institute of Physics, Academia Sinica, Taipei, Taiwan 115, Republic of China\ $^{14}$ University of Southampton, Southampton SO17 1BJ, United Kingdom\ $^{15}$ University of Pennsylvania, Philadelphia, PA 19104-6396, USA\ $^{16}$ Technische Universität München, James-Franck-Stra[ß]{}e, 85748 Garching, Germany\ $^{17}$ Amherst College, Amherst, MA 01002-5000, USA\ $^{18}$ Fermi Center, Via Panisperna 89/A, I-00184 Roma, Italy\ and INFN, Sezione di Roma, “La Sapienza” Univ., P.le A. 
Moro 2, I-00185 Roma, Italy\ $^{19}$ Pennsylvania State University, University Park, PA 16802, USA\ $^{20}$ School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08, USA[^1]\ $^{21}$ UCLA, Los Angeles, CA 90095-1547, USA and Department of Physics, Theory Division, CERN, CH-1211 Geneva 23, Switzerland\ $^{22}$ SISSA/INFN-sezione di Trieste, Trieste, Italy\ $^{23}$ INRNE, Bulgarian Academy of Sciences, Sofia, Bulgaria\ $^{24}$ School of Physics and Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom\ $^{25}$ University of Florida, Gainesville, FL 32611, USA\ $^{26}$ Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn, Germany\ $^{27}$ Department of Physics, Sloan Laboratory, Yale University, New Haven, CT 06250, USA\ $^{28}$ Virginia Tech, Blacksburg, VA 24061, USA\ $^{29}$ Carnegie-Mellon University, Pittsburgh, PA 15213, USA date: - - title: | **Theory of Neutrinos: A White Paper\ ** --- Introduction ============ Our understanding of neutrinos has changed dramatically in the past six years. Thanks to many neutrino oscillation experiments involving solar, atmospheric, accelerator and reactor (anti)-neutrinos [@expt; @pdg], we have learned that neutrinos produced in a well defined flavor eigenstate can be detected, after propagating a macroscopic distance, as a different flavor eigenstate. The simplest interpretation of this phenomenon is that, like all charged fermions, the neutrinos have mass and that, similar to quarks, the neutrino weak, or flavor, eigenstates are different from neutrino mass eigenstates [*i.e.*]{}, neutrinos mix. 
This new state of affairs has also raised many other issues [@barger] which did not exist for massless neutrinos: for example, (i) massive Dirac neutrinos, like charged leptons and quarks, can have nonzero magnetic dipole moments, and massive Dirac and Majorana neutrinos can have nonzero transition dipole moments; (ii) the heavier neutrinos can decay into lighter ones, like charged leptons and quarks; and (iii) (most importantly) the neutrinos can be either Majorana or Dirac fermions (see later for details). Learning about all these possibilities can not only bring our knowledge of neutrinos to the same level as that of charged leptons and quarks, but may also lead to a plethora of laboratory as well as astrophysical and cosmological consequences with far-reaching implications. Most importantly, knowing neutrino properties in detail may also play a crucial role in clarifying the blueprint of new physical laws beyond those embodied in the Standard Model. One may also consider the possibility that there could be new neutrino species beyond the three known ones $(\nu_e,\nu_\mu, \nu_\tau)$. Besides being a question whose answer would be a revolutionary milestone pointing to unexpected new physics, additional species may even become a necessity if the LSND results are confirmed by the MiniBooNE experiment, now in progress at Fermilab. This would, undoubtedly, be a second revolution in our thinking about neutrinos and the nature of unification. The existence of neutrino masses qualifies as the first evidence of new physics beyond the Standard Model. The answers to the neutrino questions mentioned above will add substantially to our knowledge about the precise nature of this new physics, and in turn about the nature of new forces beyond the Standard Model. They also have the potential to unravel some of the deepest and most long-standing mysteries of cosmology and astrophysics, such as the origin of matter, the origin of the heavy elements, and, perhaps, even the nature of dark energy.
Active endeavors are under way to launch the era of precision neutrino measurement science, which will surely broaden the horizon of our knowledge about neutrinos. We undertake this survey to pin down how different experimental results expected in the coming decades can elucidate the nature of neutrinos and our quest for new physics. In particular, we would like to know (i) the implications of neutrinos for such long-standing ideas as grand unification, supersymmetry, string theory, extra dimensions, etc.; (ii) the implications of the possible existence of additional neutrino species for physics and cosmology, and (iii) whether neutrinos have anything to do with the origin of the observed matter-antimatter asymmetry in the universe and, if so, whether there is any way to determine this via low-energy experiments. Once the answers to these questions are at hand, we will have considerably narrowed the choices of new physics, providing a giant leap in our understanding of the physical Universe. This review grew out of a year-long study of the future of neutrino physics conducted by four divisions of the American Physical Society and is meant to be an overview of where we stand in neutrino physics today,[^2] where we are going in the next decades and the implications of this new knowledge for the nature of new physics and for the early universe. We apologize for surely missing vast parts of the neutrino literature in our references. We expect this overview to be supplemented by other excellent existing reviews of the subject in the literature. Regarding more references and the more experimental aspects of the topics under study, we refer to the other working group reports: the Solar and Atmospheric Experiments [@APSsol], the Reactor [@APSrea], the Neutrino Factory and Beta Beam Experiments and Development [@APSnufac], the Neutrinoless Double Beta Decay and Direct Searches for Neutrino Mass [@APSB] and the Neutrino Astrophysics and Cosmology [@APSastro] WGs.
In particular, we have not discussed theoretical models for neutrino masses, except to give a broad outline of the ideas, going beyond it only when there is a need to make some phenomenological point. Nonetheless, we hope to have captured in this study the essential issues in neutrino physics that will be relevant as we proceed to the next level in our exploration of this fascinating field. Our present knowledge about masses and mixings ---------------------------------------------- ### Dirac versus Majorana Neutrinos The fact that the neutrino has no electric charge endows it with certain properties not shared by the charged fermions of the Standard Model. One can write two kinds of Lorentz-invariant mass terms for the neutrino, Dirac and Majorana masses, whereas for the charged fermions, conservation of electric charge allows only Dirac-type mass terms. In the four-component notation for describing fermions, commonly used for writing the Dirac equation for the electron, the Dirac mass has the form $\bar{\psi}\psi$, connecting fields of opposite chirality, whereas the Majorana mass is of the form $\psi^TC^{-1}\psi$, connecting fields of the same chirality, where $\psi$ is the four-component spinor and $C$ is the charge conjugation matrix. In the first case, the fermion $\psi$ is different from its antiparticle, whereas in the latter case it is its own antiparticle. A Majorana neutrino implies a whole new class of experimental signatures, the most prominent among them being the process of neutrinoless double beta decay of heavy nuclei ($\beta\beta_{0\nu}$).
Since $\beta\beta_{0\nu}$ arises due to the presence of neutrino Majorana masses, a measurement of its rate can provide very precise information about neutrino masses and mixing, provided (i) one can satisfactorily eliminate other contributions to this process that may arise from other interactions in a full beyond-the-standard-model theory, as we discuss below, and (ii) one can precisely estimate the values of the nuclear matrix elements associated with the $\beta\beta_{0\nu}$ in question. The expressions for the Dirac and Majorana mass terms make it clear that a theory forbids Majorana masses for a fermion only if there is an additional global symmetry under which it has nonzero charge. As noted above, for charged fermions such as the electron and the muon, Majorana mass terms are forbidden by the fact that they have nonzero electric charge and the theory has electromagnetic $U(1)$ invariance. Hence all charged fermions are Dirac fermions. On the other hand, a Lagrangian with both Majorana and Dirac masses describes, necessarily, a pair of Majorana fermions, irrespective of how small the Majorana mass term is (although it may prove very difficult to determine whether the fermion is of the Dirac or the Majorana type when the Majorana mass term is significantly smaller than the Dirac mass term). Hence, since the neutrino has no electric charge, the “simplest” theories predict that the neutrino is a Majorana fermion, meaning that a Majorana neutrino is more natural (or at least requires fewer assumptions) than a Dirac neutrino. In most of the discussions below we assume that the neutrino is a Majorana fermion, unless otherwise noted. ### Neutrino mixings We will use a notation where the electroweak-doublet neutrino eigenstate (defined as the neutrino that is produced in a charged-current weak interaction process associated with a well-defined charged lepton) is denoted by $\nu_{\alpha}$, with $\alpha = e, {\mu}, {\tau}$.
We will also consider $\nu_\alpha$ to include a set of $n_s$ possible electroweak-singlet (“sterile”) neutrinos. Corresponding to these $3+n_s$ neutrino interaction eigenstates are $3+n_s$ mass eigenstates of neutrinos, $\nu_i$. We will order the basis of mass eigenstates so that $m_1^2<m^2_2$ and $\Delta m^2_{12}<|\Delta m^2_{13}|$, where $\Delta m^2_{ij}\equiv m_j^2-m_i^2$. The neutrino interaction eigenstates are expressed in terms of the mass eigenstates as follows: $\nu_{\alpha}=\sum_i U_{\alpha i}\nu_i$, where $U$ is a $(3+n_s) \times (3+n_s)$ dimensional unitary matrix. For the active neutrinos, with $\alpha=e,\mu,\tau$, the relevant submatrix is thus a rectangular matrix with three rows and $3+n_s$ columns. In seesaw models, the entries in the columns $4,\ldots,3+n_s$ are very small, of order $m_D/m_R$, where $m_D$ is a typical Dirac mass and $m_R$ is a large mass of a right-handed Majorana neutrino. Motivated by these models, one commonly assumes a decoupling, so that to good approximation the electroweak-doublet neutrinos can be expressed as linear combinations of just three mass eigenstates, and hence one deals with a $3 \times 3$ truncation of the full $(3+n_s) \times (3+n_s)$ neutrino mixing matrix. Since only the three electroweak-doublet neutrinos couple to the $W$, the actual observed lepton mixing matrix that appears in the charged weak current involves the product of the $3 \times (3+n_s)$ rectangular submatrix of the full lepton mixing matrix with the adjoint of the $3 \times 3$ unitary transformation mapping the mass to weak eigenstates of the charged leptons. Thus, the lepton mixing matrix occurring in the charge-lowering weak current has three rows and $3+n_s$ columns, corresponding to the fact that, in general, a charged lepton $\alpha$ couples to a $\nu_\alpha$ which is a linear combination of $3+n_s$ mass eigenstates.
Henceforth, unless explicitly indicated, we shall assume the above-mentioned decoupling, so that the neutrino mixing matrix is $3 \times 3$, and will use $U$ to refer to the observed lepton mixing matrix, incorporating both the mixings in the neutrino and charged lepton sector. Neutrino oscillations and the mixing of two mass eigenstates of neutrinos, $\nu_1$ and $\nu_2$, to form the weak eigenstates $\nu_e$ and $\nu_\mu$ were first discussed by Pontecorvo and by Maki, Nakagawa, and Sakata [@BPont57]. The $3 \times 3$ truncation of the full neutrino mixing matrix is often called the MNS, MNSP, or PMNS matrix in honor of these pioneers. For the case of three Majorana neutrinos, the lepton mixing matrix $U$ can be written as $V K$, where $V$ will be parametrized as $$V = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13} e^{-i\delta} \cr -s_{12}c_{23}-c_{12}s_{23}s_{13} e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13} e^{i\delta} & s_{23}c_{13} \cr s_{12}s_{23}-c_{12}c_{23}s_{13} e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13} e^{i\delta} & c_{23}c_{13} \end{pmatrix}, \label{V}$$ while $K~=~\mathrm{diag}\,(1, e^{i\phi_1},e^{i(\phi_2 + \delta)})$ [@Valle; @BHP80]. Neutrino oscillation experiments have already provided measurements for the neutrino mass-squared differences, as well as the mixing angles. At the 3$\sigma$ level, the allowed ranges are [@Maltoni:2004ei] $\sin^2 2\theta_{23}\geq 0.87$; $1.4\times 10^{-3}~{\rm eV}^2 \leq |\Delta m^2_{13}| \leq 3.3\times 10^{-3}~{\rm eV}^2$; $0.70 \leq \sin^2 2\theta_{12} \leq 0.94$; $7.1\times 10^{-5}~{\rm eV}^2 \leq \Delta m^2_{12} \leq 8.9\times 10^{-5}~{\rm eV}^2$; $\sin^2 \theta_{13}\leq 0.051$ [@CHOOZ]. There is currently no constraint on any of the CP odd phases or on the sign of $\Delta m^2_{13}$. Note that in contrast to the quark sector we have two large angles (one possibly maximal) and one small (possibly zero) angle. 
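As a concreteness check, the parametrization of $V$ quoted above can be coded directly and tested for unitarity. A minimal Python/NumPy sketch; the specific angle values below are illustrative choices inside the quoted 3$\sigma$ ranges, not fits:

```python
import numpy as np

def pmns_matrix(theta12, theta23, theta13, delta):
    """The matrix V in the standard parametrization quoted in the text."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * e.conj()],
        [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ])

# Illustrative values inside the 3-sigma ranges quoted above
theta12 = 0.5 * np.arcsin(np.sqrt(0.82))  # sin^2 2theta12 ~ 0.82
theta23 = np.pi / 4                       # maximal atmospheric mixing
theta13 = np.arcsin(np.sqrt(0.02))        # sin^2 theta13 = 0.02 < 0.051
V = pmns_matrix(theta12, theta23, theta13, delta=0.0)

# Unitarity holds for any choice of angles and phase
assert np.allclose(V @ V.conj().T, np.eye(3))
```

The Majorana phase matrix $K$ multiplies $V$ from the right and therefore does not affect the unitarity check or oscillation probabilities.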
### Matter effect on neutrino propagation A very important fact about neutrinos that we seem to have learned from solar neutrino data is that neutrino propagation in matter is substantially different from that in vacuum. This effect is known as the MSW (Mikheev-Smirnov-Wolfenstein) effect [@msw] and has been widely discussed in the literature [@othermsw]. There is, however, an important aspect of the favored large mixing angle (LMA) MSW solution which needs to be tested in future experiments. The LMA solution predicts a rise in the survival probability in the energy region of a few MeV as we move down from higher to lower solar neutrino energies. Since the present data do not cover this energy region, new solar neutrino data are needed in order to conclusively establish the LMA solution [@bahcall]. ### Neutrino masses Given the current precision of neutrino oscillation experiments and the fact that neutrino oscillations are only sensitive to mass-squared differences, three possible arrangements of the neutrino masses are allowed:

1. Normal hierarchy, i.e. $m_1 < m_2 \ll m_3$. In this case $\Delta m^2_{23}\equiv m^2_3-m^2_2 > 0$, and $m_3 \simeq \sqrt{\Delta m^2_{23}}\simeq 0.03-0.07$ eV. The solar neutrino oscillation involves the two lighter levels. The mass of the lightest neutrino is unconstrained. If $m_1\ll m_2$, then $m_2 \simeq \sqrt{\Delta m^2_{12}} \simeq 0.008$ eV.

2. Inverted hierarchy, i.e. $m_1 \simeq m_2 \gg m_3$ [@silk], with $m_{1,2} \simeq \sqrt{\Delta m^2_{23}}\simeq 0.03-0.07$ eV. In this case, solar neutrino oscillation takes place between the heavier levels and we have $\Delta m^2_{23}\equiv m^2_3-m^2_2 < 0$. We have no information about $m_3$ except that its value is much less than the other two masses.

3. Degenerate neutrinos [@cald1], i.e. $m_1\simeq m_2 \simeq m_3$.

The behavior of the masses for the different mass patterns is shown in Fig. 1. ### Overall scale for masses Oscillation experiments cannot tell us about the overall scale of masses.
It is therefore important to explore to what extent the absolute values of the masses can be determined. While discussing the question of absolute masses, it is good to keep in mind that none of the methods discussed below can provide any information about the lightest neutrino mass in the cases of a normal or inverted mass hierarchy. They are most useful for determining absolute masses in the case of degenerate neutrinos, [*i.e.,*]{} when all $m_i\geq 0.1$ eV. [*Neutrino mass from beta decay*]{} One can directly search for the kinematic effect of nonzero neutrino masses in beta decay through modifications of the Kurie plot. This search is sensitive to neutrino masses regardless of whether the neutrinos are Dirac or Majorana particles. These modifications may be due to the emission, via mixing, of massive neutrinos that cause kinks in this plot. If the masses are small, then the effects will occur near the end point of the electron energy spectrum and will be sensitive to the quantity $m_{\beta}\equiv \sqrt{\sum_i |U_{ei}|^2m^2_i}$. The Mainz [@Weinheimer:tn] and Troitsk [@Lobashev:tp] experiments place the present upper limits $m_\beta \leq 2.3$ eV and 2.2 eV, respectively. The proposed KATRIN [@Osipowicz:2001sq] experiment is projected to be sensitive to $m_{\beta}>0.2$ eV, which will have important implications for the theory of neutrino masses. For instance, if the result is positive, it will imply a degenerate spectrum; on the other hand, a negative result will be a very useful constraint. ### Neutrino masses and neutrinoless double beta decay Another sensitive probe for the absolute scale of the neutrino masses is the search for neutrinoless double beta decay, $\beta\beta_{0\nu}$, whose rate is potentially measurable if the neutrinos are Majorana fermions and $m_{ee}~=~ \sum U^2_{ei} m_i$ is large enough [@BiPet87; @ElliotVogel02], or if there are new lepton number violating interactions [@moh1].
In the absence of new lepton number violating interactions, a positive $\beta\beta_{0\nu}$ signal would allow one to measure $m_{ee}$. Either way, we would learn that the neutrinos are Majorana fermions [@valle1]. However, if $m_{ee}$ is very small, and there are new lepton number violating interactions, neutrinoless double beta decay will measure the strength of the new interactions (such as doubly charged Higgs fields or R-parity violating interactions) rather than the neutrino mass. There are many examples of models where new interactions can lead to a $\beta\beta_{0\nu}$ decay rate in the observable range without at the same time yielding a significant Majorana mass for the neutrinos. As a result, one must be careful in interpreting any nonzero signal in $\beta\beta_{0\nu}$ experiments and not jump to the conclusion that a direct measurement of neutrino mass has been made. The way to tell whether such a nonzero signal is due to neutrino masses or is a reflection of new interactions is to supplement $\beta\beta_{0\nu}$ decay results with collider searches for these new interactions. Thus collider experiments, such as those at the LHC, and double beta decay experiments play complementary roles. The present best lower bounds on $\beta\beta_{0\nu}$ decay lifetimes come from the Heidelberg-Moscow [@Klapdor-Kleingrothaus:2000sn] and the IGEX [@Aalseth:2002rf] experiments and can be translated into an upper limit $m_{ee}\lesssim 0.9$ eV [@Lisi]. There is a claim of discovery of neutrinoless double beta decay from the enriched $^{76}$Ge experiment by the Heidelberg-Moscow collaboration [@klapdor04]. Interpreted in terms of a Majorana mass of the neutrino, this implies $m_{ee}$ between 0.12 eV and 0.90 eV. If confirmed, this result is of fundamental significance. For a thorough discussion of this result (see also [@klapdor; @antiklapdor]), we refer readers to the report of the double beta decay working group [@APSB].
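The two mass observables discussed so far, $m_\beta$ from beta decay and $m_{ee}$ from $\beta\beta_{0\nu}$, are simple functions of the masses and the first row of the mixing matrix, and differ in one crucial respect: $m_{ee}$ is sensitive to the Majorana phases while $m_\beta$ is not. A numerical sketch in Python (the degenerate 0.3 eV spectrum, the $|U_{ei}|$ values and the phase below are illustrative assumptions, not measured inputs):

```python
import numpy as np

def m_beta(Ue, m):
    """Beta-decay observable m_beta = sqrt(sum_i |U_ei|^2 m_i^2)."""
    return np.sqrt(np.sum(np.abs(Ue) ** 2 * m ** 2))

def m_ee(Ue, m):
    """Effective Majorana mass |sum_i U_ei^2 m_i| probed by 0nu beta beta."""
    return np.abs(np.sum(Ue ** 2 * m))

m = np.array([0.3, 0.3, 0.3])  # illustrative degenerate spectrum, in eV
phi = np.pi / 2                # one illustrative Majorana phase
Ue = np.array([np.sqrt(0.68),
               np.sqrt(0.30) * np.exp(1j * phi),
               np.sqrt(0.02)])  # first row of the mixing matrix (toy values)

# For degenerate masses, m_beta equals the common mass by unitarity,
# while m_ee is suppressed by the relative Majorana phase.
print(m_beta(Ue, m))  # 0.3 eV
print(m_ee(Ue, m))    # smaller than 0.3 eV because of phase cancellation
```

This phase sensitivity is one reason a $\beta\beta_{0\nu}$ signal cannot be translated into a unique mass scale even in the pure Majorana-mass interpretation.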
### Cosmology and neutrino masses A very different way to get information on the absolute scale of neutrino masses is from the study of the cosmic microwave radiation spectrum as well as the study of the large scale structure in the universe. A qualitative way of understanding why this is the case is that if neutrinos are present in abundance in the universe at the epoch of structure formation and have a sizable mass, the formation of structure is affected. For instance, for a given neutrino mass $m$, all structure on a scale smaller than a certain value given by the inverse of the neutrino mass is washed away by neutrino free-streaming. This implies a reduced power on smaller scales. Thus, accurate measurements of the galaxy power spectrum for small scales can help constrain or determine neutrino masses. Recent results from WMAP and surveys of large scale structure have set a limit on the sum of neutrino masses $\sum m_i \leq 0.7-2$ eV [@wmap; @hannestad]. More recent results from the Sloan Digital Sky Survey (SDSS) place the limit at $\sum m_i \leq 1.6$ eV. Hannestad [@hannestad] has emphasized that these upper limits can change if there are more neutrino species — e.g. for 5 neutrinos, $\sum m_i \leq 2.12$ eV if they are in equilibrium at the epoch of BBN. A point worth emphasizing is that the above result is valid for both Majorana and Dirac neutrinos as long as the “right-handed” neutrinos decouple sufficiently before the BBN epoch and are not regenerated subsequently[^3]. These limits already provide nontrivial information about neutrino masses: the limit $\sum_i m_{i}=0.7$ eV, if taken at face value, implies that each individual neutrino mass is smaller than $0.23$ eV, which is similar to the projected sensitivity of the proposed KATRIN experiment. PLANCK satellite observations are expected to be sensitive to even smaller values of $\sum_i m_i$, thereby providing a completely independent source of information on neutrino masses.
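The interplay between the cosmological bound on $\sum_i m_i$ and the oscillation splittings can be made explicit numerically. A short sketch, assuming central values for the splittings and the mass-ordering conventions of the hierarchy discussion above:

```python
import numpy as np

dm2_sol, dm2_atm = 8.0e-5, 2.5e-3   # illustrative central splittings, eV^2

def mass_sum(m_lightest, inverted=False):
    """Sum of the three masses as a function of the lightest mass (eV)."""
    if not inverted:   # normal ordering: m1 < m2 << m3
        m1 = m_lightest
        m2 = np.sqrt(m1 ** 2 + dm2_sol)
        m3 = np.sqrt(m1 ** 2 + dm2_atm)
    else:              # inverted ordering: m3 << m1 ~ m2
        m3 = m_lightest
        m1 = np.sqrt(m3 ** 2 + dm2_atm)
        m2 = np.sqrt(m1 ** 2 + dm2_sol)
    return m1 + m2 + m3

print(mass_sum(0.0))    # ~0.06 eV: a strict normal hierarchy lies well below the bounds
print(mass_sum(0.23))   # ~0.7 eV: a degenerate spectrum saturates sum m_i = 0.7 eV
```

This is why the quoted bounds mainly constrain the degenerate regime: hierarchical spectra predict $\sum_i m_i$ an order of magnitude below the current limits.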
These results may have implications for models of sterile neutrinos that attempt to explain the LSND results. ### CP violation It is clear from Eq. (\[V\]) that, for Majorana neutrinos, there are three CP-odd phases that characterize neutrino mixings [@Valle; @BHP80], and our understanding of the leptonic sector will remain incomplete without knowledge of these [@boris; @GKM]. There are two possible ways to explore CP phases: (i) one way is to perform long-baseline oscillation experiments and look for differences between neutrino and anti-neutrino oscillation probabilities [@minakata]; (ii) another way is to use possible connections with cosmology. It has often been argued that neutrinoless double beta decay may also provide an alternative way to explore CP violation [@BGKP96]. This is discussed in Sec. \[sec:0nubbCP\]. In summary, the most important goals of the next phase of neutrino oscillation experiments are:

(i) to determine the value of $\theta_{13}$ as precisely as possible;

(ii) to determine the sign of $\Delta m^2_{13}$, or the character of the neutrino mass hierarchy;

(iii) to improve the accuracy of the measurement of the other angles and the mass-squared differences;

(iv) to probe the existence of the three CP-odd phases as best as possible.

The discussion above assumes a minimal picture for massive neutrinos where the most general Majorana mass for three neutrinos has been added. While this may be the picture to leading order, it is quite conceivable that there are other interesting subdominant effects that go beyond this. It is of utmost interest to determine to what extent one can constrain (or perhaps discover) these new nonstandard phenomena, since their absence up to a certain level (or, of course, their presence) will provide crucial insight into the detailed nature of the New Physics.
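The neutrino/anti-neutrino comparison behind the long-baseline CP program can be sketched with the vacuum amplitude $P(\nu_a\to\nu_b)=|\sum_i U^*_{ai}U_{bi}\,e^{-im_i^2L/2E}|^2$; anti-neutrino probabilities follow with $U\to U^*$. A Python sketch, assuming toy angles, an arbitrary long-baseline configuration, and no matter effects (which would matter in a real 3000 km experiment):

```python
import numpy as np

def pmns(t12, t23, t13, delta):
    """The matrix V of the standard parametrization (Majorana phases drop out)."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * e.conj()],
        [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ])

def prob(U, dm2, L, E, a, b):
    """Vacuum P(nu_a -> nu_b); dm2[i] = m_i^2 - m_1^2 in eV^2, L in km, E in GeV."""
    amp = np.sum(U[a].conj() * U[b] * np.exp(-2.534j * dm2 * L / E))
    return np.abs(amp) ** 2

dm2 = np.array([0.0, 8.0e-5, 2.5e-3])      # illustrative splittings, eV^2
L, E = 3000.0, 1.0                          # toy baseline (km) and energy (GeV)
U = pmns(0.6, np.pi / 4, 0.15, np.pi / 2)   # toy angles, maximal Dirac phase

# Compare nu_mu -> nu_e with nubar_mu -> nubar_e (antineutrinos: U -> U*)
dP = prob(U, dm2, L, E, 1, 0) - prob(U.conj(), dm2, L, E, 1, 0)
# dP is nonzero here; it vanishes identically for delta = 0 or pi
```

The difference `dP` is proportional to the Jarlskog invariant of $V$, which is why only the Dirac phase $\delta$, not the Majorana phases, is accessible in oscillations.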
### Prospects for determining whether neutrinos are Majorana or Dirac As an example of what we can learn from future experiments, we focus on three experiments — searches for neutrinoless double beta decay (down to the $0.03$ eV level), studies to determine the sign of $\Delta m^2_{23}\equiv (m^2_3-m^2_2)$, and the KATRIN experiment, which is sensitive to the effects of a nonzero neutrino mass down to $0.2$ eV in tritium beta decay. The interplay between the possible results of these three experiments is summarized in Table \[tab:NatureOfNeutrinos\].

  $\beta\beta_{0\nu}$   $\Delta m^2_{13}$   KATRIN   Conclusion
  --------------------- ------------------- -------- -------------------------------------------------------------
  yes                   $>0$                yes      Degenerate Hierarchy, Majorana
  yes                   $>0$                no       Degenerate Hierarchy, Majorana or Normal Hierarchy, Majorana with heavy particle contribution
  yes                   $<0$                no       Inverted Hierarchy, Majorana
  yes                   $<0$                yes      Degenerate Hierarchy, Majorana
  no                    $>0$                no       Normal Hierarchy, Dirac or Majorana
  no                    $<0$                no       Dirac
  no                    $<0$                yes      Dirac
  no                    $>0$                yes      Dirac

  : Different possible conclusions regarding the nature of the neutrinos and their mass hierarchy from the three complementary experiments.[]{data-label="tab:NatureOfNeutrinos"}

We see that extremely valuable information will follow from the results of these experiments. ### Sterile neutrinos A question of great importance in neutrino physics is the number of neutrino species. Measurement of the invisible $Z$-width in the LEP-SLC experiments tells us that there are three types of light standard-model electroweak-doublet neutrinos that couple to the $W$ and $Z$ bosons. These are the three known neutrinos $\nu_{e,\mu,\tau}$. This implies that if there are other neutrino-like interaction eigenstates, then they must either be sufficiently massive that they cannot occur in the decay of the $Z$ or they must be electroweak singlets with no coupling to the $W$ or $Z$.
In the latter case, the interaction eigenstates are called sterile neutrinos. In general, a neutrino mass eigenstate will be a linear combination of the three electroweak-doublet neutrinos and some unknown number of electroweak-singlet (= sterile) neutrinos. In the presence of electroweak-singlet neutrinos, the neutral weak current is not, in general, diagonal [@LeeShrock:77; @valle80]. In common parlance, the term sterile neutrino is often used to denote a light electroweak-singlet neutrino and hence to exclude the heavy electroweak-singlet neutrino-like states that may well play a role in the seesaw mechanism. So the question is: are there any (light) sterile neutrinos and, if so, how many are there and do they mix with the ordinary neutrinos? Light sterile neutrinos have been postulated in order to explain [@caldwell] the data from the Los Alamos Liquid Scintillation Detector (LSND) experiment [@lsnd], where neutrino flavor conversion has apparently been observed both for neutrinos from a stopped muon (decay at rest, DAR) and for those accompanying the muon in pion decay. The evidence from the DAR is statistically more significant and is interpreted as an oscillation from $\bar{\nu}_\mu$ to $\bar{\nu}_e$. The mass and mixing parameter range that fits the data is: $$\begin{aligned} \Delta m^2 \simeq 0.2 - 2~\mathrm{eV}^2\;, \quad \sin^22\theta \simeq 0.003-0.03\;.\end{aligned}$$ There are also points at higher masses, specifically at 6 eV$^2$, which are allowed by the present LSND data for small mixings. The KARMEN experiment at the Rutherford laboratory has very strongly constrained the allowed parameter range of the LSND data [@karmen]. Currently the MiniBooNE experiment at Fermilab is under way to probe the LSND parameter region [@louis]. Since this $\Delta m^2_\mathrm{LSND}$ is so different from $\Delta m^2_{\odot, A}$, the simplest way to explain these results is to add one [@caldwell; @other] or two [@sorel] sterile neutrinos.
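For short-baseline searches of this kind the two-flavor formula $P=\sin^22\theta\,\sin^2(1.27\,\Delta m^2[\mathrm{eV}^2]\,L/E)$ suffices, with $L/E$ in km/GeV or, equivalently, m/MeV. A small sketch with parameters from the LSND-allowed range; the baseline and energy are round illustrative numbers, not the experiment's exact values:

```python
import numpy as np

def p_2flav(sin2_2theta, dm2_eV2, L_m, E_MeV):
    """Two-flavor appearance probability P = sin^2(2theta) sin^2(1.27 dm2 L/E).
    L in meters and E in MeV, so L/E carries the same units as km/GeV."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative values inside the allowed range quoted above:
# dm2 ~ 1 eV^2, sin^2 2theta ~ 0.003, at a ~30 m baseline and ~40 MeV energy
p = p_2flav(0.003, 1.0, 30.0, 40.0)
print(p)  # a per-mille-level nubar_mu -> nubar_e conversion probability
```

The mixing angle sets the amplitude of the oscillation while $\Delta m^2\,L/E$ sets the phase, which is why a reactor or accelerator experiment at a different $L/E$ (such as KARMEN or MiniBooNE) probes the same $(\Delta m^2, \sin^22\theta)$ plane.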
The sterile neutrinos raise important issues of consistency with cosmology, as well as physics beyond the simple three-neutrino picture, and will be discussed in a subsequent section.

### Neutrino electromagnetic dipole moments and neutrino decay

A massive Dirac neutrino can have a diagonal magnetic (and a CP-violating electric) dipole moment. Because a Majorana neutrino is the same as its antiparticle, it has vanishing diagonal magnetic and electric dipole moments. A massive Dirac or Majorana neutrino can have nondiagonal, i.e., transition, magnetic and electric dipole moments. Some discussions of diagonal and transition neutrino electromagnetic moments in renormalizable electroweak gauge theories (where these can be calculated) include [@mmom1; @mmom2], [@LeeShrock:77], [@leftright]–[@mmom2004]. In the Standard Model extended to contain massive Dirac neutrinos, $\mu_{\nu_j} = 3 e G_F m_{\nu_j}/(8 \pi^2 \sqrt{2}) \simeq 3.2 \times 10^{-19} (m_{\nu_j}/(1 \ {\rm eV})) \ \mu_B$ for the neutrino mass eigenstate $\nu_j$ [@fs], where $\mu_B~=~\frac{e}{2m_e}$ is a Bohr magneton. In left-right models and others with new physics beyond the Standard Model, this may be larger (e.g., [@leftright; @rs82; @mmom6; @mmom7; @mmom2004]). In contrast to the magnetic dipole moment, the neutrino electric dipole moment vanishes at one-loop order for a massive Dirac neutrino in the extended Standard Model [@rs82]. However, in a left-right model, a Dirac neutrino may acquire an electric dipole moment at the one-loop level [@rs82]. In the more generic case of a Majorana neutrino, one's interest focuses on the neutrino transition magnetic (and electric) dipole moments. The presence of these diagonal or transition moments allows for new electromagnetic interactions between neutrinos and other fermions of the Standard Model.
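The order of magnitude of the extended-Standard-Model dipole moment follows from straightforward arithmetic; the following sketch (ours, not part of the original analysis) evaluates the formula $\mu_{\nu} = 3 e G_F m_{\nu}/(8 \pi^2 \sqrt{2})$ in natural units:

```python
import math

# Constants in natural units (hbar = c = 1), with energies in eV.
G_F = 1.1664e-23      # Fermi constant in eV^-2 (= 1.1664e-5 GeV^-2)
m_e = 0.5110e6        # electron mass in eV

def mu_nu_over_mu_B(m_nu_eV):
    """mu_nu / mu_B = 3 G_F m_e m_nu / (4 sqrt(2) pi^2), obtained from
    mu_nu = 3 e G_F m_nu / (8 pi^2 sqrt(2)) and mu_B = e / (2 m_e)."""
    return 3.0 * G_F * m_e * m_nu_eV / (4.0 * math.sqrt(2.0) * math.pi**2)

print(mu_nu_over_mu_B(1.0))   # of order 1e-19 for m_nu = 1 eV
```

For a 1 eV neutrino this gives a few $\times 10^{-19}\,\mu_B$, some nine orders of magnitude below the current experimental bound quoted below.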
In particular, in neutrino–electron scattering, in addition to the usual weak interaction contribution, there will be a photon exchange contribution to the scattering cross section. The existing neutrino scattering measurements therefore provide an upper limit on the neutrino magnetic moment: $\mu_{\nu_e}\leq (1-1.3)\times 10^{-10}\mu_B$. As we will discuss in more detail later, the observation of a nonzero neutrino magnetic moment would be considered evidence of new physics at the TeV scale. The reason is that if all new physics is parameterized by (Majorana or Dirac) neutrino masses, or, equivalently, if all new physics effects are suppressed by the very large naive seesaw energy scale (close to the GUT scale), then the neutrino magnetic moments are expected to be of order $10^{-19}\mu_B\left(\frac{m_\nu}{1~eV}\right)$. High-precision searches for a magnetic moment therefore provide complementary tools to probe the physics that is expected to lie just beyond the electroweak symmetry breaking scale. A neutrino magnetic or electric dipole moment leads to new processes that can alter our understanding of the energy balance in astrophysical systems such as stars and supernovae [@raffelt]. It can also affect considerations involving neutrinos in the early universe, such as big-bang nucleosynthesis (BBN). In Sec. \[mm\] we discuss neutrino magnetic moments in more detail, together with what one can learn from various proposed experiments. The existence of a neutrino magnetic or electric transition moment is also related to neutrino decays. For instance, it would allow heavier neutrinos to decay radiatively to lighter ones [@mmom1; @ms77; @pw82; @LeeShrock:77; @STP77; @Marciano77]. Such decays can be detectable in astrophysical experiments.
Present upper limits, coupled with the general picture of the neutrino spectra emerging from oscillation experiments, imply that the lifetimes of the primarily electroweak-doublet neutrino mass eigenstates are larger than $10^{20}$ sec, much longer than the age of the universe [@mmom1; @ms77; @pw82; @LeeShrock:77; @STP77; @Marciano77]. Such decays therefore do not affect the evolution of the universe. It is, however, possible that there are other scalar particles to which the neutrinos decay; one such example is the majoron, which is a Goldstone boson corresponding to the spontaneous breaking of a global $B-L$ symmetry [@cmp]. The decay to these scalar bosons may occur at a faster rate [@gelmini] than that to photons and may therefore have astrophysical and cosmological implications [@beacom]. This will be the subject of another working group [@APSastro], so we only focus on the implications of the magnetic moment in one of the subsequent sections.

Neutrino probes of other fundamental symmetries
-----------------------------------------------

Neutrino experiments can also be used to probe the validity of other fundamental symmetries, some of which are commonly assumed in theoretical discussions, as well as the basic assumptions of local quantum field theories on which the Standard Model is based. Some examples of these are:

- Violation of Lorentz invariance;

- CPT violation;

- Possible existence of new long range forces in nature associated with lepton number;

- Nonstandard interactions of neutrinos, such as flavor changing neutral currents involving neutrinos.

We will explore to what extent existing limits on these departures from the standard scenario can be improved.

Why neutrino mass necessarily means physics beyond the Standard Model
---------------------------------------------------------------------

Neutrino oscillations are, to date, the only evidence for the existence of physics beyond the Standard Model (in the domain of particle physics).
It is of utmost importance to decipher the kind of new physics indicated by the existing data and to anticipate the signals of new physics that might appear in future planned observations. We must understand whether and how they fit into the different big pictures that have been advocated for independent reasons, including the gauge hierarchy problem and gauge coupling unification. To discuss this, we first introduce the Standard Model and possible ways to extend it to accommodate the neutrino observations. In the Standard Model, which is based on the gauge group $SU(3)_c\times SU(2)_L\times U(1)_Y$, the quarks and leptons transform as $Q_L(3,2, {1\over 3})$, $u_R(3, 1, {4\over 3})$, $ d_R(3, 1,-\frac{2}{3})$, $L (1, 2, -1)$, $e_R(1,1,-2)$. The Higgs boson $H$, responsible for electroweak symmetry breaking, transforms as $(1, 2, +1)$. The electroweak symmetry $SU(2)_L\times U(1)_Y$ is broken by the vacuum expectation value of the Higgs doublet, $\langle H^0\rangle=v_{\rm wk}\simeq 246$ GeV, which renders the $W^{\pm}$ and $Z^0$ gauge bosons and the electrically charged fermions massive. The reason neutrinos do not get mass as a result of the Higgs mechanism is that the right-handed neutrino $N_R$ was not included in the list of fermions in the Standard Model; as a result there is no coupling of the form $h_\nu\bar{L}HN_R$ that could have given mass to the neutrinos after symmetry breaking. One seemingly straightforward way to understand the neutrino mass would be to extend the Standard Model to include the $N_R$. This would also be desirable from the point of view of making the model quark–lepton symmetric. There are two problems with this naïvely trivial modification. One is that by quark–lepton symmetry one would expect the neutrino masses arising from the Yukawa coupling $h_\nu\bar{L}HN_R$ to be of the same order as the quark and charged lepton masses. Observations suggest that neutrino masses are at least $10^6$ times smaller than the smallest quark and lepton masses.
Therefore, a nonzero neutrino mass not only suggests the existence of right-handed neutrinos (of which there would be three if they correspond to the usual generations), but also some new physics that will enable us to understand why $M_\nu \ll m_{q,\ell}$. The seesaw mechanism provides a plausible basis for this understanding, since it makes use of the fact that, among the known fermions, only neutrinos can have Majorana mass terms. Thus, ironically, we may have a better way to understand the lightness of the neutrinos than we do to understand the generational hierarchy factor of $\sim 10^6$ between the masses of the top quark and the electron, for which there is no accepted explanation at present. The other problem with introducing a set of right-handed neutrino fields is the fact that they are Standard Model gauge singlets. This means that, as far as the symmetries of the Standard Model are concerned, a Majorana mass for the $N_R$ fields is allowed. If such a mass term is present, however, the neutrino masses are not simply given by $h_{\nu}v_{\rm wk}$, but are determined by a more complicated function of $h_{\nu}v_{\rm wk}$ and the Majorana masses of the right-handed neutrinos. In order to avoid the presence of a Majorana mass for the right-handed neutrinos, one is required to impose an extra [*symmetry*]{} (say, lepton number) on the Standard Model Lagrangian, a very nontrivial modification of what is traditionally referred to as the Standard Model of electroweak interactions.

### Seesaw mechanism for small neutrino masses

A simple way to understand the smallness of neutrino mass within this minimally extended Standard Model is to break lepton number symmetry (or $B-L$ symmetry) and add a Majorana mass for the right-handed neutrino, $M_RN^T_RC^{-1}N_R$. Thus the two terms that give mass to the neutrinos have the form $h_\nu v_{\rm wk} \bar{\nu}_LN_R + M_R N^T_RC^{-1}N_R$ + h.c.
Assuming $n$ “left-handed” neutrinos $\nu_L$ and $m$ “right-handed” neutrinos $N_R$, the $(n+m)\times (n+m)$ Majorana neutrino mass matrix is $${\cal M}~=~\begin{pmatrix}0 & h_\nu v_{\rm wk}\cr h_\nu v_{\rm wk} & M_R\end{pmatrix}. \label{eq:6by6}$$ In the limit $M_R\gg h_\nu v_{\rm wk}$, the eigenvalues of this matrix are given by $-\frac{(h_\nu v_{\rm wk})^2}{M_R}$ and $M_R$, with respective approximate eigenvectors $\nu_L$ and $N_R$. The effective active neutrino masses are clearly much smaller than typical charged fermion masses (which are of order $h_\nu v_{\rm wk}$) as long as $M_R \gg v_{\rm wk}$. This is the well-known seesaw mechanism [@minkowski; @Yanagida:1980; @Gell-Mann:1980vs; @Glashow:1979vf; @Mohapatra:1980ia]. If we take as a guide a value of $h_\nu\leq 1$, then atmospheric neutrino data requires that $M_R\leq 10^{15}$ GeV. It should be emphasized that there is very little concrete information or experimental guidance regarding the magnitude of $M_R$, which is virtually unconstrained [@LSND_seesaw]. One question which arises is why this value rather than $M_{\rm Pl}\simeq 10^{18}$ GeV, which, one may argue, would have been a more natural value. Could this be an indication of a new symmetry? The answer to this question is obviously of fundamental significance. An example of such a symmetry is the $B-L$ symmetry embodied in the left-right symmetric models based on the gauge group $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ [@moh]. This gauge group is also a subgroup of the $SO(10)$ grand unification group. The above-mentioned value of $M_R$ is rather close to the conventional GUT scale of $10^{16}$ GeV. This makes the seesaw mechanism a very attractive framework for discussing neutrino mass. We will discuss further consequences of grand unification for neutrino masses in a subsequent section. We will also explore in this review unification-model independent consequences of the seesaw mechanism.
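The arithmetic behind the $M_R\leq 10^{15}$ GeV estimate is easy to reproduce. The sketch below (ours, with illustrative one-generation numbers) evaluates the light eigenvalue of the matrix in Eq. (\[eq:6by6\]); because of the enormous hierarchy, the exact root is written in a cancellation-free form rather than obtained by naive floating-point diagonalization:

```python
import math

def seesaw_light_mass(m_D, M_R):
    """Magnitude of the light eigenvalue of [[0, m_D], [m_D, M_R]].
    The exact root (M_R - sqrt(M_R^2 + 4 m_D^2))/2 is rationalized to
    avoid catastrophic cancellation; it tends to m_D**2 / M_R for M_R >> m_D."""
    return 2.0 * m_D**2 / (M_R + math.sqrt(M_R**2 + 4.0 * m_D**2))

# h_nu = 1, m_D = v_wk = 246 GeV, M_R = 10^15 GeV (all entries in GeV)
m_light_GeV = seesaw_light_mass(246.0, 1.0e15)
print(m_light_GeV * 1e9)   # ~0.06 eV, the atmospheric mass scale
```

With $h_\nu = 1$ and $M_R = 10^{15}$ GeV the light mass comes out near $0.06$ eV, i.e., right at $\sqrt{\Delta m^2_{\rm A}}$, which is the origin of the upper bound quoted above.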
### Type I vs type II seesaw mechanism

If there are indeed right-handed neutrinos, the most general Majorana mass matrix that mixes active and sterile neutrinos is given by Eq. (\[eq:6by6\]) with the $n\times n$ zero matrix in the upper left-hand corner replaced by a generic (symmetric) matrix $M_L$. This phenomenon occurs, for example, when the theory containing the $N_R$ becomes parity symmetric, as is the case for $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ or $SO(10)$ based models. In this case the seesaw formula is modified to $${ M}_\nu^{\rm II} = M_L - M_\nu^D M_R^{-1} (M_\nu^D)^T\;, \label{eq:typeII}$$ where, in an $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ symmetric model, $M_L = f v_L$ and $M_R=f v_R$, with $v_L$ and $v_R$ the vacuum expectation values of the Higgs fields that couple to the left- and right-handed neutrinos, respectively. Eq. (\[eq:typeII\]) is called the type II seesaw relation [@seesaw2]. It should be noted that in the absence of a discrete left-right symmetry, $M_L$ is in general not related to $M_R$.

### Triplet seesaw

An alternative way to understand the small neutrino mass without introducing the right-handed neutrino is the triplet seesaw mechanism. It was pointed out in various papers [@triplet1] in the early 1980s that if the Standard Model is extended by the addition of a triplet Higgs $\Delta_L$ with weak hypercharge $Y=2$, a vev for it can lead to a Majorana mass for the neutrinos from the interaction $f_\nu \psi_L^TC^{-1}\tau_2\vec{\tau}\cdot \vec{\Delta}\psi_L$ ($\psi_L$ being the lepton doublet $(\nu_L,e_L)$). However, one has to tune the Yukawa coupling $f_\nu$ to about $10^{-10}$ or so to get desirable neutrino masses. It has subsequently been shown [@triplet2] that in the context of grand unified theories, the triplet vev is given by the formula $\langle\Delta^0_L\rangle\sim \frac{v^2_{\rm wk}}{M_U}$, where $M_U$ is close to the grand unification scale and corresponds to the physical mass of the triplet Higgs field.
Since $M_U \gg v_{\rm wk}$, this provides a natural suppression of the triplet vev, and the right order for the neutrino mass emerges. Note that this can also be obtained from the type II seesaw formula in the limit $M_{N_R}\rightarrow \infty$. In this case, the neutrino mass matrix is directly proportional to the coupling matrix $f_\nu$.

### Seesaw with triplet fermions

Yet another possible extension of the Standard Model without right-handed neutrinos which leads to small neutrino masses is to postulate the existence of triplet vectorlike fermions $\vec{\Lambda}$ [@ma]. Since a vectorlike triplet can have an arbitrary mass, it also leads to a seesaw mass formula.

### Understanding large mixings within the seesaw mechanism

A major puzzle of quark–lepton physics is the fact that the quark mixing matrix and the leptonic one are qualitatively different. In order to understand the mixing angles [@barger; @king], we have to study the mass matrices for the charged leptons and neutrinos.

[*A general approach*]{}

To see the possible origin of neutrino mixings, one can start with the following form for the mass part of the neutrino Lagrangian: $$\begin{aligned}
{\cal L}_{mass}~=~\bar{\nu}_L M_{\nu}^D N_R + \bar{e}_L M_\ell e_R + N^T_R M_R N_R ~~+ ~~ h.c.\end{aligned}$$ Using the seesaw mechanism, one can derive from this the type I seesaw formula for the neutrino masses: $${ M}_\nu^{\rm I} = - M_\nu^D M_R^{-1} (M_\nu^D)^T.$$ To obtain the lepton mixing matrix, one diagonalizes the charged lepton mass matrix by $M_\ell ~=~ U_\ell M^d_\ell V^{\dagger}$ and the neutrino mass matrix by $m_{\nu}=(U^{*})_{\nu }m^d_\nu(U^{\dagger})_{\nu}$, and finds that $U=~U^{\dagger}_\ell U_\nu$. With this theoretical preamble, understanding of neutrino mixings can proceed along two paths. In theories where quark and lepton mixings are disconnected (such as many weak scale theories), one may like to pass to a basis where the charged lepton masses are diagonal.
In that case, all the neutrino mixing information is in the effective neutrino mass matrix. One can then look for the types of mass matrices for neutrinos that can lead to bi-large mixings and try to understand them in terms of new physics. Here we give a brief overview of some generic structures for $M_\nu$ that do the job.

\(i) [*The case of normal hierarchy:*]{} In this case, one neutrino mass matrix that leads to “bi-large” mixing has the form: $$M_\nu~=~m_0\,\begin{pmatrix}\epsilon & \epsilon & \epsilon\cr \epsilon & 1+\epsilon & 1\cr \epsilon & 1 & 1\end{pmatrix}$$ where $m_0$ is $\sqrt{\Delta m^2_{\rm A}}$. We have omitted order one coefficients in front of the $\epsilon$’s. This matrix leads to $\tan \theta_A \simeq 1$, $\Delta m^2_{\odot}/\Delta m^2_{\rm A} \simeq \epsilon^2$, and also to a large solar angle. For the LMA solution, we find the interesting result that $\epsilon \sim \lambda$, where $\lambda$ is the Cabibbo angle ($\simeq 0.22$). This could be a signal of a hidden quark–lepton connection[^4]. In fact we will see below that in the context of a minimal $SO(10)$ model, this connection is realized in a natural manner.

\(ii) [*The case of inverted hierarchy:*]{} The elements of the neutrino mass matrix in this case have the pattern $$\begin{aligned}
{ m}_\nu=m_0~\left(\begin{array}{ccc} \epsilon & c & s\\ c & \epsilon & \epsilon\\ s & \epsilon & \epsilon\end{array}\right).\end{aligned}$$ where $c=\cos\theta$, $s=\sin\theta$, and $\theta$ denotes the atmospheric neutrino mixing angle. An interesting point about this mass matrix is that in the limit $\epsilon\rightarrow 0$, it possesses an $L_e-L_\mu-L_\tau$ symmetry [@inverted]. One therefore might hope that if the inverted hierarchy structure is confirmed, it may provide evidence for this leptonic symmetry. This can be an important clue to new physics beyond the Standard Model. This issue of leptonic symmetries will be discussed in the main body of this report.
\(iii) [*Degenerate neutrinos:*]{} One may either add a unit matrix to the mass matrices just mentioned and look for new physics models for them, or one may look for dynamical ways by which large mixings can arise. It turns out that if neutrinos are mass degenerate, one can generate large mixings out of small mixings [@Babu:1993qv; @Balaji:2000gd; @Antusch:2002fr; @Mohapatra:2003tw; @Hagedorn:2004ba] purely as a consequence of radiative corrections. We will call this possibility radiative magnification and will discuss it in a subsequent section.

In grand unified theories, quark and lepton mass matrices are connected. One may therefore lose crucial information about symmetries if one works in a basis where the charged leptons are diagonal. Furthermore, if either of the quark (up or down) mass matrices is chosen diagonal, it may not even be possible to go to the diagonal charged lepton basis. In this case, we have $U~=~ U^{\dagger}_\ell U_\nu$, so one may seek an understanding of large mixings in the charged lepton sector. For example, in $SU(5)$-type theories, $U_\ell$ is related to the mixings of right-handed quarks, which are unobservable in low energy weak interactions and can therefore be the source of large mixings. Models of this type are called lopsided mixing models [@lop]. The basic strategy then would be to look for clues for new symmetries in the structure of the mass matrices, which could then provide information about the nature of physics beyond the Standard Model. The symmetries of course may become obscured by our choice of basis where the charged leptons are diagonal. It is this which gives different possibilities for arriving at the bi-large mixings, and the hope is that different strategies will lead to different predictions for observables, which can then be put to experimental test.
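The claims made for the normal-hierarchy texture (i) can be verified numerically. The following sketch (ours; the omitted order-one coefficients are simply set to 1 and $\epsilon$ to the Cabibbo angle) diagonalizes the texture and checks that the atmospheric mixing is near-maximal while the mass-squared ratio is suppressed, of order $\epsilon^2$ up to order-one factors:

```python
import numpy as np

eps = 0.22    # epsilon ~ Cabibbo angle, as suggested in the text
M = np.array([[eps, eps,       eps],
              [eps, 1.0 + eps, 1.0],
              [eps, 1.0,       1.0]])    # texture (i), in units of m_0

m, U = np.linalg.eigh(M)    # eigenvalues in ascending order: m1 < m2 << m3
m1, m2, m3 = np.abs(m)

tan_atm = abs(U[1, 2] / U[2, 2])             # tan(theta_A) from the nu_3 column
ratio = (m2**2 - m1**2) / (m3**2 - m2**2)    # ~ Dm2_sun / Dm2_atm
print(tan_atm, ratio)    # tan(theta_A) close to 1; ratio small, ~eps^2 in order of magnitude
```

Running this gives an atmospheric mixing within a few degrees of maximal and a mass-squared ratio of a few times $10^{-3}$, in line with the qualitative statements above.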
### Alternatives to high-scale seesaw

While the high-scale seesaw mechanism is the simplest and perhaps the most elegant way to understand the small neutrino masses, and can be couched in a quark–lepton and parity symmetric framework leading to simple grand unification theories, there are alternatives to the seesaw which can also explain the small neutrino masses [@zee; @li; @babu2; @dchang; @valle2; @gluza; @nt; @lrs; @chacko; @pl]. In such a case the neutrinos can be either Dirac or Majorana fermions, depending on the theory. Unlike the non-supersymmetric seesaw models, alternatives such as the one presented in [@valle2] often predict observable charged-lepton flavor-violating signals [@Ilakovac], e.g., $\mu \rightarrow e \gamma$, $\mu, \tau \to eee$, etc. More generically, searches for charged-lepton flavor violation crucially help distinguish among the several theoretical interpretations of the origin of neutrino masses.

Summary of the Introduction
---------------------------

Some of the questions that we would like to answer in the course of this work are:

- Can we decide whether the neutrino is a Dirac or Majorana particle?

- To what extent can the planned neutrino experiments pin down the structure of the three-neutrino mass matrix? This involves such questions as determining the sign of $\Delta m^2_{23}$, higher precision measurement of mixing parameters, etc.

- What is the impact of a $\theta_{13}$ measurement (and the improved determination of the other elements of the lepton mixing matrix) on the general landscape of physics beyond the Standard Model? We find that $\theta_{13}$ is a powerful discriminator of models.

- Can we test the seesaw hypothesis and discriminate between different types of seesaw using lepton flavor violation and other “non-neutrino probes”?

- How can one experimentally discover or limit physics beyond the standard scenario?
This will address such aspects as:

\(1) Flavor changing neutral currents for neutrinos: present limits and future prospects;

\(2) Admixtures of sterile neutrinos, both heavy and light;

\(3) Magnetic moments of neutrinos.

- What can we learn about CP violation in the lepton sector, and how can we connect it to the question of the origin of matter via leptogenesis? Given what we know about the neutrino masses, and assuming thermal leptogenesis, do we have an explanation of the observed baryon to photon ratio?

What can we learn about neutrino mass matrices from experiments?
================================================================

In this section we briefly review our ability to reconstruct the neutrino mass matrix. We will also discuss (from “the bottom up”) what we hope to learn from the neutrino mass matrix itself, instead of trying to quantify what different models predict for the neutrino mass matrix. See, for example, [@barger; @king] for reviews of a few different models. In a subsequent section, we will discuss the connection of neutrino masses to GUTs, and will spend a little more time on “top-down” predictions for neutrino masses and mixing angles. As mentioned earlier, we will assume that the neutrinos are Majorana fermions. While there is no experimental evidence that this is the case, the majority of the theoretical HEP community considers it more likely that the neutrinos are Majorana fermions, and a larger amount of phenomenological research effort has gone into understanding and interpreting Majorana neutrino mass matrices than Dirac mass matrices. For some discussions of Dirac neutrino mass matrices and how they are related to the large mixing in the leptonic sector and the neutrino mass-squared differences, see, for example, [@Dirac].
Below the electroweak phase transition, the Majorana neutrino mass matrix $m_{\nu}$ is the coefficient of the operator (using four-component-spinor notation) $$\frac{1}{2}m_{\nu}^{\alpha\beta}\nu^{T}_{\alpha}C^{-1}\nu_{\beta}+H.c., \label{m_operator}$$ where $\alpha,\beta=e,\mu,\tau,\ldots$ are flavor indices, and $m_{\nu}^{\alpha\beta}$ are the components of the neutrino mass matrix (note that $m_{\nu}$ is symmetric, [*i.e.,*]{} $m_{\nu}^{\alpha\beta}=m_{\nu}^{\beta\alpha}$). In this section we will concentrate on a purely active $3\times 3$ mass matrix. A detailed discussion of $4\times 4$ (and larger) mass matrices, which also allow for the existence of a fourth generation and/or sterile neutrinos, is the subject of subsequent sections. Note that Eq. (\[m\_operator\]) is not sensitive to the mechanism that generates neutrino masses; such mechanisms will be discussed in more detail in a later section. In general, one cannot work back from a knowledge of the observed lepton mixing matrix to the individual nondiagonal mass matrices in the charged lepton and neutrino sectors. It is, indeed, the diagonalization of both of these mass matrices that gives rise to the observed lepton mixing, and models exist where the mixing in the charged lepton sector is large. One can always choose to work in the weak basis where the charged lepton mass matrix is diagonal; the price one pays for doing this is that the flavor structure of the theory may not be manifest. In this case, one can calculate the neutrino mass matrix in terms of the observed lepton mixing matrix as $$m^{\alpha\beta}_{\nu}=\sum_i(U^{*})_{\alpha i}m_i (U^{\dagger})_{i\beta}. \label{m_ab}$$ We choose sign conventions such that the neutrino mass eigenvalues are real and positive. By choosing to write $U=V K$, where $V$ and $K$ are given by Eq. (1), we have removed all of the redundancy contained in $m_{\nu}$ associated with redefining the neutrino fields by a complex phase. Hence, $m_{\nu}$ as defined by Eq.
(\[m\_ab\]) is only a function of observable parameters. The phases in $K$ are the so-called Majorana phases [@Valle; @BHP80]. They can be redefined away by allowing the neutrino mass eigenvalues to be complex. In this case, $U=V$, $m_1$ is real and positive, and $m_2=|m_2|e^{-2i\phi_1}$, $m_3=|m_3|e^{-2i\phi_2}$. In the near future, we hope to significantly improve the determination of the elements of the neutrino mass matrix, although some uncertainty will still remain (for a detailed discussion, see, for example, [@reconstruction]). Through neutrino oscillation experiments, all three mixing angles $\theta_{12}, \theta_{23}$, and $\theta_{13}$ are expected to be determined with good precision (this is one of the main goals of next-generation neutrino oscillation experiments, discussed at great length in this report), while there is hope that the “Dirac phase” $\delta$ can be probed via long-baseline $\nu_{\mu}\to\nu_e$ oscillation searches. Neutrino oscillation experiments will also determine the neutrino mass-squared differences with good precision ($\Delta m^2_{12}$ at the 5%–10% level, $\Delta m^2_{13}$ \[including the sign\] at the few percent level). In order to complete the picture, three other quantities must also be measured, none of which is directly related to neutrino oscillations. One is the overall scale of the neutrino masses. As already briefly discussed, this will be probed, according to our current understanding, by studies of the end-point spectrum of beta decay, searches for neutrinoless double beta decay, and cosmological observations (especially studies of large-scale structure formation). Note that neutrinoless double beta decay experiments are sensitive to $|m_{\nu}^{ee}|$, [*i.e.*]{}, they directly measure the absolute value of an element of $m_{\nu}$. The other two remaining observables are the “Majorana” phases.
Neutrinoless double beta decay experiments are sensitive to a particular combination of these, the so-called effective Majorana mass, $$\left|m_{\nu}^{ee} \right| \equiv \langle m \rangle_{eff} = \left|\cos^2\theta_{13} \left(|m_1|\cos^2\theta_{12}+|m_2|e^{2i\phi_1}\sin^2\theta_{12}\right) +\sin^2\theta_{13}|m_3|e^{2i\phi_2}\right|\;.$$ With present uncertainties in the nuclear matrix elements, however, it seems at least very challenging [@noMaj] to obtain any information regarding the Majorana phases from neutrinoless double beta decay. For a detailed study, see, for example, [@noMajresp]. A few comments are in order. First, the relation between the rate for neutrinoless double beta decay and the Majorana phases and neutrino masses only holds under the assumption that the neutrino masses are the only source of lepton-number violation (as far as neutrinoless double beta decay is concerned). Second, only one of the two independent Majorana phases (or, rather, one combination of them) can be determined in this way. It is fair to say that there is no realistic measurement one can look forward to making in the near future that will add any information and help us disentangle the “other” Majorana phase. Third, it is curious to note that the effect the Majorana phases have on the rate for neutrinoless double beta decay is CP even [@GKM]. While Majorana phases can mediate CP-violating phenomena [@GKM], it seems unlikely that any of them can be realistically studied experimentally in the foreseeable future. For further discussion of CP violation among neutrinos see Ref. [@jenkins]. In spite of all the uncertainty due to our inability to measure the Majorana phases, it is fair to say that we expect to correctly reconstruct several features of the neutrino mass matrix [@reconstruction], especially if the overall mass scale and the neutrino mass hierarchy are determined experimentally. What do we hope to accomplish by reconstructing the neutrino mass matrix?
The answer is that we wish to uncover whether there are new fundamental organizing principles responsible for explaining in a more satisfying way the values of the neutrino masses and the leptonic mixing angles. In other words, we would like to establish whether there is a fundamental reason behind the fact that the $\nu_3$ state contains almost the same amount of $\nu_{\mu}$ and $\nu_{\tau}$, while at the same time containing a relatively small amount of $\nu_e$. Are there flavor (or family) symmetries capable of dynamically distinguishing the different generations of quarks and leptons and, we hope, explaining why there are three quasi-identical particles for each matter field? In the neutrino sector, we are only getting started. We have, for example, identified several textures for the neutrino mass matrix that lead to the currently observed mass-squared differences and mixing angles, and have identified some of the measurements that will allow us to determine which textures best describe Nature. As has already been pointed out, it is not clear whether this is the best avenue to pursue as far as identifying whether there is a deep explanation for the patterns we observe in experiments. For example, it may turn out that we have made a weak-basis choice that renders the job more complicated (it is possible that the mixing angles are “determined by the charged lepton sector” [@fpr; @charged_lepton; @Antusch:2004re]), or that all the structure contained in the neutrino sector is obscured after heavy degrees of freedom are integrated out (as may happen in type-I seesaw models). Nonetheless, we will discuss a few of these textures in order to exemplify some of the measurements (and how precise they should be) that will shed a significant amount of light on the issue of interpreting neutrino masses and mixing angles.
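As a concrete illustration of the reconstruction program, the sketch below (ours; all numerical inputs are hypothetical illustrations, not fits) builds $m_\nu$ from Eq. (\[m\_ab\]) for a normal hierarchy and checks that its $ee$ element reproduces the effective Majorana mass formula quoted earlier:

```python
import numpy as np

# Illustrative inputs: angles in radians, masses in eV, delta set to zero.
t12, t13, t23 = 0.59, 0.15, 0.785
phi1, phi2 = 0.3, 1.1           # Majorana phases (hypothetical values)
m1, m2, m3 = 0.0, np.sqrt(8e-5), np.sqrt(2.5e-3)   # normal hierarchy

c12, s12 = np.cos(t12), np.sin(t12)
c13, s13 = np.cos(t13), np.sin(t13)
c23, s23 = np.cos(t23), np.sin(t23)

# U = V K, with V the standard mixing matrix (delta = 0 for simplicity)
# and K = diag(1, e^{i phi1}, e^{i phi2}) carrying the Majorana phases.
V = np.array([[ c12*c13,                s12*c13,                s13    ],
              [-s12*c23 - c12*s23*s13,  c12*c23 - s12*s23*s13,  s23*c13],
              [ s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13,  c23*c13]])
K = np.diag([1.0, np.exp(1j*phi1), np.exp(1j*phi2)])
U = V @ K

# m_ab = sum_i (U*)_{a i} m_i (U^dagger)_{i b}  =>  m_nu = U* diag(m) U*^T
m_nu = U.conj() @ np.diag([m1, m2, m3]) @ U.conj().T

# Effective Majorana mass from the explicit formula in the text
m_ee = abs(c13**2 * (m1*c12**2 + m2*np.exp(2j*phi1)*s12**2)
           + s13**2 * m3*np.exp(2j*phi2))
print(abs(m_nu[0, 0]), m_ee)    # the two agree
```

Note that $m_\nu$ comes out symmetric, as it must, and that $|m_\nu^{ee}|$ is insensitive to the sign convention ($e^{\pm 2i\phi}$) for the Majorana phases, since the two choices are complex conjugates of one another.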
Arguably the simplest assumption one can make is that there is no symmetry or dynamical principle that explains why leptonic mixing angles are large [@anarchy]. This “flavorless” neutrino flavor model is often referred to as “neutrino mass anarchy” and is, currently, compatible with data [@anarchy; @anarchy_stat]. Curiously, the anarchical hypothesis is not without predictions: it requires that the as-yet-unobserved magnitude of the $U_{e3}$ element of the leptonic mixing matrix satisfy $|U_{e3}|^2>0.01$ at the 95% confidence level (see [@anarchy_stat] for details and a proper definition of this bound). This means that after the next generation of reactor and/or long-baseline experiments analyze their data, we will know whether we can afford a “random” leptonic mixing matrix or not. It should be noted that this model applies only to the leptonic mixing matrix; it has nothing specific to say about the order of magnitude of neutral and charged lepton masses, or their hierarchies. If one assumes that there is a nontrivial texture to the neutrino mass matrix, and that this texture “explains” the observed values of the mixing parameters, there are several completely different options. Some are tabulated in Table \[texturetable\], and will be discussed briefly. Before proceeding, however, it is important to explain how these textures should be interpreted. The hypothesis is that, at leading order, the neutrino mass matrix can be parametrized by far fewer than the usual six complex coefficients. These are chosen in such a way that the dominant features of neutrino masses and mixings are explained. These are: (i) $m^2_3$ is either much larger or much smaller than $m_1^2,m_2^2$; this splitting determines the atmospheric mass-squared difference. (ii) The $\nu_e$ content of the $\nu_3$ state is zero. (iii) The $\nu_{\mu}$ and $\nu_{\tau}$ contents of the $\nu_3$ state are similar (or, perhaps, identical).
In order to accommodate the other observed features (like $|U_{e3}|$, the solar mass-squared difference and the solar angle) one includes sub-leading effects that violate the leading-order structure. The structure of these sub-leading effects determines the “predictions" for the observables that are not determined by the leading-order mass texture. In Table \[texturetable\], we list the predictions obtained in the case of a structureless sub-leading mass matrix, [*i.e.,*]{} one proportional to the anarchical texture [@theta_23_andre]. In the case of a structured sub-leading mass matrix, expectations may vary significantly from those quoted in Table \[texturetable\].

Table \[texturetable\] (columns: Case; Texture; Hierarchy; $|U_{e3}|$; $|\cos2\theta_{23}|$ (n.s.); $|\cos2\theta_{23}|$; Solar Angle):

- A: $\frac{\sqrt{\Delta m^2_{13}}}{2} \left(\begin{array}{ccc}0&0&0\\ 0&1&1 \\ 0&1&1\end{array}\right)$; Normal; $\sqrt{\frac{\Delta m^2_{12}}{\Delta m^2_{13}}}$; ${\cal{O}}(1)$; $\sqrt{\frac{\Delta m^2_{12}}{\Delta m^2_{13}}}$; ${\cal{O}}(1)$.
- B: $\sqrt{\Delta m^2_{13}}\left(\begin{array}{ccc}1&0&0\\ 0&\frac{1}{2}&-\frac{1}{2} \\ 0&-\frac{1}{2}&\frac{1}{2}\end{array}\right)$; Inverted; $\frac{\Delta m^2_{12}}{|\Delta m^2_{13}|}$; –; $\frac{\Delta m^2_{12}}{|\Delta m^2_{13}|}$; ${\cal{O}}(1)$.
- C: $\frac{\sqrt{\Delta m^2_{13}}}{\sqrt{2}} \left(\begin{array}{ccc}0&1&1\\ 1&0&0\\ 1&0&0\end{array}\right)$; Inverted; $\frac{\Delta m^2_{12}}{|\Delta m^2_{13}|}$; ${\cal{O}}(1)$; $\frac{\Delta m^2_{12}}{|\Delta m^2_{13}|}$; $|\cos2\theta_{12}| \sim \frac{\Delta m^2_{12}}{|\Delta m^2_{13}|}$.
- Anarchy: $\sqrt{\Delta m^2_{13}}\left(\begin{array}{ccc}1&1&1\\ 1&1&1 \\ 1&1&1\end{array}\right)$; Normal; $>0.1$; ${\cal{O}}(1)$; –; ${\cal{O}}(1)$.

: Different leading-order neutrino mass textures and their “predictions" for various observables. The fifth column indicates the “prediction" for $|\cos2\theta_{23}|$ when there is no symmetry relating the different order-one entries of the leading-order texture (‘n.s.’ stands for ‘no structure’, meaning that the entries of the matrices in the second column should all be multiplied by an order-one coefficient), while the sixth column indicates the “prediction" for $|\cos2\theta_{23}|$ when the coefficients of the leading-order texture are related as prescribed by the matrix in the second column. See text for details. One may argue that the anarchical texture prefers but does not require a normal mass hierarchy.[]{data-label="texturetable"}

Case A is characterized by large entries in the “$\mu-\tau$” sub-matrix, and small entries in the “$e$” column and row. The determinant of the “$\mu-\tau$” sub-matrix is constrained to be small in order to guarantee a hierarchy between the two independent mass-squared differences. The hierarchy of the neutrino masses is predicted to be normal ($m_3^2>m_2^2>m_1^2$).
Maximal atmospheric mixing can be imposed at the leading order by requiring that the “$\mu-\tau$" sub-matrix is democratic. The introduction of sub-leading effects leads to a “large" $|U_{e3}|$ and $\cos2\theta_{23}$, of order the square-root of the ratio of mass-squared differences, which is ${\cal{O}}(0.1)$. If this texture is indeed realized in nature, we expect to observe a nonzero $|U_{e3}|$ and a deviation of the atmospheric mixing from maximal at next-generation experiments. It may prove difficult to distinguish between case A and the anarchical texture via neutrino oscillation measurements alone. One potential discriminant seems to be the expected rate for neutrinoless double beta decay. Case B is characterized by small “$e-\mu$" and “$e-\tau$" entries, a small determinant of the “$\mu-\tau$” submatrix, and the constraint that the trace of $m_{\nu}$ is close to $2m_{\nu}^{ee}$. In this case, one predicts an inverted mass hierarchy ($m_2^2>m_1^2\gg m_3^2$), and both $|U_{e3}|$ and $\cos2\theta_{23}$ are constrained to be of order the ratio of the mass-squared differences $({\cal{O}}(0.01))$. The system is constrained enough that it is hard to obtain a much larger deviation of the atmospheric angle from maximal or a much larger $|U_{e3}|$, while much smaller ones are, of course, obtainable if the sub-leading contributions are structured. If case B is indeed realized in Nature, there is a good chance that no $|U_{e3}|$ effects will be observed in next-generation oscillation experiments, while precise measurements of the atmospheric mixing angle will remain consistent with $\theta_{23}=\pi/4$ (equivalently, if a large deviation of the atmospheric angle is detected this texture will be ruled out). On a more positive note, one should expect a “large" rate for neutrinoless double beta decay ($m_{\nu}^{ee}\sim\sqrt{\Delta m^2_{13}}$). 
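For orientation, the two scales invoked above can be made numerical. With illustrative round values $\Delta m^2_{12}\sim 7\times 10^{-5}~{\rm eV^2}$ and $\Delta m^2_{13}\sim 2.5\times 10^{-3}~{\rm eV^2}$ (our choice, not numbers quoted in the text):

```python
import numpy as np

# Illustrative central values (eV^2)
dm2_sol, dm2_atm = 7.2e-5, 2.5e-3

case_A_scale = np.sqrt(dm2_sol / dm2_atm)  # "large" |U_e3|, |cos 2theta_23| scale of case A
case_B_scale = dm2_sol / dm2_atm           # suppressed scale of cases B and C

print(f"sqrt(ratio) ~ {case_A_scale:.2f}   -> O(0.1)")
print(f"ratio       ~ {case_B_scale:.3f} -> O(0.01)")
```

This is why a $|U_{e3}|$ sensitivity at the 0.1 level probes case A, while cases B and C predict effects an order of magnitude below that.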
A texture naïvely similar to case B is obtained by changing the sign of the “$\mu-\tau$” sub-matrix, such that the trace of the leading-order mass matrix is close to zero. This case, however, is disfavored by solar data, as the solar angle is constrained to be too small (for a more detailed discussion see, for example, [@theta_23_andre]). Case C is characterized by “$e-\mu$" and “$e-\tau$" entries which are much larger than all the other ones (set to zero at leading order) and thus corresponds to the case of approximate $L_e - L_{\mu} - L_{\tau}$ symmetry [@inverted]. It leads to an inverted mass hierarchy, and a close-to-bi-maximal [@bi-maximal] leading-order mixing matrix. The solar angle is (at leading order) exactly maximal, while the atmospheric angle is generically large, becoming maximal in the limit when $m_{\nu}^{e\mu}=\pm m_{\nu}^{e\tau}$ (for real $m_{\nu}^{e\alpha}$). Sub-leading corrections to case C which are responsible for splitting the two heavy leading-order mass eigenstates will induce a $|U_{e3}|$, $\cos2\theta_{23}$ and $\cos2\theta_{12}$ of order the ratio of the mass-squared differences $({\cal{O}}(0.01))$, or smaller. Hence, similarly to case B, it seems unlikely that $U_{e3}$-effects will be measured at next-generation experiments. This scenario is currently disfavored, as it also predicts a solar angle $\theta_{12}$ very close to $\pi/4$ [@lms; @fpr; @theta_23_andre]. One should not conclude, however, that scenarios based on “perturbations" around bi-maximal mixing are ruled out. A related issue has been discussed in detail recently by the authors of [@fpr; @charged_lepton].
A realistic three-generation extension of the mass matrix in case (A) that leads to large but not maximal solar neutrino mixing is given by $$\begin{aligned} M_\nu~=~\frac{\sqrt{\Delta m^2_A}}{2}\left(\begin{array}{ccc}c\epsilon^n &b\epsilon &d\epsilon\\ b\epsilon & 1+c\epsilon & -1 \\ d\epsilon & -1 & 1+\epsilon\end{array}\right)~.\end{aligned}$$ Note that if $b=d$ and $c=1$, the atmospheric neutrino mixing is maximal and the mixing parameter $\theta_{13}=0$ [@mutau]. In this case the mass matrix has $\mu-\tau$ interchange symmetry. Depending on how this symmetry is broken, the parameter $\theta_{13}$ is either of order $\sqrt{\frac{\Delta m^2_{\odot}}{\Delta m^2_A}}$ (for $c\neq 1$) or $\frac{\Delta m^2_{\odot}}{\Delta m^2_A}$ (for $b\neq d$) [@rabianjan]. Therefore, a search for $\theta_{13}$ down to the level of $0.01$ will be of great help in determining the structure of the neutrino mass matrix for the case of normal hierarchy. In both these cases, there is an important correlation between $\theta_{13}$ and $\theta_A-\frac{\pi}{4}$ [@rabianjan]. In general, a neutrino mixing matrix originating from a $\mu-\tau$ symmetric mass matrix has the following structure (for simplicity, we did not include here the Majorana phases) $$\begin{aligned} U = \left(\begin{array}{ccc} \cos \theta_{12} & \sin \theta_{12} & 0 \\[0.2cm] -\frac{\sin \theta_{12}}{\sqrt{2}} & \frac{\cos \theta_{12}}{\sqrt{2} } & \frac{1}{\sqrt{2}} \\[0.2cm] \frac{ \sin \theta_{12}}{\sqrt{2}} & -\frac{ \cos \theta_{12}}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\[0.3cm] \end{array} \right)~. \end{aligned}$$ Note that the mass spectrum of the neutrinos is not predicted by the $\mu-\tau$ symmetry. Depending on the value of $\theta_{12}$, several interesting mixing schemes can arise: if $\theta_{12} = \pi/4$ then we have bi–maximal mixing. However, as mentioned, the observed deviation from $\pi/4$ is rather large.
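The statement that exact $\mu-\tau$ interchange symmetry ($b=d$, $c=1$) forces $\theta_{13}=0$ can be checked by direct diagonalization. The sketch below builds the texture above (the parameter values $\epsilon=0.2$, $n=2$ are our own illustrative choices) and reads off $|U_{e3}|$ from the heaviest mass eigenstate:

```python
import numpy as np

def theta13(M):
    """|U_e3| of a real symmetric mass matrix: diagonalize and take the
    electron-row component of the heaviest eigenvector (nu_3)."""
    w, v = np.linalg.eigh(M)
    return abs(v[0, np.argmax(np.abs(w))])

def M_nu(b, d, c, eps=0.2, n=2):
    # Texture of the equation above, in units of sqrt(dm2_A)/2 (illustrative eps)
    return np.array([[c*eps**n, b*eps,      d*eps  ],
                     [b*eps,    1 + c*eps, -1.0    ],
                     [d*eps,   -1.0,        1 + eps]])

print(theta13(M_nu(b=1, d=1, c=1)))    # mu-tau symmetric: theta13 = 0
print(theta13(M_nu(b=1, d=1, c=1.3)))  # c != 1 breaking: theta13 != 0
print(theta13(M_nu(b=1, d=1.3, c=1)))  # b != d breaking: theta13 != 0
```

In the symmetric limit the heaviest eigenvector is exactly $(0,1,-1)/\sqrt{2}$, so its electron component vanishes identically; either breaking pattern lifts this zero.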
Much closer to current data is the so–called tri–bimaximal mixing scheme [@tbm0], corresponding to $\sin^2 \theta_{12} = 1/3$ and leading to the following often–studied mixing matrix: $$\begin{aligned} U = \left( \begin{array}{ccc} \sqrt{\frac{2}{3}} & \sqrt{\frac{1}{3}} & 0 \\[0.2cm] -\sqrt{\frac{1}{6}} & \sqrt{\frac{1}{3}} & \sqrt{\frac{1}{2}} \\[0.2cm] \sqrt{\frac{1}{6}} & -\sqrt{\frac{1}{3}} & \sqrt{\frac{1}{2}} \end{array} \right)~. \end{aligned}$$ Models which give rise to such a matrix (for some recent attempts see [@tbm1]) are typically quite intricate and not as straightforward to construct as models leading to bi–maximal mixing. There are other viable neutrino mass textures, including some that lead to degenerate neutrino masses. We refer readers to the literature for a more thorough discussion ([@barger; @king] and references therein). The point we wish to emphasize here is that the amount of information we have concerning neutrino masses and leptonic mixing is still very limited. This is reflected in the fact that too many different hypotheses can be raised in order to “explain" the same set of observables. The situation is bound to change in the near future, and there is hope that the data will “select" one specific neutrino mass matrix. Our job will then be to interpret what Nature is trying to “say" through $m_{\nu}$. A more accurate determination of a few observables will already shed a significant amount of light on the currently obscure picture we are trying to obtain: (i) what is the neutrino mass hierarchy? (ii) is $|U_{e3}|$ larger than 0.1? (iii) is $|\cos2\theta_{23}|>0.1$? All three of these can be answered in a next-generation $\nu_{\mu}\to\nu_{e}$ long-baseline accelerator experiment, while the second one can be addressed by a next-generation reactor neutrino experiment (with a baseline of ${\cal{O}}$(1 km)).
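As a quick numerical check, the tri–bimaximal matrix above is orthogonal and reproduces $\sin^2\theta_{12}=1/3$, $\sin^2\theta_{23}=1/2$ and $U_{e3}=0$:

```python
import numpy as np

# Tri-bimaximal mixing matrix, exactly as displayed above
U_tbm = np.array([
    [ np.sqrt(2/3),  np.sqrt(1/3), 0.0         ],
    [-np.sqrt(1/6),  np.sqrt(1/3), np.sqrt(1/2)],
    [ np.sqrt(1/6), -np.sqrt(1/3), np.sqrt(1/2)],
])

assert np.allclose(U_tbm @ U_tbm.T, np.eye(3))      # unitary (orthogonal, being real)
sin2_12 = U_tbm[0, 1]**2 / (1.0 - U_tbm[0, 2]**2)   # = 1/3
sin2_23 = U_tbm[1, 2]**2 / (1.0 - U_tbm[0, 2]**2)   # = 1/2
print(sin2_12, sin2_23, U_tbm[0, 2])
```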
Of course, in order to be sure we are on the right track we also need to (iv) determine that lepton number is not a conserved symmetry. $\boldsymbol{\beta\beta_{0\nu}}$-Decay and CP Violation {#sec:0nubbCP} ======================================================= $\boldsymbol{\beta\beta_{0\nu}}$ decay -------------------------------------- In this section, we focus our attention on what we can learn from neutrinoless double beta decay experiments. As already alluded to in the introduction, in a given theory, neutrinoless double beta decay can arise from two sources: (i) a neutrino Majorana mass and/or (ii) lepton number violating interactions. While the absence of a signal in a $\beta\beta_{0\nu}$ experiment will constrain both sources (and the associated theories), a positive signal cannot necessarily be considered as evidence for one or the other exclusively. For instance, one must supplement the results of a $\beta\beta_{0\nu}$ experiment with collider data, such as from the LHC or a possible $e^+e^-$ machine, to obtain definitive information regarding the source. Alternatively, one may conduct the double beta decay experiment with different nuclei; if the matrix elements happen to differ substantially for the two sources, one may be able to disentangle them. This must be kept in mind when interpreting any positive signal for $\beta\beta_{0\nu}$ decay. Below, we discuss what we can learn about neutrino masses and mixings, once it is established that the source of the positive signal is the Majorana mass for the neutrino. The experiments with solar and atmospheric neutrinos and with reactor antineutrinos have provided data on $\theta_{12}$, $\theta_{23}$ and $\theta_{13}$, and on the neutrino mass squared differences driving the solar and atmospheric neutrino oscillations, $\Delta m^2_{12}$ and $\Delta m^2_{13}$. Future oscillation experiments will improve considerably the precision on these basic parameters.
However, these experiments are insensitive to the nature of massive neutrinos $\nu_j$, which can be Dirac or Majorana particles [@BHP80; @Lang87] (see also, e.g., [@BiPet87]). They cannot give information on the absolute scale of neutrino masses (i.e., on the value of $m_1$), or on the two Majorana CP violation phases — the latter do not enter into the expressions for the probabilities of flavor neutrino oscillations [@BHP80]. Determining the nature of massive neutrinos and obtaining information on the absolute neutrino mass scale is one of the fundamental problems in the studies of neutrino mixing. Neutrinos are predicted to be Majorana particles in the seesaw model of neutrino mass generation. This model gives a natural explanation of the smallness of the neutrino masses and, through the leptogenesis theory, provides an explanation of the observed baryon asymmetry in the Universe, which is thus linked to the existence of neutrino mixing. The only experiments which have the potential of establishing the Majorana nature of massive neutrinos are the neutrinoless double beta decay experiments searching for the process $(A,Z) \rightarrow (A,Z+2) + e^- + e^-$ (see, e.g., [@BiPet87; @ElliotVogel02]). The observation of $\beta\beta_{0\nu}$-decay and the measurement of the corresponding $\beta\beta_{0\nu}$-decay rate with sufficient accuracy would not only be a proof that the total lepton charge is not conserved in nature, but might also provide unique information on i) the type of the neutrino mass spectrum, ii) the absolute scale of neutrino masses, and iii) the values of the Majorana CP violation phases. Let us add that in supersymmetric theories with the seesaw mechanism of neutrino mass generation, the rates of the lepton flavor violating decays $\mu \rightarrow e + \gamma$ and $\tau \rightarrow \mu + \gamma$ can be sizable (e.g., [@ihl]) and may depend on the Majorana CP violating phases in the lepton mixing matrix (see, e.g., [@RaidalJE; @PPY]).
Furthermore, the values of the Majorana phases can be important for the stability under RGE running of the neutrino mass and mixing parameters, see Sec. \[sec:RGE\]. Let us recall that the SK atmospheric neutrino and K2K data do not allow one to determine the sign of $\Delta m^2_{\rm A}$. This implies that if we identify $\Delta m^2_{\rm A}$ with $\Delta m^2_{13}$ in the case of 3-neutrino mixing, one can have $\Delta m^2_{13} > 0$ or $\Delta m^2_{13} < 0$. The two possibilities correspond to two different types of neutrino mass spectrum: with normal hierarchy, $m_1 < m_2 < m_3$, and with inverted hierarchy, $m_3 < m_1 < m_2$. In the case of strong inequalities between the masses, the spectra are called [*normal hierarchical*]{} (NH) and [*inverted hierarchical*]{} (IH). The NH and IH spectra correspond to $m_1 \ll 0.02$ eV and $m_3 \ll 0.02$ eV, respectively. If $m_1 \cong m_2 \cong m_3 \cong m_0$ and $m_j^2 \gg |\Delta m^2_{\rm A}|,\Delta m^2_\odot$, the spectrum is [*quasi-degenerate*]{} (QD). The QD spectrum is realized if $m_{1,2,3} > 0.20$ eV, which roughly requires that the largest mass difference be no more than about 10% of the common mass.
Under the assumptions of (1) 3-neutrino mixing, for which we have compelling evidence from the experiments with solar and atmospheric neutrinos and from the KamLAND experiment, (2) massive neutrinos $\nu_j$ being Majorana particles, and (3) $\beta\beta_{0\nu}$-decay generated [*only by the (V-A) charged current weak interaction via the exchange of the three Majorana neutrinos $\nu_j$*]{}, the effective Majorana mass in $\beta\beta_{0\nu}$-decay of interest is given by (see, e.g., [@BiPet87; @BPP1]): $$\langle m \rangle_{eff} = \left| m_1 |U_{e 1}|^2 + m_2 |U_{e 2}|^2~e^{2i\phi_{1}} + m_3 |U_{e 3}|^2~e^{2i\phi_2} \right|~, \label{effmass2}$$ where $U_{ej}$, $j=1,2,3$, are the elements of the first row of the lepton mixing matrix $U$, $m_j > 0$ is the mass of the Majorana neutrino $\nu_j$, and $\phi_{1}$ and $\phi_{2}$ are the two Majorana CP violating phases [@Valle; @Lang87]. In the case of CP conservation we have $e^{2i\phi_{1,2}} \equiv \eta_{21(31)}= \pm 1$, $\eta_{ij}$ being the relative CP parity of the neutrinos $\nu_i$ and $\nu_j$. One can express [@SPAS94] two of the three neutrino masses, say, $m_{2,3}$, in terms of the third mass, $m_1$, and of $\Delta m^2_\odot$ and $\Delta m^2_{\rm A}$, while the elements $|U_{ej}|$ can be expressed in terms of $\theta_\odot$ and $\theta$ (a concise discussion of the relevant formalism can be found, e.g., in Refs. [@BPP1; @fsv; @PPVenice03]). Within the convention employed in the present study in both cases of neutrino mass spectrum with normal and inverted hierarchy one has: $\Delta m^2_\odot = \Delta m_{12}^2 > 0$, and $m_2 = \sqrt{m_1^2 + \Delta m^2_\odot}$. In the case of normal hierarchy, $\Delta m^2_{\rm A} = \Delta m_{13}^2 > 0$ and $m_3 = \sqrt{m_1^2 + \Delta m^2_{\rm A}}$, while if the spectrum is with inverted hierarchy, $\Delta m^2_{\rm A} = \Delta m_{23}^2 > 0$ and thus $m_1 = \sqrt{m_3^2 + \Delta m^2_{\rm A}- \Delta m^2_\odot}$. 
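The mass reconstruction just described, together with Eq. (\[effmass2\]), is straightforward to put into code. The sketch below uses illustrative round oscillation parameters as defaults (our own choices, not the fitted values used in the quoted analyses):

```python
import numpy as np

def m_eff(m_min, phi1, phi2, dm2_sol=7.2e-5, dm2_atm=2.0e-3,
          sin2_sol=0.30, sin2_13=0.0, inverted=False):
    """Effective Majorana mass of Eq. (effmass2), with the masses reconstructed
    from m_min and the two splittings as in the text (all masses in eV).
    Default oscillation parameters are illustrative, not fitted values."""
    if inverted:
        m3 = m_min
        m1 = np.sqrt(m3**2 + dm2_atm - dm2_sol)
        m2 = np.sqrt(m1**2 + dm2_sol)
    else:
        m1 = m_min
        m2 = np.sqrt(m1**2 + dm2_sol)
        m3 = np.sqrt(m1**2 + dm2_atm)
    Ue1sq = (1.0 - sin2_13) * (1.0 - sin2_sol)   # |U_e1|^2
    Ue2sq = (1.0 - sin2_13) * sin2_sol           # |U_e2|^2
    return abs(m1*Ue1sq + m2*Ue2sq*np.exp(2j*phi1) + m3*sin2_13*np.exp(2j*phi2))

print(m_eff(0.0, 0.0, 0.0))                     # NH, CP conserving: a few meV
print(m_eff(0.0, np.pi/2, 0.0, inverted=True))  # IH lower edge: ~ sqrt(dm2_atm) cos(2 theta_sol)
print(m_eff(0.3, 0.0, 0.0))                     # QD: ~ m_0
```

Scanning the phases $\phi_{1,2}$ at fixed $m_{min}$ reproduces the qualitative pattern of Tabs. \[tabmeff1\] and \[tabmeff2\]: a maximal NH value of a few meV as $m_{min}\to 0$, an IH lower edge set by the cancellation between the two heavy states, and $\langle m\rangle_{eff}\sim m_0$ for the QD spectrum.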
For both types of hierarchy, the following relations hold: $|U_{\mathrm{e} 1}|^2 = \cos^2\theta_{\odot} (1 - |U_{\mathrm{e} 3}|^2)$, $|U_{\mathrm{e} 2}|^2 = \sin^2\theta_{\odot} (1 - |U_{\mathrm{e} 3}|^2)$, and $|U_{\mathrm{e} 3}|^2 \equiv \sin^2\theta$. We denote the smallest neutrino mass as $m_{min}$ and we have $m_{min}=m_{1 \,(3)}$ for the case of normal (inverted) hierarchy.

Table \[tabmeff1\] (values of $\langle m\rangle_{eff}$ in meV):

  $\sin^2 \theta$   ${\langle m\rangle_{eff}}_{\rm max}^{\rm NH}$   ${\langle m\rangle_{eff}}_{\rm min}^{\rm IH}$   ${\langle m\rangle_{eff}}_{\rm max}^{\rm IH}$   ${\langle m\rangle_{eff}}_{\rm min}^{\rm QD}$
  ----------------- ----------------------------------------------- ----------------------------------------------- ----------------------------------------------- -----------------------------------------------
  0.0               2.6 (2.6)                                       19.9 (17.3)                                     50.5 (44.2)                                     79.9
  0.02              3.6 (3.5)                                       19.5 (17.0)                                     49.5 (43.3)                                     74.2
  0.04              4.6 (4.3)                                       19.1 (16.6)                                     48.5 (42.4)                                     68.5

Table \[tabmeff2\] (values of $\langle m\rangle_{eff}$ in meV):

  $\sin^2 \theta$   ${\langle m\rangle_{eff}}_{\rm max}^{\rm NH}$   ${\langle m\rangle_{eff}}_{\rm min}^{\rm IH}$   ${\langle m\rangle_{eff}}_{\rm max}^{\rm IH}$   ${\langle m\rangle_{eff}}_{\rm min}^{\rm QD}$
  ----------------- ----------------------------------------------- ----------------------------------------------- ----------------------------------------------- -----------------------------------------------
  0.0               3.7 (3.7)                                       10.1 (8.7)                                      56.3 (50.6)                                     47.9
  0.02              4.7 (4.6)                                       9.9 (8.6)                                       55.1 (49.6)                                     42.8
  0.04              5.5 (5.3)                                       11.4 (9.9)                                      54.0 (48.6)                                     45.4

The problem of obtaining the allowed values of $\langle m\rangle_{eff}$ given the constraints on the relevant parameters following from the neutrino oscillation data has been first studied in Ref. [@SPAS94] and subsequently in a large number of papers, see, e.g., [@BGKP96; @BGGKP99; @BPP1; @bbpapers1; @bbpapers2; @bbpapers3]. Detailed analyses were performed, e.g., in Refs. [@BPP1; @noMajresp; @PPVenice03; @Lisi], and in particular in Ref. [@PPaddendum], where the allowed values of $\Delta m^2_{\rm A}$, $\Delta m^2_\odot$, $\theta_\odot$ and $\theta$, obtained from the most recent neutrino oscillation data, were used. The results are summarized in Tabs. \[tabmeff1\] and \[tabmeff2\], and in Fig.
\[figmeff2\]. ![ The dependence of $\langle m\rangle_{eff}$ on $m_{min}$ in the case of the LMA-I solution, for normal and inverted hierarchy, and for the $90 \%$ C.L. allowed regions of $\Delta m^2_{\odot}$ and $\sin^2 \theta_\odot$ found in Ref. [@SNO3BCGPR] and of $\Delta m^2_{\rm A}$ in Ref. [@Fogliatm0308055], and a fixed value of $\sin^2 \theta = 0.0 (0.02) [0.04]$ in the upper (middle) \[lower\] panel. In the case of CP conservation, the allowed values of $\langle m\rangle_{eff}$ are constrained to lie: for i) normal hierarchy and the middle and lower panels (upper panel) - in the medium-gray and light-gray regions [*a)*]{} between the two lower thick solid lines (between the two lower thick solid lines) if $\eta_{21} = \eta_{31} = 1$, [*b)*]{} between the two long-dashed lines (between the two lower thick solid lines) if $\eta_{21} = - \eta_{31} = 1$, [*c)*]{} between the three thick dash-dotted lines and the axes (between the dash-dotted lines and the axes) if $\eta_{21} = - \eta_{31} = - 1$, [*d)*]{} between the three thick short-dashed lines and the axes (between the dash-dotted lines and the axes) if $\eta_{21} = \eta_{31} = - 1$; and for ii) inverted hierarchy and the middle and lower panels (upper) - in the light-gray regions [*a)*]{} between the two upper thick solid lines (between the two upper thick solid lines) if $\eta_{32} = \eta_{31} = \pm 1$, [*b)*]{} between the dotted and the thin dash-dotted lines (between the dotted and the thick short-dashed lines) if $\eta_{32} = - \eta_{31} = 1$, [*c)*]{} between the dotted and the upper thick short-dashed lines (between the dotted and the thick short-dashed lines) if $\eta_{32} = - \eta_{31} = - 1$. In the case of CP violation, the allowed regions for $\langle m\rangle_{eff}$ cover all the gray regions. Values of $\langle m\rangle_{eff}$ in the dark gray regions signal CP violation.(From Ref. [@PPaddendum]). []{data-label="figmeff2"}](meffgray90cl.eps){height="12.8cm" width="8cm"} In Fig.  
\[figmeff2\] (taken from Ref. [@PPaddendum]) we show the allowed ranges of values of $\langle m\rangle_{eff}$ as a function of $m_{min}$ for the cases of NH and IH spectra. The predictions for $\langle m\rangle_{eff}$ are obtained using the values of $\Delta m^2_\odot$, $\theta_\odot$ and $\Delta m^2_{\rm A}$ allowed at 90% C.L. (Fig. \[figmeff2\]) from Refs. [@SNO3BCGPR] and [@Fogliatm0308055], and for three fixed values of $\sin^2 \theta$. The existence of significant lower bounds on $\langle m\rangle_{eff}$ in the cases of IH and QD spectra [@PPSNO2bb], $\langle m\rangle_{eff}^{\rm{IH}} \geq 10 \,\mathrm{meV}$ and $\langle m\rangle_{eff}^{\rm{QD}} \geq 43 \,\mathrm{meV}$, respectively [@PPaddendum], which lie either partially (IH spectrum) or completely (QD spectrum) within the range of sensitivity of the next generation of $\beta\beta_{0\nu}$-decay experiments, is one of the most important features of the predictions of $\langle m\rangle_{eff}$. The indicated minimal values are given, up to small corrections, by $\sqrt{\Delta m^2_{\rm A}} \cos2\theta_{\odot}$ and $m_0 \cos2\theta_{\odot}$, respectively. According to the most recent combined analyses of the solar and reactor neutrino data, including the latest SNO and KamLAND results (see, e.g., [@BCGPRKL2]), i) the possibility of $\cos2\theta_{\odot} = 0$ is excluded at more than 6 s.d., ii) the best fit value of $\cos2\theta_{\odot}$ is $\cos2\theta_{\odot} = 0.40$, and iii) at 95% C.L. one has, for $\sin^2\theta = 0~(0.02)$, $\cos{2\theta_{\odot}} \geq 0.27~(0.32)$. The quoted results on $\cos{2\theta_{\odot}}$, together with the range of possible values of $\Delta m^2_{\rm A}$ and $m_0$, lead to the conclusion that significant and robust lower bounds on $\langle m\rangle_{eff}$ exist in the cases of the IH and QD spectra [@PPaddendum; @Carlosbb03]. At the same time one can [*always*]{} have $\langle m\rangle_{eff} = 0$ in the case of neutrino mass spectrum with normal hierarchy [@PPW]. It follows from Tabs.
\[tabmeff1\] and \[tabmeff2\] that in this case $\langle m\rangle_{eff}$ cannot exceed $5.5$ meV. This implies that the maximal value of $\langle m\rangle_{eff}$ in the case of neutrino mass spectrum with normal hierarchy is considerably smaller than the minimal values of $\langle m\rangle_{eff}$ for the inverted hierarchy and quasi-degenerate neutrino mass spectra. This opens the possibility of obtaining information about the type of neutrino mass spectrum from a measurement of $\langle m\rangle_{eff} \neq 0$, or from a sufficiently stringent upper bound on $\langle m\rangle_{eff}$. In particular, a positive result in the future generation of $\beta\beta_{0\nu}$-decay experiments with $\langle m\rangle_{eff} > 10 \ $meV would imply that the NH spectrum is excluded. The uncertainty in the relevant nuclear matrix elements[^5] and prospective experimental errors in the values of the oscillation parameters, in $\langle m\rangle_{eff}$, and, for the case of the QD spectrum, in $m_0$, weaken, but do not invalidate, the reported results (see, e.g., Refs. [@PPRSNO2bb]). If the neutrino mass spectrum turned out to be of the QD type, a measurement of $\langle m\rangle_{eff}$ in a $\beta\beta_{0\nu}$-decay experiment and of $m_0$ in the KATRIN experiment [@Osipowicz:2001sq] could be used, in particular, to check the validity of the light Majorana neutrino exchange mechanism for the $\beta\beta_{0\nu}$-decay and to search for indications of contributions from other types of mechanisms (see, e.g., [@moh1; @bb0nunmi]). It follows from Fig. \[figmeff2\] that a measurement of $\langle m\rangle_{eff} \geq 10$ meV would either i) determine a relatively narrow interval of possible values of the lightest neutrino mass $m_{min}$, or ii) establish an upper limit on the possible values of $m_{min}$.
If a sufficiently stringent upper limit on $\langle m\rangle_{eff}$ is experimentally obtained below 100 meV, this would lead to a significant upper limit on the possible value of $m_{min}$. The possibility of establishing CP violation in the lepton sector due to Majorana CP violating phases has been studied in [@PPW; @bbpapers3] and in much greater detail in Ref. [@noMajresp]. It was found to be very challenging[^6]: it requires quite accurate measurements of $\langle m\rangle_{eff}$ and of $m_1$, and holds only for a limited range of values of the relevant parameters. For the IH and the QD spectra, which are of interest, the “just CP violation” region [@BPP1] (an experimental point in this region would unambiguously signal CP violation associated with Majorana neutrinos) is larger for smaller values of $\cos 2 \theta_\odot$. More specifically, proving that CP violation associated with Majorana neutrinos takes place requires, in particular, a relative experimental error on the measured value of $\langle m\rangle_{eff}$ no bigger than (15 – 20)%, a “theoretical uncertainty” in the value of $\langle m\rangle_{eff}$ due to an imprecise knowledge of the corresponding nuclear matrix elements smaller than a factor of 2, a value of $\tan^2\theta_{\odot} \geq 0.55$, and values of the relevant Majorana CP violating phases ($2\phi_{1,2}$) typically within the ranges of $\sim (\pi/2 - 3\pi/4)$ and $\sim (5\pi/4 - 3\pi/2)$ [@noMajresp]. The MNSP Lepton Mixing Matrix and CP Violation in the Lepton Sector ------------------------------------------------------------------- It is well known that, in general, in gauge theories with massive neutrinos the MNSP lepton mixing matrix results from a product of two matrices: $$U = U_{\ell}^\dagger \, U_\nu~, \label{UPMNSCPV}$$ where $U_{\ell}$ and $U_\nu$ are two $3\times 3$ unitary matrices: $U_{\ell}$ arises from the diagonalization of the charged lepton mass matrix, while $U_\nu$ diagonalizes the neutrino Majorana mass term.
Any $3\times 3$ unitary matrix contains 3 mixing angles and 6 phases and can be written as [@wir] (see also [@SKing02]): $$\label{eq:unit} U = e^{i \Phi} \, P \, \tilde{U} \, Q~,$$ where $P \equiv {\rm diag} (1,e^{i \phi},e^{i \omega})$ and $Q \equiv {\rm diag} (1,e^{i \rho},e^{i \sigma}) $ are diagonal phase matrices having 2 phases each, and $\tilde{U}$ is a unitary “CKM-like” matrix containing 1 phase and 3 angles. The charged lepton Dirac mass term, $M_{\ell}$, is diagonalized by a bi-unitary transformation: $$M_{\ell} = U_L \, M_{\ell}^{\rm diag} \, U_R^\dagger ~,$$ where $U_{L,R}$ are $3\times 3$ unitary matrices and $M_{\ell}^{\rm diag}$ is the diagonal matrix containing the masses of the charged leptons. Casting $U_{L,R}$ in the form (\[eq:unit\]), i.e., $U_{L,R} = e^{i \Phi_{L,R}} \, P_{L,R} \, \tilde{U}_{L,R} \, Q_{L,R}$, we find $$M_{\ell} = e^{i (\Phi_L - \Phi_R)} \, P_L \, \tilde{U}_L \, Q_L \, M_{\ell}^{\rm diag} \, Q_R^\dagger \, \tilde{U}_R^\dagger \, P_R^\dagger~.$$ The term $Q_L \, M_{\ell}^{\rm diag} \, Q_R^\dagger$ contains only 2 relative phases, which can be associated with the right-handed charged lepton fields. The three independent phases in $e^{i (\Phi_L - \Phi_R)} \, P_L$ can be absorbed by a redefinition of the left-handed charged lepton fields. Therefore, $U_{\ell}$ is effectively given by $\tilde{U}_L$ and contains three angles and one phase. The neutrino mass matrix $M_\nu$ is diagonalized via $$M_\nu = U_\nu^\ast \, M_\nu^{\rm diag} \, U_\nu^\dagger ~.$$ The unitary matrix $U_\nu$ can be written in the form (\[eq:unit\]). It is not possible to absorb phases in the neutrino fields, since the neutrino mass term is of Majorana type [@Valle; @Lang87]. Thus, $$\label{eq:pmnscpv} U = U_{\ell}^\dagger \, U_\nu = e^{i \Phi_\nu} \, \tilde{U}_{\ell}^\dagger \, P_\nu \, \tilde{U}_\nu \, Q_\nu ~.$$ The common phase $\Phi_\nu$ has no physical meaning and we will ignore it. Consequently, in the most general case, the elements of $U$ given by Eq.
(\[eq:pmnscpv\]) are expressed in terms of six real parameters and six phases in $\tilde{U}_{\ell}$ and $U_\nu$. Only six combinations of those, the three angles and the three phases of $U$, are observable, in principle, at low energies. Note that the two phases in $Q_\nu$ are “Majorana-like", since they will not appear in the probabilities describing the flavor neutrino oscillations [@Valle; @Lang87]. Note also that if $U_{\ell} = {\mathbbm 1}$, the phases in the matrix $P_\nu$ can be eliminated by a redefinition of the charged lepton fields. If one assumes that, e.g., $\tilde{U}_\nu$ is bimaximal, $$\label{eq:Ubimax} \tilde{U}_\nu \equiv U_{\rm bimax} = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \cr -\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \cr \frac{1}{2} & -\frac{1}{2} & \frac{1}{\sqrt{2}} \end{pmatrix}~,$$ which permits a rather simple explanation of the smallness of $\sin \theta_{13}$ and of the deviation of $\theta_{\odot}$ from $\pi/4$, then $\tilde{U}_\nu$ is real. In this case the three angles and the Dirac phase in the neutrino mixing matrix $U$ will depend in a complicated manner on the three angles and the phase in $\tilde{U}_{\ell}$ and on the two phases in $P_\nu$. The two Majorana phases will depend in addition on the parameters in $Q_\nu$. See [@fpr] for details. It should be emphasized that the form of $U$ given in Eq. (\[eq:pmnscpv\]) is the most general one. A specific model in the framework of which Eq. (\[eq:pmnscpv\]) is obtained might imply symmetries or textures in $M_{\ell}$ and/or $M_\nu$, which will reduce the number of independent parameters in $U_{\ell}^\dagger$ and/or $U_\nu$. In the scheme with three massive Majorana neutrinos under discussion there exist three rephasing invariants related to the three CP violating phases in $U$, $\delta$ and $\phi_{1,2}$ [@CJ85; @PKSP3nu88; @JMaj87; @BrancoLR86; @ASBranco00].
The first is the standard Dirac one $J_{CP}$ [@CJ85], associated with the Dirac phase $\delta$: $$\label{eq:JCP} J_{CP} = {\rm Im} \left\{ U_{e1} \, U_{\mu 2} \, U_{e 2}^\ast \, U_{\mu 1}^\ast \right\}~.$$ It determines the magnitude of CP violation effects in neutrino oscillations [@PKSP3nu88]. Let us note that if $U_{\ell} = {\mathbbm 1}$ and $\tilde{U_\nu}$ is a real matrix, one has $J_{CP} = 0$. The two additional invariants, $S_1$ and $S_2$, whose existence is related to the Majorana nature of massive neutrinos, i.e., to the phases $\phi_1$ and $\phi_2$, can be chosen as [@JMaj87; @ASBranco00] (see also [@BPP1])[^7]: $$S_1 = {\rm Im}\left\{ U_{e1} \, U_{e3}^\ast \right\}~,~~~~ S_2 = {\rm Im}\left\{ U_{e2} \, U_{e3}^\ast \right\}~. \label{eq:SCP}$$ If $S_1 \neq 0$ and/or $S_2 \neq 0$, CP is not conserved due to the Majorana phases $\phi_1$ and/or $\phi_2$. The effective Majorana mass in $\beta\beta_{0\nu}$-decay, $\langle m\rangle_{eff}$, depends, in general, on $S_1$ and $S_2$ [@BPP1] and not on $J_{CP}$. Let us note, however, that even if $S_{1,2} = 0$ (which can take place if, e.g., $|U_{e3}| = 0$), the two Majorana phases $\phi_{1,2}$ can still be a source of CP non-conservation in the lepton sector, provided ${\rm Im}\left\{ U_{e1} \, U_{e2}^\ast \right\} \neq 0$ and ${\rm Im}\left\{ U_{\mu 2} \, U_{\mu 3}^\ast \right\}\neq 0$ [@ASBranco00]. Let us denote the phase in $\tilde{U}_{\ell}$ by $\psi$. We will include it in $\tilde{U}_{\ell}$ in the same way this is done for the phase $\delta$ in the standard parametrization of the $U$ matrix. If we write $P_\nu = {\rm diag} (1,e^{i \phi},e^{i \omega})$ and $Q_\nu \equiv {\rm diag} (1,e^{i \rho},e^{i \sigma})$, the Dirac phase $\delta$ in $U$, which has observable consequences in neutrino oscillation experiments, is determined [*only by the “Dirac phase" in $\tilde{U}_{\nu}$ and the phases $\psi$, $\phi$ and $\omega$*]{}.
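The interplay between the Dirac and Majorana invariants can be made concrete numerically. The sketch below builds $U$ from a standard angle/phase parametrization (our choice of convention, with the Majorana phases attached to the second and third columns as in Eq. (\[effmass2\])) and shows that for $\delta=0$ but nonzero $\phi_{1,2}$ one has $J_{CP}=0$ while $S_{1,2}\neq 0$:

```python
import numpy as np

def U_mnsp(t12, t23, t13, delta, phi1, phi2):
    """Standard-parametrization Dirac part times diag(1, e^{i phi1}, e^{i phi2})
    carrying the Majorana phases. Angle values used below are illustrative."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    ed = np.exp(1j*delta)
    V = np.array([
        [ c12*c13,                   s12*c13,                   s13*np.conj(ed)],
        [-s12*c23 - c12*s23*s13*ed,  c12*c23 - s12*s23*s13*ed,  s23*c13],
        [ s12*s23 - c12*c23*s13*ed, -c12*s23 - s12*c23*s13*ed,  c23*c13],
    ])
    return V @ np.diag([1.0, np.exp(1j*phi1), np.exp(1j*phi2)])

# delta = 0 but nonzero Majorana phases: J_CP vanishes, S_1 and S_2 do not.
U = U_mnsp(0.59, np.pi/4, 0.1, delta=0.0, phi1=0.7, phi2=1.1)
J  = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
S1 = np.imag(U[0, 0] * np.conj(U[0, 2]))
S2 = np.imag(U[0, 1] * np.conj(U[0, 2]))
print(J, S1, S2)
```

Since $S_{1,2}$ are built from a single row of $U$, they survive when the Dirac phase is switched off, which is precisely why $\langle m\rangle_{eff}$ is sensitive to the Majorana phases but not to $J_{CP}$.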
The Majorana phases in $U$ receive contributions also from the two remaining phases $\rho$ and $\sigma$. Allowing the phases $\delta$ and $\phi_{1,2}$ to vary between 0 and $2 \pi$ permits one to constrain (without loss of generality) the mixing angles in $\tilde{U}_{\ell}$, $\theta_{ij}$, to lie between 0 and $\pi/2$. There are interesting specific cases in which there are direct relations between all 3 CP violating phases in the $U$ matrix [@fpr].

Testing Seesaw Models
=====================

Although it is far from clear that the seesaw mechanism [@Yanagida:1980; @Gell-Mann:1980vs; @Glashow:1979vf; @Mohapatra:1980ia] is responsible for neutrino masses, most physicists consider it by far the most elegant mechanism. It fits very well into the big picture of other areas of particle physics, such as supersymmetry and grand unification. It is therefore important to discuss how we can test the seesaw models. Evidently, such tests are indirect, since the right-handed electroweak-singlet neutrinos are much too heavy to be produced at colliders. In this section, we consider two aspects of the seesaw mechanism: (i) indirect signals of the seesaw mechanism in lepton flavor violating processes, which can be measured in the near future; (ii) leptogenesis, which can give us a hint as to the CP violating phases in the lepton sector as well as perhaps the spectrum of the RH neutrinos. The presence of CP violating phases needed for leptogenesis (see Sec. \[sec:Leptogenesis\]) in turn can lead to CP violating low energy observables in the seesaw models. We explore these probes of the seesaw mechanism in this section.

\[sec:LFV\]Lepton Flavor Violation and Lepton Electric Dipole Moments
---------------------------------------------------------------------

Neutrino oscillation experiments have revealed that the violation of flavor symmetry is much greater in the lepton sector than in the quark sector.
We will discuss how this flavor violation manifests itself via the seesaw mechanism in other observable quantities where lepton flavor and/or CP are violated. As we are going to discuss, among the laboratory observables particularly interesting are Lepton Flavor Violating (LFV) decays, like $\mu \rightarrow e \gamma$ and $\tau \rightarrow \mu \gamma$, and Lepton Electric Dipole Moments (LEDM), like $d_e$ and $d_\mu$. BR($\mu \rightarrow e \gamma$) BR($\tau \rightarrow \mu \gamma$) $d_e$\[e cm\] $d_\mu$\[e cm\] -------------------- -------------------------------- ----------------------------------- --------------------- -------------------- present [@present] $< 1.2~ 10^{-11} $ $<1.1~ 10^{-6} $ $ < 1.5~ 10^{-27} $ $< 10^{-18} $ planned [@planned] $< 10^{-14}$ $ < 10^{-8}$ $< 10^{-29(-32)}$ $ < 10^{-24(-26)}$ : Present status and future prospects for LFV decays and LEDM.[]{data-label="experimental"} Searches for LFV decays and for LEDM are experimentally very promising, since the present upper bounds could be strengthened by many orders of magnitude, as summarized in Table \[experimental\]. Their impact on theory is also very promising: finding LFV and LEDM means discovering new low energy physics beyond the SM supplemented with the seesaw [@STP77; @ChengLi80], like e.g. supersymmetry. This can be easily understood by identifying the SM with the operators of dimension $d \le 4$ of a low energy effective theory valid up to a cutoff $\Lambda$. Flavor and CP are accidentally conserved in the leptonic sector of the SM, hence there is no room for neutrino oscillations nor for LFV decays and LEDM. Their possible sources have to be found among the operators of $d>4$: neutrino masses arise from the $d=5$ operator $ \nu^T C^{-1} \nu \langle H^0 \rangle^2 / \Lambda$, while LFV decays and LEDM from the $d=6$ operator $ {\bar \ell} \sigma^{\mu \nu} (1+\gamma_5) \ell F_{\mu \nu} \langle H^0 \rangle / \Lambda^2 $. 
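As a rough numerical illustration (the input mass is a representative atmospheric-scale value, not a quoted measurement), the $d=5$ operator reproduces $m_\nu \sim 0.05$ eV for a cutoff near the seesaw scale:

```python
# Scale of the d=5 operator nu^T C^-1 nu <H0>^2 / Lambda needed for m_nu ~ 0.05 eV
v_wk = 174.0          # Higgs vev in GeV
m_nu = 0.05e-9        # neutrino mass in GeV (~ sqrt of atmospheric mass splitting)

Lambda = v_wk**2 / m_nu   # cutoff scale in GeV, ~ 6e14 GeV
```

This is consistent with the statement below that $\Lambda \sim M_R$ sits just below $10^{15}$ GeV.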
In the seesaw lepton flavour is no longer conserved but $\Lambda \sim M_R \ls 10^{15}$ GeV and, as a consequence, the $d=6$ operator above is so strongly suppressed that its effects are negligibly small [@STP77; @ChengLi80]. However, if additional physics is present at smaller mass scales and if this additional physics violates LF and/or CP, the suppression is milder and LFV decays and LEDM could be raised up to the experimentally interesting range. This enhancement due to new low-energy physics is precisely what happens in low-energy supersymmetry [^8] where, due to loops with sleptons and gauginos, the $d=6$ operator is suppressed by powers of $m_\mathrm{SUSY}$. The experimental limits on LFV decays and LEDM then imply such severe constraints [@constraints] on the amount of LF and CP violations in slepton masses (defined in the basis where charged fermions are diagonal), that one would expect LF and CP to be exact symmetries of the supersymmetry breaking mass terms defined at the appropriate cutoff scale (the Planck scale for supergravity, the messenger mass for gauge mediation, etc). It is important to stress that, if below this scale there are LF and CP violating Yukawa interactions, in the running down to $m_\mathrm{SUSY}$ they nevertheless induce a small amount of LF and CP violations in slepton masses. It is well known that this is the case for the seesaw interactions of the right-handed neutrinos [@bormas] and/or the GUT interactions of the heavy triplets [@barhall]. Remarkably enough, this radiative contribution to the LFV decays and LEDM, which essentially depends on the supersymmetric spectrum and on the pattern of the Yukawa interactions, might be close to or even exceed the present or planned experimental limits. Clearly, this has an impact on seesaw models, possibly embedded also in a GUT framework. In the following we will discuss separately the case of type I and type II seesaw. 
### Type I Seesaw: LFV

For the type I seesaw, in the low energy basis where charged leptons are diagonal, the $ij$ mass term of $L$-sleptons, ${m^2}^{LL}_{ij}$, is the relevant one in the decay $\ell_i \rightarrow \ell_j \gamma$. Assuming for the sake of simplicity the mSUGRA [@mSUGRA] spectrum at $M_{\rm Pl}$, one obtains at the leading log [@bormas] (see also [@Hisano:1995cp]): $${m^2}^{LL}_{ij} = \frac{1}{8 \pi^2} (3 m_0^2 + 2 A_0^2) C_{ij} ~, ~~~ C_{ij} \equiv \sum_k ~ (Y_\nu)_{ik}~ ({Y_\nu})^*_{jk} ~ \ln\frac{M_{\rm Pl}}{M_k} ~,$$ where $Y_\nu = M_\nu^D /v_{\rm wk}$ and $m_0$ and $A_0$ are the universal scalar masses and trilinear couplings at $M_{\rm Pl}$, respectively, and we have chosen the basis where $M_R$ is diagonal. For the full RG results, see [@Petcov:2003zb]. The seesaw model dependence thus resides in $|C_{ij}|$, and an experimental limit on BR$(\ell_i \rightarrow \ell_j \gamma)$ corresponds to an upper bound on $|C_{ij}|$. For $\mu \rightarrow e \gamma$ and $\tau \rightarrow \mu \gamma$ this bound [@isaLFV] is shown in Fig. \[FR\] as a function of the lightest charged slepton and gaugino masses.

*Fig. \[FR\]: upper limits on $|C_{\tau \mu}| \times \frac{BR(\tau \rightarrow \mu \gamma)}{10^{-8}} \frac{20}{\tan \beta}$ (left) and $|C_{\mu e}| \times \frac{BR(\mu \rightarrow e \gamma)}{10^{-14}} \frac{20}{\tan \beta}$ (right) in the ($\tilde M_1$, $m_{\tilde{e}_R}$) plane.*

It has been shown that many seesaw models predict $|C_{\mu e}|$ and/or $|C_{\tau \mu}|$ close to the experimentally accessible range [@modelsLFV; @casasibarra] and, in particular, this might be the case for models based on $U(1)$ flavor symmetries [@U1]. To reduce the uncertainty due to the supersymmetric spectrum, it is interesting to exploit the correlation between LFV decays and the muon $g-2$ [@LFVeg-2] or neutralino dark matter [@LFVeDM].
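The leading-log factor $C_{ij}$ is easy to evaluate once a texture is fixed. In the sketch below, the Yukawa matrix and heavy masses are assumptions chosen only to exercise the formula (written in the basis where $M_R$ is diagonal):

```python
import numpy as np

M_Pl = 2.4e18                        # reduced Planck mass in GeV (assumed cutoff)
M_R  = np.array([1e10, 1e12, 1e14])  # illustrative heavy-neutrino masses in GeV
Y_nu = np.array([[1e-3, 0.0,  0.0 ],
                 [0.0,  5e-2, 1e-2],
                 [0.0,  1e-2, 0.5 ]])  # illustrative Dirac Yukawa texture

# C_ij = sum_k (Y_nu)_ik (Y_nu)*_jk ln(M_Pl / M_k)
L = np.log(M_Pl / M_R)
C = np.einsum('ik,jk,k->ij', Y_nu, Y_nu.conj(), L)

# Resulting slepton mass insertion for illustrative mSUGRA inputs (GeV)
m0, A0 = 100.0, 0.0
m2_LL = (3 * m0**2 + 2 * A0**2) / (8 * np.pi**2) * C
```

With this texture, $C_{\mu e}$ vanishes (no shared heavy state between the $e$ and $\mu$ rows), while $C_{\tau\mu}$ is sizable, showing how the texture alone fixes which decay is enhanced.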
Planned searches could also help in discriminating between categories of seesaw models [@isaLFV]. To give some hints on the latter issue consider, e.g., hierarchical eigenvalues of $Y_\nu$. The different $N^c$ thresholds can then be neglected and in first approximation one has $|C_{ij}| \approx |{V_L}_{i3}| |{V_L}_{j3}| y_3^2$ $ \log (M_{\rm Pl} /M_3)$, where $V_L$ is the lepton analog of the CKM mixing matrix. In $SO(10)$-inspired models $y_3 =y_t \sim 1$, and the model dependence essentially resides in the magnitude of $|{V_L}_{i3}| |{V_L}_{j3}|$, namely on the amount of LF violation present in the left-mixings of $Y_\nu$. Under the above assumptions $|C_{\tau \mu}| = {\cal{O}}( 10 \times |{V_L}_{32}| )$. If at high energy LF is strongly violated in the $\tau -\mu$ sector (models with ’lopsided’ $y_\nu$ as the one studied in [@Blazek:2001zm]) planned searches for $\tau \rightarrow \mu \gamma$ could be successful for a significant region of the supersymmetric parameter space. If this violation is on the contrary tiny like in the quark sector — in which case the large LF violation observed at low energy purely arises from a magnification effect of the seesaw [@magnif] — $\tau \rightarrow \mu \gamma$ would not be observed. Progress in the experimental sensitivity to the latter decay would thus offer precious information. The prediction for $\mu \rightarrow e \gamma$, linked to the product $|{V_L}_{23}| |{V_L}_{13}|$, is more model dependent but, on the other hand, the present experimental bound is already very severe. For instance, simple $U(1)$ flavor symmetries, those with all lepton charges of the same sign, predict $|C_{\mu e}| ={\cal{O}}(10 \times \Delta m^2_{\odot} / \Delta m^2_{\rm A} )$ and for LMA are already in crisis [@U1; @isaLFV]. Since the present limit corresponds to a significantly smaller degree of LFV at high energy, this means that a much richer flavor symmetry has to be at work. 
Notice also that in the future we could test $|C_{\mu e}|$ up to the CKM-level [@mvv]; in fact, if $y_3 = {\cal{O}}(1)$ and $V_{L} \approx V_{CKM}$, then $C_{\mu e} = {\cal{O}} (10^{-3})$, which is well inside Fig. \[FR\].

### Type I Seesaw: EDM

Let us now discuss the consequences of type I seesaw models for lepton EDM. It is well known that in the simplest supersymmetric models (with or without neutrino mass) the dipole moments of electrons and muons obey a simple scaling law $d_e/d_\mu \approx m_e/m_\mu$. Given the present bound on $d_e$, this implies $d_\mu < 10^{-25}$ e cm, which is at the level of the best experimental prospects. Things can change in seesaw models because radiative corrections involving the right-handed neutrino interactions can affect the scaling law. In type I seesaw with degenerate $N^c$, the radiative contributions to $d_e$ and $d_\mu$ still preserve the scaling law. However, with hierarchical $N^c$ this proportionality is broken by threshold effects arising both from the flavor conserving $A$-term contribution [@ellisetal] and from the FV double-insertion contributions [@isaEDM; @FarPesk], which dominate for $\tan \beta > 10$. Nevertheless, if only the type I seesaw radiative contributions are taken into account, $d_e$ and $d_\mu$ turn out to be barely within reach of future experimental searches (though they can be for very particular textures [@ellisetal]). Discovering LEDM would then suggest the presence of additional particles and interactions beyond those of the supersymmetric type I seesaw. The heavy color triplets of GUT theories are excellent candidates for this [@barhall; @romstr; @isaEDM]. In particular, it turns out that the limits on $d_e$ are competitive with those on the proton lifetime in constraining the pattern of GUT theories where heavy triplets and right-handed neutrinos are simultaneously present [@IeC].
### Type II Seesaw

We will now consider a class of models where the right-handed neutrino mass arises from a renormalizable coupling of the form $f N N \Delta_R$, where $N$ is a right-handed neutrino, $f$ is a coupling constant and $\Delta_R$ is a Higgs field whose vacuum expectation value (vev) gives mass to the right-handed neutrino. This is a natural feature of models with asymptotic parity conservation, such as those based on $SU(2)_L\times SU(2)_R \times U(1)_{B-L}$ or any higher gauge group such as $SO(10)$, where the $\Delta_R$ field is part of an $SU(2)_R$ triplet field. Parity invariance then implies that we also have an $f \nu \nu \Delta_L$ coupling term as a parity partner of the $NN\Delta_R$ coupling. In this class of theories, whenever $\Delta_R$ acquires a vev, so does $\Delta_L$, and they are related by the formula $\langle\Delta_L \rangle \equiv v_L = \frac{v^2_{\rm wk}}{\gamma v_R}$, where $v_{\rm wk}$ is the weak scale, $v_R$ is the $\Delta_R$ vev and $\gamma$ is a coupling constant in the Higgs potential. The $\Delta_L$ vev contributes a separate seesaw suppressed Majorana mass to neutrinos, leading to the type II seesaw formula (see Eq. (\[eq:typeII\])) [@seesaw2]. In the case where the right-handed Majorana masses are heavy enough, the second term in the type II seesaw formula can be negligible, and the first term, $M_L~=~f v_L$, is dominant. We will call this the type II seesaw. The type II seesaw gives rise to the simplest explanation of the neutrino sector and is phenomenologically very successful, especially when we try to construct realistic models. The simplest model that can be imagined for the type II seesaw has just the MSSM and right-handed neutrinos below the GUT scale, and hence there is no new symmetry breaking scale. The right-handed neutrino masses can have hierarchies and therefore get decoupled at different scales below the GUT scale.
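The size of the triplet contribution $M_L = f v_L$ can be sketched numerically; all inputs below ($v_R$, $\gamma$, $f$) are illustrative order-one or GUT-scale choices, not values from the text:

```python
# Order-of-magnitude sketch of the type II contribution M_L = f * v_L,
# with the induced triplet vev v_L = v_wk^2 / (gamma * v_R)
v_wk  = 174.0       # weak scale in GeV
v_R   = 1e14        # Delta_R vev in GeV (assumed B-L breaking scale)
gamma = 1.0         # Higgs-potential coupling, taken O(1)
f     = 1.0         # triplet Yukawa coupling, taken O(1)

v_L    = v_wk**2 / (gamma * v_R)   # induced Delta_L vev, in GeV
M_L_eV = f * v_L * 1e9             # triplet-induced neutrino mass, in eV
```

The result lands naturally in the sub-eV range relevant for the observed neutrino masses.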
Due to the radiative corrections from the RGEs, the flavor-violating pieces present in $Y_{\nu}$ and $f$ get transmitted to the flavor universal scalar masses and produce lepton flavor violation. The $f$ term gives the additional contribution that is typical of type II seesaw models [@Babu:2002tb]: $$\begin{aligned} dY_{e}/dt &=& {1 \over 16 \pi^2}(Y_\nu Y_\nu^{\dag}+\cdots)Y_e~,\\\nonumber dY_{\nu}/dt &=& {1 \over 16 \pi^2}(f f^{\dag}+\cdots)Y_\nu~,\\\nonumber dm^{2}_{LL}/dt &=& {1 \over 16 \pi^2}(Y_\nu Y_\nu^{\dag}m^2_{LL}+ m^2_{LL}Y_\nu Y_\nu^{\dag}+\cdots)~.\end{aligned}$$ Here, $m^2_{LL}$ represents the soft left-handed scalar masses. The flavor non-diagonal pieces generate the lepton flavor violation. In the type II seesaw the structure of the $f$ coupling generates the neutrino mixing parameters. Both $f$ and $Y_{\nu}$ are determined by the particular model which explains the quark and lepton masses. In order to calculate the BRs of $\mu\rightarrow e\gamma$ and $\tau\rightarrow \mu\gamma$ and the electric dipole moments of the electron and muon, we use the minimal SUGRA universal boundary conditions at the GUT scale. The unifying framework of $SO(10)$ has been chosen, and the observed quark masses and the CKM CP violation are reproduced. The final result is determined by the free parameters: the universal scalar mass $m_0$, the universal gaugino mass $m_{1/2}$, the universal trilinear term $A_0$, $\tan\beta$ and the sign of $\mu$. The assumption of universality allows us to probe the flavor violation originating from the neutrino sector. We also assume that there is no phase associated with the SUSY breaking. The Yukawa and/or the Majorana couplings are responsible for CP violation in these models. The mSUGRA parameter space is constrained by the experimental lower limit on $m_h$, the measurements of $b\rightarrow s\gamma$ and the recent results on the dark matter relic density [@wmap].
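The radiative generation of flavor off-diagonal slepton masses encoded in the last RGE above can be illustrated with a single leading-log (Euler) step. The Yukawa texture and scales below are assumptions for illustration only:

```python
import numpy as np

m0 = 100.0                           # universal scalar mass in GeV, illustrative
Y_nu = np.array([[0.1,  0.02, 0.0 ],
                 [0.02, 0.3,  0.05],
                 [0.0,  0.05, 0.7 ]])  # illustrative neutrino Yukawa matrix

m2_LL = m0**2 * np.eye(3)            # flavor-universal boundary condition
YY = Y_nu @ Y_nu.conj().T

# One Euler step in t = ln(mu), running down from ~M_Pl to ~M_R (dt < 0),
# keeping only the Y_nu Y_nu^dag term of dm2_LL/dt shown above
dt = -np.log(2.4e18 / 1e14)
m2_LL = m2_LL + dt / (16 * np.pi**2) * (YY @ m2_LL + m2_LL @ YY)
```

Starting from a flavor-universal $m^2_{LL}$, the step generates nonzero off-diagonal entries proportional to the off-diagonal entries of $Y_\nu Y_\nu^\dagger$, which is the source of the LFV rates discussed next.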
For low $\tan\beta$, the parameter space has a lower bound on $m_{1/2}$ stemming from the light Higgs mass bound of $m_h\geq 114$ GeV. For larger $\tan\beta$ the lower bound on $m_{1/2}$ is obtained from the CLEO constraint on BR($b\rightarrow s\gamma$). The lightest neutralino is the dark matter candidate in this model, and we satisfy the 2$\sigma$ range of the recent relic density constraint $\Omega_{\rm CDM}=0.1126^{+0.008}_{-0.009}$ [@wmap] in the parameter space. The allowed parameter space of mSUGRA mostly reduces to the neutralino-stau co-annihilation region for $m_0,\,m_{1/2}\leq 1000$ GeV, and when we satisfy the relic density constraint $m_0$ gets determined within a very narrow band. For example, $m_0$ varies between 60 and 100 GeV for the $A_0=0$ line in the graph. In Figs. \[fig:fig1\]–\[fig:fig4\], we show BR\[$\mu\rightarrow e\gamma$\] and BR\[$\tau\rightarrow \mu\gamma$\] as a function of $m_{1/2}$ for different values of $A_0$. We find that the BR is large in most of the parameter space and can be observable. In addition, BR\[$\tau\rightarrow\mu\gamma$\] can also be observable in the near future. The figures demonstrate that lepton flavor violation typically increases with increasing $\tan\beta$. ![\[fig:fig1\] BR\[$\mu\rightarrow e\gamma$\] is plotted as a function of $m_{1/2}$ for different values of $A_0$ and $\tan\beta=10$, 40 and 50 in pure type II seesaw.](muegmtyp2104050.eps){width="8cm"} ![\[fig:fig2\] BR\[$\tau\rightarrow \mu\gamma$\] is plotted as a function of $m_{1/2}$ for $\tan\beta=10$, 40 and 50 in pure type II seesaw.](taumugmtyp2.eps){width="8cm"} The electron EDM is plotted in Fig. \[fig:fig3\]. We find that the maximum value of the EDM is $\sim 10^{-31}$ e cm. The muon EDM is shown in Fig. \[fig:fig4\] and the maximum value shown is about $10^{-29}$ e cm. The scaling is broken in this model. We do not assume any new CP phases in the SUSY parameters, hence all CP phases arise from the Yukawa and Majorana couplings.
![\[fig:fig3\] The electron EDM is plotted as a function of $m_{1/2}$ for different values of $A_0$ and $\tan\beta=40$ and 50 in pure type II seesaw.](edmtyp24050.eps){width="8cm"} ![\[fig:fig4\] The muon EDM is plotted as a function of $m_{1/2}$ for $\tan\beta=10$, 40 and 50 in pure type II seesaw.](muedmtyp2.eps){width="8cm"} It is clear that if the seesaw mechanism eventually turns out to be the explanation of the small neutrino masses, the crucial question becomes whether it is of type I or type II. One may then use lepton flavor violation as a way to discriminate between these two possibilities.

Leptogenesis in the Type I Seesaw {#sec:Leptogenesis}
---------------------------------

The origin of matter is a fundamental puzzle of cosmology and particle physics. The seesaw provides many mechanisms to generate this excess; we discuss what we can learn about neutrino physics, as well as the pattern of right-handed neutrino masses, from the observed baryon asymmetry. Three ingredients are required to generate the observed Baryon Asymmetry of the Universe [@sakharov]: baryon number violation, C and CP violation and some out-of-thermal equilibrium dynamics. The seesaw model [@minkowski; @Yanagida:1980; @Gell-Mann:1980vs; @Glashow:1979vf; @Mohapatra:1980ia], which was introduced to give small neutrino masses, naturally satisfies these requirements, producing the baryon asymmetry by the “leptogenesis” mechanism [@FY]. It is interesting to investigate the relation between the requirements of successful leptogenesis and the observable neutrino mass and mixing matrices. In particular, does the CP violation that could be observed in neutrino oscillations bear any relation to leptogenesis? Do the Majorana phases that appear in neutrinoless double beta decay experiments do so? The next subsection reviews the thermal leptogenesis scenario, focusing on the Type I seesaw, with hierarchical RH neutrinos ($M_1 \lappeq M_{2,3}/10$).
The relation with light neutrino parameters in type I seesaw models with three generations is discussed in Subsection \[type1\]. The situation in type II seesaw is given in Section \[sec:YBII\], and the case of quasi-degenerate $N_R$ masses is discussed in subsection \[Resonant\]. ### Thermal Leptogenesis {#sec:typeIleptogenesis} The idea of leptogenesis is to use the lepton number violation of the $N_R$ Majorana masses, in conjunction with the $B+L$ violation contained in the Standard Model [@'tHooft:up], to generate the baryon asymmetry. The most cosmology-independent implementation is “thermal leptogenesis” [@FY; @FPSCRV; @AP; @BP; @gian] which will be reviewed in the following paragraph. Other leptogenesis scenarios, where the initial number density of (s)neutrinos is produced non-thermally (by inflaton decay, scalar field dynamics,$\ldots$) depend on additional parameters of the cosmological model. If the temperature $T_{RH}$ of the thermal bath after inflation is $ \gappeq M_{R1}$, the lightest $N_R$ will be produced by scattering. If the $N_R$ subsequently decay out of equilibrium, a CP asymmetry $\epsilon_1$ in the decay produces a net asymmetry of Standard Model leptons. This asymmetry is partially transformed into a baryon asymmetry by the non-perturbative $B+L$ violation [@Kuzmin:1985mm]. Thermal leptogenesis has been studied in detail [@BP; @FPSCRV; @AP; @APTU; @gian; @Buchmuller:2004nz]; the baryon to entropy ratio produced is $$Y_B \simeq C \kappa \frac{n}{s} \epsilon_1 \label{thlep} ~~,$$ where $\kappa \leq 1$ is an efficiency factor to be discussed in a moment, $n/s \sim 10^{-3}$ is the ratio of the $N_R$ equilibrium number density to the entropy density, and $\epsilon_1$ is the CP asymmetry in the $N_{R1}$ decay. $C \sim 1/3$ tells what fraction of the produced lepton asymmetry is reprocessed into baryons by the $B+L$ violating processes. 
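Plugging representative numbers into Eq. (\[thlep\]) shows how the observed $Y_B \sim 3\times 10^{-11}$ can be reached; the value of $\kappa$ is an assumed mid-range choice, the rest are the orders of magnitude quoted above:

```python
# Back-of-the-envelope evaluation of Y_B ~ C * kappa * (n/s) * eps_1
C_frac   = 1.0 / 3.0   # fraction of lepton asymmetry reprocessed into baryons
n_over_s = 1e-3        # N_R equilibrium number density over entropy density
eps1     = 1e-6        # CP asymmetry in N_R1 decays (illustrative)
kappa    = 0.1         # efficiency factor, assumed mid-range value

Y_B = C_frac * kappa * n_over_s * eps1   # ~ 3e-11
```

With $\kappa \sim 0.1$ the product lands on the observed asymmetry, which is why the three conditions listed next (on $M_{R1}$, $\tilde{m}_1$ and $\epsilon_1$) are what they are.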
$Y_B$ depends largely on three parameters: the $N_{R1}$ mass $M_{R1}$, its decay rate $\Gamma_1$, and the CP asymmetry $\epsilon_1$ in the decay. The decay rate $\Gamma_j$ of $N_{Rj}$ can be conveniently parametrized as $ \Gamma_j = \frac{[Y_\nu^\dagger Y_\nu ]_{jj} M_j} {8 \pi } \equiv \frac{\tilde{m}_j M_j^2}{8 \pi v_{\rm wk}^2}~~, $ where $\tilde{m}_j$ is often of order of the elements of the $\nu_L$ mass matrix, although it is a rescaled $N_R$ decay rate. The requisite $CP$ violating decay asymmetry $\epsilon_1$ is caused by the interference of the tree level contribution and the one-loop corrections in the decay rate of the heavy Majorana neutrinos. For hierarchical neutrinos it is given by: $$\label{eq:eps} \begin{array}{ll} \epsilon_1 & \equiv \frac{\displaystyle \Gamma (N_1 \rightarrow \Phi^- \, \ell^+) - \Gamma (N_1 \rightarrow \Phi^+ \, \ell^-)}{\displaystyle \Gamma (N_1 \rightarrow \Phi^- \, \ell^+) + \Gamma (N_1 \rightarrow \Phi^+ \, \ell^-)} \\ & \simeq - \frac{\displaystyle 3}{\displaystyle 16 \, \pi} \sum\limits_{j\neq 1} \frac{\displaystyle{\rm Im} (Y_\nu^\dagger Y_\nu)^2_{1j}}{\displaystyle(Y_\nu^\dagger Y_\nu)_{11}} \, ~ \frac{\displaystyle M_1}{\displaystyle M_j}, \end{array}$$ where $\Phi$ and $\ell$ indicate the Higgs field and the charged leptons, respectively. ![ The efficiency parameter $\kappa$ as function of $(\tilde{m}_1, M_{R1}/T_{\rm RH}$) for the Type I Seesaw with hierarchical $N_R$, in the SM and MSSM. In this plot $M_{R1} = 10^{10}$ GeV; the plot would only change slightly for $M_{R1}\ll 10^{14}$ GeV. []{data-label="rehgian"}](reh.eps){height="4cm" width="12cm"} Eq. (\[thlep\]) can be of the order of the observed $Y_B \sim 3 \times 10^{-11} $ when the following conditions are satisfied: 1. $M_{R1}$ should be $ \lappeq T_{RH}$[^9]. 
This temperature is unknown, but bounded above in certain scenarios ([*e.g.*]{} $ T_{RH} \lappeq 10^9\,\mathrm{GeV}$ due to gravitino overproduction in some supersymmetric models, see \[sec:GravitinoProblem\]). This can be seen in Figure \[rehgian\], where the efficiency factor $\kappa$ falls off rapidly for $M_{R1} > T_{RH}$. 2. The $N_{R1}$ decay rate $\propto \tilde{m}_1 $ should sit in a narrow range. To be precise, $\tilde{m}_1$ must be large enough to produce an approximately thermal number density of $N_{R1}$, and small enough that the $N_{R1}$ lifetime is of order the age of the Universe at $T \sim M_{R1}$ (the out-of-equilibrium decay condition). These two constraints are encoded in the efficiency factor $\kappa$, plotted in Figure \[rehgian\]. 3. $\epsilon_1$ must be $\gappeq 10^{-6}$. ![Contour plot, from [@Buchmuller:2002rq], of the baryon to photon ratio produced in thermal leptogenesis, as a function of $M_{R1} $ and $ \tilde{m}_1$. The decay asymmetry $\epsilon_1$ was taken to be $10^{-6}$. The three (blue) close-together lines are the observed asymmetry. The horizontal contours, for small $\tilde{m}_1$ assume a thermal $N_R$ abundance as initial condition.[]{data-label="YBBP"}](BP.eps){height="5cm" width="8cm"} In Figure \[YBBP\] is plotted the baryon asymmetry, produced by thermal leptogenesis, as a function of $M_{R1}$ and $\tilde{m}_1$, for $T_{RH} \gg M_{R1}$, and $\epsilon_1 = 10^{-6}$. To reproduce the observations, $M_{R1}$ and $\tilde{m}_1$ must be inside the three neighboring (blue) lines. ### Parametrizing the type I seesaw {#type1} Twenty-one parameters are required to determine the three generation lepton sector of the type I seesaw model. This includes the charged lepton masses, and a mixing matrix (with three complex parameters, [*e.g.*]{} the MNSP matrix) in the left-handed lepton sector. The remaining 9 real numbers and 3 phases can be chosen in various ways: 1. 
“top-down”, input the $N_R$ sector: the eigenvalues of the mass matrix $M_{R}$ and of $Y_\nu$, and a matrix transforming between the bases where these matrices are diagonal [@wir] (see also Refs. [@Ellis:2001xt; @Davidson:2001zk]).

2. “bottom-up”, input the $\nu_L$ sector: the eigenvalues of the mass matrix $M^I_{\nu}$ and of $Y_\nu$, and a matrix transforming between the bases where these matrices are diagonal.

3. “intermediate”, the Casas-Ibarra parametrization [@casasibarra]: the ${M_R}$ and ${M^I_\nu}$ eigenvalues, and a complex orthogonal matrix ${R}$ which transforms between these two bases.

To relate the RH parameters relevant for leptogenesis to the LH ones, many of which are accessible at low energy, it is useful to consider the first and second parametrizations.

### Implications for CP conserving observables {#real}

The second requirement (i.e., the range of $\tilde{m}_1$) sets an upper bound on the mass scale of light neutrinos. The scaled decay rate $\tilde{m}_1$ is usually $\sim m_2, m_3$; for hierarchical light neutrinos, it naturally sits in the desired range. One can show [@Fujii:2002jw; @di2] that $m_1 < \tilde{m}_1$, so that $m_1 \lappeq 0.15$ eV [@BdBP; @gian; @strumia; @Buchmuller:2004nz] is required for thermal leptogenesis in the type I seesaw, with hierarchical $N_R$. This is shown in Figure \[m3strumia\]. ![ Upper bound on the light neutrino mass scale, assuming hierarchical $N_R$, taken from [@strumia]. The plot shows the measured baryon asymmetry (horizontal line) compared with the maximal leptogenesis value as function of the heaviest neutrino mass $m_3$. []{data-label="m3strumia"}](m3.eps){width="12cm"} In type I seesaw models with hierarchical $N_R$, the third condition ($\epsilon_1 \gappeq 10^{-6}$) imposes $M_{R1} \gappeq 10^8$ GeV, because $\epsilon_1 \leq 3 M_{R1} (m_3 - m_1)/(8 \pi v_{\rm wk}^2)$ in most of parameter space [@di2; @strumia].
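The bound $\epsilon_1 \leq 3 M_{R1}(m_3-m_1)/(8\pi v_{\rm wk}^2)$ can be evaluated directly (inputs illustrative; the precise lower bound on $M_{R1}$ depends on the required asymmetry and on the efficiency $\kappa$). Since the bound grows linearly with $M_{R1}$, asymmetries of order $10^{-7}$ to $10^{-6}$ point to $M_{R1}$ in the $10^{8}$ to $10^{10}$ GeV range:

```python
import numpy as np

v_wk = 174.0      # weak scale in GeV
m3   = 0.05e-9    # heaviest light-neutrino mass in GeV (~ atmospheric scale)
m1   = 0.0        # hierarchical light spectrum

def eps1_max(M1):
    """Upper bound on the decay asymmetry for hierarchical N_R; M1 in GeV."""
    return 3.0 * M1 * (m3 - m1) / (8.0 * np.pi * v_wk**2)

eps_at_1e8  = eps1_max(1e8)    # ~ 2e-8
eps_at_1e10 = eps1_max(1e10)   # ~ 2e-6
```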
For three $N_{R}$, the value of $M_{R1}$ has little implication on low energy neutrino observables. If $\epsilon_1$ is maximal, that is, if $M_{R1}$ is close to its lower bound, this sets one constraint on the 21 parameters of the type I seesaw. This has no observable consequences among Standard Model particles, because at most 12 masses, angles and phases are measurable, and $\epsilon_1$ can be maximized by choice of the nine other parameters. The situation is more promising [@dip; @wir] in SUSY models with universal soft terms, where some of the 9 additional parameters can contribute to slepton RGEs and thereby to lepton flavor violating branching ratios, as discussed in Section \[sec:LFV\].

### Relations between Leptogenesis and leptonic CP violation {#phases}

The leptogenesis parameter $\epsilon_1$ is a $\CPV$ asymmetry, suggesting a possible correlation with CP violation in $\nu$ oscillations (the phase $\delta$) or with the low energy Majorana phases ($\phi_{1,2}$). Let us assume that $\epsilon_1$ is large enough for thermal leptogenesis to work, and concentrate on the implications for low-energy CP violation. The first thing that must always be said, in discussing potential connections between phases in the MNSP lepton mixing matrix and leptogenesis, is that there is no linear relation: leptogenesis can work when there is no $\CPV$ in the MNSP matrix, and measuring low energy leptonic phases does not imply that there is CP violation available for leptogenesis. This was clearly and elegantly shown by Branco, Morozumi, Nobre and Rebelo in [@Branco:2001pq]. The problem is that six phases are included in the general three neutrino seesaw scenario; it would be astonishing if the $\CPV$ parameter we are interested in ($\epsilon_1$) were proportional to the low energy phases ($\delta$, $\phi_{1,2}$). Nonetheless, some sort of relation between $\epsilon_1$ and the low energy phases would be interesting, so what can we say?
Needing more inputs than the data provides is a familiar problem for extensions of the SM. The usual solutions are to scan over unknowns, or to fix them. But this is subtle: any relation depends on the choice of “independent” phases. For instance, if $\epsilon_1$ and $\delta$ are chosen as inputs, then it follows that they are unrelated [^10]. A choice of parametrization is not obvious: the $\CPV$ parameter $\epsilon_1$ is a function of $N_R$ phases, masses and mixing angles, whereas the observable phases are those of $U$. A useful step is to write $\epsilon_1 \propto$ a Jarlskog invariant, which can be done for thermal leptogenesis with hierarchical $N_R$ [@ryu]: $$\epsilon_1 \propto \Im \{ {\rm Tr} [ M_\nu^\dagger M_\nu M_\nu^\dagger ( Y_\nu Y_\nu^\dagger)^{-1} M_\nu ( Y_\nu^* Y_\nu^T)^{-1} ] \}. \label{trace}$$ The advantage is that Jarlskog invariants can be evaluated in any basis/parametrization. It is easy to see, evaluating Eq. (\[trace\]) in the $\nu_L$ mass eigenstate basis, that the $\CPV$ for leptogenesis is controlled by a matrix $W$ which transforms between the bases where $Y_\nu$ and $M_\nu$ are diagonal. This matrix is unobservable, verifying the no-go theorem of [@Branco:2001pq]. However, in many popular/common Yukawa texture models, where $Y_\nu$ and $M_{\ell}$ are almost simultaneously diagonalizable, $W \sim U$ and the MNSP phases are relevant for thermal leptogenesis. For $s_{13}$ larger than the mixing angles between diagonal $Y_\nu$ and $M_{\ell}$, $\epsilon_1 \propto \sin 2(\phi_1 - \phi_2 + \delta)$.

### Model dependent approaches

Within specific models interesting links between the phase relevant for leptogenesis and the phase $\delta$ measurable in neutrino oscillation experiments have been made. The precise link depends on how many “texture” zeroes are assumed to be present in the neutrino Dirac mass matrix.
For example within the class of two right-handed neutrino models, if two texture zeroes are assumed then there is a direct link between $\delta$ and the leptogenesis phase, with the sign of $\delta$ being predicted from the fact that we are made of matter rather than antimatter [@min; @Frampton:2002qc]. Two right-handed neutrino models can be obtained as a limiting case of sequential dominance models, and in such models if only the physically motivated texture zero in the 11 entry of the Dirac mass matrix is assumed, then the link is more indirect [@King:2002qh]. Other approaches which give rise to a link between leptogenesis and CP violation include GUT models [@GUTS], or textures [@Ellis:2001xt; @RaidalJE; @textures; @wir] or left-right symmetric models, to be discussed in the next Subsection. On the other hand, if the charged lepton sector contributes significantly to the lepton mixing $\theta_{13}$ and therefore also to $\delta$, such links may be spoiled [@Antusch:2005kw]. ### Leptogenesis in supersymmetric scenarios {#sec:GravitinoProblem} In supersymmetric scenarios, the history of the early universe is subject to various constraints. Many of them are associated to the gravitino problem [@Khlopov:1984pf; @Moroi:1993mb]. In short, unstable gravitinos are notoriously in conflict with nucleosynthesis (see [@Cyburt:2002uv] for a recent analysis), while stable ones may ‘overclose’ the universe. If the gravitino is very light ($m_{3/2}\lappeq \mathrm{keV}$ [@Pagels:1981ke]) or very heavy ($m_{3/2}\gappeq 10\,\mathrm{TeV}$), these bounds disappear, and thermal leptogenesis works (see, e.g., [@Ibe:2004tg]). For all other masses, nucleosynthesis or ‘overclosure’ constraints translate into bounds on the gravitino abundance at $T\sim 1\,\mathrm{MeV}$ or today, respectively. Assuming that gravitinos are not produced by inflaton decays (see [@Nilles:2001ry]), this gravitino abundance is linear in the reheat temperature [@Bolz:2000fu]. 
Unstable gravitinos with masses below $\sim10\,\mathrm{TeV}$ lead to severe constraints on the reheat temperature $T_{RH}$ [@Cyburt:2002uv] which are in conflict with thermal leptogenesis where $T_{RH}\gappeq10^9\,\mathrm{GeV}$. There are, however, various alternative leptogenesis scenarios such as non-thermal leptogenesis [@Lazarides:1991wu] where the heavy neutrinos are directly produced by inflaton decays, or mechanisms using the superpartner of the neutrino, the sneutrino [@Murayama:1992ua], or sneutrino oscillations (see Subsec. \[sec:SneutrinoOscillation\]). These scenarios can be consistent with unstable gravitinos. Stable gravitinos, on the other hand, may evade the constraints from nucleosynthesis provided that the decays of the next-to-lightest superpartner into the gravitino are harmless [@Feng:2004mt]. However, the ‘overclosure’ constraint leads to $T_{RH}\lappeq(10^9-10^{10})\,\mathrm{GeV}$. Such an upper bound on the reheat temperature is suggested independently by string-theoretical arguments where $T_{RH}\lappeq\sqrt{m_{3/2}\,M_\mathrm{P}}$ [@Buchmuller:2003is]. Stable gravitinos are thus (marginally) consistent with thermal leptogenesis, and provide a natural dark matter candidate [@Bolz:1998ek]. It is clear that the neutrino mass bound, as discussed in Sec. \[real\], becomes much tighter now since $\widetilde{m}_1\sim 10^{-3}\,\mathrm{eV}$ (cf. Fig. \[YBBP\]) and $m_1\lappeq \widetilde{m}_1$ [@Fujii:2002jw; @di2]. This scenario therefore predicts hierarchical light neutrinos as well as gravitino cold dark matter. These predictions will be tested in neutrino experiments and at future colliders [@Buchmuller:2004rq].
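As a rough numerical illustration of the string-motivated bound $T_{RH}\lappeq\sqrt{m_{3/2}\,M_\mathrm{P}}$ quoted above, one can evaluate it for weak-scale gravitino masses. This is a sketch with toy inputs; the choice of the reduced Planck mass convention is our assumption, not taken from the references:

```python
# Illustrative evaluation of the bound T_RH <~ sqrt(m_{3/2} M_P) for a stable
# gravitino; the m_3/2 values are toy choices.
import math

M_P = 2.4e18  # reduced Planck mass in GeV (assumed convention)

for m32 in (1e2, 1e3):  # gravitino mass in GeV
    T_RH_max = math.sqrt(m32 * M_P)
    print(f"m_3/2 = {m32:.0e} GeV  ->  T_RH <~ {T_RH_max:.1e} GeV")
```

For $m_{3/2}$ of order $100\,\mathrm{GeV}$ this gives $T_{RH}\lappeq 10^{10}\,\mathrm{GeV}$, consistent with the $(10^9-10^{10})\,\mathrm{GeV}$ range quoted from the ‘overclosure’ constraint.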
\[sec:YBII\]Leptogenesis and type II seesaw mechanism ----------------------------------------------------- In type II seesaw scenarios the neutrino mass matrix $M_\nu$ reads $$\begin{aligned} M_\nu^{II} = M_L - M_\nu^D~M_R^{-1}~(M_\nu^D)^T \equiv M_L + M_\nu^{I}~,\end{aligned}$$ where we divided the mass matrix into the conventional type I part $M_\nu^{I} = - M_\nu^D~M_R^{-1}~(M_\nu^D)^T$ and the part characteristic for type II, $M_L$. A type II seesaw term can, e.g., be present in $SO(10)$ models, in which the $B-L$ symmetry is broken by a [**126**]{} Higgs field. Depending on the parameters of the model, either $M_L$ or $M_\nu^{I}$ can be the dominant source of $M_\nu^{II}$. As mentioned above, in the case of the conventional type I seesaw mechanism with three families of light and heavy Majorana neutrinos, there are six phases. As already discussed, in this case there is in general no relation between the PMNS phase and the leptogenesis phase. For the type II case the phase counting gives the result of 12 independent CP phases, and there is no connection between low and high energy CP violation either. The number of CP phases can be obtained by going to a basis in which both $M_L$ and $M_R$ are real and diagonal. Any CP violation will then stem from the matrices $M_\nu^D$ and $M_\ell M_\ell^\dagger$ (with $M_\ell$ being the charged lepton mass matrix). Those two matrices possess in total 9 + 3 = 12 phases. The term $M_L$ is induced by an $SU(2)_L$ Higgs triplet, whose neutral component acquires a vev $v_L \propto v_{\rm wk}^2/M_{\Delta_L}$, where $M_{\Delta_L}$ is the mass of the triplet and $v_{\rm wk}$ the weak scale. Consequently, the triplet contribution to the neutrino mass matrix is $$\begin{aligned} M_L = v_L~f_L~,\end{aligned}$$ with $f_L$ a symmetric $3 \times 3$ coupling matrix. The magnitude of the contribution of $\Delta_L$ to $M_\nu^{II}$ is thus characterized by its vev $v_L$.
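The decomposition above is straightforward to evaluate numerically. The following sketch builds $M_\nu^{II} = M_L + M_\nu^{I}$ from a triplet term and a conventional type I term; all matrices, scales and units are toy values chosen for illustration, not fitted to data:

```python
# Toy numerical sketch of the type II seesaw combination
# M_nu^II = M_L - M_D M_R^{-1} M_D^T, working throughout in eV.
import numpy as np

v_L = 0.05                               # triplet-induced scale in eV (toy)
f_L = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.0, 0.3],
                [0.1, 0.3, 1.0]])        # symmetric coupling matrix (toy)
M_L = v_L * f_L                          # triplet contribution, in eV

M_D = np.diag([1e-3, 1.0, 100.0]) * 1e9  # Dirac masses: 1 MeV, 1 GeV, 100 GeV
M_R = np.diag([1e9, 1e11, 1e14]) * 1e9   # heavy Majorana masses in eV (toy)

M_I = -M_D @ np.linalg.inv(M_R) @ M_D.T  # conventional type I part
M_II = M_L + M_I                         # full type II mass matrix

print("type I part (eV):\n", M_I)
print("type II matrix (eV):\n", M_II)
```

For these inputs the type I part contributes at most $\sim 0.1$ eV, comparable to the toy triplet term, so either source can dominate depending on the chosen scales, as stated in the text.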
In left-right symmetric theories the left-right symmetry necessarily implies the presence of a $SU(2)_R$ triplet, whose coupling matrix is given by $f_R = f_L \equiv f$ and whose vev is given by $v_R$, where $v_L~v_R = \gamma~v_{\rm wk}^2$ with $\gamma$ a model dependent factor of order one. The right-handed Majorana neutrino mass matrix is thus given by $M_R = v_R~f = v_R/v_L~M_L$. Before the triplet acquires its vev, the presence of the doubly charged Higgs and the coupling of the Higgs triplet to the doublet introduce additional diagrams capable of generating a lepton asymmetry. First, there is the possibility that in the decay $N_1 \rightarrow L~H_u$ a virtual Higgs triplet is exchanged in the one-loop diagram, which contributes to the decay asymmetry of the heavy Majorana neutrinos [@O'Donnell:1994am; @Lazarides:1998iq; @Hambye:2003ka; @anki]. The corresponding term $\epsilon_1^{\Delta} $ adds up to the conventional term $\epsilon_1$ whose properties were discussed in the previous Subsection. The second new diagram is the decay of the doubly charged Higgs triplet into two charged leptons. One-loop exchange of a heavy Majorana neutrino gives rise to the decay asymmetry [@O'Donnell:1994am] $$\begin{aligned} \epsilon_{\Delta} \equiv \frac{\Gamma(\Delta_L \rightarrow l^c~l^c) - \Gamma(\Delta_L^\ast \rightarrow l~l) } {\Gamma(\Delta_L \rightarrow l^c~l^c) + \Gamma(\Delta_L^\ast \rightarrow l~l)} ~.\end{aligned}$$ If $M_1 \ll M_{\Delta_L}$ ($M_1 \gg M_{\Delta_L}$) the decay of the Majorana neutrino (Higgs triplet) will govern the baryon asymmetry. Thus, depending on which term dominates $M_\nu^{II}$, four different situations are possible [@Hambye:2003ka]. Three of these cases have so far not been discussed in as much detail as conventional leptogenesis in the type I seesaw mechanism.
If $M_1 \ll M_{\Delta_L}$ and the conventional term $M_\nu^I$ dominates $M_\nu^{II}$, we recover the usual seesaw and leptogenesis mechanisms and the statements given in Sec. \[sec:typeIleptogenesis\] apply. ![Scatter plots of the effective mass against $\tan \theta_{13}$ and $\Delta m^2_\odot$ against $J_{CP}$ for the normal (left) and inverted (right) neutrino mass spectrum. Taken from [@Joshipura:2001ui]. []{data-label="fig:fig"}](meffn.ps "fig:"){width="6.2cm" height="5cm"} ![Scatter plots of the effective mass against $\tan \theta_{13}$ and $\Delta m^2_\odot$ against $J_{CP}$ for the normal (left) and inverted (right) neutrino mass spectrum. Taken from [@Joshipura:2001ui]. []{data-label="fig:fig"}](meffi.ps "fig:"){width="6.2cm" height="5cm"}\ ![Scatter plots of the effective mass against $\tan \theta_{13}$ and $\Delta m^2_\odot$ against $J_{CP}$ for the normal (left) and inverted (right) neutrino mass spectrum. Taken from [@Joshipura:2001ui]. []{data-label="fig:fig"}](JCPn.ps "fig:"){width="6.2cm" height="5cm"} ![Scatter plots of the effective mass against $\tan \theta_{13}$ and $\Delta m^2_\odot$ against $J_{CP}$ for the normal (left) and inverted (right) neutrino mass spectrum. Taken from [@Joshipura:2001ui]. []{data-label="fig:fig"}](JCPi.ps "fig:"){width="6.2cm" height="5cm"} In situations in which $M_1 \ll M_{\Delta_L}$, the heavy Majorana neutrinos are hierarchical, and $M_L$ dominates $M_\nu^{II}$, it has been shown in [@Hambye:2003ka; @anki] that one can rewrite the decay asymmetries such that $\epsilon_1^{\Delta} $ depends on $M_L$ and $\epsilon_1$ on $M_\nu^{I}$. However, since matrices are involved, $\epsilon_1$ can still be the dominant contribution to the decay asymmetry, a situation which has been investigated intensively in the context of left-right symmetry in [@Joshipura:1999is; @Joshipura:2001ui], see also [@sche].
Calculating the baryon asymmetry in this framework in terms of light neutrino parameters (a bottom-up approach) typically leads to a dominant dependence on the Majorana phases in the PMNS matrix. If $M_\nu^D$ is given by the up-quark mass matrix and the light neutrinos display a normal hierarchical spectrum, one of the low energy Majorana phases has to be very close to zero or $\pi/2$ [@Joshipura:1999is]. For $M_\nu^D$ given by the down quark or charged lepton mass matrix one finds that the generally unknown mass spectrum of the heavy Majorana neutrinos is [*exactly*]{} given by the measurable mass spectrum of the light Majorana neutrinos. In the case of a normal hierarchy, the Majorana phases should lie around $\pi/4$ or $5\pi/4$. Both values give comparable results for the rate of neutrinoless double beta decay. Thus, measuring neutrinoless double beta decay fully determines the neutrino mass matrix in this scenario. It is also possible to set limits on the lightest neutrino mass $m_1$ because the baryon asymmetry is proportional to $m_1$. It should be larger than $10^{-5}$ eV in order to produce a sufficient baryon asymmetry [@Joshipura:2001ui]. For an inverted hierarchy of the neutrinos it turns out that rather sizable values of $\theta_{13}$ are required. Thus, sizable effects of CP violation in future long-baseline neutrino oscillation experiments are possible. The preferred value of the Majorana phase implies in addition a rather sizable rate of neutrinoless double beta decay. Furthermore, the lightest neutrino mass should be heavier than $10^{-3}$ eV. Figure \[fig:fig\] shows typical examples of the expected values of $\theta_{13}$, the effective mass and the CP violating parameter $J_{CP}$ in neutrino oscillations for the normal and inverted hierarchy. A similar example within a framework incorporating spontaneous CP violation is discussed in Section \[sec:muchun\].
Finally, if neutrinos possess a quasi-degenerate mass spectrum, one of the Majorana phases is required to lie around $\pi$ or $\pi/2$. A measurement of neutrinoless double beta decay can resolve this ambiguity.\ The other possible scenarios have not been discussed in detail in the literature so far (see, e.g., [@IIothers]). General statements are however possible. If, e.g., $M_1 \ll M_{\Delta_L}$ and the term $\epsilon_1^{\Delta}$ dominates the decay asymmetry, the limits on the light neutrino masses of order 0.1 eV (see Sec. \[real\]) no longer apply [@Hambye:2003ka], since the couplings responsible for the neutrino masses do not influence the wash-out processes. For hierarchical light neutrinos the upper bounds on the decay asymmetry $\epsilon_1$ and $\epsilon_1^\Delta$ are identical. For quasi-degenerate neutrinos, however, the limit in the type II seesaw case is weaker by a factor of $2m_0^2/\Delta m^2_{\rm A}$ [@anki], where $m_0$ is the common neutrino mass scale. Along the same lines, the lower limit of order $10^9$ GeV on the mass of the lightest of the heavy Majorana neutrinos can be relaxed by roughly one order of magnitude [@anki], thereby reducing the conflict between thermal leptogenesis and the gravitino problem. Consider the case when $M_1 \gg M_{\Delta_L}$ and $M_\nu^{I}$ dominates $M_\nu^{II}$. Then $\epsilon_{\Delta}$ will produce the baryon asymmetry and again the limits on light neutrino masses do not apply. The same is true when $M_1 \gg M_{\Delta_L}$ and $M_L$ is the main contribution to $M_\nu^{II}$. A smaller range of allowed parameters is expected in this case [@Hambye:2003ka]. Therefore, given the fact that quasi-degenerate light neutrinos are hard to reconcile with standard thermal leptogenesis in type I seesaw models, our discussion implies that if we learn from future experiments that neutrinos are indeed quasi-degenerate, triplet-induced leptogenesis represents a valid alternative (this was first noted in [@Chun:2000dr]).
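To get a feel for the numbers in the quasi-degenerate case, the weakening factor $2m_0^2/\Delta m^2_{\rm A}$ can be evaluated with representative values (toy inputs chosen by us, not taken from [@anki]):

```python
# Weakening factor 2 m_0^2 / Delta m^2_A of the type II decay-asymmetry bound
# for quasi-degenerate light neutrinos (illustrative numbers only).
m0 = 0.2           # common neutrino mass scale in eV (toy value)
dm2_atm = 2.5e-3   # atmospheric splitting in eV^2 (typical fit value)

factor = 2 * m0**2 / dm2_atm
print(f"type II bound weaker by a factor ~ {factor:.0f}")
```

So for $m_0 \sim 0.2$ eV the type II bound is relaxed by more than an order of magnitude relative to the type I case.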
In inflationary scenarios it is also possible that the decay of the inflaton into light particles, together with the interference of one-loop diagrams with exchanged $SU(2)$ triplets and heavy Majorana neutrinos, generates a lepton asymmetry. Various slepton decays in future colliders are expected to be observable [@Dent:2003dn].\ So far the discussion has been restricted to the presence of only one triplet. If only one triplet is present, right-handed Majorana neutrinos are necessary to produce a decay asymmetry. Introducing two or more triplets allows self-energy diagrams which can produce a decay asymmetry without heavy Majorana neutrinos [@Ma:1998dx]. The triplet mass scale can be lowered via a resonance effect among triplets close in mass [@Hambye:2000ui], giving the prospect of collider phenomenology. The presence of light and detectable Majorons is also possible. There are models implementing this kind of triplet self-energy scenario with light left-handed neutrinos [@Hambye:2000ui] and with quasi-degenerate ones [@Lazarides:1998jt]. The latter also predicts a stable proton due to $R$ parity conservation. Introducing the triplet induced neutrino mass matrix of the type II seesaw mechanism along the lines of [@Mohapatra:1999zr], i.e., by a conjunction of flavor and permutation symmetries, will typically require many additional Higgs fields. Rich phenomenology in the form of rare charged lepton decays or charged lepton EDMs will be among the interesting consequences.\ To sum up, the type II seesaw mechanism provides the most general, though more complicated, framework for neutrino mass and leptogenesis. Nevertheless, richer phenomenology is expected, most of which remains to be explored.
Dirac Leptogenesis ------------------ Since the seesaw mechanism coupled with existing data on neutrino masses and mixings does not give complete information about the RH neutrino sector, one must consider leptogenesis within various scenarios for RH neutrino masses that correctly explain neutrino observations. In this section, we discuss the possibility that leptogenesis occurs with Dirac neutrinos. The conventional leptogenesis [@FY], which we call Majorana leptogenesis for definiteness, was based on the fact that the standard model violates $B+L$ [@'tHooft:up], while the Majorana neutrinos violate $L$, and hence both $B$ and $L$ are violated. Therefore it is possible to create $L$ from the decay of a right-handed neutrino, which is subsequently converted to $B$ [@Kuzmin:1985mm]. On the other hand, Dirac neutrinos conserve $L$ and hence $B-L$ is an exact symmetry. Therefore $B-L$ remains vanishing throughout the evolution of the universe and it appears impossible to generate a non-vanishing baryon asymmetry[^11]. Dirac leptogenesis overcomes this problem by the following simple observation [@Dick:1999je]. Recall that the Dirac neutrinos have tiny Yukawa couplings, $M_\nu^D = Y_\nu v_{\rm wk}$, $Y_\nu \simeq 10^{-13}$. If this is the only interaction of the right-handed neutrinos, thermalization is possible only by processes like $N L \rightarrow H W$ and they do not thermalize for $T \gs g^2 Y_\nu^2 M_{\rm Pl} \sim 10$ eV.[^12] At this low temperature, obviously both $H$ and $W$ cannot be produced and the thermalization is further delayed until $T_\nu \simeq M_\nu$ when neutrinos become non-relativistic. Therefore the numbers of left-handed and right-handed neutrinos are separately conserved practically until today. We call them $L$ and $N$, respectively, and the total lepton number is $L+N$. The combination $L+N-B$ is strictly conserved. Suppose the decay of a heavy particle produced an asymmetry $L = -N \neq 0$. The overall lepton number is conserved (see Fig.
\[fig:Dirac\]). $N$ is frozen down to $T_\nu$. On the other hand, the lepton asymmetry $L$ is partially converted to the baryon asymmetry via the Standard Model anomaly. After the electroweak phase transition $T \ls 250$ GeV, the anomaly is no longer effective. Finally at $T_\nu$, $L$ and $N$ equilibrate. In the end there is a baryon asymmetry $B = -(L+N)$. ![The evolution of the lepton asymmetry in Dirac leptogenesis models. At the first stage, an asymmetry between the ordinary leptons and the right-handed neutrinos is created without lepton-number violation. Then the asymmetry in the ordinary leptons is partially converted to the baryon asymmetry. Finally, the right-handed neutrinos come in thermal equilibrium with the other leptons. The net baryon and lepton asymmetries remain while the overall $B-L$ vanishes.[]{data-label="fig:Dirac"}](NLB1 "fig:"){width="30.00000%"} ![The evolution of the lepton asymmetry in Dirac leptogenesis models. At the first stage, an asymmetry between the ordinary leptons and the right-handed neutrinos is created without lepton-number violation. Then the asymmetry in the ordinary leptons is partially converted to the baryon asymmetry. Finally, the right-handed neutrinos come in thermal equilibrium with the other leptons. The net baryon and lepton asymmetries remain while the overall $B-L$ vanishes.[]{data-label="fig:Dirac"}](NLB2 "fig:"){width="30.00000%"} ![The evolution of the lepton asymmetry in Dirac leptogenesis models. At the first stage, an asymmetry between the ordinary leptons and the right-handed neutrinos is created without lepton-number violation. Then the asymmetry in the ordinary leptons is partially converted to the baryon asymmetry. Finally, the right-handed neutrinos come in thermal equilibrium with the other leptons. 
The net baryon and lepton asymmetries remain while the overall $B-L$ vanishes.[]{data-label="fig:Dirac"}](NLB3 "fig:"){width="30.00000%"} The original paper [@Dick:1999je] introduced a new electroweak doublet scalar $\Phi$ that has the same quantum numbers as the Higgs doublets and Yukawa couplings $\Phi L N$ and $\Phi^* L E$. If there are two sets of them, there is CP violation and their decays can create the asymmetry $L = -N \neq 0$. However, these doublets are there just for this purpose and have no other motivation. On the other hand, light Dirac neutrinos are natural in models where the neutrino Yukawa couplings are tied to the small supersymmetry breaking effects [@Arkani-Hamed:2000bq; @Borzumati:2000mc]. The Dirac neutrino mass is due to the effective operator $$\label{eq:2} \int d^2 \theta \frac{\chi}{M} L H_u N,$$ where $\chi$ is the hidden sector field which acquires a vacuum expectation value $\langle \chi \rangle \simeq m_{3/2} + \theta^2 m_{3/2} M_{\rm Pl}$ and $M$ is the heavy mass scale. The neutrino Yukawa coupling is $Y_\nu \simeq m_{3/2}/M$, and is naturally small. The operator can be obtained by integrating out (two sets of) new doublets $\phi+\phi^c$ that couple as $W = \phi N H_u + \phi^c L \chi + M \phi \phi^c$. The asymmetries $L = -N \neq 0$ are created by the decay of $\phi$ [@Murayama:2002je]. Then the origin of the small neutrino mass and the origin of the lepton asymmetry are tied together in the same way as in Majorana leptogenesis. Also concerning the gravitino problem (cf. Sec. \[sec:GravitinoProblem\]), Dirac and Majorana leptogenesis are on the same footing [@Murayama:2002je; @Boz:2004ga]. Such a scenario may be supported by the lack of neutrinoless double beta decay as well as the existence of right-handed sneutrinos at the LHC and a Linear Collider. Resonant Leptogenesis {#Resonant} --------------------- The right handed neutrino sector of generic seesaw models is almost entirely unconstrained by existing data on neutrino masses and mixings.
It is therefore necessary to consider all the various possibilities for the structure of the RH neutrino sector which are compatible with current experimental data. In this section, we consider the case where two or more right-handed neutrinos are nearly degenerate in mass. An important, further motivation for this possibility comes from the severe limits on the right handed neutrino sector that exist in models of thermal leptogenesis with hierarchical right handed neutrinos. In particular there exists a bound on the mass of the lightest right handed neutrino, $M_{R1} \stackrel{<}{{}_\sim} T_{RH}$, discussed in section 4.2.1. In supersymmetric theories with unstable gravitinos, this bound can be in conflict with the bound $T_{RH} \stackrel{<}{{}_\sim} 10^9$ GeV, coming from nucleosynthesis considerations (see 4.2.6). This motivates us to consider scenarios where the scale of the right handed neutrino masses can be lowered whilst still being compatible with thermal leptogenesis [@AP; @APIJMPA; @boubekeur]. This may be achieved naturally in scenarios with nearly degenerate right handed neutrinos [@AP], in complete accordance with current neutrino data, and with the advantage that the final baryon asymmetry generated is independent of the initial lepton, baryon or heavy neutrino abundances [@APTU]. If the mass difference between two heavy Majorana neutrinos happens to be much smaller than their masses, the self-energy ($\epsilon$-type) contribution to the leptonic asymmetry becomes larger than the corresponding ($\epsilon'$-type) contribution from vertex effects [@FPSCRV; @AP]. Resonant leptogenesis can occur when this mass difference of two heavy Majorana neutrinos is of the order of their decay widths, in which case the leptonic asymmetry could be even of order one [@AP; @APTU]. 
As a result, one can maintain the RH neutrino masses around the GUT scale [@afsmirnov] or one can contemplate the possibility that the heavy neutrino mass scale relevant to thermal leptogenesis is significantly lower, for example in the TeV range [@AP]. This of course requires a different realization of the seesaw mechanism [@valle2] but it can be in complete accordance with the current neutrino data [@APTU]. The magnitude of the $\epsilon$-type CP violation occurring in the decay of a heavy Majorana neutrino $N_i$ is given by [@AP], $$\epsilon_{N_i} = \frac{\mathrm{Im} (Y_\nu^\dagger\,Y_{\nu})^2_{ij}}{(Y_\nu^\dagger\,Y_{\nu})_{ii} (Y_\nu^\dagger\,Y_{\nu})_{jj}}\,\frac{(m_{N_i}^2-m_{N_j}^2) m_{N_i} \Gamma^{(0)}_{N_{j}}}{(m_{N_i}^2-m_{N_j}^2)^2 + m_{N_i}^2 \Gamma^{(0)\,2}_{N_{j}}}\,, \label{eps}$$ where $\Gamma^{(0)}_{N_i}$ is the tree level total decay width of $N_i$. It is apparent that the CP asymmetry will be enhanced, possibly to $\epsilon \sim 1$, provided $$\begin{aligned} m_{N_2}-m_{N_1} \sim \frac{1}{2}\,\Gamma^{(0)}_{N_{1,2}}\,,\nonumber\\ \frac{\mathrm{Im} (Y_\nu^\dagger\,Y_{\nu})^2_{ij}}{(Y_\nu^\dagger\,Y_{\nu})_{ii} (Y_\nu^\dagger\,Y_{\nu})_{jj}} \sim 1\,. \label{cond}\end{aligned}$$ It is important to note that Eq. (\[eps\]) is only valid for the mixing of two heavy Majorana neutrinos. Its generalization to the three neutrino mixing case is more involved and is given in [@APTU]. ![Numerical estimates of the lepton to photon, and neutrino to photon ratios, $\eta_L$ and $\eta_{N_{1,2}}$ as functions of $z=m_{N_1}/T$ for scenarios with $m_{N_1}=1\,\mathrm{TeV}$. The model is based on the Froggatt-Nielsen mechanism and is completely consistent with all current neutrino data (see [@APTU] for details). It naturally provides a degeneracy of $\frac{m_{N_2}}{m_{N_1}} - 1 = 9.2 \times 10^{-11}$, with $K$-factors and CP asymmetries of $K_1 = K_2 = 6570$ and $\delta_{N_1} = \delta_{N_2} = -0.003$. $\xi$ is a free parameter. 
The horizontal dotted line shows the value of $\eta_L$ needed to produce the observed $\eta_B$. The vertical dotted line corresponds to $T = T_c = 200\,\mathrm{GeV}$.[]{data-label="largeK"}](largeK.eps) Successful leptogenesis requires conditions out of thermal equilibrium. To quantify this, we introduce the parameter $K^l_{N_i} = \Gamma^l_{N_i} / H(T=m_{N_i})$, where $H(T)$ is the Hubble parameter and $\Gamma^l_{N_i}$ is the total decay width of $N_i$ into a lepton species $l$ ($l=e,\mu,\tau$). In typical hierarchical leptogenesis scenarios $K_{N_i}^l$ is small, usually $K_{N_i}^l \sim 1$. This constraint can be translated directly into an upper bound on the Yukawa couplings of the neutrinos, which can be expressed in terms of effective light neutrino masses, $\widetilde{m}_i$, $$\label{meff_bound} \widetilde{m}_i\ \equiv\ \frac{v^2\, (Y_\nu^\dagger Y_\nu)_{ii}}{2\, m_{N_i}}\ \simeq\ 10^{-3}\, K_{N_i}~{\rm eV}\,,$$ where $K_{N_i} = \sum_l K^l_{N_i}$. However, resonant leptogenesis can be successful with values of $K_{N_i}$ larger than $1000$ [@APTU] (see Figure \[largeK\]). This has implications for leptogenesis bounds on the absolute mass scale of the light neutrinos. If a large, $\stackrel{>}{{}_\sim}0.2\,\mathrm{eV}$, Majorana mass were seen in neutrinoless double beta decay, this could be naturally accommodated with resonant leptogenesis, whereas thermal leptogenesis models based on hierarchical heavy neutrinos would be strongly disfavored, as they naturally require smaller values of $K_{N_i}$ [@BBP; @Buchmuller:2002rq]. Conditions close to thermal equilibrium (with large $K_{N_i}$) endow resonant leptogenesis models with another particularly attractive feature: the final baryon asymmetry generated is almost independent of the initial baryon, lepton or heavy neutrino abundances [@APTU; @APTU2].
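The resonant enhancement encoded in Eq. (\[eps\]) is easy to exhibit numerically. The sketch below implements the two-neutrino formula directly; the Yukawa couplings, masses and the tree-level width $\Gamma^{(0)}_{N_i} = (Y_\nu^\dagger Y_\nu)_{ii}\, m_{N_i}/(8\pi)$ are toy choices made by hand to sit near the resonance condition of Eq. (\[cond\]), not a fit to any model:

```python
import numpy as np

def epsilon_N(i, j, Y, m_N, Gamma):
    """Self-energy CP asymmetry in N_i decay from mixing with N_j, Eq. (eps)."""
    h = Y.conj().T @ Y  # (Y_nu^dagger Y_nu)
    num = np.imag(h[i, j]**2) * (m_N[i]**2 - m_N[j]**2) * m_N[i] * Gamma[j]
    den = (h[i, i].real * h[j, j].real
           * ((m_N[i]**2 - m_N[j]**2)**2 + m_N[i]**2 * Gamma[j]**2))
    return num / den

# toy Yukawas (3 flavours x 2 heavy neutrinos) with an O(1) relative phase
y = 1e-6
Y = np.array([[y, y * np.exp(0.5j)]] * 3)

m1 = 1e3                                  # m_N1 = 1 TeV (toy)
Gamma1 = 3 * y**2 * m1 / (8 * np.pi)      # tree-level width, h_ii m / (8 pi)
m_N = np.array([m1, m1 + 0.5 * Gamma1])   # splitting ~ Gamma/2: on resonance
Gamma = np.array([Gamma1, Gamma1])

eps1 = epsilon_N(0, 1, Y, m_N, Gamma)
print(f"epsilon_1 = {eps1:+.2f}")  # sizeable at resonance
```

With the mass splitting set to half the width, the resonance factor reaches its maximum and $|\epsilon_1|$ comes out of order one, illustrating the statement above; away from the resonance the same code gives a strongly suppressed asymmetry.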
For $K_{N_i} \stackrel{>}{{}_\sim} 1$, and under the assumption that the neutrino Yukawa couplings are the same for each lepton flavour, an order of magnitude estimate for the baryon to photon ratio may be obtained by [@APTU] $$\label{etaBapprox} \eta^{\,\mathrm{univ.}}_B\ \sim\ -\, 10^{-2}\,\times\, \sum_{N_{i}}\: e^{-(m_{N_{i}} - m_{N_1})/m_{N_1}}\,\frac{\delta_{N_{i}}}{K}\,,$$ where $\delta_{N_i}$ is the leptonic CP asymmetry in the decay of $N_i$, $K = \sum_l K_l$ and $$K_l \ =\ \sum_{N_{i}}\, e^{-(m_{N_{i}} - m_{N_1})/m_{N_1}}\, K^{l}_{N_{i}}\,.$$ It is apparent that if the CP-asymmetry is enhanced, for example through resonant effects, then $K$ can be increased without an impact on the final baryon asymmetry. In resonant leptogenesis scenarios with neutrino Yukawa couplings that are not universal for each lepton flavour, the effects of individual lepton flavours on the resultant baryon asymmetry may become very important [@APtau; @APTU2]. This also applies to scenarios with mildly hierarchical RH neutrinos [@APTU2]. These effects may result in an enhancement of the baryon asymmetry predicted using (\[etaBapprox\]) by a factor as large as $10^6$ in some models of resonant leptogenesis. An order of magnitude estimate for the final baryon asymmetry, when the neutrino Yukawa couplings are not flavour-universal, may be obtained by [@APTU2] $$\eta_B\ \sim\ -\, 10^{-2}\,\times\, \sum_{l=1}^3\, \sum_{N_{i}}\: e^{-(m_{N_{i}} - m_{N_1})/m_{N_1}}\, \delta^l_{N_{i}}\: \frac{K^{l}_{N_{i}}}{K_l\,K_{N_{i}}}\ ,$$ where $\delta_{N_i}^l$ is the CP asymmetry in the decay of $N_i$ to leptons of flavour $l$. For a more precise computation of the resultant baryon asymmetry, a network of Boltzmann equations must be solved, one for each heavy Majorana neutrino species and one for each lepton flavour [@APTU2]. 
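For a quick feel of the flavour-universal estimate of Eq. (\[etaBapprox\]), one can plug in the numbers quoted in the caption of Figure \[largeK\]. This is only an order-of-magnitude sketch; the precise result requires the Boltzmann network described below:

```python
import math

# Inputs as quoted in the caption of Fig. [largeK]; the mass splitting is
# negligible for the exponential weights.
m_N   = [1.0, 1.0 + 9.2e-11]   # in units of m_N1
delta = [-0.003, -0.003]       # CP asymmetries delta_N1 = delta_N2
K_N   = [6570.0, 6570.0]       # K-factors K_1 = K_2

w = [math.exp(-(m - m_N[0]) / m_N[0]) for m in m_N]  # Boltzmann-like weights
K_tot = sum(wi * Ki for wi, Ki in zip(w, K_N))       # equals sum_l K_l here,
                                                     # by flavour universality
eta_B = -1e-2 * sum(wi * di for wi, di in zip(w, delta)) / K_tot
print(f"eta_B ~ {eta_B:.1e}")
```

The result lies within an order of magnitude of the observed $\eta_B \sim 6\times 10^{-10}$, which is all Eq. (\[etaBapprox\]) is meant to provide.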
By adding a further Boltzmann equation for the baryon abundance, and including effects due to the rate of $B+L$ violating sphaleron transitions, it can be shown that successful resonant leptogenesis is possible with heavy Majorana neutrinos as light as the electroweak scale [@APTU2]. In particular, models of resonant $\tau$-leptogenesis [@APtau], where a lepton asymmetry is generated predominantly in the $\tau$-family, can allow large Yukawa couplings between the $e$ and $\mu$ lepton families and some of the RH neutrinos. These couplings, in conjunction with the low RH neutrino scale, lead to a significant amount of accessible phenomenology, such as potentially observable neutrinoless double beta decay, $\mu \to e\gamma$, $\mu \to eee$ and coherent $\mu \to e$ conversion in nuclei, and the possibility of the collider production of heavy Majorana neutrinos [@APtau; @APTU2]. The conditions for resonant leptogenesis can be met in several ways. Models based on the Froggatt-Nielsen mechanism can naturally provide nearly degenerate heavy Majorana neutrinos satisfying Eq. (\[cond\]) and provide a light neutrino spectrum fulfilling all experimental constraints. It can be shown that a model like this can produce the observed baryon asymmetry by solving the network of Boltzmann equations – including gauge mediated scattering effects [@APTU] (see Figure \[largeK\]). In this model the ‘heavy’ Majorana neutrinos can be as light as 1 TeV. $SO(10)$ models with a “type III seesaw mechanism” naturally predict pairs of nearly degenerate heavy Majorana neutrinos suitable for resonant leptogenesis [@AB]. In addition, a model of neutrino mass from SUSY breaking has been shown to naturally lead to conditions suitable for resonant leptogenesis [@THJMRSW]. In the radiative leptogenesis mechanism [@radlepto], small mass differences arise through renormalization group corrections between RH neutrinos which are exactly degenerate in mass at some high scale. 
The leptonic CP asymmetry induced in this scenario is sufficient to produce the observed baryon asymmetry. In soft leptogenesis [@DGRGKNR; @ejchun], soft SUSY breaking terms lead to small mass differences between sneutrinos. Resonant effects allow sneutrino decay to generate the required CP asymmetry. Several other mechanisms for leptogenesis where the right handed neutrinos can be at a TeV scale have been suggested [@boubekeur]. Clearly for the seesaw mechanism to operate in such models, the Dirac mass must be constrained, e.g., by a leptonic global U(1) symmetry [@babumoh]. The Heavy Majorana Mass Matrix ============================== General Considerations ---------------------- We have already seen from the Introduction that the type I seesaw mechanism requires the existence of right-handed neutrinos $N_R$, and then the light Majorana mass matrix is given as $$\begin{aligned} M_{\nu}=-M^D_{\nu}M_R^{-1}{M^D_{\nu}}^T \label{seesawI}\end{aligned}$$ where $M^D_{\nu}$ is the Dirac neutrino matrix (to be thought of as perhaps similar to the quark and charged lepton mass matrices) and $M_R$ is the heavy Majorana mass matrix. While the elements of $M^D_{\nu}$ must be at or below the electroweak scale, the characteristic scale of right-handed neutrino masses can and must be much higher. Having introduced right-handed neutrinos into the Standard Model for the purpose of accounting for light physical neutrino masses via the type I seesaw mechanism[^13], it is clearly an important question to understand the mass spectrum and couplings of the right-handed neutrinos. Since their only couplings are their Yukawa couplings to Higgs and left-handed neutrino fields, it will clearly not be an easy task to answer this question. However, there are three areas where important clues may emerge: the light-neutrino mass matrix $M_{\nu}$; the baryon asymmetry of the universe; and (assuming supersymmetry) lepton flavor violation.
Taken together with other theoretical ideas, we shall show that it may be possible to shed light on the right-handed neutrino sector of the theory. ### The Three Right-Handed Neutrino Paradigm It is most common to assume that there are exactly three right-handed neutrinos. Such an assumption is motivated by unified theories such as $SO(10)$, which predicts that the number of right-handed neutrinos is equal to the number of quark and lepton families, since a single right-handed neutrino makes up each 16-plet of the theory. In fact this prediction also follows more generally from any theory which contains a gauged right-handed group $SU(2)_R$, such as left-right symmetric theories, Pati-Salam and so on. Assuming three right-handed neutrinos one can ask whether their mass spectrum is hierarchical, or contains an approximate two or three-fold degeneracy. From the point of view of the type I seesaw mechanism in Eq. (\[seesawI\]) it is clear that the answer to this question depends on the nature of the Dirac neutrino mass matrix $M^D_{\nu}$. For example, suppose that the right-handed neutrinos had a three-fold degeneracy $M_R={\rm diag}(M,M,M)$; then Eq. (\[seesawI\]) would predict $$\begin{aligned} M_{\nu}=-\frac{M^D_{\nu}{M^D_{\nu}}^T}{M}. \label{seesaw2}\end{aligned}$$ Then if the Dirac neutrino mass matrix were hierarchical and approximately proportional to the up-type quark mass matrix, for example, Eq. (\[seesaw2\]) would imply $$\begin{aligned} m_1:m_2:m_3\approx m_u^2:m_c^2:m_t^2 \label{naive}\end{aligned}$$ which is much too strong a mass hierarchy compared to the rather mild experimentally measured ratio $0.1\leq m_2/m_3\leq 1$. The remaining two possibilities are that the three right-handed neutrinos are either hierarchical or contain an approximate two-fold degeneracy. In either case it is convenient to work in a basis where their mass matrix is diagonal: $$M_R= \left(\begin{array}{ccc}X' & 0 & 0 \\ 0 & X & 0 \\ 0 & 0 & Y\end{array}\right) .
\label{MR1}$$ The neutrino Dirac mass matrix $M^D_{\nu}$ in this basis can be written as $$M^D_{\nu}= \left(\begin{array}{ccc}a' & a & d \\ b' & b & e \\ c' & c & f\end{array}\right) \label{MD}$$ where in this convention the first column of Eq. (\[MD\]) couples to the first right-handed neutrino, the second column of Eq. (\[MD\]) couples to the second right-handed neutrino and so on. Note that in the hierarchical case in Eq. (\[MR1\]) we do not specify which of the three right-handed neutrinos $X',X,Y$ is the lightest one, which is the intermediate one and which is the heaviest one, since the columns of $M^D_{\nu}$ and the eigenvalues $X',X,Y$ of $M_R$ may simultaneously be re-ordered without changing $M_{\nu}$. Having displayed the unknown Yukawa couplings associated with the Dirac neutrino mass matrix in Eq. (\[MD\]) it is clear that without further input it is not possible to say anything about the right-handed neutrino masses or couplings from the experimentally determined light Majorana mass matrix $M_{\nu}$. On the other hand, rather natural theoretical assumptions can lead to a great deal of information about the unknown masses and couplings of the right-handed neutrinos, as we now discuss. Regarding the implementation of the type I seesaw mechanism there seem to be two possible options: either all the right-handed neutrinos contribute equally (democratically) to each element of $M_{\nu}$, or some right-handed neutrinos contribute more strongly than others. In the second case, called right-handed neutrino dominance [@King:2002nf], a rather natural implementation of the seesaw mechanism is possible. 
For example if the right-handed neutrino of mass $Y$ contributes dominantly to the physical neutrino mass $m_3$, and the right-handed neutrino of mass $X$ contributes dominantly to the physical neutrino mass $m_2$, while the right-handed neutrino of mass $X'$ contributes dominantly to the physical neutrino mass $m_1$, then a sequential dominance of these three contributions leads to a neutrino mass hierarchy $m_1\ll m_2\ll m_3$. With such sequential dominance the mixing angles are then given as simple ratios of Dirac neutrino mass matrix elements: $\tan \theta_{23} \approx e/f$, $\tan \theta_{12} \approx \sqrt{2}a/(b-c)$, which can be naturally large independently of the neutrino mass hierarchy. The physical neutrino masses are given by $m_3\approx (e^2+f^2)/Y$, $m_2\approx 4a^2/X$, $m_1\approx (a',b',c')^2/X'$ and the mass ordering of the right-handed neutrino masses $X',X,Y$ is not determined unless further information is specified about the Dirac neutrino masses. In general there are six possible mass orderings of the three right-handed neutrinos: $$\begin{aligned} Y<X<X' \label{1}\\ Y<X'<X \label{2}\\ X<Y<X' \label{3}\\ X'<Y<X \label{4}\\ X'<X<Y \label{5}\\ X<X'<Y \label{6}\end{aligned}$$ The dominant right-handed neutrino of mass $Y$ (the one mainly responsible for the mass $m_3$) may thus be the lightest one as in Eqs. (\[1\], \[2\]), the intermediate one as in Eqs. (\[3\], \[4\]), or the heaviest one as in Eqs. (\[5\], \[6\]). The neutrino of mass $X'$ is essentially irrelevant from the point of view of the light Majorana mass matrix $M_{\nu}$, since the lightest physical neutrino of mass $m_1$ is approximately zero in the hierarchical case. Thus $X'$ cannot be determined from any low energy experiments such as neutrino oscillations or neutrinoless double beta decay. If $X'$ happens to be the heaviest right-handed neutrino, as in Eqs. (\[1\], \[3\]) then if its mass is above the GUT scale then it completely decouples from observable physics. 
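The sequential dominance estimates above are easy to check numerically. The sketch below uses illustrative (assumed) Dirac entries and heavy masses, with the first column of $M^D_{\nu}$ set to zero so that the right-handed neutrino of mass $X'$ is decoupled, mirroring the case just described:

```python
import numpy as np

# Sequential dominance check with assumed values (not a fit to data).
# Columns of M_D couple to (X', X, Y) as in Eq. (MD); d = 0 keeps theta13 ~ 0.
a, b, c = 1.0, 1.0, -1.0        # GeV: subdominant column, couples to X
d, e, f = 0.0, 100.0, 100.0     # GeV: dominant column, couples to Y
X, Y = 4.4e11, 4.0e14           # GeV: heavy Majorana masses

MD = np.array([[0.0, a, d],
               [0.0, b, e],
               [0.0, c, f]])
Mnu = -MD @ np.diag([0.0, 1.0/X, 1.0/Y]) @ MD.T   # type I seesaw, X' decoupled

w, V = np.linalg.eigh(Mnu)
m = np.sort(np.abs(w)) * 1e9                      # light masses in eV
v3 = V[:, np.argmax(np.abs(w))]                   # heaviest mass eigenstate

print(m)                         # m1 ~ 0, m2 ~ 0.007 eV, m3 ~ 0.05 eV
print((e**2 + f**2) / Y * 1e9)   # the estimate m3 ~ (e^2+f^2)/Y: 0.05 eV
print(abs(v3[1] / v3[2]))        # tan(theta23) ~ e/f = 1, i.e. near-maximal
```

With these inputs the full diagonalization reproduces $m_3\approx (e^2+f^2)/Y = 0.05$ eV and $\tan\theta_{23}\approx e/f = 1$, while $m_1$ vanishes because the $X'$ column is switched off.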
In this case the three right-handed neutrino model becomes effectively a two right-handed neutrino model [@King:2002nf]. However, even in this case there is a remaining ambiguity about whether the dominant right-handed neutrino is the lightest or the next-to-lightest one, as in Eqs. (\[1\], \[3\]).

### Grand Unification

It is clear that further theoretical input is required in order to elucidate the nature of the masses and couplings of the right-handed neutrinos. In a generic GUT model one expects that the Dirac neutrino masses are related to the other quark and charged lepton masses, and this additional information about $M^D_{\nu}$ can then be input into the type I seesaw formula, Eq. (\[seesawI\]), to yield information about the right-handed neutrino mass matrix $M_R$. For example, assuming that $M^D_{\nu} \approx M^u$, the up-type quark mass matrix, and inputting the approximately determined light Majorana mass matrix, the seesaw formula can then be rearranged to yield right-handed neutrino masses with a very hierarchical mass spectrum [@afsmirnov]: $$\begin{aligned} M_1:M_2:M_3\approx m_u^2:m_c^2:m_t^2~, \label{naive2}\end{aligned}$$ which can be compared to the naive expectation for the physical neutrino masses in Eq. (\[naive\]). Numerically, Eq. (\[naive2\]) yields the order of magnitude estimates $M_1\sim 10^5$ GeV, $M_2\sim 10^{10}$ GeV, $M_3\sim 10^{15}$ GeV, with an uncertainty of at least one or two orders of magnitude in each case. In addition there may be special cases which completely invalidate these estimates. In specific GUT models the above expectations can also be very badly violated.
For example, in the $SO(10)$ model with $SU(3)$ family symmetry [@King:2001uz], although the neutrino Dirac mass matrix is strikingly similar to the up-type quark mass matrix, a very different pattern of right-handed neutrino masses emerges: $$\begin{aligned} M_1:M_2:M_3\approx \epsilon^6\bar{\epsilon}^3: \epsilon^6\bar{\epsilon}^2:1 \label{naive3}\end{aligned}$$ where $\epsilon\approx 0.05$, $\bar{\epsilon}\approx 0.15$. In this model the dominant right-handed neutrino is the lightest one, $Y=M_1$, with $X=M_2$, while the heaviest right-handed neutrino is decoupled, $X'=M_3$, as in Eq. (\[1\]). This model therefore acts effectively as a two right-handed neutrino model, with the two right-handed neutrinos being very similar in mass, with interesting implications for leptogenesis, to which we now turn.

### Leptogenesis

Leptogenesis and lepton flavor violation are important indicators which can help to resolve the ambiguity of right-handed neutrino masses in Eqs. (\[1\]-\[6\]). In the simplest case of two right-handed neutrino models, leptogenesis has been well studied with some interesting results [@min; @Frampton:2002qc; @King:2002qh]. In general, successful thermal leptogenesis for such models requires the mass of the lightest right-handed neutrino to be quite high, generally in excess of the gravitino constraints that apply if supersymmetry is assumed. Such a strong bound is also at odds with the strong right-handed neutrino mass hierarchy expected from GUTs as in Eqs. (\[naive2\], \[naive3\]). In unified theories with type II seesaw, this potential problem can be resolved (see e.g. [@Antusch:2005tu]). In three right-handed neutrino models with sequential dominance, if the dominant right-handed neutrino is the lightest one, then the washout parameter $\tilde{m_1}\sim {\cal{O}}(m_3)$ is rather too large compared to the optimal value of around $10^{-3}$ eV for thermal leptogenesis.
However, if the dominant right-handed neutrino is either the intermediate or the heaviest one, then one finds $\tilde{m_1}\sim {\cal{O}}(m_2)$ or arbitrary $\tilde{m_1}$, which can be closer to the desired value [@Hirsch:2001dg].

### Sneutrino Inflation

It has been suggested that a right-handed sneutrino, the superpartner of a right-handed neutrino, could be a candidate for the inflaton in theories of cosmological inflation. This has interesting consequences for the masses and couplings of the right-handed neutrinos. For example, in the case of chaotic sneutrino inflation the mass of the right-handed sneutrino inflaton must be about $10^{13}$ GeV [@Ellis:2003sq], while in sneutrino hybrid inflation its mass could be considerably lighter [@Antusch:2004hd]. In both scenarios, the decaying sneutrino inflaton can be responsible for non-thermal leptogenesis, and can give a reheat temperature compatible with gravitino constraints provided its Yukawa couplings are sufficiently small. This typically implies that the associated right-handed neutrino must be effectively decoupled from the seesaw mechanism, so that it corresponds to the decoupled right-handed neutrino of mass $X'$ in sequential dominance discussed above.

### Type II Seesaw Models

Once the more general type II seesaw framework is permitted [@seesaw2], it apparently becomes more problematic to determine the properties of the right-handed neutrinos which contribute to $M_\nu$ via the type I part of the seesaw mechanism. On the other hand, the type II seesaw mechanism provides the most direct way of raising the neutrino mass scale to a level that will be observable in neutrinoless double beta decay.
Furthermore, the difficulties of reconciling leptogenesis scenarios with the gravitino bound in supersymmetric theories, or simply with the strong right-handed neutrino mass hierarchy expected from GUT models, motivate a more general type II seesaw framework which can in principle help resolve some of these difficulties. It has recently been shown [@Antusch:2004re; @Antusch:2004xd] how to construct natural models for partially degenerate neutrinos by using an $SO(3)$ family symmetry to add a type II contribution to the light neutrino Majorana mass matrix proportional to the unit matrix, with large neutrino mixing originating from sequential dominance. Compared to the pure type I limit, the masses of the right-handed neutrinos become larger if the mass scale of the light neutrinos is increased via the type II contribution. This can also help to resolve the potential conflict between the typical predictions for $M_1$, as in Eqs. (\[naive2\], \[naive3\]), and thermal leptogenesis [@Antusch:2005tu]. In addition, increasing the neutrino mass scale has interesting phenomenological consequences, such as a decreasing CP violating phase $\delta$ and a decreasing mixing angle $\theta_{13}$, testable in future experiments.

### Right-Handed Neutrinos in Extended Technicolor

In the mechanism that has been constructed for producing light neutrinos in extended Technicolor [@nt; @lrs; @nuf03; @ckm], there are two right-handed neutrinos. Interestingly, this mechanism involves a seesaw, but one in which the relevant Dirac neutrino mass matrix elements are greatly suppressed, down to the level of a few keV, and the Majorana masses are also suppressed, of order 100 MeV to 1 GeV. This suppression stems from the fact that the left- and right-handed chiral components of the neutrinos transform differently under the ETC gauge group. Although the mechanism does involve a seesaw, it does not involve any GUT-scale masses.
This is clear, since extended Technicolor models do not contain any such mass scales. It serves as an existence proof that a seesaw mechanism can work with much lower Dirac and Majorana mass scales than the usual GUT-scale seesaw.

Seesaw Neutrino mass and Grand unification
------------------------------------------

One of the major ideas for physics beyond the Standard Model is supersymmetric grand unification (SUSY GUT) [@raby]. It is stimulated by a number of observations that are in accord with the general expectations from SUSY GUTs: (i) a solution to the gauge hierarchy problem, i.e., why $v_{\rm wk}\ll M_{\rm Pl}$; (ii) unification of the electroweak, i.e., $SU(2)_L\times U(1)_Y$, and strong $SU(3)_c$ gauge couplings, assuming supersymmetry breaking masses in the TeV range, as would be required by the solution to the gauge hierarchy problem; (iii) a natural way to understand the origin of electroweak symmetry breaking. Supersymmetric grand unified theories generically predict proton decay via dimension-five operators, which typically give large branching ratios for modes like $p \to \bar\nu_\mu K^+$. The current lower limits on proton decay modes place significant constraints on these theories and probably rule out a number of simpler SUSY GUT models [@su5out]. Nevertheless, the idea of grand unification is so attractive that we will proceed on the basis that appropriate modifications allow supersymmetric GUTs to evade current nucleon decay limits. Gauge coupling unification leads to a unification scale of about $10^{16}$ GeV, and simple seesaw intuition leads to a seesaw scale of $10^{15}$ GeV to fit atmospheric neutrino data. This suggests that the seesaw scale could be the GUT scale itself; thus the smallness of neutrino mass could go quite well with the idea of supersymmetric grand unification.
However, in contrast with items (i) through (iii) listed above, the abundance of information on neutrinos makes it a highly nontrivial exercise to see whether the neutrino mixings indeed fit well into SUSY GUTs. In turn, the freedom in constructing realistic GUT models allows many different ways to explain current neutrino observations. Thus, even though neutrino mass is solid evidence for physics beyond the Standard Model, the true nature of this physics still remains obscure. The hope is that the next round of experiments will help to narrow the field of candidate theories a great deal. To see how this is likely to come about, the first point is the choice of the grand unification group. Even though attempts have been made to implement the seesaw mechanism using an extension of $SU(5)$ with the addition of right-handed neutrinos, a more natural GUT gauge group from the point of view of neutrino mass is $SO(10)$, since its basic spinor representation automatically contains the right-handed neutrino along with the other fifteen fermions of the Standard Model (for each family). Thus in some sense one could argue that small neutrino masses have already chosen $SO(10)$ GUT as the most natural way to proceed beyond the Standard Model. $SO(10)$ has therefore rightly been the focus of many attempts to understand neutrino mixings. It turns out that within $SO(10)$ SUSY GUTs there are many ways to understand large mixings. We outline below only the major differences among the different ideas. The hope is that they differ sufficiently in their predictions that they can be tested by planned experiments. One of the features that distinguishes $SO(10)$ from $SU(5)$ is the presence of local $B-L$ symmetry as a subgroup, and the $SO(10)$ models divide into two classes depending on whether $B-L$ symmetry is broken by a [**16**]{} Higgs field or a [**126**]{}.
In the first case the right-handed neutrino mass necessarily arises out of a nonrenormalizable coupling, whereas in the second case it arises from a renormalizable one. Secondly, the breaking of $B-L$ by a [**16**]{} Higgs necessarily leads to a low energy MSSM with R-parity breaking, so that the model cannot have cold dark matter without additional assumptions, whereas [**126**]{} breaking of $B-L$ preserves R-parity at low energies, so that the low energy MSSM that derives from such an $SO(10)$ has a natural dark matter candidate, i.e., the lightest SUSY particle. As noted in the Introduction, the $SO(10)$ model has in general the type II seesaw formula for neutrino masses, which can reduce to type I for some range of parameters. For instance, in the [**16**]{}-based models, the first term in the type II seesaw formula is negligible and therefore the neutrino masses are dictated by the type I seesaw formula. In contrast, in [**126**]{} Higgs models, the neutrino mass can be given by the first term, the second term, or both terms of the type II seesaw formula.

### A minimal [**126**]{}-based $\boldsymbol{SO(10)}$ model

As mentioned, in $SO(10)$ models where a [**126**]{} Higgs breaks $B-L$ symmetry, the right-handed neutrino masses can arise from renormalizable couplings. A minimal model of this type, based on a single [**10**]{} and a single [**126**]{} field, has a number of attractive features [@babu; @lee]. Since the [**126**]{} field also contributes to charged fermion masses through the MSSM doublets in it, this model unifies the flavor structure in the quark and the neutrino sector, thereby increasing the predictivity of the model in the neutrino sector. In fact, in the absence of CP violation, the model has only 12 free parameters, all of which are determined by the quark masses and mixings along with the charged lepton masses. As a result, all mixings and masses are predicted by the model up to an overall scale.
It has been shown that if one uses the type I seesaw formula, the model fails to reproduce the observed solar mixing angle and the solar mass difference squared, and is therefore ruled out [@fukuyama]. It has, however, been shown recently that if one uses the type II seesaw mechanism, with the first term dominating the neutrino mass matrix, the large mixings come out in a very natural manner due to $b$-$\tau$ mass convergence [@bajc]. In particular, an interesting prediction of this model is that $\theta_{13}\simeq 0.18$, making it quite accessible to the next generation of experiments, such as long-baseline, off-axis, and reactor experiments.

### [**16**]{}-based models

The main characteristic of the $SO(10)$ models where a [**16**]{} Higgs breaks $B-L$ is that right-handed neutrino masses arise from nonrenormalizable couplings in the superpotential, the implicit assumption being that there is a high-scale theory (perhaps string theory, or a renormalizable high-scale theory with heavier fields) that below the heavy scale leads to this version of $SO(10)$. This means that without additional symmetry restrictions, there are more parameters than physical inputs. Often in these models, symmetries that tend to explain quark mixings restrict the number of couplings somewhat, and one can make predictions in the neutrino sector. There exist several interesting examples of models of this kind [@so1016]. Several of these models tend to give values for $\theta_{13}$ which are much below the range that can be probed by the next generation of planned experiments. We give a very small sample of the different predictions for $\theta_{13}$ in models with both [**16**]{} and [**126**]{} in Table \[tab:PredT13\].
  Model                            $\theta_{13}$
  -------------------------------- ---------------
  [*[**126**]{}*]{} based models   
  Goh, Mohapatra, Ng               0.18
  Chen, Mahanthappa                0.15
  [*[**16**]{}*]{} based models    
  Babu, Pati, Wilczek              0.0005
  Albright, Barr                   0.014
  Ross, Velasco-Sevilla            0.07
  Blazek, Raby, Tobe               0.05

  : The table lists some typical predictions for $\theta_{13}$ in different $SO(10)$ models and shows how the next generation of experiments can narrow the field of possible $SO(10)$ unification models.[]{data-label="tab:PredT13"}

### Summary of what we can learn about $\boldsymbol{SO(10)}$

A review of different neutrino mass models based on $SO(10)$ can be found in Ref. [@Chen:2003zv]. From these models, we learn the following:

1. A very generic prediction of all $SO(10)$ models is that the neutrino mass hierarchy is normal, leading to $\Delta m^2_{23}\geq 0$. The basic reason for this is the quark-lepton symmetry inherent in the model, which tends to give the neutrino Dirac mass matrix a hierarchy similar to that of the quarks, and this, via the seesaw mechanism, implies a normal hierarchy for the neutrinos. This is a result that can be probed in long-baseline oscillation or neutrinoless double beta decay experiments;

2. The $SO(10)$ models make definite predictions about the mixing angle $\theta_{13}$, as given in Table \[tab:PredT13\], and often about the other mixing angles as well. The planned experiments will therefore considerably narrow the field of viable $SO(10)$ models through their measurement of, or upper limits on, these mixing parameters.

### \[sec:muchun\]Implications of Models with Spontaneous CP Violation

Relations between leptogenesis and CP violation in low energy processes generally do not exist, due to the presence of unknown mixing angles and phases in the heavy neutrino sector, as discussed in Sec. \[phases\]. In models with spontaneous CP violation, all Yukawa coupling constants are real.
CP violation occurs due to the presence of phases in the expectation values of the scalar fields, which break the gauge symmetry spontaneously. It has recently been shown [@Chen:2004ww] that in the minimal left-right $SU(2)_{L} \times SU(2)_{R}$ symmetric model [@moh] with spontaneous CP violation, there exist very pronounced relations between the CP violation in low energy processes, such as neutrino oscillation and neutrinoless double beta decay, and leptogenesis, which occurs at a very high energy scale. The minimal left-right symmetric model contains a bi-doublet and a pair of triplet Higgses. Using the gauge degrees of freedom, one can rotate away all but two of the phases present in the expectation values of the scalar fields. Thus there are only two intrinsic phases, the relative phase between the two vevs in the bi-doublet and that between the left- and right-handed triplets, to account for [*all*]{} CP violation in the quark sector and in the lepton sector. The relative phase between the two vevs in the bi-doublet is responsible for the CP violation observed in the quark sector, while CP violation in the lepton sector dominantly comes from the relative phase between the vevs of the two triplet Higgses. The bi-doublet phase appears in the lepton sector only at sub-leading order, due to the large hierarchy in the bi-doublet vevs required by a realistic quark sector. As a result, the relation between CP violation in the quark sector and CP violation in the lepton sector is rather weak. Due to the left-right parity, the RH and LH neutrino Majorana mass terms are proportional to each other, which further reduces the number of unknown parameters in the model. In this model, both leptogenesis and the leptonic Jarlskog invariant are proportional to the sine of the relative phase between the vevs of the two triplet Higgses.
Using the experimentally measured neutrino oscillation parameters as inputs, one finds that to obtain a sufficient lepton number asymmetry the leptonic Jarlskog invariant has to be larger than $\sim 10^{-5}$. As the type II seesaw mechanism is at work, the hierarchy in the heavy neutrino sector required to reproduce the observed neutrino oscillation parameters is very mild, leading to a heavier mass for the lightest RH neutrino compared to the case utilizing the type I seesaw mechanism. As a result, the requirement that the decay of the lightest RH neutrino be out of equilibrium in thermal leptogenesis can easily be satisfied. Similar attempts have been made to induce spontaneous CP violation from a single source. In one such attempt, the SM is extended by a singlet scalar field which develops a complex VEV that breaks CP symmetry [@Branco:2003rt]. Another attempt assumes that there is one complex VEV of the field which breaks the $B-L$ symmetry in $SO(10)$ [@Achiman:2004qf]. Unlike in the minimal left-right symmetric model described above [@Chen:2004ww], there is no compelling reason why all other vevs have to be real in these models.

\[sec:RGE\]Renormalization group evolution of neutrino parameters
-----------------------------------------------------------------

Neutrino masses and mixing parameters are subject to renormalization group (RG) evolution or running, i.e., they depend on energy. As theoretical predictions for these quantities typically arise from models at high energy scales, such as the GUT scale, RG corrections have in general to be included when testing model predictions. In the case of leptonic mixing angles and CP phases the changes can be large for partially or nearly degenerate neutrino masses. On the other hand, strongly hierarchical masses bring about very small RG corrections for the mixing angles, while the running of the mass squared differences is sizable even in this case.
### Running masses, mixings and CP phases below the seesaw scale {#sec:AnalyticalApproximations} At energies below the seesaw scale $M_{\rm R}$, the masses of the light neutrinos can be described in a rather model-independent way by an effective dimension 5 operator if they are Majorana particles. The RG equation of this operator in the SM and MSSM [@Babu:1993qv; @Chankowski:1993tx; @Antusch:2001ck; @Antusch:2001vn; @Antusch:2002ek] leads to differential equations for the energy dependence of the mass eigenvalues, mixing angles and CP phases [@Chankowski:1999xc; @Casas:1999tg; @Antusch:2003kp; @Mei:2003gn; @Luo:2005sq]. Up to ${\mathcal{O}} (\theta_{13})$ corrections, the evolution of the mixing angles is given by [@Antusch:2003kp] $$\begin{aligned} \nonumber \dot{\theta}_{12} &=& -\frac{C y_\tau^2}{32\pi^2} \, \sin 2\theta_{12} \, s_{23}^2\, \frac{ | {m_1}\, e^{i \varphi_1} + {m_2}\, e^{i \varphi_2}|^2 }{\Delta m^2_{\odot} }\;, \\ \label{eq:RGEthetaij} \dot{\theta}_{13} &=& \frac{C y_\tau^2}{32\pi^2} \, \sin 2\theta_{12} \, \sin 2\theta_{23} \, \frac{m_3}{\Delta m^2_{\rm A} \left( 1+\zeta \right)} \times I(m_i, \varphi_i, \delta) \;, \\ \nonumber \dot{\theta}_{23} &=& -\frac{C y_\tau^2}{32\pi^2} \, \sin 2\theta_{23} \, \frac{1}{\Delta m^2_{\rm A}} \left[ c_{12}^2 \, |m_2\, e^{i \varphi_2} + m_3|^2 + s_{12}^2 \, \frac{|m_1\, e^{i \varphi_1} + m_3|^2}{1+\zeta} \right] \;, \label{eq:Theta23Dot}\end{aligned}$$ where the dot indicates the differentiation $d/ dt=\mu\,d/d\mu$ ($\mu$ being the renormalization scale), $s_{ij}:=\sin\theta_{ij}$, $c_{ij}:=\cos\theta_{ij}$, $\zeta:=\Delta m^2_{\odot}/\Delta m^2_{\rm A}$, $C=-3/2$ in the SM and $C=1$ in the MSSM, and $I(m_i, \varphi_i, \delta):=\left[ m_1 \cos(\varphi_1-\delta) - \left( 1+\zeta \right) m_2 \, \cos(\varphi_2-\delta) - \zeta m_3 \, \cos\delta \right]$. $y_\tau$ denotes the $\tau$ Yukawa coupling, and one can safely neglect the contributions coming from the electron and muon. 
For the matrix $K$ containing the Majorana phases, we use the convention $K = \mbox{diag}\, (e^{-i \varphi_1/2},e^{-i \varphi_2/2},1)$ here. For a discussion of RG effects in the case of exactly degenerate neutrino masses, where the above expressions cannot be applied, see e.g. [@Ellis:1999my; @Casas:1999tp; @Casas:1999ac; @Adhikari:2000as; @Joshipura:2000ts; @Chun:2001kh; @Joshipura:2002xa; @Joshipura:2002gr; @Singh:2004zu]. From Eqs.  one can easily understand the typical size of RG effects as well as some basic properties. First, in the SM and in the MSSM with small $\tan\beta$, the RG evolution of the mixing angles is negligible due to the smallness of the $\tau$ Yukawa coupling. Next, the RG evolution of the angles becomes stronger the more degenerate the mass spectrum is. For a strong normal mass hierarchy, it is negligible even in the MSSM with large $\tan\beta$, but for an inverted hierarchy a significant running is possible even if the lightest neutrino is massless [@Haba:1999fk; @Miura:2000bj]. Furthermore, non-zero phases $\delta$, $\varphi_1$ and $\varphi_2$ can either damp or enhance the running. For instance, the running of $\theta_{12}$ can be damped by non-zero Majorana phases [@Balaji:2000gd; @Haba:2000tx; @Chankowski:2001mx]. Typically, $\theta_{12}$ undergoes the strongest RG evolution because the solar mass squared difference is much smaller than the atmospheric one. Finally, in the MSSM, $\theta_{12}$ runs from smaller values at high energies to larger values at low energies [@Miura:2002nz]. The RG equations for the CP phases [@Casas:1999tg; @Antusch:2003kp; @Mei:2003gn; @Luo:2005sq] show that their changes are proportional to $1/\Delta m^2_{\odot}$. Therefore, whenever the mixing angles run sizably, the same happens for the CP phases. This is very important for the relation between the phases relevant for high-energy processes like leptogenesis and those appearing in neutrino oscillations and neutrinoless double beta decay.
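As a rough quantitative illustration, the change of $\theta_{12}$ between the GUT scale and $m_Z$ can be estimated in the leading-log approximation from the $\dot{\theta}_{12}$ equation above, setting the phases to zero and $\theta_{23}=\pi/4$. The MSSM parameter values below are illustrative assumptions, not fits:

```python
import math

# Leading-log estimate: Delta(theta12) ~ theta12_dot * ln(mu_low / mu_high),
# using the theta12 RG equation with all phases set to zero (MSSM, C = 1).
# Parameter values are illustrative assumptions.
def theta12_shift(m1, m2, tan_beta=50.0, dm2_sol=8.0e-5,
                  theta12=0.59, mu_low=91.0, mu_high=2.0e16):
    v, m_tau = 174.0, 1.777                       # GeV
    y_tau2 = (m_tau * math.sqrt(1.0 + tan_beta**2) / v) ** 2
    rate = -(y_tau2 / (32 * math.pi**2)) * math.sin(2 * theta12) * 0.5 \
           * (m1 + m2) ** 2 / dm2_sol             # s23^2 = 1/2, phases = 0
    return rate * math.log(mu_low / mu_high)      # shift in radians

print(theta12_shift(0.002, 0.009))   # hierarchical: ~0.02 rad, about 1 degree
print(theta12_shift(0.10, 0.10))     # quasi-degenerate: O(1), leading log fails
```

For a hierarchical spectrum the shift is only about a degree, while for quasi-degenerate masses the linear estimate exceeds unity; in that regime the running is genuinely large and the coupled equations must be integrated numerically.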
The evolution of the Dirac CP phase can be especially drastic for a small CHOOZ angle, since $\dot\delta$ contains a term proportional to $1/\theta_{13}$. It is also possible to generate a non-zero value of this phase radiatively if at least one of the Majorana phases is non-zero [@Casas:1999tg]. An exception is the CP conserving case where all phases are $0$ or $\pi$ and do not change with energy. Finally, the neutrino masses always change significantly with energy due to flavor-blind terms in the RG equations which contain large quantities like gauge couplings and the top Yukawa coupling. For strongly hierarchical masses and small $\tan\beta$, these terms dominate, so that the masses experience a common rescaling which is virtually independent of the mixing parameters [@Chankowski:2001mx]. Radiative corrections for Dirac neutrino masses have also been studied [@Chiang:2000um; @Lindner:2005as]. Roughly speaking, the RGEs for the Dirac case are obtained from Eqs.  by averaging over the Majorana phases. ### Details of the running in seesaw models {#sec:RunningAbove} In order to obtain precise results, one has to go beyond the simple approximations listed above and solve the RG equations numerically. This involves solving a rather complex system of coupled differential equations, as all parameters of the theory have to be evolved from high to low energy. A further complication arises in seesaw models with heavy singlet neutrinos which are in general non-degenerate in mass. The running above their mass thresholds is typically at least as important as the evolution below both in the SM and in the MSSM unless the neutrino Yukawa couplings are tiny [@Casas:1999tp; @Casas:1999ac; @King:2000hk; @Antusch:2002rr; @Antusch:2002hy; @Antusch:2002fr; @Mei:2004rn; @Ellis:2005dr]. This part of the RG evolution depends on many parameters of the model. An analytic understanding has been obtained only recently [@Antusch:2005gp; @Mei:2005qp]. 
If the singlet masses are non-degenerate, one can calculate the evolution of the neutrino mass parameters by considering a series of effective theories arising from integrating out the singlets successively at the respective thresholds [@King:2000hk; @Antusch:2002rr]. In general, it is not a good approximation to integrate out all singlets at the same energy scale, since the threshold corrections can be very large. ### Implications for model building As discussed above, predictions of high-energy mass models can differ substantially from low-energy experimental results due to the running. Therefore, RG corrections have to be included in the analysis. The RG evolution also opens up new interesting possibilities for model building, like the radiative magnification of mixing angles. In particular, small or vanishing solar mixing at high energy can be magnified to the observed large mixing at low energy (see e.g.[@Dutta:2002nq; @Antusch:2002fr; @Bhattacharyya:2002aq; @Hagedorn:2004ba]). Vice versa, the large but non-maximal solar mixing $\theta_{12}$ can also be reached starting from bimaximal mixing at the GUT scale [@Antusch:2002hy; @Miura:2003if; @Shindou:2004tv] (for examples, see Fig. \[fig:lindner3\]). ![Examples for the RG evolution of the lepton mixing angles from the GUT scale to the SUSY-breaking scale (taken to be $\approx 1$ TeV) in the MSSM extended by 3 heavy singlets (right-handed neutrinos) [[@Antusch:2002hy; @Antusch:2002fr]]{}. The masses of the lightest neutrinos for these examples are around $0.05\,$eV. The figures illustrate how the large but non-maximal value of the solar mixing angle $\theta_{12}$ is reached by RG running if one starts with bimaximal lepton mixing or with vanishing solar mixing at the GUT scale. The kinks in the plots correspond to the mass thresholds at the seesaw scales, where the heavy singlets are successively integrated out. 
The gray-shaded regions mark the various effective theories between the seesaw scales.[]{data-label="fig:lindner3"}](TypicalMAEvolutionMSSMLetter "fig:"){width="220pt"} ![Examples for the RG evolution of the lepton mixing angles from the GUT scale to the SUSY-breaking scale (taken to be $\approx 1$ TeV) in the MSSM extended by 3 heavy singlets (right-handed neutrinos) [[@Antusch:2002hy; @Antusch:2002fr]]{}. The masses of the lightest neutrinos for these examples are around $0.05\,$eV. The figures illustrate how the large but non-maximal value of the solar mixing angle $\theta_{12}$ is reached by RG running if one starts with bimaximal lepton mixing or with vanishing solar mixing at the GUT scale. The kinks in the plots correspond to the mass thresholds at the seesaw scales, where the heavy singlets are successively integrated out. The gray-shaded regions mark the various effective theories between the seesaw scales.[]{data-label="fig:lindner3"}](TypicalMAEvolutionMSSM4 "fig:"){width="220pt"} It is, however, important to stress that large mixing is not a fixed point under the RG evolution in the usual seesaw framework. It has been observed that in SUSY models large mixing can be a fixed point for different (i.e., non-seesaw) types of neutrino mass operators [@Casas:2002sn; @Casas:2003kh]. In addition, the small neutrino mass squared differences can be produced radiatively from exactly degenerate neutrino masses at high energy (see e.g. [@Chankowski:2000fp; @Chen:2001gk; @Babu:2002dz; @Joshipura:2002xa; @Joshipura:2002gr; @Singh:2004zu]). For further specific models where the RG evolution is relevant for neutrino masses and mixings see, for example, [@Haba:2000rf; @Kuo:2000ig; @GonzalezFelipe:2001kr; @Parida:2002gz; @Mei:2003gn; @Antusch:2004xd].
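A toy version of this radiative magnification can be obtained by integrating the $\theta_{12}$ RG equation alone from the GUT scale down to $m_Z$, holding the masses, $\Delta m^2$ and $\theta_{23}=\pi/4$ fixed and setting the phases to zero. This is an oversimplification (in a full treatment all parameters run together), and the quasi-degenerate inputs below are assumptions chosen for illustration:

```python
import math

# Toy Euler integration of d(theta12)/d ln(mu) in the MSSM (C = 1),
# with fixed quasi-degenerate masses and vanishing phases (assumptions).
def run_theta12(theta12_gut_deg, m1=0.10, m2=0.11, tan_beta=50.0, steps=4000):
    v, m_tau = 174.0, 1.777                        # GeV
    y_tau2 = (m_tau * math.sqrt(1.0 + tan_beta**2) / v) ** 2
    dm2 = m2**2 - m1**2                            # eV^2, held fixed
    dt = (math.log(91.0) - math.log(2.0e16)) / steps   # negative: running down
    th = math.radians(theta12_gut_deg)
    for _ in range(steps):
        rate = -(y_tau2 / (32 * math.pi**2)) * math.sin(2 * th) * 0.5 \
               * (m1 + m2) ** 2 / dm2              # d(theta12)/d ln(mu)
        th += rate * dt
    return math.degrees(th)

print(run_theta12(20.0))   # ~33 degrees: a small GUT-scale angle is magnified
```

Starting from $\theta_{12}=20^\circ$ at the GUT scale, the toy integration ends near $33^\circ$ at low energy, close to the observed solar angle, illustrating how quasi-degenerate masses drive the magnification.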
### High-scale mixing unification and large mixings Another question one can ask is whether, starting with small mixing angles at the seesaw scale (as would naively be expected in models with quark-lepton unification), one can get large mixings at the weak scale due to RG evolution. In a specific model where neutrino masses are quasi-degenerate, starting with neutrino mixings that are equal to quark mixings at the GUT scale, i.e. $\theta_{12}\simeq V_{us}$, $\theta_{23}\simeq V_{cb}$ and $\theta_{13}\simeq V_{ub}$, one can indeed get mixing angles at the weak scale which are consistent with present observations, as shown in Fig. \[mpr1\] below. We have chosen $\tan\beta=55$ in this calculation. An interesting point is that this mechanism works only if the common mass of the neutrinos is larger than $0.1$ eV, a prediction which can easily be tested in the proposed neutrinoless double beta decay experiments [@Balaji:2000gd; @Balaji:2000au; @Balaji:2000ma; @Babu:2002ex; @Mohapatra:2003tw]. ### Deviations of $\boldsymbol{\theta_{13}}$ from 0 and of $\boldsymbol{\theta_{23}}$ from $\boldsymbol{\pi/4}$ due to RG effects {#sec:RGtheta13andtheta23} At present, observations are compatible with $\theta_{13}=0$ and $\theta_{23}=\pi/4$. New experiments are being planned to lower the limits on deviations from these values. Additional motivation for this kind of measurement is provided by the RG: although it is possible that a symmetry produces an exactly vanishing $\theta_{13}$ and exactly maximal atmospheric mixing, this symmetry would typically operate at a high scale, and therefore its predictions would be subject to the RG evolution. Hence, without fine-tuning, one expects non-zero values of $\theta_{13}$ and $\theta_{23}-\pi/4$ at low energy [@Antusch:2003kp; @Antusch:2004yx; @Mei:2004rn; @Singh:2004zu]. For example, in the MSSM one finds a shift $\Delta\!\sin^2 2\theta_{13} > 0.01$ for a considerable parameter range, i.e.
one would expect to measure a finite value of $\theta_{13}$ already in the next generation of experiments [@Antusch:2003kp]. On the other hand, there are special configurations of the parameters, especially the phases, where RG effects are suppressed. Furthermore, there may be symmetries which stabilize some mixing angle completely against radiative corrections. For instance, for an inverted hierarchy with $m_3=0$, $\theta_{13}=0$ is stable under the RG [@Grimus:2004cj] (see also Eqs. ). Hence, if future precision measurements do not find $\theta_{13}$ and $\theta_{23}-\pi/4$ of the size of the generic RG change, one can restrict parameters, or even obtain evidence for a new symmetry. ### Implications for leptogenesis As has been discussed in Sec. \[sec:Leptogenesis\], the requirement of successful baryogenesis via leptogenesis places an upper bound on the mass of the light neutrinos. It is important to note that in order to relate constraints on the neutrino mass spectrum coming from physics at $M_1$ to observation, one has to take into account radiative corrections. It turns out that there are two effects operating in opposite directions [@Antusch:2003kp; @gian; @Buchmuller:2004nz; @Barbieri:1999ma] which partially cancel each other: since the mass scale is increasing, the washout driven by Yukawa couplings is stronger. On the other hand, larger $\Delta m^2$s allow for a larger decay asymmetry. Taking into account all these effects, one finds that the upper bound on the neutrino mass scale becomes more restrictive [@Antusch:2003kp; @gian; @Buchmuller:2004nz]. The RG evolution, together with thermal effects or spectator processes [@Buchmuller:2001sr], gives rise to the most important corrections to the mass bound [@Buchmuller:2004nz]. 
Non-Standard Neutrino Interactions ================================== Neutrino magnetic moments {#mm} ------------------------- Once neutrinos are massive, they can have transition magnetic and electric dipole moments and, in the Dirac case, also diagonal magnetic and electric dipole moments [@mmom1; @mmom2], [@LeeShrock:77], [@leftright]–[@mmom2004]. The lepton-number conserving (diagonal and transition) magnetic and electric dipole moment operators are given by $(1/2)\bar \nu_i\sigma^{\mu\nu}\nu_j F_{\mu\nu}$ and $(-i/2)\bar \nu_i\sigma^{\mu\nu}\gamma_5 \nu_j F_{\mu\nu}$. Analogous $\Delta L=2$ expressions hold (with $i \ne j$ because of their antisymmetry under $i \leftrightarrow j$) for Majorana neutrinos. Therefore, for Majorana neutrinos a magnetic or electric dipole moment always connects one species of neutrino with another. These moments are defined for mass eigenstates. In the case of a Dirac mass eigenstate, one sees that the operator connects a left-handed electroweak-doublet neutrino to a right-handed electroweak-singlet (sterile) neutrino. In the case of Majorana mass eigenstates, the (transition) dipole moment operators connect two neutrino fields of the same chirality. The two have fundamentally different physical implications. We have reviewed some basic properties of these magnetic and electric dipole moments above. Neutrino magnetic moments can be directly measured in terrestrial experiments using the neutrino beam from the Sun as in Super-K [@superK] or with neutrinos from nearby nuclear reactors as in the MUNU [@munu] and in the Texono [@texono] experiments. These experiments have put upper bounds of the order of $10^{-10} \mu_B$ on the effective neutrino magnetic moment (defined below).
It is interesting that in models involving right-handed charged currents the diagonal and transition neutrino magnetic moments [@mmom1; @mmom2; @ms77; @LeeShrock:77; @leftright; @rs82] are not suppressed by the neutrino mass (as in models with just the Standard Model interactions [@fs]) and could thus be somewhat larger. However, in general, the same interactions that can enhance neutrino magnetic moments can give corrections enhancing neutrino masses, so in a particular model one must be careful to avoid excessive loop contributions to the latter. The Borexino prototype detector has recently been utilized to put a bound of $|\mu_{eff}|_{MSW} < 5.5 \times 10^{-10} \mu_B$ at $90 \%$ C.L. using the elastic scattering of electrons by the solar $pp$ and $^7Be$ neutrinos [@borexino-nu]. At these sub-MeV energies, the solar neutrino beam contains roughly equal proportions of $\nu_1$ and $\nu_2$ ($P_1=P_2=0.5$). As a result, for the same $\mu_{eff}$ the bounds on $\mu_{11}, \mu_{13}$ would be much better in experiments utilizing the $pp, ^7Be$ neutrinos compared to the bounds that can be obtained by using the $^8 B$ neutrinos (which are predominantly $\nu_2$). Neutrinos with non-zero magnetic moments contribute to the elastic scattering of electrons in water Cerenkov detectors [@vogel; @joshi; @grimus]. The effective neutrino magnetic moment $\mu_{eff}$ (we neglect the contribution of the electric dipole moment here) responsible for the scattering event $\nu_i + e^- \rightarrow \nu_j+e^-$ is given by an incoherent sum over the outgoing neutrino states $\nu_j$ as follows: $$|\mu_{eff}|^2= \sum_j \big| \sum_i A_i(L) \mu_{ij} \big|^2$$ where $A_i(L)$ is the probability amplitude of a neutrino produced as a flavor eigenstate (let us say $\nu_e$ or $\bar \nu_e$) to be in the $i$-th mass eigenstate after propagating over the source-detector distance $L$.
For vacuum oscillations $A_i(L)=U_{ei}~\exp(-i E_i L)$ and the effective magnetic moment depends upon the $\Delta m^2$ and the mixing angles as follows: $$\begin{aligned} |\mu_{eff}|^2_{VO}&=& c_{12}^2 ~(\mu_{11}^2 + \mu_{12}^2 + \mu_{13}^2) + s_{12}^2 ~(\mu_{21}^2 + \mu_{22}^2 + \mu_{23}^2)\nonumber \\ &+& 2 c_{12} ~s_{12}~ (\mu_{11} \mu_{21} + \mu_{12} \mu_{22}+ \mu_{13} \mu_{23}) ~\cos \left( \frac{\Delta m^2_{12} L}{2 E_\nu}\right) \label{muvo}\end{aligned}$$ In the expression for $|\mu_{eff}|^2_{VO}$ above we have assumed that $U_{e3}$ is negligibly small and the atmospheric mixing angle is maximal ($s_{23}^2=1/2$). We have also dropped CP violating phases. In MUNU [@munu] and Texono [@texono], $\bar \nu_{e}$ from nuclear reactors were detected by their elastic scattering with $e^-$. The source-detector distance is small ($L=18$ m in MUNU and $L=28$ m in Texono) compared to the $\nu_1 -\nu_2$ oscillation length, so that the cosine term in (\[muvo\]) is unity. The magnetic moment matrix $\mu_{ij}$ is symmetric for Dirac neutrinos and anti-symmetric if the neutrinos are Majorana. It is clear from (\[muvo\]) that a cancellation may occur between the last (interference) term, which can be negative, and the first two terms. So the experimental upper bounds on $|\mu_{eff}|_{VO}$, which are $0.9 \times 10^{-10}\mu_B$ (MUNU) [@munu] and $1.3 \times 10^{-10} \mu_B$ (Texono), both at $90\%$ C.L., do not constrain the elements of the $\mu_{ij}$ matrix without the added assumption that there is no cancellation between the different terms in (\[muvo\]). This problem does not arise for solar neutrinos, as the interference term averages to zero since $2 E_{\nu}/\Delta m_{12}^2 \ll L_{earth-sun}$.
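Equation (\[muvo\]) follows directly from the incoherent-sum definition of $|\mu_{eff}|^2$ with $U_{e3}=0$ and no CP phases. The minimal sketch below (illustrative $\mu_{ij}$ values in units of $\mu_B$; the constant 2.534 converts $\Delta m^2 L/2E$ to natural units for eV$^2$, km, GeV) makes the cancellation possibility explicit at a short reactor baseline:

```python
import numpy as np

# |mu_eff|^2 for vacuum oscillations from the incoherent sum
# |mu_eff|^2 = sum_j |sum_i A_i(L) mu_ij|^2, with A = (c12, s12 e^{-i phase}, 0),
# i.e. assuming U_e3 = 0 and no CP phases, as in Eq. (muvo).
def mu_eff_sq_vo(mu, theta12, dm2_ev2, L_km, E_GeV):
    c, s = np.cos(theta12), np.sin(theta12)
    phase = 2.534 * dm2_ev2 * L_km / E_GeV  # Delta m^2 L / (2E), natural units
    A = np.array([c, s * np.exp(-1j * phase), 0.0])
    return float(np.sum(np.abs(A @ mu) ** 2))
```

For MUNU-like parameters ($L=0.018$ km, $E\sim 0.004$ GeV) the phase is tiny, the cosine is unity, and choosing the second row of $\mu$ equal to $-(c_{12}/s_{12})$ times the first makes $|\mu_{eff}|^2$ vanish even for sizable individual elements, which is precisely the cancellation loophole discussed in the text.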
For the solar neutrinos the expression for $|\mu_{eff}|^2$ reduces to a sum of two positive definite quantities $$\begin{aligned} |\mu_{eff}|^2_{MSW}= P_1 ~(\mu_{11}^2 + \mu_{12}^2 + \mu_{13}^2) + P_2 ~(\mu_{21}^2 + \mu_{22}^2 + \mu_{23}^2) \label{mumsw}\end{aligned}$$ where $P_1=|A_{e1}(L)|^2$ and $P_2=|A_{e2}(L)|^2=1-P_1$ are the probabilities of the solar neutrinos to be in the mass eigenstate $\nu_1$ and $\nu_2$ respectively at the earth. The recent upper bound $|\mu_{eff}|_{MSW} < 1.1 \times 10^{-10} \mu_B$ at $90 \%$ C.L. established by Super-Kamiokande [@superK] can be translated into bounds on individual elements of $\mu_{ij}$ without extra assumptions. The $^8B$ neutrinos which are detected by electron scattering at Super-K are predominantly in the $\nu_2$ state ($P_2 =0.94$ and $P_1 =0.06$). The Super-K bound on $|\mu_{eff}|_{MSW}$ therefore implies $|\mu_{12}| < 1.1 \times 10^{-10} \mu_B$; $|\mu_{22}|,\,\, |\mu_{23}| < 1.13 \times 10^{-10} \mu_B$ and $|\mu_{11}|,\,\, |\mu_{13}| < 4.49 \times 10^{-10} \mu_B$. It is also possible to put bounds on $\mu_{ij}$ from SNO-NC data using the fact that neutrinos with non-zero magnetic moments can dissociate deuterium [@grifols] in addition to the weak neutral currents. The bounds established from SNO-NC data do not depend upon the oscillation parameters, unlike in the case of Super-K. However, the bounds are poorer due to the large uncertainty in the theoretical prediction of the $^8B$ flux from the sun [@bahcall1]. We will see in a subsequent section (the one on extra dimensions) that the effective magnetic moment of the neutrinos can get substantially enhanced in a certain class of extra-dimensional models. Searches for $\mu_{\nu}$ can therefore be used to put limits on theories with extra dimensions.
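The translation of the Super-K bound into the element-wise bounds quoted above is a one-line exercise with Eq. (\[mumsw\]): saturating the bound with one element at a time, and using $\mu_{21}=\pm\mu_{12}$ (Dirac/Majorana), so that $\mu_{12}$ enters with weight $P_1+P_2=1$. A minimal sketch:

```python
import math

# Translate a bound on |mu_eff|_MSW (Eq. (mumsw)) into bounds on individual
# elements of mu_ij, assuming one element dominates at a time.  mu_12 enters
# both terms (mu_21 = +-mu_12), so its weight is P1 + P2 = 1.
def element_bounds(mu_eff_bound=1.1e-10, P1=0.06, P2=0.94):
    return {
        "mu_12": mu_eff_bound / math.sqrt(P1 + P2),
        "mu_22": mu_eff_bound / math.sqrt(P2),
        "mu_23": mu_eff_bound / math.sqrt(P2),
        "mu_11": mu_eff_bound / math.sqrt(P1),
        "mu_13": mu_eff_bound / math.sqrt(P1),
    }
```

With the Super-K values ($P_2=0.94$, $P_1=0.06$) this reproduces the $1.1$, $1.13$ and $4.49\times 10^{-10}\,\mu_B$ numbers in the text; for the sub-MeV $pp$, $^7Be$ beam ($P_1=P_2=0.5$) the $\mu_{11}$, $\mu_{13}$ bounds tighten accordingly, as noted earlier.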
Flavor changing and conserving nonstandard neutral current interactions ----------------------------------------------------------------------- The latest results of neutrino oscillation experiments indicate that the conversion mechanism between different neutrino flavors is driven by a non-vanishing mass difference between mass eigenstates together with large mixing angles between families. These analyses are done supposing that no non-standard neutrino interactions (NSNI) are present. In the presence of electroweak-doublet and electroweak-singlet neutrinos, the neutral weak current is, in general, nondiagonal in mass eigenstate neutrino fields [@LeeShrock:77; @valle80]. This is the same type of nondiagonality in the neutral weak current that was present in the original Weinberg electroweak model before the advent of the Glashow-Iliopoulos-Maiani (GIM) mechanism. It will be recalled that in this original Weinberg model the $d$ and $s$ quarks were assigned to a left-handed SU(2)$_L$ doublet $(u,d \cos\theta_C + s \sin\theta_C)_L$ and a left-handed SU(2)$_L$ singlet $-d \sin\theta_C + s \cos\theta_C$. The necessary condition for the diagonality of the neutral weak current is that all of the fermions of a given charge and chirality must transform according to the same weak $T$ and $T_3$. Alternatively, NSNI in the form of nondiagonal couplings between different neutrino flavor eigenstates [@LW78] and/or neutrino flavor-diagonal but flavor nonsymmetric couplings can exist [@GMP91]. Including NSNI can modify the characteristics of neutrino conversion, and in general large values of NSNI parameters worsen the quality of the fit to data. We summarize here the present limits that can be obtained on NSNI parameters, using the results of neutrino oscillation experiments.
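The diagonality criterion recalled above can be made concrete with a short numerical check. This is an illustration of the textbook GIM argument, with a hypothetical numerical value for the Cabibbo angle: with a single doublet only the combination $d\cos\theta_C + s\sin\theta_C$ couples to the $Z$, so the neutral current in the $(d,s)$ basis is a rank-one projector with a flavor-changing off-diagonal piece; adding the charm doublet contributes the orthogonal combination, and the sum of projectors is the identity.

```python
import numpy as np

theta_c = 0.22  # Cabibbo angle in radians (illustrative value)
c, s = np.cos(theta_c), np.sin(theta_c)

# Original Weinberg model: only d_C = d cos + s sin sits in a doublet,
# so the Z coupling in the (d, s) basis is the projector onto d_C.
v1 = np.array([c, s])
nc_one_doublet = np.outer(v1, v1)
print(nc_one_doublet[0, 1])  # flavor-changing piece ~ sin*cos, nonzero

# GIM: the charm doublet adds the orthogonal combination -d sin + s cos;
# the projectors sum to the identity and the neutral current is diagonal.
v2 = np.array([-s, c])
nc_gim = np.outer(v1, v1) + np.outer(v2, v2)
print(np.allclose(nc_gim, np.eye(2)))  # True
```

The same bookkeeping explains why extra electroweak-singlet neutrinos, which do not share the doublets' $T$ and $T_3$, generically reintroduce nondiagonal neutral-current couplings among the mass eigenstates.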
### Atmospheric neutrinos As repeatedly mentioned, atmospheric neutrino data are well described by the oscillation driven by one mass scale, $\Delta m^2_{32}$, and with maximal mixing between second and third families. One important prediction for these numbers is that the high-energy neutrino events that generate the through-going muon data are well described together with the low-energy neutrino events, due to the energy dependence of the Hamiltonian that describes the neutrino evolution. Assuming a non-vanishing NSNI acting together with mass and mixing, the solution to the atmospheric neutrino discrepancy can be spoiled if the NSNI parameters have too large values. This happens because the NSNI entries in the Hamiltonian that describes the neutrino evolution are energy independent [@GMP91]. Since a simultaneous explanation of low-energy and high-energy neutrino events requires a strong energy dependence in the $\nu_\mu,\nu_\tau$ conversion probability, inclusion of energy independent terms in the Hamiltonian tends to decrease the quality of the fit of the theoretical predictions to the atmospheric neutrino data. The NSNI can be parametrized by the strength of the new interactions relative to the Fermi coupling, $\epsilon_{ij}^f=\frac{G_{\nu_i\nu_j}^f}{G_f}$, where $f$ stands for the fermion involved in the new interaction: $$\begin{aligned} \epsilon&=&\frac{G_{\nu_\mu\nu_\mu}^d-G_{\nu_e\nu_e}^d}{G_f} =\frac{G_{\nu_\tau\nu_\tau}^d-G_{\nu_e\nu_e}^d}{G_f} ~~~{\rm (d-quarks)} \nonumber \\ &=&\frac{G_{\nu_\mu\nu_\mu}^u-G_{\nu_e\nu_e}^u}{G_f} =\frac{G_{\nu_\tau\nu_\tau}^u-G_{\nu_e\nu_e}^u}{G_f} ~~~{\rm (u-quarks)} \nonumber \\ &=&\frac{G_{\nu_\mu\nu_\mu}^e-G_{\nu_e\nu_e}^e}{G_f} =\frac{G_{\nu_\tau\nu_\tau}^e-G_{\nu_e\nu_e}^e}{G_f} ~~~{\rm (electrons)} \label{NSNI}\end{aligned}$$ The neutrino oscillation data make it possible to establish new limits on the flavor-diagonal NSNI (\[NSNI\]).
With the definitions written above, in [@fornengo] an analysis of atmospheric neutrinos and NSNI was performed, which found the following limits at $3\sigma$: $$\begin{aligned} |\epsilon_{\mu\tau}| &<& 0.03 \nonumber \\ |\epsilon'_{\mu\mu}-\epsilon'_{\tau\tau}| &<& 0.05\end{aligned}$$ These bounds refer to NSNI with d-quarks, and were obtained assuming a two-flavor ($\nu_{\mu}$ and $\nu_{\tau}$) system. A three family analysis significantly relaxes some of the bounds above [@atm_NSNI_3], such that order one values for some of the $|\epsilon|$s are not currently ruled out. For many more details, see [@atm_NSNI_3]. For u-quarks the bounds are expected to be of the same order, and for NSNI with electrons we expect bounds looser by a factor of $\sim 3$. ### KamLAND and solar neutrinos The excellent agreement between the LMA parameter region that provides a solution to the solar neutrino problem and the parameter region compatible with KamLAND data provides us with an opportunity to use these data sets to establish a limit on non-standard neutrino interactions (NSNI). The effect of NSNI is negligible in KamLAND due to the short distance traveled inside the earth, but due to the high density and long travel distances in the sun, the presence of NSNI could displace the best fit point of solar neutrino analyses, and spoil the agreement between solar and KamLAND allowed regions. The oscillation of solar neutrinos is driven by only one mass scale, $\Delta m_{21}^2$. The higher mass scale $\Delta m_{32}^2$, relevant for atmospheric neutrino oscillations, decouples, and the mixing between the first and third family is very small, and will be set to zero in what follows.
In this approach, after rotating out the third family from the evolution equation, we can write the 2x2 Hamiltonian that describes the neutrino evolution as $$\begin{aligned} H_{MSW} &=& \left[ \begin{array}{cc} +\sqrt2 G_F N_e(r) - \frac{\Delta m^2}{4E} \cos{2\theta} & \frac{\Delta m^2}{4E} \sin{2\theta} \\ \frac{\Delta m^2}{4E} \sin{2\theta} & \frac{\Delta m^2}{4E} \cos{2\theta} \end{array} \right] \nonumber \\ &+&\left[ \begin{array}{cc} 0 & \sqrt2 G_F \epsilon_f N_f(r) \\ \sqrt2 G_F \epsilon_f N_f(r) & \sqrt2 G_F \epsilon'_f N_f(r) \end{array} \right] ~~,\end{aligned}$$ where $N_f(r)$ is an effective density felt by the neutrino, given by $N_f=N_p+2N_n$ for d-quarks, $N_f=2N_p+N_n$ for u-quarks and $N_f=N_p$ for electrons. The NSNI parameters can be written as: $$\begin{aligned} \epsilon'_f&=&\frac{\epsilon_{\tau\tau}+\epsilon_{\mu\mu}}{2}+ \frac{\epsilon_{\tau\tau}-\epsilon_{\mu\mu}}{2}\cos2\theta_{23}- \epsilon_{\mu\tau}\sin2\theta_{23}-\epsilon_{ee} \nonumber \\ \epsilon_f&=&\epsilon_{e\mu}\cos\theta_{23}-\epsilon_{e\tau}\sin\theta_{23}~~.\end{aligned}$$ The atmospheric neutrino analyses have put strong bounds on $\epsilon_{\mu\tau}$ and $\epsilon_{\mu\mu}-\epsilon_{\tau\tau}$, and we expect a near to maximal mixing between second and third families ($\cos2\theta_{23}\simeq 0$). 
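As a numerical sketch of the $2\times 2$ Hamiltonian above, one can diagonalize it and watch how the NSNI term shifts the effective matter mixing angle, which is the displacement effect used to bound the NSNI parameters. All numbers are illustrative; $V_e$ and $V_f$ stand for the potentials $\sqrt2 G_F N_e$ and $\sqrt2 G_F N_f$, assumed to be given in the same units as $\Delta m^2/4E$:

```python
import numpy as np

# 2x2 MSW Hamiltonian of the text with the NSNI term added.  V_e, V_f are
# the matter potentials sqrt(2) G_F N_e(r) and sqrt(2) G_F N_f(r), taken as
# inputs in the same units as dm2/(4E).
def h_msw(dm2, E, theta, V_e, eps, eps_p, V_f):
    k = dm2 / (4.0 * E)
    h_vac = np.array([[V_e - k * np.cos(2 * theta), k * np.sin(2 * theta)],
                      [k * np.sin(2 * theta),       k * np.cos(2 * theta)]])
    h_nsni = V_f * np.array([[0.0, eps],
                             [eps, eps_p]])
    return h_vac + h_nsni

def matter_angle(H):
    # effective mixing angle of a real symmetric 2x2 Hamiltonian
    return 0.5 * np.arctan2(2.0 * H[0, 1], H[1, 1] - H[0, 0])
```

With $\epsilon_f=\epsilon'_f=0$ and $N_e\to 0$ the vacuum angle is recovered exactly; switching on $\epsilon'_f$ at a density comparable to $\Delta m^2/4E$ visibly shifts the effective angle, and hence the best-fit point of a solar analysis.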
By further assuming that $\epsilon_f=0$ (which would be the case if $\epsilon_{e\mu}$ and $\epsilon_{e\tau}$ were negligible) we can write $\epsilon'_f$ as $$\epsilon'_f\sim\epsilon_{\mu\mu}-\epsilon_{ee}= \epsilon_{\tau\tau}-\epsilon_{ee}$$ With the present data set and the assumptions listed above, we are able to establish the following limits to the NSNI parameters, at $1\sigma$ ($2\sigma$): $$\begin{aligned} -0.20<&\epsilon'&<0.12~~(\epsilon'<0.30) ~~~\mbox{ d-quarks} \nonumber \\ -0.18<&\epsilon'&<0.10~~(\epsilon'<0.30) ~~~\mbox{ u-quarks} \nonumber \\ -0.55<&\epsilon'&<0.25~~(\epsilon'<0.86) ~~~\mbox{ electrons}~~.\end{aligned}$$ The limits obtained at $2\sigma$ reflect the weak bounds on $\Delta m^2$ obtained by KamLAND. At present there are a number of possible “islands” in the parameter region compatible with KamLAND data, so the displacement in $\Delta m^2$ could make the best fit point of a combined analysis jump between consecutive islands. The increase of statistics at KamLAND will determine in which of these islands the true values of neutrino parameters lie, avoiding this kind of jump and improving the limits on the NSNI parameters. Simulating 1 kton-year of data at KamLAND, the allowed range of $\Delta m^2$ would be reduced, and in the case of an agreement between the neutrino parameters coming from KamLAND and solar neutrino analysis (that would indicate a vanishing NSNI) new limits could be obtained. Although the $1\sigma$ regions do not change significantly, at $2\sigma$ we have: $$\begin{aligned} -0.42<&\epsilon'&<0.24 ~~~{\rm (d-quarks)} \nonumber \\ -0.40<&\epsilon'&<0.18 ~~~{\rm (u-quarks)} \nonumber \\ &\epsilon'&<0.40 ~~~{\rm (electrons)}~~.\end{aligned}$$ Further increase of KamLAND statistics will not improve these bounds, which are now determined by the uncertainty in $\Delta m^2$ in the solar neutrino analysis. Details of the analysis presented here can be found in [@holanda2]. Similar analyses were also done in [@pena1]. 
It should be pointed out that more severe constraints on (or a positive hint of) NSNI can be obtained in next-generation solar neutrino experiments, especially those sensitive to neutrino energies below $\sim 6$ MeV [@pena1]. ### Bounds from non-oscillating phenomena Apart from phenomena that involve neutrino oscillations, bounds on NSNI can also come from the effects of such non-standard interactions on the charged leptons [@bergmann1; @holanda3]. One should, however, be careful in translating such bounds to the neutrino sector, since this translation is usually only possible with a few assumptions about the model that generates the non-standard interactions. Some bounds can also be found using neutrino scattering experiments [@rossi; @pena2]. We quote here some of the numbers obtained by these analyses. Details of the calculations can be found in the references. The following tables should be read as limits on $\epsilon_{i,j}$, where $i$ and $j$ stand for $e$, $\mu$ and $\tau$, and label the rows and columns of the tables. The values quoted here depend on details of the models that generate the NSNI, such as $SU(2)_L$ breaking effects, absence of fine-tuning cancellations and the scale of new physics. Also, since neutrino oscillations are sensitive only to the vector coupling constant of the NSNI, correlations between the limits on $\epsilon_L$ and $\epsilon_R$ should be taken into account in order to compare the numbers presented here with the ones coming from neutrino oscillation experiments. In conclusion, using the neutrino oscillation data we are able to set limits on NSNI parameters without assuming any details about the nature of the new physics behind these interactions. More work is needed to improve the situation.
Beyond the three neutrino picture ================================= The search for [*other*]{} light neutrinos {#lightsterile} ------------------------------------------ A neutrino that does not participate in Standard Model interactions (i.e., is sterile) might seem of little interest, but this concept includes reasonable theoretical constructs such as right-handed neutrinos themselves. We note in passing that in one-family Technicolor theories there are also technineutrinos that would couple with the usual strength to the $W$ and $Z$ but, because of their Technicolor interactions, are confined and gain large dynamical masses of order several hundred GeV; they are therefore not relevant for usual low-energy neutrino oscillation experiments. The hypothesis of ‘sterility’ concerns the weak forces; gravity is expected to be felt anyway, and we cannot exclude that the ‘sterile’ neutrino participates in new forces, perhaps mostly coupled to quarks, or carried by new heavy mediators; or that sterile neutrinos have preferential couplings with new particles — say, with Majorons. Even putting aside these possibilities, we can probe sterile neutrinos by the search for observable effects due to their mixing with the ordinary neutrinos. In this section, we will further restrict our attention to ‘light’ sterile neutrinos (say, below $10$ eV) and discuss the impact on oscillations. We make extensive reference to Ref. [@cmsv], an updated overview of the phenomenology of one extra sterile neutrino. ### Issues of theoretical justification Many extensions of the Standard Model incorporate particles behaving as sterile neutrinos. The main question is: Why are these light, and do they have the couplings needed to mix with ordinary neutrinos? A recent discussion is in Ref. [@pl]. Models with mirror matter [@mirr1; @mirr2] contain mirror neutrinos [@mirrF; @mirrB] and offer a straightforward answer: ordinary and mirror neutrinos are light for the same reason.
It is easy to arrange a ‘communication’ term between ordinary and mirror worlds, e.g., due to the operator $\sim \nu\phi\nu'\phi'/M_{\rm Pl}$. This leads to long-wavelength oscillations into sterile neutrinos (see Fig. \[fig:mirr\], from [@mohan]). There are many other possibilities. For example, higher-dimensional operators in the superpotential in models involving a scalar field with an intermediate scale expectation value can naturally lead to small Dirac and Majorana masses of the same magnitude, and therefore to light ordinary and sterile neutrinos which can mix [@pl]. Already with mirror matter, the VEV $\langle \phi'\rangle$ could be different from $\langle \phi\rangle=174$ GeV, and this has important consequences for the phenomenology [@mirrF; @mirrB]. Alternatively, one could guess on dimensional grounds the value ${\rm TeV}^2/M_{\rm Pl}$ as the mass (or mixing) of sterile neutrinos, and relate the TeV-value, e.g., to supersymmetry breaking [@alyos] or to GUT theories such as $E_6$ [@frank]. In theories with dynamical electroweak symmetry breaking, the mechanism that has been found for light neutrinos also predicts sterile neutrinos with masses of order hundreds of MeV to a GeV [@nt; @lrs; @nuf03; @ckm]. Right-handed neutrino masses of about 1 to 100 TeV lead, via the seesaw, to exceedingly large light neutrino masses; this poses a problem for theories with extra dimensions if “large” scales are to be absolutely avoided and if the Dirac Yukawa couplings are not small on their own. A popular way out is to postulate that the masses are Dirac in character. As a benefit, one explains why neutrino masses are small when right-handed neutrinos propagate in the bulk [@rmpl]. But even if neutrinos turn out to be Dirac particles we would not have evidence for these theories, since (1) Dirac neutrino masses are possible in conventional 4-dimensional theories; (2) even in theories with extra dimensions, one can assume that the usual dim.-5 term is the dominant source of neutrino masses.
We just need to fix the mass scale to the desired value of $10^{13-15}$ GeV, and neutrinos receive (presumably large) Majorana masses [@mohan]. To summarize, small neutrino masses are possible, and, even more interestingly, so is an infinite number of sterile neutrinos; but neutrino masses do not seem to be a clean signal of low-scale gravity theories. Oscillations have a special character and there are interesting phenomenological constraints [@snsns]. We believe that increased attention should be paid not only to phenomenology, but also to theories of (light- or heavy-mass) sterile neutrinos. The fact that we do not understand the usual fermion masses should not be taken as an excuse to avoid confrontation with theory, in our view. With these considerations in mind, we will give some emphasis to mirror neutrinos, and at the same time point out the importance of understanding the unknown ${\cal O}(1)$ coefficients in these theories, which have an important impact on the phenomenological implications. ### Phenomenological manifestations In the following discussion, we will be concerned mostly with oscillations. However, the implications can also lie elsewhere. To see this, it is sufficient to recall that when we add 3 sterile neutrinos we can form Dirac masses, which means that there is no contribution to the neutrinoless double beta decay process. #### Terrestrial oscillation experiments Broadly speaking, there are two types of terrestrial experiments. The first one includes several disappearance experiments and LSND; the second one includes atmospheric neutrinos and long-baseline experiments. The first type is sensitive mostly to the mixing of $\nu_e$ and a sterile state, the other one also to $\nu_\mu$ or $\nu_\tau$. Both types of experiments probe only relatively large mixing angles, $\theta_s\sim 0.1$.
Sterile neutrinos within the sensitivity regions are disfavored if standard cosmology (mostly BBN) applies; further important tests will be done by CMB+LSS or BBN data. None of these experiments [*alone*]{} requires the existence of sterile neutrinos. A case for sterile neutrinos can be made by interpreting LSND together with the solar and atmospheric data in terms of oscillations [@bgg]. The hypothesis that the LSND signal is due to a relatively heavy and mostly sterile neutrino should be regarded as conservative [@ales], even though it leads to some problems with disappearance in terrestrial experiments, and interesting predictions for cosmology (BBN and CMB+LSS spectra). In view of this situation, the test of the LSND result is of essential importance. At the same time, we should not forget that sterile neutrinos could manifest themselves in other manners. $$\hspace{-5mm} \includegraphics[width=7cm]{spectrum}$$ #### Solar and KamLAND neutrinos The solar and KamLAND data can be explained well without sterile neutrinos. Even more, the ‘LMA’ solution received significant confirmations: the sub-MeV energy regions have been probed by Gallium experiments and the super-MeV ones by SNO and Super-Kamiokande, and LMA is in agreement with KamLAND. Thus we are led to consider minor admixtures of sterile neutrinos, presumably not more than 20%. In many interesting cases sterile neutrinos are invisible at KamLAND but affect the survival probability of solar neutrinos. Quite generally, to test the hypothesis of oscillations into sterile states it would be important to improve on (or measure precisely) the beryllium and pp neutrino fluxes. A few selected cases are shown in Fig. \[fig:nus\_sol\] from [@cmsv]. $$\hspace{-5mm}\includegraphics[width=18cm]{samples}$$ #### Neutrinoless double beta decay Massive Majorana sterile neutrinos, mixed with the active ones, would participate in mediating neutrinoless double beta decay.
In the case of one light sterile neutrino, $\langle m \rangle_{eff}$ is given by the sum of the contributions of four massive neutrinos [@BPP2]: $$\langle m \rangle_{eff} = \sum_{i=1,4} m_i U_{ei}^2,$$ where $m_i$ denotes the masses of the massive eigenstates and $U_{ei}$ indicates their mixing with the electron neutrino. #### Supernovae Supernovae are an ideal place to search for manifestations of sterile neutrinos, either through long wavelength vacuum oscillations of a galactic supernova (whose distance is on the kpc scale; SN1987A was at 52 kpc) or through MSW effects (since nuclear densities are reached during the collapse); the main trouble is that we are still not able to understand supernova explosions theoretically (the data from SN1987A do not contradict the simplest current picture for neutrino emission, but they have puzzling features that suggest caution). Vacuum oscillations induced by mirror neutrinos (as in Fig. \[fig:mirr\]) can lead to a disappearance of half of the emitted flux [@mohan]. MSW oscillations into sterile neutrinos can produce even more dramatic suppressions; in certain regions of the parameter space this can reach 80% [@cmsv]. The latter type of oscillation is due to the fact that the SN core is deleptonized, and requires that the electron neutrino mix with the sterile one. The difficulty in verifying these predictions lies in the accuracy of the expectations for the emitted neutrino energy, which amounts roughly to a factor of 2; thus it seems that, with present knowledge of supernovae, and using the data from SN1987A, we can only safely exclude the occurrence of dramatic MSW effects. We remark that with a quantitative theory of core collapse supernovae, this test of sterile neutrinos would become very powerful. There are other interesting effects possibly related to heavy sterile neutrinos, such as r-process nucleosynthesis, re-heating of the shock, and rocketing of pulsars, not discussed here.
#### Big-bang nucleosynthesis and other cosmological probes An often stressed doubt concerning cosmology is about neglected or unknown effects. This said, we must recall that there has been impressive progress in recent years. The impact on neutrinos can be summarized as follows: (1) The number of neutrinos at photon decoupling is bounded to be $3\pm 2$. (2) The contribution to the energy density of the universe $\Omega_\nu h^2$ is below or at about 1%, or, equivalently, the sum of neutrino masses is below or at about 1 eV. (It should be recalled that there is an interesting, not universally accepted claim that cosmology gives not a bound but a value for neutrino masses [@allen]. This testifies to the high sensitivity reached by these methods and points to the interest of the value of the bias parameter $\sigma_8$.) (3) The effective number of neutrinos at nucleosynthesis time is $3\pm 2$, when extracted from the deuterium abundance, or $2.4\pm 0.7$, when extracted from the helium abundance. These numbers already imply strong bounds on sterile neutrinos, but do not rule out the sterile hypothesis for interesting regions of the parameters. One example is given by mirror neutrinos, when the new mass splittings are small enough. Another one is given by a new sterile state which has only a small mixing with the ordinary neutrinos. A further possibility is to have a post-BBN phase transition involving the vacuum expectation value of a light scalar field which mixes the active and sterile neutrinos, so that at the time of BBN the active and sterile neutrinos are unmixed. There are, however, strong constraints on the nature and interaction of the scalar field [@chacko; @MohapatraNasri], and an interesting feature of these models is that they leave an imprint on the cosmic microwave background that can be tested in future experiments such as the Planck satellite mission.
Yet another possibility is a large electron neutrino asymmetry in the early universe, which can compensate the effects of a number of extra neutrino types [@Barger:2003rt]. It has also been pointed out that the production of sterile neutrinos in the Early Universe is strongly suppressed in cosmological scenarios for which the reheating temperature is as low as a few MeV [@Gelmini:2004ah]. In this case the bounds on the sterile neutrino parameters which can be derived from BBN and from the contribution of $\Omega_\nu h^2$ to the Dark Matter are much weaker than in the standard case. #### Ultra-high energy neutrinos Although there is great interest in the search for ultra-high energy neutrinos, the number of reasonable (or even less reasonable) mechanisms that have been discussed to produce them is not large. The reason is that neutrinos are produced along with electromagnetic radiation, which can be observed in a variety of ways, even when it is reprocessed. Following this line of thought, the astrophysical mechanism conceived to evade this constraint is the concept of a ‘hidden source’. Another way to escape this constraint involves sterile neutrinos. Indeed, if there are ultra-high energy mirror neutrinos, they inevitably oscillate into neutrinos from our world on cosmic scales [@bv]. This scenario can provide intense fluxes of ultra-high energy neutrinos, constrained only by the observable electromagnetic radiation from their interaction with the relic neutrino sea. ### Summary of what we can learn on light sterile neutrinos In the view of many, there is something embarrassing in the hypothesis of (light) sterile neutrinos, and some of us believe that this is ‘a solution searching for a problem’. However, history tells us that the most prominent characteristic of neutrinos is that they are amazing! Said more seriously, we should certainly aim to measure neutrino properties, but we should not forget that we could make discoveries.
And, when we think of new experiments, we should evaluate their potential for the investigation of sterile neutrinos. In this section, we recalled that light sterile neutrinos can play a role not only for LSND but also in terrestrial oscillation experiments, solar neutrinos, supernovae, astrophysics and cosmology. There are links between the various observables, but it is not impossible to conceive that sterile neutrinos have an important role only in astrophysics or cosmology (e.g., in core collapse supernovae or big-bang nucleosynthesis). More measurements and theoretical progress will lead to important tests of the idea that the neutrinos that we know are not the full story. What can we learn about four-neutrino mass matrices --------------------------------------------------- The solar, atmospheric and LSND data require three different (mass)$^2$ splittings. These cannot be accommodated by three neutrino flavors, which provide only two independent $\Delta m^2$. Additional degrees of freedom are necessary in order to understand all these data taken together. The easiest option is to add a sterile neutrino and interpret the data in terms of oscillations of four neutrino flavors. The MiniBooNE experiment at Fermilab is crucial for confirming or refuting the LSND evidence for neutrino oscillations. If the LSND result is confirmed, a very exciting epoch in neutrino physics is just about to begin, as the number of questions that need to be answered becomes even larger than in the standard three-flavor case. A general 4-neutrino Majorana mass matrix is described by 4 masses, 6 mixing angles and 6 CP violating phases, 3 of which would affect oscillations. In this case there are 6 possible mass spectra, as shown in Fig. \[4schemes\]. These can be divided into two categories: “3+1” and “2+2”. The “3+1” mass patterns are comprised of one sterile neutrino separated by $\Delta m^2_{\rm LSND}$ from the other three.
The group of three is the usual group of “active” neutrinos, one pair separated by $\Delta m^2_{\odot}$ and the third separated from these by $\Delta m^2_{\rm A}$. The “2+2” patterns are comprised of two pairs of neutrinos, one separated by $\Delta m^2_{\odot}$ and the other by $\Delta m^2_{\rm A}$. The two pairs are separated by $\Delta m^2_{\rm LSND}$. Both categories are already very strongly constrained by experiment [@Maltoni:2003yr; @4nu]. In the “3+1” scenario the 3 states relevant for solar and atmospheric oscillations are mostly active and the fourth state is almost entirely sterile. This pattern has the usual three active flavor scenario as a limiting case, so it agrees very well with all solar and atmospheric data. It is, however, harder to accommodate the short-baseline neutrino oscillation experiments. There is some irony in the fact that LSND is an active flavor appearance experiment, showing $\bar\nu_\mu\to\bar\nu_e$ oscillations, but its solution has to involve an almost entirely sterile neutrino, while the other experiments remain unaffected by the presence of the sterile state. The bounds on $\sin^22\theta_{\rm LSND}$ coming from KARMEN, CDHS, CHOOZ and atmospheric data are rather strong, almost conflicting with the value required to explain the LSND signal. The fit to all data is not very good, but the “3+1” scenario is not completely excluded at this point. In the “2+2” scenario both solar and atmospheric neutrino oscillations involve some fraction of conversion into a sterile state. These fractions are now very strongly constrained by the atmospheric, solar and reactor data, making the fit to the data rather poor in the “2+2” case. The global analyses are usually performed by considering three mixing angles and neglecting the other three, which are known to be small. Including these additional small angles might improve the quality of the fits, but the “2+2” scenario is strongly disfavored.
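The parameter counting quoted above (6 mixing angles and 6 CP phases for four Majorana neutrinos, 3 of which enter oscillations) follows from the general $n$-flavor formulas: $n(n-1)/2$ angles, $n(n-1)/2$ phases in total, and $(n-1)(n-2)/2$ Dirac-type phases relevant to oscillations. A quick check:

```python
def majorana_params(n):
    """Parameter count for an n-flavor Majorana mixing matrix:
    (mixing angles, total CP phases, Dirac phases entering oscillations)."""
    angles = n * (n - 1) // 2
    phases = n * (n - 1) // 2          # Dirac + Majorana phases
    dirac  = (n - 1) * (n - 2) // 2    # the ones affecting oscillations
    return angles, phases, dirac

print(majorana_params(3))   # (3, 3, 1): standard three-flavor case
print(majorana_params(4))   # (6, 6, 3): matches the counting above
```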
The presence of the fourth, sterile neutrino also has implications in cosmology, and cosmological observations impose further constraints on the allowed parameter space, as discussed in the previous section. The first question regarding four-neutrino mass matrices, namely whether they are indeed necessary to interpret the experimentally observed neutrino oscillation data, will soon be answered by the MiniBooNE experiment. Assuming the answer is positive, a whole new set of questions arises. Just as in the three flavor case, one would like better measurements of all $\Delta m^2$’s and mixing angles. Given the much larger number of phases involved in the 4 neutrino case, the possibilities for observing CP violation in the neutrino sector become very rich and maybe more easily accessible [@4nunufact]. If the LSND signal is confirmed, then there must be a state with mass higher than $\sqrt{\Delta m^2_{\rm LSND}}$. This would be in the range of sensitivity of future tritium $\beta$ decay experiments like KATRIN, raising the possibility of determining the absolute scale of neutrino mass. For Majorana neutrinos, neutrinoless double beta decay might also be accessible [@BPP2; @4nu0nubeta]. By combining data from all types of experiments, the specific mass pattern could also be determined. Heavy sterile neutrinos ----------------------- The general motivation for considering sterile neutrinos has already been discussed. Many extensions of the Standard Model imply the existence of more than one sterile neutrino with couplings to the active ones. The right-handed neutrinos participating in the seesaw mechanism, mirror neutrinos, the neutrinos in extra-dimensional models, and the right-handed technisinglet neutrinos in the ETC model discussed in other sections are some examples of such sterile neutrinos. “Light” (below $\sim$ 10 eV) sterile neutrinos have been discussed in Sec. \[lightsterile\].
Here we concentrate on “heavy” ones, by which we mean sterile neutrinos with masses above $\sim$ 10 eV, but below $\sim 1$ GeV. As noted, the mechanism constructed in Refs. [@nt; @lrs; @nuf03] for light neutrinos in Technicolor theories leads to (two) heavy neutrino mass eigenstates in this range. We do not discuss here “very heavy” neutrinos (e.g., at the GUT scale), whose properties have been discussed above. Once sterile neutrinos are introduced, there is no definitive prediction for either the number or the masses of these light neutral fermions. Answering the question of the total number of neutrinos (active and sterile) and their masses is fundamental, as it would lead to much progress in understanding physics beyond the Standard Model. It is thus very important to address these issues from the experimental/observational point of view. Heavy sterile neutrinos with couplings to the active ones have profound implications in cosmology and astrophysics. They can also be constrained by several types of laboratory experiments. We discuss here the present status and future prospects for determining the properties of these heavy sterile neutrinos. ### Laboratory Experiments The existence and mixing of heavy sterile neutrinos has many effects on particle and nuclear decays. These include contributions to $\mu^+ \to e^+ \gamma$, $\mu^+ \to e^+ e^+ e^-$, $K_L \to \mu^\pm e^\mp$, and $K^+ \to \pi^+ \mu^\pm e^\mp$, among others. However, given the limits on mixing, these contributions are expected to be quite small (see further below on $\mu^+ \to e^+ \gamma$). The two-body leptonic decays of charged pions and kaons, and also measurement of the differential decay distribution in $\mu$ decay, can be used to search for, and set bounds on, the emission of massive neutrinos via lepton mixing [@Shrockhs; @Shrocklep].
The experimental signature for the emission of a massive neutrino via lepton mixing in $\pi^+_{\ell 2}$ or $K^+_{\ell 2}$ decay would be the appearance of an additional peak in the momentum spectrum of the outgoing charged lepton $\ell^+ = \mu^+$ or $e^+$. The position of the extra peak is determined by the mass of the heavy neutrino, and the size of the extra peak is proportional to the mixing $|U_{\ell h}|^2$, where $\ell=e$ or $\mu$, between the extra state and the neutrino $\nu_\ell$. Initial bounds from retroactive data analyses were given in [@Shrockhs], and dedicated searches were carried out in $\pi^+_{\mu 2}$ [@pimu2], $K^+_{\mu 2}$ [@kmu2], $\pi^+_{e2}$ [@pie2], and $K^+_{e2}$ [@ke2] decays. Because of renewed interest [@armbruster], some recent searches in $\pi^+_{\mu 2}$ are reported in [@pimu2recent]. Resultant upper bounds on $|U_{\ell h}|^2$ range down to $10^{-5}$ – $10^{-7}$ in the mass range from several MeV to $\sim 300$ MeV. Admixed massive neutrinos also affect the observed ratio of branching ratios $BR(\pi^+_{e2})/BR(\pi^+_{\mu 2})$ and this has been used to set limits (e.g., [@britton92b]). The nuclear beta decay spectrum is also affected by the presence of heavy neutrinos mixed with $\nu_e$. The spectrum would have a “kink” at the endpoint energy $E_{max}(m_h)$ [@maki_nuclear; @Shrockhs; @mckellar; @kobzarev]. The position of the kink determines the mass of the heavy state, $m_h$, and the change in the slope of the spectrum determines the mixing $|U_{eh}|^2$. Many experiments searching for such kinks in the Kurie plots of nuclear beta decays were carried out in the 1980s and early 1990s; although a few claimed positive results, these were refuted [@pdg]. Resultant upper limits were of the order $|U_{eh}|^2\approx 10^{-3}$ for masses between $10$ keV and $\sim 1$ MeV.
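The position of the extra peak in $\pi^+_{\ell 2}$ or $K^+_{\ell 2}$ decay discussed above follows directly from two-body kinematics in the parent rest frame. A small sketch (masses in MeV from standard values; the $m_h$ values are purely illustrative):

```python
from math import sqrt

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def lepton_momentum(M, m_l, m_h):
    """Charged-lepton momentum (MeV) in the two-body decay
    M -> l nu_h, in the parent rest frame; None if the channel
    is kinematically closed."""
    if M < m_l + m_h:
        return None
    return sqrt(kallen(M*M, m_l*m_l, m_h*m_h)) / (2*M)

M_PI, M_MU = 139.57, 105.66                # MeV
print(lepton_momentum(M_PI, M_MU, 0.0))    # standard peak near 29.8 MeV
for mh in (5.0, 10.0, 20.0, 30.0):         # hypothetical nu_h masses
    print(mh, lepton_momentum(M_PI, M_MU, mh))
```

A heavier $\nu_h$ pushes the secondary peak to lower momentum, which is what the dedicated searches scan for.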
The nuclear transitions involving electron capture, $e^- + (Z,A) \to (Z-1,A) + \nu_e$ [@derujula], and muon capture, $\mu^- + (Z,A) \to (Z-1,A) + \nu_\mu$ [@deutsch], can also be used to search for, and put limits on, massive neutrino emission via mixing. Assuming that the additional sterile neutrinos are Majorana, there are several $|\Delta L|=2$ transitions and meson and hyperon decays (processes analogous to neutrinoless nuclear double beta decay) that can occur. One of these is the nuclear transition $\mu^- + (A,Z) \to \mu^+ + (A, Z-2)$ [@mmm]. Meson decays include $K^+ \to \pi^- \mu^+ \mu^+$. A first upper limit on the branching ratio for this decay was set in [@ls1], and this has been greatly improved by a dedicated search in a recent BNL experiment [@e865] (see also [@ls2; @dib; @cernk; @krev]). Analogous $|\Delta L|=2$ decays of heavy-quark mesons are also of interest. Early searches include one by the Mark II detector at PEP for the decays $D^+ \to K^- \mu^+\mu^+$ and $D^+ \to \pi^- \mu^+\mu^+$ [@pep] and one by the CLEO experiment at CESR for $B^+ \to K^-\mu^+\mu^+$ [@cleo]; current limits are given in Ref. [@pdg]. One can also consider $|\Delta L|=2$ hyperon decays such as $\Xi^- \to p \mu^-\mu^-$ and $\Sigma^- \to p \mu^- \mu^-$. A first upper limit, on $BR(\Xi^- \to p \mu^-\mu^-)$, was set in Ref. [@lsh]; a recent dedicated search reporting a much improved limit is by the HyperCP experiment at Fermilab [@hypercp]. Mixing between heavy and light neutrinos also leads to neutrino decay [@Shrockdecay; @gronau; @gnr84]. A number of searches for neutrino decays in accelerator experiments have been carried out [@nudecay]. Depending on the assumed mass of the neutrino mass eigenstate, various decays are possible, including $\nu_h \to \nu' e^+ e^-$, $\nu_h \to \nu' \mu^\pm e^\mp$, $\nu_h \to \nu' \mu^+ \mu^-$.
Bounds on various combinations of mixing angle factors from these experiments are reported in [@nudecay]; published limits range down to $|U_{\ell h}|^2 < 10^{-9}$, $\ell=e,\mu$, for heavy neutrino masses of several hundred MeV [@nudecay; @pdg]. In addition to weak charged current contributions to these neutrino decays, there are also contributions from the weak neutral current, since, in the presence of sterile neutrinos, the weak neutral current is not, in general, diagonal in mass eigenstates [@LeeShrock:77; @valle80; @Kusenko:2004qc]. Present and future experiments such as MiniBooNE and MINOS may be able to improve on some of the present bounds on heavy neutrino decays [@Kusenko:2004qc]. Searches for heavy neutrino production and decay have also been carried out at $e^+e^-$ colliders; see [@pdg] for limits. For masses between a few GeV and $m_Z$ the best bounds come from measurements at the Z pole, where the Z boson could decay into a standard neutrino and a sterile one, which would subsequently decay [@pdg]. ### Astrophysics and cosmology The existence of sterile neutrinos with even very small mixing to the active ones can have dramatic consequences in astrophysics and cosmology. These are discussed by a different working group [@APSastro]. Because astrophysical and cosmological observations provide the strongest constraints and prospects for future answers regarding heavy sterile neutrinos, we do, however, include here an overview of this subject. [*Cosmology*]{} Massive sterile neutrinos could be produced in the early universe and can provide some or even all of the required dark matter. Heavy sterile neutrinos can be produced by scattering-induced conversion of active neutrinos [@DodelsonWidrow]. These neutrinos are produced non-resonantly and they can be a warm dark matter candidate. A different mechanism of production of heavy sterile neutrinos appears if there is a non-vanishing initial lepton number in the Universe [@ShiFuller].
In this case sterile neutrinos can be produced resonantly, and the energy spectrum is then highly non-thermal. The sterile neutrino can then act as warm, cool or even cold dark matter [@Fullerdm]. Cosmological observations impose strong constraints on massive sterile neutrinos [@Fullerdm; @steriledm]. The radiative decay of such neutrinos to a light neutrino and a photon would affect the diffuse extragalactic background radiation, by producing a large number of photons with energy of order $m_h$. The DEBRA experiment is now constraining the parameter space of sterile neutrinos based on this. The Chandra X-ray observatory has great potential to resolve a considerable fraction of the observed X-ray background and consequently to impose much stronger constraints, or potentially to detect X-ray fluxes from dark matter sterile neutrinos in the gravitational potential wells of clusters of galaxies. Heavy sterile neutrino decay prior to cosmic microwave background (CMB) decoupling increases the energy density in relativistic particles, leading to further constraints on the allowed parameter space. Big bang nucleosynthesis (BBN) is one of the big successes of the Standard Model of cosmology, successfully predicting the primordial abundance of light elements. The energy density in the sterile neutrino sea prior to BBN must not be too high in order not to spoil the successful predictions of BBN. Photoproduction of deuterium and $^6{\rm Li}$ from decay of sterile neutrinos after BBN also imposes additional constraints on the sterile neutrino parameter space. Large scale structure observations are also essential, as they can constrain the nature of the dark matter (hot, warm or cold), consequently setting the scale for the mass of the sterile state. Cosmological constraints are illustrated in Fig. \[cosmconstraint\] (from [@Fullerdm]). [*Supernovae*]{} Neutrinos play a dominant role in core collapse supernovae.
Even small admixtures of heavy sterile neutrinos can have profound implications for supernova physics. Too much conversion of active neutrinos into sterile states can lead to excessive energy loss to sterile neutrinos, contradicting observations from supernova SN1987A [@Fullerdm; @steriledm]. The energy emitted in sterile neutrinos depends on the mixing angle between the sterile and active states in matter. For a long time it was thought that most of the emission occurs in the resonant region, where the effective mixing angle becomes $\pi/4$. This is not necessarily true because of a non-linear effect that limits the resonant emission [@Fullerdm]. The effective matter potential for $\nu_e\leftrightarrow \nu_s$ conversion is given by $$V=\frac{G_F\,\rho}{\sqrt{2}\,m_N}\left(3 Y_e-1+4Y_{\nu_e}+2Y_{\nu_\mu}+2Y_{\nu_\tau}\right),$$ where $Y_i=(n_i-n_{\bar i})/n_B$. For antineutrinos the matter potential changes sign. Due to the presence of the $Y_\nu$ terms, coming from neutral current neutrino-neutrino scattering, this effective potential has zero as a fixed point. Once a resonance is reached for, say, neutrinos, $\nu_e$’s start converting with maximal effective mixing angle into $\nu_s$. This decreases the $Y_{\nu_e}$ term, driving the parameters off resonance and thus limiting the emission. The matter potential is effectively driven to zero on a time scale of less than a second and the emission continues with the vacuum mixing angle. The region of parameter space where sterile neutrino emission from supernovae could be relevant is also marked in Fig. \[cosmconstraint\]. It is interesting to note that a sterile neutrino in the few keV region could also account for pulsar kicks [@pulsarkick]. In the presence of a strong magnetic field, neutrinos are emitted asymmetrically in a supernova core. This asymmetry is washed out for active neutrinos, which are trapped. If some conversion to sterile neutrinos occurs, these can escape the core of the star preserving the initial asymmetry.
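The feedback that drives the potential to zero can be made concrete by tracking the dimensionless factor multiplying $G_F\rho/(\sqrt{2}\,m_N)$ in the potential above. A sketch with purely illustrative core values (not taken from any simulation):

```python
def bracket(Ye, Ynue, Ynumu=0.0, Ynutau=0.0):
    """Dimensionless factor multiplying G_F*rho/(sqrt(2)*m_N) in the
    nu_e <-> nu_s matter potential; V = 0 is a fixed point of the
    conversion feedback described in the text."""
    return 3*Ye - 1 + 4*Ynue + 2*Ynumu + 2*Ynutau

Ye = 0.3                          # illustrative electron fraction
Ynue_zero = (1 - 3*Ye) / 4        # Y_nue at which V vanishes
print(round(Ynue_zero, 6))        # prints 0.025 for Ye = 0.3
# nu_e -> nu_s conversion depletes Y_nue toward the fixed point:
for Ynue in (0.04, 0.03, 0.025, 0.02):
    print(Ynue, round(bracket(Ye, Ynue), 6))
```

Above the fixed point the factor is positive (resonant conversion proceeds); depletion of $Y_{\nu_e}$ drives it through zero, shutting off the resonance, in line with the non-linear limiting effect described above.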
Only a few percent asymmetry is sufficient to account for the initial kick of the star that would explain the unusually high velocities of pulsars. To summarize, the presence of heavy sterile neutrinos with some (even very small) mixing to active neutrinos has numerous implications in astrophysics and cosmology. At present, a sterile neutrino with mass of the order of a few keV and very small mixing ($\sin^22\theta\approx 10^{-8}$) with an active one seems to be allowed by the constraints; it could account for all or some fraction of the dark matter in the Universe, would affect the emission of supernova neutrinos, could explain the pulsar kicks, and might lead to observable contributions to X-ray measurements. In the future, more and more precise cosmological observations can constrain very strongly the parameter space for such sterile neutrinos, potentially closing the window or leading to the detection of a signal. Supersymmetry and neutrinos =========================== Neutrino masses are not the only motivation to extend the Standard Model. One would also like to extend it in order to solve the gauge hierarchy problem. Models of low-energy supersymmetry are attractive candidates for the theory of TeV scale physics. In the minimal supersymmetric extension of the Standard Model (MSSM) neutrinos are massless. Thus, we need to consider supersymmetric extensions of the Standard Model that allow for neutrino masses. There are basically three questions we would like to answer when we talk about the relations between supersymmetry and neutrinos 1. Can successful predictions for neutrino masses of non-supersymmetric extensions of the Standard Model be retained once these models are supersymmetrized? In particular, can supersymmetry help in making such models more motivated? 2. Are there models where neutrino masses arise only due to supersymmetry? 3.
Are there interesting phenomena in the slepton sector that can shed light on the issues of neutrino masses, lepton number violation and lepton flavor violation? The questions in the first item were already discussed in previous sections of this review. Generically, supersymmetry does not upset model predictions regarding neutrinos, and in some cases it in fact helps. For example, in the case of GUTs, making the model supersymmetric helps to achieve coupling unification and thus to make the model realistic. In the following we concentrate on the last two items. We briefly describe two frameworks in which neutrino masses are tightly connected to supersymmetry, or more precisely, to supersymmetry breaking: neutrino masses from R-parity violation and neutrino masses from supersymmetry breaking. We then discuss two effects, sneutrino oscillation and sneutrino flavor oscillation, that can help us disentangle the origin of neutrino masses using supersymmetric probes. The aspects of supersymmetric seesaw models, i.e., the connection to decays such as $\mu \rightarrow e \gamma$, have been discussed in previous Sections. Neutrino masses from R-parity violation --------------------------------------- Neutrino masses from R-parity violation have been extensively studied. Here we briefly summarize the main results. For a more complete reference list where more details can be found, see, for example, [@Grossman:2003gq]. Once R-parity is violated there is no conserved quantum number that would distinguish between the down-type Higgs doublet and the lepton doublets. (For definitions and notation see, for example, [@Grossman:1998py].) Thus, these fields in general mix. Such mixings generate neutrino masses; in fact, they generically produce masses that are too large. One neutrino gets a tree level mass which depends on the mixings between the Higgs and the sneutrinos [@JN]. The other two neutrinos get their masses at the one loop level, and thus their masses are smaller by, roughly, a loop factor.
There are many different one-loop contributions to the neutrino masses. (For a complete list see [@Davidson:2000uc; @Davidson:2000ne].) The “standard” diagrams are those that arise from the R-parity violating trilinear couplings in the superpotential, the so-called $\lambda$ and $\lambda'$ couplings. These are the only contributions that are present in the supersymmetric limit. Once supersymmetry breaking is included, there are many more contributions (in fact, even the tree level contribution is present only due to supersymmetry breaking). These contributions are likely to be much larger than the $\lambda$ and $\lambda'$ loop induced masses. The dominant diagrams are likely to be those that arise due to bilinear couplings [@Grossman:2003gq]. These are the couplings that mix the scalar components of the Higgs and the neutrino superfields. The basic reason that the $\lambda$ and $\lambda'$ loop induced masses are very small is that they are proportional to the small down-type quark or charged lepton Yukawa couplings. This suppression factor is absent in the bilinear induced masses. The most attractive feature of R-parity violating models of neutrino masses is that they naturally generate hierarchical neutrino masses with large mixing angles. This is due to the fact that only one neutrino gets a mass at tree level, while the other neutrinos only acquire loop induced masses. Numerically, however, the predicted mass hierarchy is in general somewhat too strong. Models with R-parity violation also predict lepton-number violating processes at possibly observable rates (e.g., [@rabirev; @lsh; @dib]). The biggest puzzle posed by R-parity violation models is to understand the smallness of the neutrino masses. There must be a mechanism that generates very small R-parity violating couplings. There are several ideas of how to achieve this. For example, the small R-parity violating couplings can be a result of an Abelian horizontal symmetry [@Banks:1995by].
Neutrino masses from supersymmetry breaking ------------------------------------------- The smallness of neutrino masses can be directly related to the mechanism of supersymmetry breaking, in particular, to the mechanism that ensures a weak scale $\mu$ parameter [@Arkani-Hamed:2000bq; @Borzumati:2000mc; @Arkani-Hamed:2000kj; @Borzumati:2000ya; @Abel:2004tt; @March-Russell:2004uf]. In general, there is no reason why the MSSM $\mu$ parameter is of the order of the weak scale. Generically, it is expected to be at the cut-off scale of the theory, say the Planck or the GUT scale. Phenomenologically, however, $\mu$ is required to be at the weak scale. One explanation, which is known as the Giudice-Masiero mechanism, is that a $\mu$ term in the superpotential is not allowed by a global symmetry. The required effective weak scale $\mu$ is generated due to supersymmetry breaking effects. The Giudice-Masiero mechanism can be generalized to generate small neutrino masses. It might be that the large Majorana mass term that drives the seesaw mechanism is forbidden by a global symmetry. Effective Majorana mass terms for the right-handed neutrinos, of the order of the weak scale, are generated due to supersymmetry breaking. The same global symmetry can also suppress the Dirac mass between the right and left-handed neutrinos. Then, the left-handed neutrinos have very small Majorana or Dirac masses as desired. The emerging neutrino spectrum depends on the exact form of the global symmetry that is used to implement the Giudice-Masiero mechanism. Nevertheless, the feature that the left-handed neutrino masses are very small is generic. Sneutrino oscillation {#sec:SneutrinoOscillation} --------------------- As already discussed in this report, it is interesting to find out whether neutrinos have Majorana masses. In other words, we would like to find out if total lepton number is violated in nature. 
As already discussed, the most promising way to do this is to look for neutrinoless double beta decay. If supersymmetry is realized in nature we may have other probes. Here we describe one example, that of sneutrino-anti-sneutrino mixing and oscillation [@Grossman:1997is]. Consider a supersymmetric extension of an extended Standard Model that contains Majorana neutrino masses. For simplicity we consider only one neutrino generation. In such models, the effect of $\Delta L=2$ operators is to introduce a mass splitting and mixing into the sneutrino-anti-sneutrino system. This phenomenon is analogous to the effect of a small $\Delta B=2$ perturbation to the leading $\Delta B=0$ mass term in the $B$-system, which results in a mass splitting between the heavy and light neutral $B$ mesons. The very small mass splitting can be measured by observing flavor oscillations. The flavor is tagged in $B$-decays by the final state lepton charge. Since $\Delta m_B \sim \Gamma_B$, there is time for the flavor to oscillate before the meson decays. The sneutrino system can exhibit similar behavior. The lepton number is tagged in sneutrino decay using the charge of the outgoing lepton. The relevant scale is the sneutrino width. If the sneutrino mass splitting is large, namely when $$\label{xsnudef} x_{\tilde \nu} \equiv {\Delta m_{\tilde \nu} \over \Gamma_{\tilde \nu}} \gtrsim 1,$$ and the sneutrino branching ratio into final states with a charged lepton is significant, then a measurable same sign dilepton signal is expected. Any observation of such oscillations would be evidence for total lepton number violation, namely for Majorana neutrino masses. The size of the same sign lepton signal is model dependent. It is generically expected that the sneutrino mass splittings are of the order of the neutrino masses, as both are related to $\Delta L=2$ operators. The sneutrino width can vary a lot.
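For a rough estimate of the signal size as a function of $x_{\tilde \nu}$, one can borrow the standard time-integrated meson-mixing result, $\chi = x^2/[2(1+x^2)]$, which assumes exponential decay, negligible width difference, and no CP violation; a sketch:

```python
def same_sign_fraction(x):
    """Time-integrated probability that a state produced as a sneutrino
    decays as an anti-sneutrino (wrong-sign lepton), using the standard
    meson-mixing result chi = x^2 / (2 (1 + x^2)) with x = dm / Gamma.
    Assumes exponential decay, no width difference, no CP violation."""
    return x * x / (2.0 * (1.0 + x * x))

for x in (0.1, 1.0, 10.0):
    print(x, same_sign_fraction(x))
# small x: signal grows as x^2/2; large x: fraction saturates at 1/2
```

This makes the qualitative statement in the text quantitative: for $x_{\tilde \nu} \ll 1$ the same-sign signal is quadratically suppressed, while for $x_{\tilde \nu} \gtrsim 1$ it approaches its maximal value of one half.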
When two body decay channels are dominant the width is roughly of the order of $\Gamma_{\tilde \nu} \sim \alpha M_{\tilde \nu}\sim {\cal O}(1\;{\rm GeV})$, and thus $x_{\tilde \nu} \ll 1$ and a very small signal is expected. In models where the two body decay channels are forbidden the sneutrino width can be much smaller. For example, in models where the stau is the LSP there can be a situation where the sneutrino has only three body decay channels allowed. Then $x_{\tilde \nu}$ may be large enough for the oscillation signal to be observed. Sneutrino flavor oscillation ---------------------------- As we now know, there are large mixing angles in the lepton sector. An independent probe of lepton flavor violation can be provided in supersymmetric models via the slepton sector [@Krasnikov:1995qq; @Arkani-Hamed:1996au; @Arkani-Hamed:1997km; @Dine:2001cf]. While generally the slepton mixings are not directly related to the lepton mixings, both are lepton flavor violating effects. Slepton flavor oscillations arise if the slepton mass eigenstates are not flavor eigenstates. Experimentally, the signal consists of a lepton flavor imbalance. For example, in a linear collider one can search for a signal like $e^+ e^- \to \tilde\nu \tilde \nu^* \to X e^+ \mu^-$, where $X$ is the non-leptonic part of the final state. In hadron colliders, one can look for signals like $\chi^- \to \tilde\nu \mu^- \to X e^+ \mu^-$, where $\chi^-$ is a chargino. The oscillation probabilities depend on the slepton mass splittings and their mixing angles. In principle, oscillation signals can be used to measure these parameters. Even if this cannot be achieved in practice, just the observation of an oscillation signal would provide an independent confirmation of lepton flavor non-conservation. In many proposals for supersymmetry breaking, a high degree of degeneracy among sleptons is predicted. As a result, there is the potential for large flavor mixing among the sleptons.
This can lead to substantial and observable flavor violating signals at colliders. To be readily observable, it is necessary that mass splittings between the states are not much smaller than the decay widths, and that the mixing angles are not very small. In a large class of supersymmetry breaking models, the splittings can be comparable or even larger than the widths, and the mixing angles may be large. In such cases, dramatic collider signatures are possible. Expectations in Superstring Constructions ========================================= There has been relatively little work on the implications of superstring theories for neutrino masses. However, it is known that some of the ingredients employed in grand unified theories and other four-dimensional models may be difficult to implement in known types of constructions. For example, the chiral supermultiplets that survive in the effective four-dimensional field theory are generally bi-fundamental in two of the gauge group factors (including the case of fundamental under one factor and charged under a $U(1)$) for lowest level heterotic constructions, or either bi-fundamental or adjoint for intersecting brane constructions. This makes it difficult to break the GUT symmetry, and even more so to find the high-dimensional Higgs representations (such as the [**126**]{} of $SO(10)$) usually employed in GUT models for neutrino and other fermion masses. Thus, it may be difficult to directly embed many of the models, especially GUT models involving high-dimensional representations rather than higher-dimensional operators, in a string framework. Perhaps more likely is that the underlying string theory breaks directly to an effective four-dimensional theory including the Standard Model and perhaps other group factors [@Langacker:2003xa]. Some of the aspects of grand unification, especially in the gauge sector, may be maintained in such constructions. 
However, the GUT relations for Yukawa couplings are often not retained [@Dine:1985vv; @Breit:1985ud; @Witten:1985bz] because the matter multiplets of the effective theory may have a complicated origin in terms of the underlying string states. Another difference is that Yukawa couplings in string derived models may be absent due to symmetries in the underlying string construction, even though they are not forbidden by any obvious symmetries of the four-dimensional theory, contrary to the assumptions in many non-string models. Finally, higher-dimensional operators, suppressed by inverse powers of the Planck scale, are common. Much activity on neutrino masses in string theory occurred following the first superstring revolution. In particular, a number of authors considered the implications of an $E_6$ subgroup of the heterotic $E_8 \times E_8$ construction [@Dine:1985vv]-[@Nandi:1985uh]. Assuming that the matter content of the effective theory involves three [**27**]{}’s, one can avoid neutrino masses altogether by fine-tuned assumptions concerning the Yukawa couplings [@Dine:1985vv]. However, it is difficult to implement a canonical type I seesaw. Each [**27**]{} contains two Standard Model singlets, which are candidates for right-handed neutrinos, and for a field which could generate a large Majorana mass for the right-handed neutrinos if it acquires a large vacuum expectation value and has an appropriate trilinear coupling to the neutrinos. However, there are no such allowed trilinear couplings involving three [**27**]{}’s (this is a reflection of the fact that the [**27**]{} does not contain a [**126**]{} of the $SO(10)$ subgroup). $E_6$ string-inspired models were constructed to get around this problem by invoking additional fields not in the [**27**]{} [@valle1; @Witten:1985bz] or higher-dimensional operators [@Nandi:1985uh], typically leading to extended versions of the seesaw model involving fields with masses/vevs at the TeV scale. 
Similarly, more recent heterotic and intersecting brane constructions, e.g., involving orbifolds and twisted sectors, may well have the necessary fields for a type I seesaw, but it is again required that the necessary Dirac Yukawa couplings and Majorana masses for the right-handed neutrinos be present simultaneously. Dirac couplings need not emerge at the renormalizable level, but can be of the form $$\langle S'_1 \cdots S'_{d-3}\rangle N LH_u/M_{\rm PL}^{d-3},$$ where the $S'_i$ are standard model singlets which acquire large expectation values ($d=3$ corresponds to a renormalizable operator). Similarly, Majorana masses can be generated by the operators $$\langle S_1 \cdots S_{n-2}\rangle N N/M_{\rm PL}^{n-3}.$$ Whether such couplings are present at the appropriate orders depends on the underlying string symmetries and selection rules, which are often very restrictive. It is also necessary for the relevant $S$ and $S'$ fields to acquire the needed large expectation values, presumably without breaking supersymmetry at a large scale. Possible mechanisms involve approximately flat directions of the potential, e.g., associated with an additional $U(1)'$ gauge symmetry [@Cleaver:1997nj; @pl], stringy threshold corrections [@Cvetic:1992ct; @Mochinaga:1993td], or hidden sector condensates [@Faraggi:1993zh]. There have been surprisingly few investigations of neutrino masses in explicit semi-realistic string constructions. It is difficult to obtain canonical Majorana masses in intersecting brane constructions [@Blumenhagen:2005mu], because there are no interactions involving the same intersection twice. Two detailed studies [@Ibanez:2001nd; @Antoniadis:2002qm] of nonsupersymmetric models with a low string scale concluded that lepton number was conserved, though a small Dirac mass might emerge from a large internal dimension. Large enough internal dimensions for the supersymmetric case may be difficult to achieve, at least for simple toroidal orbifolds. 
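As a rough numerical illustration of how the higher-dimensional operators above suppress neutrino masses, the following sketch (with purely hypothetical values for the common singlet vev, the overall Yukawa, and the Planck-mass normalization) evaluates the effective Dirac mass $y\,(\langle S'\rangle/M_{\rm PL})^{d-3}\, v_u$ for a few operator degrees $d$:

```python
# Hypothetical illustration: Dirac neutrino mass from the operator
# <S'_1 ... S'_{d-3}> N L H_u / M_PL^(d-3), taking a common singlet vev.
# The vev, Yukawa, and Planck-mass normalization below are assumptions,
# not values from the text.
M_PL = 1.2e19   # Planck mass in GeV (assumed normalization)
v_u = 174.0     # electroweak vev in GeV

def dirac_mass(d, S_vev, y=1.0):
    """Effective Dirac mass (GeV) from a degree-d superpotential term."""
    return y * (S_vev / M_PL) ** (d - 3) * v_u

for d in (3, 4, 5, 6):
    print(d, dirac_mass(d, S_vev=1e17))  # each extra power costs ~1e-2
```

With these assumed numbers, each additional singlet insertion suppresses the mass by roughly two orders of magnitude, which makes clear why the degree at which the Dirac and Majorana operators first appear is the decisive question for such vacua.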
There are also difficulties for heterotic models. An early study of $Z_3$ orbifolds yielded no canonical Dirac neutrino Yukawas [@Font:1989aj] at low order. Detailed analyses of free fermionic models and their flat directions were carried out in [@Faraggi:1993zh; @Coriano:2003ui] and [@Ellis:1997ni; @Ellis:1998nk]. Both studies concluded that small Majorana masses could be generated if one made some assumptions about dynamics in the hidden sector. In [@Faraggi:1993zh; @Coriano:2003ui] the masses were associated with an extended seesaw involving a low mass scale. The seesaw found in [@Ellis:1997ni; @Ellis:1998nk] was of the canonical type I form, but in detail it was rather different from GUT-type models. A seesaw was also claimed in a heterotic $Z_3$ orbifold model with $E_6$ breaking to $SU(3)\times SU(3)\times SU(3)$ [@Kim:2004pe]. A recent study of $Z_6$ orbifold constructions found Majorana-type operators [@Kobayashi:2004ya], but (to the order studied) the $S_i$ fields did not have the required expectation values. In [@Giedt:2005vx] a large class of vacua of the bosonic $Z_3$ orbifold were analyzed with emphasis on the neutrino sector, to determine whether the minimal type I seesaw is common, or, if not, to find possible guidance for model building, and possibly to get clues concerning textures and mixing if examples were found. Several examples from each of 20 patterns of vacua were studied, and the nonzero superpotential terms through degree 9 were determined. There was a huge number of $D$-flat directions, with the number greatly reduced by the $F$-flatness condition. Only two of the patterns had Majorana mass operators, while [*none*]{} had simultaneous Dirac operators of low enough degree to allow neutrino masses larger than $10^{-5}$ eV. (One apparently successful model was ruined by off-diagonal Majorana mass terms.)
It is not clear whether this failure to obtain a minimal seesaw is a feature of the particular class of construction, or whether it is suggesting that stringy constraints and selection rules might make string vacua with minimal seesaws rare. Systematic analyses of the neutrino sector of other classes of constructions would be very useful. There are other possibilities for obtaining small neutrino masses in string constructions, such as extended seesaws and small Dirac masses from higher-dimensional operators. Small Dirac neutrino masses from type I anisotropic compactifications have been discussed recently in [@Antusch:2005kf]. The possibility of embedding type II seesaw ideas (involving Higgs triplets) in string constructions was considered in [@Langacker:2005pf]. It is possible to obtain a Higgs triplet of $SU(2)$ with non-zero hypercharge in a higher-level construction (in which $SU(2) \times SU(2)$ is broken to a diagonal subgroup). In this case, because of the underlying $SU(2) \times SU(2)$ symmetry, the Majorana mass matrix for the light neutrinos should involve only off-diagonal elements (often with one of the three off-diagonal elements small or vanishing), with profound phenomenological consequences, including an inverted hierarchy and two large mixings. This is a top-down motivation for texture C in Table \[texturetable\].

Theories with a TeV-scale $\boldsymbol{U(1)'}$
==============================================

Many extensions of the Standard Model and MSSM include additional non-anomalous $U(1)'$ gauge symmetries. These include many superstring constructions [@Cvetic:1995rj], grand unified theories, Little Higgs models, and models of dynamical symmetry breaking (DSB) [@Hill:2002ap]. In a regular grand unified theory the $U(1)'$ breaking needs to be at a large scale, because scalars that can mediate proton decay can have masses no larger than the $U(1)'$ breaking scale.
In string theories, the $U(1)'$ symmetry breaking is usually induced by soft supersymmetry breaking effects at the TeV scale [@Cvetic:1995rj; @Cvetic:1997wu; @Erler:2002pr], although in some cases there is the possibility of breaking along an $F$ and $D$ flat direction at an intermediate scale [@Cleaver:1997nj] (depending on the sign of a combination of soft mass-squares). The Little Higgs and DSB models are at the TeV scale. A TeV scale $Z'$ has many interesting phenomenological consequences [@Langacker:2003bc], but here we are concerned with neutrino masses. In the intermediate $Z'$-scale case, higher-dimensional operators, involving one or more powers of the fields with intermediate-scale vevs but suppressed by powers of the Planck mass, can yield naturally small Dirac neutrino masses [@Cleaver:1997nj; @pl]. Variants can also lead to mixing between light ordinary and sterile neutrinos, as suggested by the LSND results, or even to a type I seesaw. Models in which the $U(1)'$ breaking is at the TeV scale generally do not allow a canonical type I seesaw model, because the Majorana mass for the right-handed neutrino $N_R$ requires $U(1)'$ breaking (unless the $N_R$ carries no $U(1)'$ charge). However, a number of other possibilities are allowed for the neutrino masses [@KLL], including small Dirac masses (e.g., associated with a second $U(1)'$ broken at an intermediate scale), and Majorana masses associated with a TeV-scale seesaw [@valle1] or a heavy Higgs triplet (Type II seesaw) [@Hambye:2000ui]. The small Dirac mass case involves a strong constraint from Big Bang Nucleosynthesis, because the $Z'$ interactions could efficiently produce the right-handed components prior to nucleosynthesis, leading to too much $^4$He [@Olive:wz]. For generic couplings of the $Z'$ to the $N_R$ the observed abundance implies a $Z'$ mass larger than around 4 TeV, stronger than indirect or collider constraints [@Barger:2003zh]. 
This can be evaded or weakened if the $N_R$ carries no $U(1)'$ charge (as can occur naturally in some models involving two $U(1)'$ factors [@Langacker:2003bc; @KLL]) or if the mass is Majorana.

Neutrino Masses in Theories with Dynamical Symmetry Breaking
============================================================

The source of electroweak symmetry breaking (EWSB) remains unknown, and a dynamical origin of this breaking is an appealing possibility. This can be realized via the formation of a bilinear condensate involving fermions with a new strong gauge interaction, generically called Technicolor (TC) [@tc; @dsb]. Indeed, one may recall that in both of the two well-known cases in which scalar fields have been used in models of spontaneous symmetry breaking, namely the Ginzburg-Landau effective Hamiltonian for superconductivity and the Gell-Mann-Levy sigma model for hadronic chiral symmetry breaking, the scalar fields were not fundamental, and the true underlying physics responsible for these respective phenomena involved the formation of bilinear fermion condensates (Cooper pairs in superconductivity and the $\langle \bar q q \rangle$ condensate in QCD). In order to communicate this symmetry breaking in the Technicolor sector to the standard-model (technisinglet) fermions, one embeds the Technicolor model in a larger, extended Technicolor (ETC) theory [@etc; @tcrev]. To satisfy constraints from flavor-changing neutral-current processes, the ETC vector bosons that mediate generation-changing transitions must have large masses. These masses arise from the sequential breaking of the ETC gauge symmetry on mass scales ranging from $10^3$ TeV down to the TeV level. Precision measurements place tight constraints on these models, suggesting that there are a small number of new degrees of freedom at the TeV scale and that the Technicolor theory has a slowly running (“walking”) gauge coupling with large anomalous dimensions [@wtc].
Since ETC models do not involve any superheavy GUT-scale mass, there was for a long time a puzzle of how one could explain light neutrino masses in these models. A possible solution to this puzzle was given in Ref. [@nt] and studied further in Refs. [@lrs; @nuf03; @ckm]. This does involve a seesaw, but one of a new type not involving any superheavy GUT scale. The resultant formula $M_\nu \simeq (M_\nu^D)^2/m_R$ holds, with the largest Dirac neutrino masses of order a few keV and the relevant Majorana neutrino mass in the range ${\cal O}(0.1)$ GeV to ${\cal O}(100)$ GeV. These Dirac and Majorana neutrino masses are greatly suppressed relative to conventional values. This suppression is a natural consequence of the representations of the ETC gauge group for the various neutrino fields. These ETC models are not yet developed sufficiently to make detailed predictions for leptonic mixing angles, but it seems possible to get substantial neutrino mixing. One interesting feature of this mechanism for neutrino masses is that there are only two, rather than three, right-handed electroweak-singlet neutrinos, in contrast, e.g., to $SO(10)$ GUT models. The ETC gauge group SU($N_{ETC}$) commutes with the Standard Model (SM) group $G_{SM}$. The ETC group gauges the three generations of technisinglet fermions and connects them with the technicolored fermions. The ETC gauge symmetry is chiral, so that when it becomes strong, sequential breaking occurs naturally. The ETC symmetry breaking takes place in stages, leaving the residual exact technicolor gauge symmetry SU($N_{TC}$). This entails the relation $N_{ETC}=N_{gen}+N_{TC}=3+N_{TC}$, where $N_{gen}$ is the number of standard-model fermion generations. The choice of $N_{TC}=2$ is required for the mechanism of Ref. [@nt] to work. This thus implies $N_{ETC}=5$; i.e., one uses an $SU(5)_{ETC}$ gauge theory. A related $SU(5)_{ETC}$ theory had previously been studied in Ref. [@at94].
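The scales quoted above can be checked with a one-line estimate; the following sketch (plain arithmetic, with a representative "few keV" value for $M_\nu^D$ and the two ends of the quoted range for $m_R$) shows that the ETC seesaw indeed lands in the experimentally relevant neutrino-mass range:

```python
# Order-of-magnitude check of the ETC seesaw M_nu ~ (M_nu^D)^2 / m_R,
# taking m_D ~ 3 keV (representative "few keV" value) and m_R at the
# two ends of the quoted range, 0.1 GeV and 100 GeV.
m_D = 3e3  # Dirac mass in eV

for m_R in (1e8, 1e11):  # 0.1 GeV and 100 GeV, in eV
    m_nu = m_D ** 2 / m_R
    print(m_R, m_nu)  # ~0.09 eV down to ~9e-5 eV
```

The resulting light-neutrino masses span roughly $10^{-4}$ eV to $10^{-1}$ eV, bracketing the solar and atmospheric mass scales, which is why this low-scale seesaw is phenomenologically viable despite the absence of a GUT scale.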
The choice $N_{TC}=2$ has two other motivations: (a) it minimizes the TC contributions to the electroweak $S$ parameter, which is a stringent constraint on TC theories, (b) with a standard-model family of technifermions, $Q_L = {U \choose D}_L$, $L_L = {N \choose E}_L$, $U_R$, $D_R$, $N_R$, $E_R$ transforming according to the fundamental representation of SU(2)$_{TC}$, it can yield an approximate infrared fixed point and the associated walking behavior. This sequential breaking of the $SU(5)_{ETC}$ is driven by the condensation of SM-singlet fermions. One can explore whether this dynamical neutrino mass mechanism could take place in the context of ETC theories in which the strong-electroweak gauge group is extended beyond that of the Standard Model. Theories with the left-right symmetry group $G_{LR} = {\rm SU}(3)_c \times {\rm SU}(2)_L \times {\rm SU}(2)_R \times {\rm U}(1)_{B-L}$ [@moh] and the group $G_{422}={\rm SU}(4)_{PS} \times {\rm SU}(2)_L \times {\rm SU}(2)_R$ [@ps] are of particular interest here, where $B$ and $L$ denote baryon and lepton number and $SU(4)_{PS}$ combines color and lepton number. Ref. [@lrs] presented a full ETC model in which these extended strong-electroweak gauge symmetries can be broken dynamically and showed that the mechanism of Ref. [@nt] can also hold here. Dynamical symmetry breaking of $G_{LR}$ has also been studied in Ref. [@lrslindner]. Further, dynamical symmetry breaking of the electroweak symmetry can be triggered by a neutrino condensate [@Martin:1991xw]. ETC theories have many other testable implications. Some recent work has focused on ETC contributions to dimension-5 dipole moment operators [@dml; @qdml] and dimension-6 four-fermion operators and their effects [@ckm]. Neutrinos in extra dimensions ============================= The pioneering idea by Kaluza and Klein (KK) [@KK] that our world may have more than four dimensions has attracted renewed interest over the last ten years [@IA; @EW; @ADD; @DDG1]. 
The possible existence of extra dimensions has dramatically enriched our perspectives in searching for physics beyond the Standard Model. Obviously, extra dimensions have to be sufficiently compact to explain why they have escaped detection so far, although their allowed size is highly model-dependent [@pheno]. The derived constraints depend not only on the number of fields sensitive to the extra dimensions and on their transformation properties under them, but also on the geometry and/or shape of the new dimensions. With regard to the geometry, higher-dimensional theories may be divided into those formulated on a flat space and those that utilize a warped geometry. Higher-dimensional theories may also provide interesting alternatives for explaining the smallness of the observed light neutrino masses. Their predictions for the light-neutrino spectrum can be confronted with recent neutrino oscillation data. In the following, we discuss a generic higher-dimensional neutrino scenario in which a flat geometry is realized. The original formulation of higher-dimensional neutrino models [@DDG2; @ADDM] relies on the possible existence of singlet neutrinos that propagate in a higher $[1 + (3+\delta)]$-dimensional space which is usually termed the bulk, where $\delta$ is the number of additional spatial compact dimensions. In this formulation, the ordinary SM particles reside in a $(1+3)$-dimensional Minkowski subspace, which is called the wall. Hence, the left-handed neutrinos and the Higgs bosons live on the wall. The overlap of their wave-functions with the bulk neutrinos is suppressed by the volume of the extra-dimensional space $(R\, M_F)^{\delta/2} \approx M_{\rm P}/M_{\rm F}$, where $R$ is the common compactification radius, $M_{\rm F}$ is the fundamental gravity scale and $M_{\rm P} \approx 10^{16}$ TeV is the usual Planck mass.
This volume-suppression factor gives rise to effective neutrino Yukawa couplings that are naturally very small, i.e. of order $M_{\rm F}/M_{\rm P} \sim 10^{-15}$ for $M_{\rm F} = 10$ TeV, although the original higher-dimensional Yukawa couplings of the theory could be of order unity. This suppression mechanism [@DDG2; @ADDM] is a generic feature of these higher-dimensional neutrino models realized on a toroidal bulk; it has some dependence on the compactification properties of the bulk neutrinos and the structure of the Higgs sector of the theory [@DS; @AP1]. To illuminate the discussion that follows, we consider a minimal extension of the SM where one 5-dimensional (bulk) sterile neutrino $N$ has been added. Furthermore, the extra dimension, say $y$, is compactified on an $S^1/Z_2$ orbifold. The SM fields are localized on a 4-dimensional Minkowski subspace at the orbifold fixed point $y=0$. Different minimal models may be constructed depending on the way that lepton number is broken:

- (i) One may add lepton-number-violating bilinears of the Majorana type in the Lagrangian [@DDG2], e.g. operators of the form $N^T C^{(5)-1} N$, where $C^{(5)} = - \gamma_1 \gamma_3$ is the charge conjugation operator.

- (ii) Lepton-number-violating mass terms can be generated through the Scherk-Schwarz mechanism [@SS]. This mechanism turns out to be equivalent to (i), after KK reduction.

- (iii) Lepton number can be broken if the $Z_2$-even and $Z_2$-odd two-component spinors of the bulk neutrino couple simultaneously to the same left-handed charged lepton state. This is only possible if the 3-brane describing our observable world is shifted from the $S^1/Z_2$ orbifold fixed point [@DDG2; @BKPP].

- (iv) Violation of lepton number can be achieved by introducing operators of higher dimension in the number of fields, e.g. $(L\Phi)^2/M_{\rm F}$. Such operators may be generated through gravity effects [@MNP].
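The quoted Yukawa suppression can be verified directly from the numbers in the text; the radius estimate below additionally assumes the standard relation $M_{\rm P}^2 = M_{\rm F}^{2+\delta} R^\delta$, which is equivalent to the suppression formula $(R\,M_F)^{\delta/2} \approx M_{\rm P}/M_{\rm F}$ stated above:

```python
# Check of the volume suppression: effective Yukawa ~ M_F/M_P ~ 1e-15
# for M_F = 10 TeV and M_P ~ 1e16 TeV, as stated in the text.
M_P = 1e16  # Planck mass in TeV
M_F = 10.0  # fundamental gravity scale in TeV

y_eff = M_F / M_P
print(y_eff)  # ~1e-15, as quoted

# Assuming M_P^2 = M_F^(2+delta) * R^delta, the implied radius R (in TeV^-1):
for delta in (2, 3, 6):
    R = (M_P / M_F) ** (2.0 / delta) / M_F
    print(delta, R)
```

The loop makes explicit that, for fixed $M_{\rm F}$, the required compactification radius shrinks rapidly as $\delta$ grows, so the largest radii (and the strongest KK phenomenology) occur for few extra dimensions.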
The current neutrino oscillation data provide an important test for singling out a good candidate model that includes higher-dimensional neutrinos. For example, orbifold models with one bulk neutrino [@DDG2; @DS; @MNP; @MP; @BCS; @LRRR; @DLP], based on models of type (i) and/or (ii) mentioned above, seem to prefer the Small Mixing Angle (SMA) solution, which is highly disfavored by recent neutrino data analyses. Alternatively, if all neutrino data are to be explained by oscillations of active neutrinos with a small admixture of a sterile KK component, then the compactification scale has to be much higher than the brane-Dirac mass terms. After integrating out the bulk neutrino of the model, the resulting effective light-neutrino mass matrix is of rank 1. Because of this restricted form of the neutrino mass matrix, two out of the three active neutrinos are massless. Since only one neutrino-mass difference can be formed in this case, it proves difficult to accommodate all neutrino oscillation data in these models [@BCS; @LRRR; @DLP].

Three bulk neutrinos
--------------------

One way to avoid this problem is to add three bulk neutrinos, one for each generation [@MP; @DLP]. This model, in the absence of CP phases, is characterized by seven parameters: three neutrino masses $m_{1,2,3}$, three mixing angles for left-handed neutrinos as defined earlier, and the radius $R$ of the large extra dimension. Since the three mixing angles are arbitrary, the model can easily accommodate the bilarge mixing solution preferred by oscillation data. In the diagonal mass basis, the bulk neutrinos are associated with mass eigenstates. The mixing of the $i$th active neutrino with the $n$th KK mode of the corresponding bulk neutrino is given by $\xi_{i,n}\simeq m_i R/n$. It is interesting that all mixings are intimately connected with the masses. There are limits on $\xi_i$ from laboratory data such as CHOOZ and Palo Verde, as well as from big bang nucleosynthesis [@GM].
BBN constraints for one extra dimension give $\xi^2_3 \leq 1.7\times 10^{-4}\,(\mathrm{eV}\cdot R)^{0.92}$. For a hierarchical pattern, using $\xi= m_3 R$, one gets $R\leq 0.03$ eV$^{-1}$. The bounds from neutrino data, such as solar and atmospheric oscillations, are less stringent [@DLP] and are roughly given by $R\leq 4$ eV$^{-1}$. Among the consequences of this model, two are especially interesting. Both of these concern the KK tower of sterile neutrinos.

\(i) In the presence of the infinite tower of states, the magnetic moment of the neutrino gets contributions from all the states [@ng]. For instance, in the scattering of a neutrino of energy $E\approx 10$ MeV (corresponding to a reactor neutrino beam) the number of states contributing to the magnetic moment is given by $(ER)^2\sim 10^{18}$. Since all the states add incoherently, the effective magnetic moment is increased from $10^{-20}\mu_B$ to $10^{-11}\mu_B$ ($\mu_B$ is the Bohr magneton). The effect on the differential cross section $d\sigma/dT$, where $T$ is the electron recoil energy, has recently been calculated and is given in Fig. \[haibo\] [@haibo].

\(ii) A second consequence of the existence of the KK tower of sterile neutrinos is the possibility that when neutrinos travel through dense matter there can be MSW resonances [@DS], which give rise to a dip pattern [@DS; @cmy] in the neutrino survival probability corresponding to energies spaced by $E\approx \Delta m^2_{\nu_F\nu_{KK}}/2\sqrt{2}G_F N_e$ (i.e. $E, 4E, 9E,\ldots$). The dip arises because typically the survival probability goes like $e^{-c\frac{\Delta m^2}{E}}$. Therefore at lower energies there is more suppression, which becomes less and less effective as the energy increases. Also, the resonance condition is no longer satisfied as the energy increases, if it was satisfied at lower energies. For the solar neutrinos, such a dip structure is quite pronounced [@cmy].
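The factor-of-$10^9$ jump in the effective magnetic moment follows from incoherent addition: $N$ states adding in quadrature enhance the moment by $\sqrt{N}$. A quick check, taking the quoted $N \sim 10^{18}$ at face value:

```python
# Incoherent sum over the KK tower: mu_eff = mu_single * sqrt(N),
# with N = (E*R)^2 ~ 1e18 as quoted in the text for E ~ 10 MeV.
import math

mu_single = 1e-20  # per-state magnetic moment, in Bohr magnetons
N = 1e18           # number of contributing KK states (quoted value)

mu_eff = mu_single * math.sqrt(N)
print(mu_eff)  # ~1e-11 Bohr magnetons, matching the text
```

Since $\sqrt{10^{18}} = 10^9$, the single-state value $10^{-20}\mu_B$ is lifted to $10^{-11}\mu_B$, exactly as quoted.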
In the hierarchical pattern for neutrino masses, this would correspond to $E\approx 10$ MeV for densities comparable to the solar core. The value of the energy clearly depends on the size of the extra dimensions, growing with $R^{-1}$. This is a very interesting phenomenon which could be used to probe this class of extra-dimension models.

Lepton number breaking in the bulk
----------------------------------

In the three bulk neutrino picture, all the neutrinos are Dirac neutrinos since the model has an additional global $B-L$ symmetry. An interesting possibility is the scenario (iii) described above, where sizable lepton-number violation is induced by shifting the $y=0$ brane by an amount $a \sim 1/M_W$. In this scenario, the tree-level rank-1 form of the effective neutrino mass matrix can be significantly modified through lepton-number-violating Yukawa terms, thus offering sufficient freedom to describe the neutrino oscillation data [@BKPP]. In addition to constraints from neutrino oscillation data, other experiments can also play an important role in constraining higher-dimensional neutrino models. Specifically, strong limits on $M_F$ and the Yukawa couplings of the theory may be obtained from the non-observation of lepton-flavor-violating muon and tau decays and from the absence of $\mu$-$e$ conversion in nuclei [@IP; @FP]. Table \[Tabextra\] gives a brief summary of these limits. These phenomenological constraints are complementary to those obtained from pure theoretical considerations, such as perturbative unitarity [@CGY].
  Observable                                                  $\delta = 2$   $\delta = 3$   $\delta = 6$
  ----------------------------------------------------------- -------------- -------------- --------------
  BR$(\mu \to e \gamma)$                                      $75$           $43$           $33$
  BR$(\mu \to e e e)$                                         $250$          $230$          $200$
  BR$(\mu \ ^{48}_{22}{\rm Ti} \to e \ ^{48}_{22}{\rm Ti})$   $380$          $320$          $300$

  : \[Tabextra\] One-loop-level lower limits on $M_F/h^2\ [\,{\rm TeV}\,]$ from [@IP], assuming $h_e\ =\ h_\mu\ =\ h_\tau\ =\ h\ \stackrel{>}{{}_\sim}\ 1$.

Another low-energy experiment of great importance is the neutrinoless double beta decay of a nucleus. The recently claimed experimental evidence [@klapdor] of an effective neutrino mass of order 0.4 eV (see however [@antiklapdor]), combined with information from solar and atmospheric neutrino data, restricts the admissible forms of the light-neutrino mass hierarchies in 4-dimensional models with 3 left-handed (active) neutrinos. The allowed scenarios contain either degenerate neutrinos or neutrinos that have an inverted mass hierarchy [@KS]. A positive interpretation of the recently reported $0\nu\beta\beta$ signal [@klapdor] imposes additional constraints on model-building. For example, higher-dimensional models that utilize the shining mechanism from a distant brane [@MLP] may accommodate an effective neutrino mass of 0.4 eV but also predict the emission of Majorons. On the other hand, 5-dimensional models formulated on a warped geometric space [@Huber] face difficulties in accounting for the observed excess in [@klapdor]. In the context of $S^1/Z_2$ orbifold models, one has to solve an additional theoretical problem. The resulting KK neutrinos group themselves into approximately degenerate pairs of opposite CP parities.
Because of this, the lepton-number-violating KK-neutrino effects cancel each other, leading to unobservably small predicted values for the $0\nu\beta\beta$ decay. These disastrous CP-parity cancellation effects can be avoided by arranging for the opposite-CP-parity KK neutrinos to couple to the $W^\pm$ bosons with unequal strength. This feature can naturally be implemented if the $y = 0$ wall is displaced from one of the $S^1/Z_2$ orbifold fixed points by an amount of order $1/M_W$. A unique prediction of such a model [@BKPP] is that the effective neutrino mass and the scale of the light neutrino masses can be completely decorrelated.

Other new physics and neutrinos
===============================

New long range forces
---------------------

Long range forces in the context of particle physics originated with the ideas of Lee and Yang [@yang] and Okun [@okun], who proposed that gauging the baryon number or lepton number would give rise to a composition-dependent long range force which could be tested in the classic Eötvös-type experiments [@adelberger]. A special class of long range forces which distinguish between leptonic flavors have far-reaching implications for neutrino oscillations [@am; @grifols-masso], which may be used as a probe of such forces. The standard model Lagrangian is invariant under four global symmetries corresponding to the baryon and three lepton numbers $L_\alpha$ $(\alpha=e,\mu,\tau)$. Of these, only three combinations [@foot] of lepton numbers, (i) $L_e-L_\mu$, (ii) $L_e-L_\tau$ or (iii) $L_\mu-L_\tau$, can be gauged in an anomaly-free way without extending the matter content. The existence of neutrino oscillations implies that these symmetries have to be broken, but the relevant gauge bosons can still be light if the corresponding couplings are very weak. It is possible in this case to obtain light-gauge-boson-induced forces of very long range (e.g. the Sun-Earth distance) without invoking extremely low mass scales [@am].
The exchange of such a boson would induce matter effects in terrestrial, solar and atmospheric neutrino oscillations. For example, the electrons inside the sun generate a potential $V_{LR}$ at the earth's surface given by $$\label{vlr} V_{LR}=\alpha {N_e\over R_{es}}\approx (1.04 \times 10^{-11} \,\mathrm{eV})\left({\alpha\over 10^{-50}}\right) ~,$$ where $\alpha\equiv {g^2\over 4 \pi}$ corresponds to the gauge coupling of the $L_{e}-L_{\mu,\tau}$ symmetry, $N_e$ is the number of electrons inside the sun and $R_{es}$ is the earth-sun distance $\approx 7.6 \times 10^{26} \,\mathrm{GeV}^{-1}$. The present bound on a $Z$-dependent force with range $\lambda\sim 10^{13}$ cm is given [@adelberger] by $\alpha< 3.3\times 10^{-50}$. Eq. (\[vlr\]) then shows that the potential $V_{LR}$ can introduce very significant matter-dependent effects in spite of the very strong bound on $\alpha$. One can define a parameter $$\xi\equiv {2 E_\nu V_{LR}\over \Delta m^2}$$ which measures the effect of the long range force in any given neutrino oscillation experiment. Given the terrestrial bound on $\alpha$, one sees that $\xi_{atm}\sim 27.4$ in atmospheric or typical long-baseline experiments, while $\xi_{solar}\sim 7.6$ in the case of solar or KamLAND-type experiments. In either case, the long range force would change the conventional oscillation analysis. A relatively large value of $\alpha$ suppresses the oscillations of the atmospheric neutrinos. The observed oscillations can then be used to put stronger constraints on $\alpha$, which were analyzed in [@am]. One finds the improved 90% CL bounds $$\label{atmbound} \alpha_{e\mu}\leq 5.5\times 10^{-52}~~~~;~~~~\alpha_{e\tau}\leq 6.4 \times 10^{-52} ~,$$ for the $L_{e}-L_{\mu}$ and $L_{e}-L_{\tau}$ symmetries, respectively.
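As a consistency check, the normalization of Eq. (\[vlr\]) and the quoted $\xi_{atm}$ can be reproduced with straightforward arithmetic; the electron count $N_e$ below is an assumed round value chosen to match Eq. (\[vlr\]), and the atmospheric parameters are representative:

```python
# Reproduce V_LR = alpha * N_e / R_es and xi = 2 E V_LR / dm^2.
N_e = 7.9e56     # electrons in the sun (assumed; fixes Eq. (vlr) normalization)
R_es = 7.6e26    # earth-sun distance in GeV^-1 (from the text)
alpha = 3.3e-50  # fifth-force bound quoted in the text

V_LR = alpha * N_e / R_es * 1e9  # potential in eV (1 GeV = 1e9 eV)
print(V_LR)  # ~3.4e-11 eV

E_nu = 1e9    # ~1 GeV atmospheric neutrino, in eV
dm2 = 2.5e-3  # atmospheric mass splitting in eV^2 (assumed)
xi_atm = 2 * E_nu * V_LR / dm2
print(xi_atm)  # ~27, consistent with xi_atm ~ 27.4 quoted in the text
```

With $\alpha$ at its fifth-force bound, $\xi_{atm} \gg 1$, confirming the statement that the long range potential would visibly distort atmospheric and long-baseline oscillations.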
Although these bounds represent a considerable improvement over the conventional fifth-force bound, they still allow interesting effects which can be used as a probe of such long range forces in future long-baseline experiments with super beams or at neutrino factories. As a concrete example, let us consider the influence of the $L_e-L_\mu$ gauge interactions on the long-baseline oscillations of muon neutrinos of ${\cal O}$(GeV) energy. The oscillations of these neutrinos are governed by the following $3\times 3$ (mass)$^2$ matrix in the flavor basis: $$\label{3by3} M_{\nu}^2=U^* \mathrm{Diag}(m_1^2,m_2^2,m_3^2) U^\dagger+\mathrm{Diag}(A_{CC}+A_{LR},-A_{LR},0) ~.$$ $U$ denotes the (vacuum) mixing matrix, for which we adopt the conventional parametrization. $A_{CC}=2E_\nu \sqrt{2} G_F n_e\approx (1.04 \times 10^{-13}\,\mathrm{eV})\, 2 E_\nu$ describes the conventional MSW matter contribution generated by the earth matter (density $\rho\sim 2.8~ \rm gm/cm^3$; electron fraction $Y_e\sim 0.49$). Similarly, $A_{LR}\approx (1.04 \times 10^{-13}\,\mathrm{eV})\, 2 E_\nu\, \alpha_{52}$, with $\alpha_{52}$ denoting the coupling of the long range force measured in units of $10^{-52}$. The $A_{LR}$ term dominates over $A_{CC}$ if $\alpha$ saturates the bound in Eq. (\[atmbound\]).

The matter-induced terms in Eq. (\[3by3\]) modify the neutrino oscillations in a non-trivial manner. This effect is analyzed in the limit of vanishing solar scale and $A_{LR}=0$ in [@matter]. The $23$ mixing angle remains unaffected by the matter-induced contribution, but the $13$ angle can get resonantly enhanced at a neutrino energy given by $$E_\nu\approx {\cos 2 \theta_{13}\Delta m_{32}^2\over 2 \sqrt{2} G_F n_e}\approx 11.8 \,\mathrm{GeV} ~.$$ This leads to a rise in the oscillation probability $P_{e\mu}$, see Fig. (\[amfig1\]). The additional long range contribution results in a noticeable shift in the resonance energy, as seen from Fig. (\[amfig1\]). We assume normal hierarchy in this figure.
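The quoted resonance energy follows from setting $A_{CC} \approx \cos 2\theta_{13}\,\Delta m^2_{32}$; a quick check using the normalization $\sqrt{2} G_F n_e \approx 1.04\times 10^{-13}$ eV read off from the $A_{CC}$ expression above (with an assumed $\Delta m^2_{32} = 2.45\times 10^{-3}$ eV$^2$ and the small-$\theta_{13}$ approximation):

```python
# Resonance energy: E_res = cos(2*theta13) * dm2_32 / (2*sqrt(2)*G_F*n_e),
# with sqrt(2)*G_F*n_e = 1.04e-13 eV taken from the text's A_CC.
dm2_32 = 2.45e-3        # eV^2 (assumed value of the atmospheric splitting)
cos_2theta13 = 1.0      # small-theta13 approximation
sqrt2_GF_ne = 1.04e-13  # eV, for rho ~ 2.8 g/cm^3 and Y_e ~ 0.49

E_res = cos_2theta13 * dm2_32 / (2 * sqrt2_GF_ne)  # in eV
print(E_res / 1e9)  # ~11.8 GeV, matching the text
```

Since $A_{LR}$ enters Eq. (\[3by3\]) alongside $A_{CC}$, scaling the denominator by $(1+\alpha_{52})$ shows directly how a saturated long range coupling shifts this resonance downward in energy.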
The resonance behavior would be absent in case of the inverted hierarchy or for an anti-neutrino beam. While a more detailed study is required to distinguish these cases, it is clear that future observations of matter effects in long-baseline neutrino experiments provide a good probe of additional long range forces.

Lorentz noninvariance, CPT violation and decoherence
----------------------------------------------------

### CPT Violation

In this section, we discuss neutrino oscillation phenomenology in the presence of CPT violation. CPT is a symmetry of any theory that satisfies three assumptions that are normally taken for granted: (1) locality, (2) Lorentz invariance, and (3) hermiticity of the Hamiltonian. In particular, it predicts that the mass of a particle is the same as that of its anti-particle. Any violation of CPT would therefore have profound consequences for fundamental physics. The best limit on CPT violation is in the neutral kaon system, $|m(K^0) - m(\overline{K}^0)| < 10^{-18} m_K = 0.50 \times 10^{-18}$ GeV [@Hagiwara:fs]. Given such a stringent bound, a sizable CPT violation in neutrinos may not seem possible at first sight. However, the kinematic parameter is mass-squared instead of mass, and the constraint may more naturally be considered on the CPT-violating difference in mass-squared, $|m^2(K^0) - m^2(\overline{K}^0)| < 0.25$ eV$^2$. In comparison, the combination of SNO and KamLAND data leads to the constraint $|\Delta m^2_\nu - \Delta m^2_{\bar{\nu}}| < 1.3 \times 10^{-3}$ eV$^2$ (90% CL), currently the best limit on CPT violation [@Murayama:2003zw]. Having seen that CPT violation in neutrino masses may be of a size relevant to neutrino oscillation, it is useful to discuss how it may affect the phenomenology. In fact, the primary motivation for recent discussions of CPT violation in neutrino oscillation has been to reconcile the LSND data with other data [@Murayama:2000hm].
It is well known that the LSND data is not quite consistent with the other oscillation evidence and limits even if a sterile neutrino state is introduced, both for $2+2$ and $3+1$ spectra (see [@Maltoni:2003yr] for a recent analysis; adding more sterile states helps [@sorel]). The main point is that the LSND oscillation is primarily an anti-neutrino oscillation $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$, while the solar neutrino oscillation is purely in neutrinos, $\nu_e \rightarrow \nu_{\mu,\tau}$. It was shown that one could fit the LSND, solar, and atmospheric neutrino data simultaneously without invoking a sterile neutrino at that time [@Murayama:2000hm; @Barenboim:2001ac; @Strumia:2002fw]. The phenomenology was further developed in Refs. [@Barenboim:2002rv; @Barenboim:2002ah]. ![The proposed spectra of neutrinos and anti-neutrinos in [@Murayama:2000hm] and [@Barenboim:2001ac]. Excluded by KamLAND.[]{data-label="fig:oldCPTviolation"}](oldCPTviolation){width="40.00000%"} However, the KamLAND data show $\bar{\nu}_e \rightarrow \bar{\nu}_{\mu,\tau}$ oscillation with parameters consistent with the solar neutrino oscillation, and CPT violation alone cannot explain LSND. A new proposal tried to explain the LSND and atmospheric anti-neutrino oscillations with a single $\Delta m^2$ [@Barenboim:2002ah], which was excluded by a global fit in [@Gonzalez-Garcia:2003jq]. Currently the best fit to the data is obtained by allowing for one sterile neutrino [*and*]{} CPT violation [@Barger:2003xm]. Because the short-baseline experiments that constrain the interpretation of the LSND data with a sterile neutrino are mostly in neutrinos but not in anti-neutrinos, the $3+1$ spectrum is allowed if there is little mixing of the sterile state with others in neutrinos. ![The revised proposal in [@Barenboim:2002ah]. 
Excluded by the analysis taken from [@Gonzalez-Garcia:2003jq].[]{data-label="fig:newCPTviolation"}](LSND+CPT "fig:"){width="40.00000%"} ![The revised proposal in [@Barger:2003xm] that combines CPT violation and a sterile neutrino. The neutrinos always have a $2+2$ spectrum, while the anti-neutrinos may have either a $3+1$ or a $2+2$ spectrum.[]{data-label="fig:newCPTviolation2"}](latestCPTviolation "fig:"){width="40.00000%"} ![The revised proposal in [@Barger:2003xm] that combines CPT violation and a sterile neutrino. The neutrinos always have a $2+2$ spectrum, while the anti-neutrinos may have either a $3+1$ or a $2+2$ spectrum.[]{data-label="fig:newCPTviolation2"}](latestCPTviolation2 "fig:"){width="40.00000%"} Even though arbitrarily changing the neutrino and anti-neutrino masses may seem to preserve Lorentz invariance, an interacting theory with such masses necessarily violates Lorentz invariance [@Greenberg:2002uu]. All the discussions above assumed Lorentz invariance and hence should be regarded as purely phenomenological exercises. One theoretically well-defined way to break CPT is to introduce a cosmological “matter effect,” namely a background number density coupled to neutrinos. However, such a framework does not explain the data consistently [@DeGouvea:2002xp]. See also [@Kostelecky:2003xn] for a different framework of CPT violation, and [@Barenboim:2004wu] for a recent discussion on the use of decoherence and CPT violation. Lorentz invariance violation in the neutrino sector can arise via the see-saw mechanism; as discussed in [@Choubey:2003ke], this could explain why it would not be seen in the charged lepton sector.

### Decoherence

So far, CPT violation in the form of an inequality of masses between particles and antiparticles was the only way we understood CPT violation in high energy physics. However, is this the only way a violation of this symmetry can manifest itself in nature?
Such a question becomes extremely relevant in the case of LSND, because it is possible that other mechanisms leading to CPT violation exist, unrelated, in principle, to mass differences between particles and antiparticles. Such additional mechanisms for CPT violation may well be capable of explaining the LSND results within a three generation scenario without invoking a sterile neutrino (a scenario which, on the other hand, is becoming increasingly excluded as new experimental data become available). It is therefore necessary to explore whether alternative ways exist to account for the LSND result without invoking extra (sterile) neutrino states. Quantum decoherence is the key to answering this question. Indeed, quantum decoherence in matter propagation occurs when the matter subsystem interacts with an ‘environment’, according to the rules of open-system quantum mechanics. At a fundamental level, such decoherence may be the result of propagation of matter in quantum gravity space-time backgrounds with ‘fuzzy’ properties, which may be responsible for violation of CPT in a way not necessarily related to mass differences between particles and antiparticles. A characteristic example of such a violation occurs in quantum gravity models that involve singular space-time configurations, integrated over in a path integral formalism, which are such that the axioms of quantum field theory, as well as conventional quantum mechanical behavior, cannot be maintained. Such configurations consist of wormholes, microscopic (Planck size) black holes, and other topologically non-trivial solitonic objects, such as [*geons*]{}. Collectively, we may call such configurations [*space-time foam*]{}. It has been argued that, as a result, a mixed state description must be used ([*QG-induced decoherence*]{}) [@ehns], given that such objects cannot be accessible to low-energy observers, and as such must be traced over in an effective field theory context.
As a consequence, CPT invariance in its strong form must be abandoned in a foamy quantum gravity theory. Such a breakdown of CPT symmetry is a fundamental one and, in particular, implies that a proper CPT operator may be [*ill defined*]{} in such QG decoherence cases. Some caution is warranted regarding CPT violation through decoherence. From a formal viewpoint, the non-invertibility of the S-matrix, which implies a strong violation of CPT, does not preclude a softer form of CPT invariance, in the sense that a strong form of CPT violation does not necessarily have to show up in any single experimental measurement. This implies that, despite the general evolution of pure to mixed states, it may still be possible in the laboratory to ensure that the system evolves from an initial pure state to a single final state, and that the weak form of CPT invariance is manifested through the equality of probabilities between these states. If this is the case, then the decoherence-induced CPT violation will not show up in any experimental measurement. In the parametrization of [@ehns] for the decoherence effects, one uses three decoherence parameters with dimensions of energy, $\alpha,\beta,\gamma$, where the positivity of $\rho$, required by the fact that its diagonal elements express probability densities, implies $\alpha, \gamma \ge 0$ and $\alpha \gamma \ge \beta^2$. If the requirement of a completely positive map $\rho (t)$ is imposed in the two generation case, then ${\cal L}$ becomes diagonal, with only one non-vanishing entry occupied by the decoherence parameter $\gamma > 0$ [@benatti].
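For illustration, the positivity conditions on $(\alpha,\beta,\gamma)$ just quoted can be encoded in a one-line check (a sketch; the function name is ours):

```python
# Sketch: positivity constraints on the decoherence parameters of the
# parametrization of [ehns]: alpha, gamma >= 0 and alpha*gamma >= beta^2.

def is_positive(alpha, beta, gamma):
    """True if (alpha, beta, gamma) satisfy the positivity constraints."""
    return alpha >= 0 and gamma >= 0 and alpha * gamma >= beta ** 2

print(is_positive(1.0, 0.5, 1.0))   # allowed region
print(is_positive(1.0, 2.0, 1.0))   # violates alpha*gamma >= beta^2
```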
Following this approach, for a three generation scenario we will assume for the $9 \times 9$ decoherence matrix ${\cal L}$: $[{\cal L}_{\mu\nu}]= {\rm Diag}\left(0, -\gamma_1,-\gamma_2, -\gamma_3,-\gamma_4,-\gamma_5,-\gamma_6,-\gamma_7,-\gamma_8\right)$, in direct analogy with the two-level case of complete positivity [@lisi; @benatti], although there is no strong physical motivation behind such restricted forms of decoherence. This assumption, however, leads to the simplest possible decoherence models, and, for our [*phenomenological*]{} purposes in this work, we will assume the above form, which we will use to fit all the available neutrino data. It must be clear to the reader, though, that such a simplification, if proved to be successful (which, as we shall argue below, is the case here), just adds more in favor of decoherence models, given the restricted number of parameters available for the fit in this case. In order to check these models, we have performed a $\chi^2$ comparison (as opposed to a $\chi^2$ fit) to SuperKamiokande sub-GeV and multi-GeV data (40 data points), CHOOZ data (15 data points) and LSND (1 datum), for a sample point in the vast parameter space of our extremely simplified version of decoherence models. Let us emphasize that we have [**not**]{} performed a $\chi^2$ fit, and therefore the point we are selecting (by “eye” and not by $\chi$) is not optimized to give the best fit to the existing data. Instead, it must be regarded as one among many equally good members of this family of solutions, it being entirely possible to find a better fitting one through a complete (and highly time consuming) scan over the whole parameter space.
To cut a long story short, and to make the analysis easier, we have set all the $\gamma_i$ in the neutrino sector to zero, thereby restricting all the decoherence effects to the antineutrino sector, where we have assumed, for the sake of simplicity, $ \overline{\gamma_{1}}= \overline{\gamma_{2}} = 2 \cdot 10^{-18} \cdot E ~~{\rm and}~~ \overline{\gamma_{3}} = \overline{\gamma_8}= 1 \cdot 10^{-24}/E~ $, where $E$ is the neutrino energy, and barred quantities refer to the antineutrinos, given that decoherence takes place only in this sector in our model. All the other parameters are assumed to be zero. All in all, we have introduced only two new degrees of freedom, $ \overline{\gamma_{1}}$ and $ \overline{\gamma_{3}}$, and we shall try to explain with them all the available experimental data. (For further details we refer the reader to [@Barenboim:2004wu].) At this point it is important to stress that the inclusion of two new degrees of freedom is not sufficient to guarantee that one will indeed be able to account for all the experimental observations. We have to keep in mind that, in no-decoherence situations, the addition of a sterile neutrino (which comes along with four new degrees of freedom, excluding the possibility of CP violating phases) did not seem to be sufficient for matching all the available experimental data, at least in CPT conserving situations.
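To see how such $\gamma$ parameters act, a two-flavor toy survival probability with a decoherence damping factor $e^{-\gamma L}$ can be sketched as follows. This is only an illustration of the generic damping mechanism, not the full $3\times 3$ model used in the fit; treating the damping argument `gamma_L` as dimensionless is our simplification.

```python
# Two-flavor sketch of decoherence-damped oscillations:
# P = 1 - (1/2) sin^2(2 theta) [1 - exp(-gamma L) cos(dm^2 L / 2E)].

import math

def p_survival(sin2_2theta, dm2_eV2, L_km, E_GeV, gamma_L=0.0):
    # 1.27 dm2 L / E is the usual half-phase in these units
    phase = 2 * 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - 0.5 * sin2_2theta * (1.0 - math.exp(-gamma_L) * math.cos(phase))

# With strong damping the interference term dies and the probability
# averages to 1 - sin^2(2 theta)/2, independently of L and E:
print(p_survival(1.0, 2.5e-3, 735.0, 3.0, gamma_L=50.0))  # -> 0.5
```

The fact that the damped result depends only on the mixing, not on the oscillation phase, is what allows the LSND signal to be driven by the decoherence factor rather than by the solar mass splitting, as discussed below.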
In order to test our model we have calculated the $\chi^2$ of the 56 data points mentioned above for different scenarios. The results, with which we hope all our claims become crystal clear, are summarized in the following table, where we present the $\chi^2$ comparison for: (a) pure decoherence in the antineutrino sector, (b) pure decoherence in both sectors, (c) mixing plus decoherence in the antineutrino sector, (d) mixing plus decoherence in both sectors, and (e) mixing only, the standard scenario:

  ------- ----------------------- -------------------------
  Model   $\chi^2$ without LSND   $\chi^2$ including LSND
  (a)     980.7                   980.8
  (b)     979.8                   980.0
  (c)     52.2                    52.3
  (d)     54.4                    54.6
  (e)     53.9                    60.7
  ------- ----------------------- -------------------------

From the table it becomes clear that the mixing plus decoherence scenario in the antineutrino sector can easily account for all the available experimental information, including LSND. It is important to stress once more that our sample point was not obtained through a scan over all the parameter space, but by an educated guess, and therefore plenty of room is left for improvements. On the other hand, for the mixing-only/no-decoherence scenario, we have taken the best fit values of the state-of-the-art analysis, and therefore no significant improvements are expected. At this point a word of warning is in order: although superficially it seems that scenario (d), decoherence plus mixing in both sectors, provides an equally good fit, one should remember that including decoherence effects in the neutrino sector can have undesirable effects in solar neutrinos, especially because decoherence effects are weighted by the distance traveled by the neutrino, which may lead to sizable (not observed!) effects in the solar case.
One might wonder, then, whether decohering effects, which affect the antineutrino sector sufficiently to account for the LSND result, have any impact on the solar-neutrino related parameters, measured through antineutrinos in the KamLAND experiment. In order to answer this question, it is sufficient to calculate the electron antineutrino survival probability for KamLAND in our model, which turns out to be $ P_{\bar\nu_{e}\rightarrow \bar\nu_{e}} \mid_{\mbox{\tiny KamLAND}} \simeq 0.63$, in perfect agreement with observations. It is also interesting to notice that in our model the LSND effect is not given by the phase inside the oscillation term (which is proportional to the solar mass difference) but rather by the decoherence factor multiplying the oscillation term. The tension between the LSND and KARMEN data is therefore naturally eliminated, because the difference in baseline length leads to an exponential suppression. Having said that, it is now clear that decoherence models (once neutrino mixing is taken into account) are the best (and arguably the only) way to explain all the observations, including the LSND result. This scenario, which makes dramatic predictions for the upcoming neutrino experiments, expresses a strong observable form of CPT violation in the laboratory, and in this sense our fit gives a clear answer to the question asked above as to whether the weak form of CPT invariance is violated in Nature. It seems that, in order to account for the LSND results, we should invoke such a decoherence-induced CPT violation, which however is independent of any mass differences between particles and antiparticles. This CPT violating pattern, with equal mass spectra for neutrinos and antineutrinos, will have dramatic signatures in future neutrino oscillation experiments.
The most striking consequence will be seen in MiniBooNE: according to our picture, MiniBooNE will be able to confirm LSND only when running in the antineutrino mode and not in the neutrino one, as decoherence effects live only in the former. Smaller but experimentally accessible signatures will also be seen in MINOS, by comparing conjugated channels (most noticeably, the muon survival probability). Higher energy neutrino beams or longer-baseline experiments will show more significant deviations from the non-decoherence models, as our effects scale with energy and distance traveled, and are therefore the best tools to explore decoherence models. If the neutrino masses are actually related to decoherence as a result of quantum gravity, this may have far reaching consequences for our understanding of the early stages of our Universe, and even for the issue of Dark Energy, which came up recently as a result of astrophysical observations of a current acceleration of the Universe, from either distant supernovae data or measurements of Cosmic Microwave Background temperature fluctuations by the WMAP satellite. Indeed, decoherence implies the absence of a well-defined scattering S-matrix, which in turn would imply CPT violation in the strong form. A positive cosmological [*constant*]{} $\Lambda > 0$ will also lead to an ill-defined S-matrix, precisely due to the existence, in such a case, of an asymptotic-future de Sitter (inflationary) phase of the universe, with Hubble parameter $\sim\sqrt{\Lambda}$, implying the existence of a cosmic (Hubble) horizon. This in turn will prevent a proper definition of pure asymptotic states. We would like to point out at this stage that the claimed value of the dark energy density component of the (four-dimensional) Universe today, $\Lambda \sim 10^{-122}M_P^4$, with $M_P \sim 10^{19}$ GeV (the Planck mass scale), can actually be accounted for (in an amusing coincidence?)
by the scale of the neutrino mass differences used in order to explain the oscillation experiments. Indeed, $\Lambda \sim [(\Delta m^2)^2/M_P^4]M_P^4 \sim 10^{-122} M_P^4$ for $\Delta m^2 \sim 10^{-5}$ eV$^2$, the order of magnitude of the solar neutrino mass difference assumed in oscillation experiments (which is the one that encompasses the decoherence effects). The quantum decoherence origin of this mass would then be in perfect agreement with the decoherence properties of the cosmological constant vacuum, mentioned previously.

NuTeV Physics
=============

The NuTeV experiment [@Zeller:2001hh] at Fermilab has measured the ratios of neutral to charged current events in muon (anti)neutrino-nucleon scattering: $$\begin{aligned} R_\nu & = & \frac{ \sigma(\nu_\mu N \rightarrow \nu_\mu X) }{\sigma(\nu_\mu N \rightarrow \mu^- X) } \;=\; g_L^2 + r g_R^2\;, \cr R_{\bar{\nu}} & = & \frac{ \sigma(\bar{\nu}_\mu N \rightarrow \bar{\nu}_\mu X) }{ \sigma (\bar{\nu}_\mu N \rightarrow \mu^+ X) } \;=\; g_L^2 + \frac{g_R^2}{r}\;,\end{aligned}$$ where $$r = \frac{ \sigma( \bar{\nu}_\mu N \rightarrow \mu^+ X) }{ \sigma( \nu_\mu N \rightarrow \mu^- X) } \sim \frac{1}{2}\;,$$ and has determined the parameters $g_L^2$ and $g_R^2$ [@LlewellynSmith:ie] to be $$\begin{aligned} g_L^2 & = & 0.30005 \pm 0.00137\;, \cr g_R^2 & = & 0.03076 \pm 0.00110\;. \label{nutev}\end{aligned}$$ The Standard Model (SM) predictions of these parameters based on a global fit to non-NuTeV data, cited as $[g_L^2]_\mathrm{SM}=0.3042$ and $[g_R^2]_\mathrm{SM}=0.0301$ in Ref. [@Zeller:2001hh], differ from the NuTeV result by $3\sigma$ in $g_L^2$. Alternatively, if the SM is fit to the NuTeV result, the preferred range of the Higgs mass is $660~\mathrm{GeV}< m_H$ (90% C.L.) [@Chanowitz:2002cd], well above the value of $m_H \sim 90~\mathrm{GeV}$ preferred by the non-NuTeV global fit [@LEP/SLD:2003].
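For reference, the measured couplings translate into the ratios as follows (a sketch taking the illustrative $r=1/2$ quoted above):

```python
# Sketch: reconstruct R_nu and R_nubar from the NuTeV couplings,
# using the approximate value r = 1/2 from the text.

gL2, gR2, r = 0.30005, 0.03076, 0.5
R_nu = gL2 + r * gR2        # neutral/charged current ratio, neutrinos
R_nubar = gL2 + gR2 / r     # neutral/charged current ratio, antineutrinos
print(f"R_nu = {R_nu:.4f}, R_nubar = {R_nubar:.4f}")
```

Since the $g_R^2$ term enters $R_\nu$ with the small factor $r$, the ratio is dominated by $g_L^2$, which is why the anomaly is usually phrased as a deficit in $g_L^2$.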
The significance of the NuTeV result remains controversial [@Davidson:2001ji], and a critical examination of the initial analysis is ongoing. Several groups are evaluating potential theoretical uncertainties arising from purely Standard Model physics which might be comparable to or larger than the quoted experimental uncertainty of the NuTeV result. Candidate sources of large theoretical uncertainty include next-to-leading-order (NLO) QCD corrections [@Dobrescu:2003], NLO electroweak corrections [@Diener:2003], and parton distribution functions (especially regarding assumptions about sea-quark asymmetries) [@Gambino:2003]. The effect of the first has been estimated to be comparable in size to the NuTeV experimental uncertainty, while the latter two might give rise to effects comparable in size to the full NuTeV discrepancy with the Standard Model. Elucidation of the actual impact of these effects on the NuTeV result awaits a reanalysis of the NuTeV data. However, it remains a distinct possibility that the discrepancy with the Standard Model prediction is genuine and that its resolution lies in physics beyond the Standard Model. Indeed, as Chanowitz has emphasized [@Chanowitz:2002cd], the precision electroweak data indicate new physics whether anomalous data are excluded from global fits (since the preferred Higgs mass is then well below the direct search limit) or included in the fits (in which case the anomalous data themselves demand a new physics explanation). Note that the NuTeV value for $g_L^2$ in Eq. (\[nutev\]) is *smaller* than its SM prediction. This reflects the fact that the ratios $R_\nu$ and $R_{\bar{\nu}}$ were smaller than expected in the SM. (The $g_R^2$ term is smaller than the $g_L^2$ term by an order of magnitude and is insignificant.)
Thus, possible *new* physics explanations of the NuTeV anomaly would be those that suppress the neutral current cross sections relative to the charged current cross sections, or enhance the charged current cross sections relative to the neutral current ones. Two classes of models have been proposed which accomplish this task. The first class comprises models which suppress $R_\nu$ and $R_{\bar{\nu}}$ through new neutrino-quark interactions, mediated by leptoquarks or extra $U(1)$ gauge bosons ($Z'$s), which interfere either destructively with the $Z$-exchange amplitude or constructively with the $W$-exchange amplitude [@Davidson:2001ji]. In order to preserve the excellent agreement between the SM and non-NuTeV data, the new interactions must selectively interfere with the $\nu_\mu N$ ($\bar{\nu}_\mu N$) scattering process, but little else. This severely restricts the types of interactions that may be introduced. Ref. [@Davidson:2001ji] proposes a model in which the $Z'$ couples to $B-3L_\mu$. This model must be fine-tuned to avoid $Z$-$Z'$ mixing [@Loinaz:1999qh], which would disrupt, among other things, lepton universality at the $Z$-pole. Fitting the NuTeV anomaly requires $$\frac{M_{Z'}}{g_{Z'}} \approx 3\;\mathrm{TeV}\;.$$ Bounds from direct $Z'$ searches at the Tevatron and LEP limit the possible range of $M_{Z'}$ to $M_{Z'} > 600\,\mathrm{GeV}$ for $g_{Z'}\sim 1$, or $2\;\mathrm{GeV} < M_{Z'} < 10\,\mathrm{GeV}$ for $g_{Z'}\sim 10^{-3}$. The $Z'$ in the model proposed in Ref. [@Ma:2001md] does not couple the neutrinos and quarks directly, since the gauged charge is $L_\mu - L_\tau$. Rather, it is a tunable $Z$-$Z'$ mixing in the model which is responsible for suppressing the neutral channel cross section. The same mixing violates lepton universality on the $Z$-pole and prevents the mechanism from completely mitigating the NuTeV anomaly.
$Z'$ masses in the range $60\,\mathrm{GeV}<M_{Z'} <72\,\mathrm{GeV}$, or $M_{Z'}> 178\,\mathrm{GeV}$, bring the theoretical value of $g_L^2$ within $1.6\sigma$ of the NuTeV value while keeping lepton universality violation within $2\sigma$. In general, models in this class are constrained strongly by lepton universality, because $\nu_\ell$ is the $SU(2)_L$ partner of $\ell_L^-$. New interactions which respect the $SU(2)_L$ gauge symmetry cannot affect neutrino couplings alone: they necessarily affect the couplings of the charged leptons. Nevertheless, they provide possible explanations of the NuTeV anomaly, and predict a flavor-selective gauge boson in the several hundred GeV to TeV range, well within reach of the LHC. Models of the second class suppress the $Z\nu\nu$ coupling by mixing the neutrino with heavy gauge singlet states (neutrissimos, i.e. right-handed neutrinos) [@numix; @Chang:1994hz; @LOTW1; @LORTW2]. For instance, if the $SU(2)_L$ active $\nu_\mu$ is a linear combination of two mass eigenstates with mixing angle $\theta$, $$\nu_\mu = (\cos\theta) \nu_\mathrm{light} + (\sin\theta) \nu_\mathrm{heavy}\;,$$ then the $Z\nu_\mu \nu_\mu$ coupling is suppressed by a factor of $\cos^2\theta$ (assuming the heavy states are too massive to be created on-shell). Likewise, the $W\mu\nu_\mu$ coupling is suppressed by $\cos\theta$. Although both the numerators and denominators of $R_{\nu}$ and $R_{\bar{\nu}}$ are suppressed in such a model, the suppression of the numerators exceeds that of the denominators, and the ratios are therefore diminished. More generally, if the $Z\nu_\ell \nu_\ell$ coupling ($\ell=e,\mu,\tau$) is suppressed by a factor of $(1-\varepsilon_\ell)$, then the $W\ell\nu_\ell$ coupling is suppressed by $(1-\varepsilon_\ell/2)$, and $R_{\nu}$ and $R_{\bar{\nu}}$ are suppressed by $(1-\varepsilon_\mu)$. The effect of such suppressions of the neutrino-gauge couplings is not limited to the NuTeV observables alone.
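The counting of suppression factors can be verified numerically: with $\varepsilon \approx \sin^2\theta$, the NC/CC ratio indeed scales as $1-\varepsilon$. The sketch below uses a representative $\varepsilon = 0.0030$; the function name is ours.

```python
# Sketch: how singlet mixing suppresses R_nu. The Z nu nu vertex picks
# up cos^2(theta), the W mu nu vertex cos(theta); cross sections go as
# the squared couplings, so sigma_NC/sigma_CC scales as cos^2(theta).

import math

def r_suppression(theta):
    """NC/CC suppression factor for nu_mu mixed with one heavy singlet."""
    z_coupling = math.cos(theta) ** 2      # Z nu nu amplitude factor
    w_coupling = math.cos(theta)           # W mu nu amplitude factor
    return (z_coupling / w_coupling) ** 2  # ratio of squared amplitudes

eps = 0.0030                               # representative value
theta = math.asin(math.sqrt(eps))          # eps = sin^2(theta)
print(f"theta = {theta:.4f}, R suppressed by {r_suppression(theta):.4f}")
```

The suppression factor comes out equal to $1-\varepsilon$ to machine precision, matching the general statement above.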
In addition to the obvious suppression of the $Z$ invisible width by a factor of $[1-(2/3)(\varepsilon_e+\varepsilon_\mu+\varepsilon_\tau)]$, all SM observables will be affected through the Fermi constant $G_F$, which is no longer equal to the muon decay constant $G_\mu$: $$G_F = G_\mu \left(1+\frac{\varepsilon_e + \varepsilon_\mu}{2} \right)\;.$$ This shift in $G_F$ will destroy the excellent agreement between the SM and $Z$-pole observables. However, since $G_F$ always appears in the combination $\rho G_F$ in neutral current amplitudes, the agreement can be recovered by absorbing the shift in $G_F$ into a shift in $\rho$, or equivalently, in the oblique correction parameter $T$ [@Peskin:1990zt]. Indeed, it was shown in Ref. [@LOTW1] that the $Z$-pole, NuTeV, and $W$ mass data can all be fit with the oblique correction parameters $S$, $T$, $U$, and a flavor universal suppression parameter $\varepsilon = \varepsilon_e = \varepsilon_\mu = \varepsilon_\tau$, with best fit values $$\begin{aligned} S & = & -0.03 \pm 0.10 \;,\cr T & = & -0.44\pm 0.15 \;,\cr U & = & \phantom{-}0.62\pm 0.16 \;,\cr \varepsilon & = & 0.0030\pm 0.0010\;,\end{aligned}$$ for a reference SM with $m_H=115\;\mathrm{GeV}$. Therefore, for this class of models to work, neutrino mixing with heavy gauge singlet states must be accompanied by new physics contributions to $S$, $T$, and $U$. The values of $S$ and $T$ can be accommodated within the SM by simply increasing the Higgs mass to hundreds of GeV, but the $W$ mass requires a large and positive $U$ parameter, which cannot be generated within the SM. Thus, the models are not complete until some mechanism is found which explains the $W$ mass. But then, if the SM is fit to the $W$ mass alone, the preferred Higgs mass is far below the direct search limits [@Chanowitz:2002cd], which could be an indication that the $W$ mass requires new physics regardless of NuTeV. At first blush, the preferred value of $\varepsilon$ above is also problematic.
This implies a large mixing angle, $\theta = 0.055 \pm 0.010$, if interpreted as due to mixing with a single heavy state. The commonly accepted seesaw mechanism [@Yanagida:1980; @Gell-Mann:1980vs; @Glashow:1979vf; @Mohapatra:1980ia] relates the mixing angle to the ratio of the neutrino masses: $$\frac{m_\mathrm{light}}{m_\mathrm{heavy}}\approx \theta^2\;.$$ Choosing $m_\mathrm{light} \sim 0.1\,\mathrm{eV}$ and $m_\mathrm{heavy} \sim 100\,\mathrm{GeV}$ ($m_\mathrm{heavy}>M_Z$ is needed to suppress $\Gamma_\mathrm{inv}$) we find the mixing angle orders of magnitude too small: $\theta \sim 10^{-6}$. However, this result does not mean that it is impossible to have a large enough mixing angle between the light and heavy states. As pointed out in Ref. [@Chang:1994hz], in models with more than one generation, the generic mass matrix includes enough degrees of freedom to allow us to adjust all the masses and mixings independently. Concrete examples of models with large mass hierarchies AND large mixing angles can be found in Refs. [@Glashow:2003wt; @LORTW2]. What is sacrificed, however, is the traditional seesaw explanation of the small neutrino mass: i.e. since the Majorana mass $M$ in the neutrino mass matrix should be of the order of the GUT scale, the neutrino mass $m_\mathrm{light}\sim m^2/M$ is naturally suppressed if the Dirac mass $m$ is comparable to that of the other fermions. An alternative mechanism is used in Ref. [@LORTW2]. There, an intergenerational symmetry is imposed on the neutrino mass texture which reduces its rank, generating naturally light (massless) mass eigenstates. Abandoning the seesaw mechanism also frees the masses of the heavy states from being fixed at the GUT scale. Indeed, in the model discussed in Ref. [@LORTW2], the assumption that neutrinos and up-type quarks have a common Dirac mass implies that the masses of the heavy state could be a few TeV, well within the reach of the LHC. 
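The mismatch between the naive seesaw expectation and the mixing angle needed for NuTeV can be made explicit with the numbers in the text:

```python
# Sketch: naive seesaw estimate of the light-heavy mixing angle versus
# the angle needed to explain the NuTeV epsilon (numbers from the text).

import math

m_light = 0.1                      # eV
m_heavy = 100e9                    # eV (100 GeV, so that m_heavy > M_Z)
theta_seesaw = math.sqrt(m_light / m_heavy)   # theta^2 ~ m_light/m_heavy

theta_needed = math.sqrt(0.0030)   # from eps ~ theta^2, eps = 0.0030
print(f"seesaw: {theta_seesaw:.1e}, needed: {theta_needed:.3f}")
```

The naive estimate gives $\theta \sim 10^{-6}$, more than four orders of magnitude below the required $\theta \approx 0.055$, which is why the multi-generation mass textures discussed next are needed.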
Without quark-lepton unification $m_\mathrm{heavy}$ could be even lighter, rendering them accessible to Tevatron Run II. Because of the large mixing angles between the light and heavy states in this class of models, flavor changing processes mediated by the heavy states may be greatly enhanced [@Glashow:2003wt; @LORTW2; @Shrock:1977; @LeeShrock:77]. As a result, stringent constraints can be placed on the models from the experimental limits on $\mu\rightarrow e\gamma$, $\tau\rightarrow \mu\gamma$, [@present] $\mu$-$e$ conversion in nuclei [@Ahmad:1988ur; @Simkovic:2001fy], muonium-antimuonium oscillation [@Willmann:1998gd; @Halprin:wm], etc. For instance, the MEGA limit on $\mu\rightarrow e\gamma$ leads to the constraint [@LORTW2] $$\varepsilon_e\varepsilon_\mu \approx 0\;.$$ Therefore, lepton universality among the $\varepsilon_\ell$ must be broken maximally. Ref. [@LORTW3] shows that it is possible to fit the $Z$-pole, NuTeV, and lepton universality data while satisfying this condition. The MEG (Mu-E-Gamma) experiment at PSI [@meg] plans to improve upon the MEGA limit by about two orders of magnitude. The MECO (Muon on Electron COnversion) experiment at Brookhaven [@meco] aims to improve the limits on $\mu-e$ conversion in nuclei by three orders of magnitude. Further constraints can be obtained from muon $g-2$ [@Sichtermann:2003cc; @Ma:2002df], and the violation of CKM unitarity [@Langacker:2003tv; @Abele:2002wc; @Tipton:2000qi]. The NuTeV anomaly, even if it does not ultimately endure sustained scrutiny, stirs us to look past orthodoxies in our model-building (seesaw, SUSY, GUTs,...) and to ask broadly what is permitted by the data. The neutrino mixing solution is relatively conservative in its use of the neutrino sector to address the NuTeV question. 
Nonetheless, it makes interesting predictions about new particles at the LHC, can be probed by a wide range of neutrino oscillation experiments, precision measurements and rare decay searches, and introduces an alternative to the seesaw paradigm. Whether this or another solution resolves the NuTeV anomaly, the NuTeV result serves to focus the imagination of the theorist on the opportunities presented by the experiments.

Conclusions
===========

In this report, we have presented a brief review of the present[^14] knowledge of neutrino physics and what we can learn from the planned experiments in the next decade. Three very important measurements that are guaranteed to have a significant impact on the search for physics beyond the Standard Model are: (i) the rate of $\beta\beta_{0\nu}$, which will inform us not only whether the neutrino is a Majorana or Dirac particle but may also provide information about the neutrino masses; (ii) the value of $\theta_{13}$, which will considerably narrow the field of flavor models; and (iii) the sign of $\Delta m^2_{13}$, which determines the neutrino mass hierarchy and will also help guide our understanding of flavor physics. Within the three neutrino picture, more precise measurements of the solar and atmospheric mixing angles will be helpful in discriminating among various new physics possibilities. Important, though somewhat model-dependent, constraints can be drawn from experimental searches for charged lepton flavor violating processes, such as $\mu \rightarrow e \gamma$ or $\mu\to e$ conversion in nuclei, and from searches for nonzero electric dipole moments of leptons. One should keep in mind that the matter-antimatter asymmetry of the Universe may have its explanation in the very same mechanism that generates the small neutrino masses, and that we may be able to test this hypothesis with enough low-energy information.
Beyond the three neutrino picture, a very important issue is the status of the LSND result and whether the existence of light sterile neutrinos can be inferred from terrestrial neutrino oscillation experiments. The results of MiniBooNE, assuming they confirm those from LSND, have the potential to revolutionize our current understanding of neutrinos. Even if MiniBooNE does not confirm the LSND result, sterile neutrino effects can still be present in different channels at a subdominant level, as has been suggested in several theoretical models. Another important issue in neutrino physics is the magnetic moment of the neutrino, which is expected to be nonzero but very small within the standard picture of eV-sized neutrino masses and in the absence of new physics at the TeV scale. Thus, evidence for a nonzero neutrino magnetic moment close to the current astrophysical limit of $\sim 10^{-11}\mu_B$ would have to be interpreted as evidence of TeV-scale new physics such as TeV-scale left-right models, horizontal models, or large extra dimensions. Other unique probes of TeV-scale physics are provided by neutrino oscillation experiments, thanks to their sensitivity to non-standard neutrino interactions. Finally, one can use results in neutrino physics to test the limits of the assumptions on which the Standard Model is based, including Lorentz and CPT invariance. [**Acknowledgments**]{} The work of R.N.M. is supported by the National Science Foundation grants No. PHY-0099544 and PHY-0354401. The work of W.R. and M.L. is supported by the “Deutsche Forschungsgemeinschaft” in the “Sonderforschungsbereich 375 für Astroteilchenphysik” and under project number RO-2516/3-1 (W.R.). The work of M.R. is partially supported by the EU 6th Framework Program MRTN-CT-2004-503369 “Quest for Unification” and MRTN-CT-2004-005104 “ForcesUniverse”. The work of R.S. is supported in part by the NSF grant NSF-PHY-00-98527. The work of J.K.
is supported by the “Impuls- und Vernetzungsfonds” of the Helmholtz Association, contract number VH-NG-006. The work of S.A. is supported by the PPARC grant PPA/G/O/2002/00468. The work of A.d.G. is sponsored in part by the US Department of Energy Contract DE-FG02-91ER40684. We thank F. Vissani for participating in the shorter version of this report. [999]{} \#1\#2\#3[Phys. Lett.  [**B\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Nucl. Phys.  [**B\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Phys. Rev.  [**D\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Phys. Rev. Lett. [**\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Mod. Phys. Lett. [**A\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Phys. Rep.  [**\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Science [**\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Astrophys. J.  [**\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Eur. Phys. J.  [**C\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[JHEP [**\#1**]{}, \#2 (\#3)]{} \#1\#2\#3[Prog. Theor. Phys. [**\#1**]{}, \#2 (\#3)]{} R.N. Mohapatra [*et al.*]{}, hep-ph/0412099. B.T. Cleveland [*et al.*]{}, Astrophys. J. [**496**]{} (1998) 505; Y. Fukuda [*et al.*]{}, [Phys. Rev. Lett. ]{} [**77**]{} (1996) 1683; V. Gavrin, [Nucl. Phys. Proc. Suppl.]{} [**91**]{} (2001) 36; W. Hampel [*et al.*]{}, [Phys. Lett. ]{} [**B447**]{} (1999) 127; M. Altmann [*et al.*]{}, [Phys. Lett. ]{} [**B490**]{} (2000) 16; Super-Kamiokande Coll., Y. Fukuda [*et al.*]{}, [Phys. Rev. Lett. ]{} [**86**]{} (2001) 5651; Q. R. Ahmad [*et al.*]{} \[SNO Collaboration\], [Phys. Rev. Lett.]{} [**87**]{} (2001) 071301 \[nucl-ex/0106015\]; Q. R. Ahmad [*et al.*]{} \[SNO Collaboration\], [Phys. Rev. Lett.]{} [**89**]{} (2002) 011301 \[nucl-ex/0204008\]; S. N. Ahmed [*et al.*]{} \[SNO Collaboration\], Phys. Rev. Lett.  [**92**]{}, 181301 (2004) \[nucl-ex/0309004\]; K. Eguchi [*et al.*]{} \[KamLAND Collaboration\], [Phys. Rev. Lett.]{} [**90**]{} (2003) 021802 \[hep-ex/0212021\]; Y. Fukuda [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. Lett.  [**81**]{}, 1562 (1998) \[hep-ex/9807003\]. W. Allison et al., Phys. Lett.
[**B391**]{}, 491 (1997); Phys. Lett. [**B449**]{}, 137 (1999). Reviews of solar neutrinos include J. Bahcall’s URL http://www.ias.edu/~jnb and a review of atmospheric neutrino data is C. K. Jung, C. McGrew, T. Kajita, and T. Mann, Ann. Rev. Nucl. Part. Sci. [**51**]{}, 451 (2001); see also the web site http://neutrinooscillation.org. Current experimental results are compiled in the Particle Data Group, Review of Particle Physics, at http://pdg.lbl.gov. R. N. Mohapatra and P. B. Pal, [*Massive Neutrinos in Physics and Astrophysics*]{}, 3rd ed. (World Scientific, Singapore, 2004); V. Barger, K. Whisnant and D. Marfatia, Int. J. Mod. Phys. [**E12**]{}, 569 (2003); C. Gonzalez-Garcia and Y. Nir, Rev. Mod. Phys. [**75**]{}, 345 (2003); A. Smirnov, Int. J. Mod. Phys. A [**19**]{}, 1180 (2004) \[hep-ph/0311259\]; S. Pakvasa and J. W. F. Valle, hep-ph/0301061; S. T. Petcov, hep-ph/0412410; M. Fukugita and T. Yanagida, [*Physics of Neutrinos and Applications in Astrophysics*]{} (Springer, Berlin, 2003). For some earlier reviews, see S. M. Bilenky, C. Giunti and W. Grimus, Prog. Part. Nucl. Phys. [**43**]{} (1999) 1; F. Boehm and P. Vogel, [*Physics of Massive Neutrinos*]{} (2nd ed., Cambridge Univ. Press, Cambridge, 1992); C. W. Kim and A. Pevsner, [*Neutrinos in Physics and Astrophysics*]{} (Harwood, Reading, 1992); S. M. Bilenky and S. T. Petcov, Rev. Mod. Phys. [**59**]{}, 67 (1987). See also a recent focus issue on Neutrinos by “New Journal of Physics” edited by F. Halzen, M. Lindner and A. Suzuki; there is an extensive web site that not only reviews the history of the early developments in the field but also provides a very up-to-date list of references of the important papers maintained by C. Giunti and M. Laveder, entitled “Neutrino Unbound” at [ http://www.nu.to.infn.it/]{}. H. Back [*et al.*]{}, hep-ex/0412016; E. Anderson [*et al.*]{}, available online at [http://www.aps.org/neutrino/index.cfm]{} Part of the APS Neutrino Study. C.
Albright [*et al.*]{}, available online at [http://www.aps.org/neutrino/index.cfm]{} C. Aalseth [*et al.*]{}, hep-ph/0412300. S.W. Barwick [*et al.*]{}, astro-ph/0412544. B. Pontecorvo, Zh. Eksp. Teor. Fiz. [**33**]{} (1957) 549 and [**34**]{} (1958) 247; Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys.  [**28**]{} (1962) 870 (discussed the $2 \times 2$ case); B. Pontecorvo, Zh. Eksp. Teor. Fiz. [**53**]{} (1967) 1717; V. N. Gribov and B. Pontecorvo, Phys. Lett. [**B 28**]{}, 493 (1969). S. M. Bilenky, J. Hosek and S. T. Petcov, Phys. Lett.  B[**94**]{} (1980) 495; J. Schechter and J.W.F. Valle, [*Phys. Rev.*]{} [**D22**]{} (1980) 2227; M. Doi [*et al.*]{}, Phys. Lett.  **B102** (1981) 323. For recent analyses, see J. N. Bahcall, M. C. Gonzalez-Garcia and C. Pena-Garay, JHEP [**0408**]{}, 016 (2004) \[arXiv:hep-ph/0406294\]; C. Gonzalez-Garcia and M. Maltoni, hep-ph/0406056; M. Maltoni, T. Schwetz, M. A. Tortola and J. W. F. Valle, hep-ph/0405172; A. Bandyopadhyay [*et al.*]{}, hep-ph/0406328; S. Goswami, A. Bandyopadhyay and S. Choubey, hep-ph/0409224; G. Fogli, E. Lisi, in [*Altarelli, G. (ed.) et al.: Neutrino mass, 2003*]{}, pp. 135. M. Apollonio *et al.*, Phys. Lett. [**B466**]{} (1999) 415; F. Boehm *et al.*, Phys. Rev. [**D62**]{} (2000) 072002. L. Wolfenstein, Phys. Rev. [**D 17**]{}, 2369 (1978); S. P. Mikheyev and A. Smirnov, Nuovo Cimento [**C 9**]{}, 17 (1986). V. Barger, R. Phillips and K. Whisnant, Phys. Rev. [**D 34**]{}, 980 (1986); H. Bethe, Phys. Rev. Lett. [**56**]{}, 1305 (1986); W. C. Haxton, Phys. Rev. Lett. [**57**]{}, 1271 (1986); S. Parke, Phys. Rev. Lett. [**57**]{}, 1275 (1986); S. P. Rosen and J. Gelb, Phys. Rev. [**D34**]{}, 969 (1986); T. K. Kuo and J. Pantaleone, Phys. Rev. Lett. [**57**]{}, 1805 (1986); P. Langacker [*et al.*]{}, Nucl. Phys. B [**282**]{} (1987) 589; S.T. Petcov, Phys. Lett. [**B200**]{} (1988) 373; A. Friedland, Phys. Rev. D[**64**]{} (2001) 013008; E. Lisi et al., Phys. Rev. D [**63**]{} (2001) 093002. J. N.
Bahcall, M. C. Gonzalez-Garcia and C. Pena-Garay, JHEP [**0408**]{}, 016 (2004) \[arXiv:hep-ph/0406294\]. G. M. Fuller, J. R. Primack and Y. Z. Qian, Phys. Rev. D [**52**]{}, 1288 (1995) \[arXiv:astro-ph/9502081\]; D. O. Caldwell and R. N. Mohapatra, Phys. Lett. B [**354**]{}, 371 (1995) \[arXiv:hep-ph/9503316\]; G. Raffelt and J. Silk, Phys. Lett. B [**366**]{}, 429 (1996) \[arXiv:hep-ph/9502306\]. D. Caldwell and R. N. Mohapatra, Phys. Rev. [**D 48**]{}, 3259 (1993); A. Joshipura, Phys. Rev. [**D51**]{}, 1321 (1995); K. S. Babu, E. Ma and J. W. F. Valle, hep-ph/0206292; S. Antusch and S. F. King, Nucl. Phys. B [**705**]{} (2005) 239, hep-ph/0402121. C. Kraus [*et al.*]{}, Eur. Phys. J. C [**40**]{}, 447 (2005) \[arXiv:hep-ex/0412056\]. V. M. Lobashev [*et al.*]{}, Phys. Lett. B [**460**]{}, 227 (1999). A. Osipowicz [*et al.*]{} \[KATRIN Collaboration\], hep-ex/0109033. S.M. Bilenky and S.T. Petcov in [@barger]. S. R. Elliott and P. Vogel, Ann. Rev. Nucl. Part. Sci. [**52**]{} (2002) 115; A. Morales and J. Morales, Nucl. Phys. Proc. Suppl.  [**114**]{}, 141 (2003) \[hep-ph/0211332\]. R. N. Mohapatra and J. Vergados, Phys. Rev. Lett. [**47**]{}, 1713 (1981); R. N. Mohapatra, Phys. Rev. [**D 34**]{}, 3457 (1986); B. Brahmachari and E. Ma, Phys. Lett. [**B536**]{}, 259 (2002). J. Schechter, J. W. F. Valle, [*Phys. Rev.*]{} [**D25**]{} (1982) 2951; E. Takasugi, [*Phys. Lett.*]{} [**B149**]{} (1984) 372. H. V. Klapdor-Kleingrothaus [*et al.*]{}, Eur. Phys. J. A [**12**]{}, 147 (2001) \[hep-ph/0103062\]. C. E. Aalseth [*et al.*]{} \[IGEX Collaboration\], Phys. Rev. D [**65**]{}, 092007 (2002) \[hep-ex/0202026\]. G. L. Fogli [*et al.*]{}, Phys. Rev. D [**70**]{}, 113003 (2004). H. V. Klapdor-Kleingrothaus, I. V. Krivosheina, A. Dietz and O. Chkvorets, Phys. Lett. B [**586**]{}, 198 (2004) \[hep-ph/0404088\]. H. V. Klapdor-Kleingrothaus, A. Dietz, H. L. Harney and I. V. Krivosheina, Mod. Phys. Lett. A16 (2001) 2409-2420 \[hep-ph/0201231\]. C. E.
Aalseth [*et al.*]{}, Mod. Phys. Lett. A [**17**]{}, 1475 (2002) \[hep-ex/0202018\]; H. V. Klapdor-Kleingrothaus, hep-ph/0205228; H. L. Harney, hep-ph/0205293; C. L. Bennett [*et al.*]{}, Astrophys. J. Suppl.  [**148**]{}, 1 (2003) \[astro-ph/0302207\]. S. Hannestad, hep-ph/0310220. A. D. Dolgov, K. Kainulainen and I. Z. Rothstein, Phys. Rev. D [**51**]{}, 4129 (1995) \[hep-ph/9407395\]. B. Kayser, in [*CP violation*]{}, ed. C. Jarlskog (World Scientific, 1988); Z.-z. Xing, Int. J. Mod. Phys. A [**19**]{}, 1 (2004) \[hep-ph/0307359\]. A. de Gouvêa, B. Kayser and R. N. Mohapatra, Phys. Rev. [**D67**]{}, 053004 (2003). H. Minakata, H. Nunokawa and S. Parke, Phys. Rev. [**D66**]{}, 093012 (2002) \[hep-ph/0208163\]; J. Burguet-Castell, M. B. Gavela, J. J. Gomez-Cadenas, P. Hernandez and O. Mena, Nucl. Phys. [**B646**]{} (2002) 301; H. Minakata, hep-ph/0402197 and references therein. S.M. Bilenky et al., [Phys. Rev.]{} [**D56**]{} (1996) 4432. B. W. Lee and R. E. Shrock, Phys. Rev. [**D16**]{}, 1444 (1977). J. Schechter and J. W. F. Valle, Phys. Rev. D 22, 2227 (1980). D. Caldwell and R. N. Mohapatra, Phys. Rev. [**D 48**]{}, 3259 (1993); J. Peltoniemi and J. W. F. Valle, Nucl. Phys. [**B 406**]{}, 409 (1993); J. Peltoniemi, D. Tommasini and J. W. F. Valle, Phys. Lett. [**B 298**]{}, 383 (1993). LSND Collaboration, Phys. Rev. Lett. [**77**]{}, 3082 (1996). B. Armbruster et al., Phys. Rev. [**D65**]{}, 112001 (2002). A. O. Bazarko, \[BooNE collaboration\], hep-ex/9906003. S. Bilenky, W. Grimus, C. Giunti and T. Schwetz, hep-ph/9904316; V. Barger, B. Kayser, J. Learned, T. Weiler and K. Whisnant, Phys. Lett. [**B 489**]{}, 345 (2000); for a review, see S. Bilenky, C. Giunti and W. Grimus, Prog. Part. Nucl. Phys. [**43**]{}, 1 (1999). M. Sorel, J. Conrad and M. Shaevitz, hep-ph/0305255. R. Shrock, Phys. Rev. D [**9**]{}, 743 (1974). J. Kim, Phys. Rev. D [**14**]{}, 3000 (1976). W. Marciano and A. I. Sanda, Phys. Lett. B [**67**]{}, 303 (1977). M. Beg, W.
Marciano, and M. Ruderman, Phys. Rev. D [**17**]{}, 1395 (1978). K. Fujikawa and R. E. Shrock, Phys. Rev. Lett. [**45**]{}, 963 (1980). R. E. Shrock, Nucl. Phys. [**B206**]{}, 359 (1982). P. Pal and L. Wolfenstein, Phys. Rev. [**D25**]{}, 766 (1982). B. Kayser, Phys. Rev. [**D26**]{}, 1662 (1982). C. S. Lim and W. Marciano, Phys. Rev. [**D37**]{}, 1368 (1988). R. Barbieri and R. N. Mohapatra, Phys. Lett. [**B218**]{}, 225 (1989). K.S. Babu and R.N. Mohapatra, Phys. Rev. Lett. [**63**]{}, 228 (1989); Phys. Rev. Lett. [**64**]{}, 1705 (1990). R. N. Mohapatra, S.-P. Ng, and H.-B. Yu, Phys. Rev. [**D70**]{}, 057301 (2004). See G. Raffelt, [*Stars as Laboratories for Fundamental Physics*]{}, Chicago University Press (1996). S. T. Petcov, Sov. J. Nucl. Phys. [**25**]{} (1977) 340; (E) [**25**]{} (1977) 698. F. Wilczek and A. Zee, Phys. Rev. Lett. [**38**]{} (1977) 531. Y. Chikashige, R. N. Mohapatra and R. D. Peccei, Phys. Lett. [**B 98**]{}, 265 (1981). J. Schechter and J. W. F. Valle, Phys. Rev. D [**25**]{}, 774 (1982); J. W. F. Valle, Phys. Lett. B [**131**]{}, 87 (1983);G. B. Gelmini and J. W. F. Valle, Phys. Lett. B [**142**]{}, 181 (1984). J. F. Beacom, N. F. Bell, D. Hooper, S. Pakvasa and T. J. Weiler, Phys. Rev. D [**69**]{}, 017303 (2004) \[hep-ph/0309267\]. P. Minkowski, Phys. Lett. [**B67** ]{}, 421 (1977). T. Yanagida, in *Proceedings of the Workshop on the Unified Theory and the Baryon Number in the Universe* (O. Sawada and A. Sugamoto, eds.), KEK, Tsukuba, Japan, 1979, p. 95. M. Gell-Mann, P. Ramond, and R. Slansky, *Complex spinors and unified theories*, in *Supergravity* (P. van Nieuwenhuizen and D. Z. Freedman, eds.), North Holland, Amsterdam, 1979, p. 315. S. L. Glashow, *The future of elementary particle physics*, in *Proceedings of the 1979 Carg[è]{}se Summer Institute on Quarks and Leptons* (M. L[é]{}vy, J.-L. Basdevant, D. Speiser, J. Weyers, R. Gastmans, and M. Jacob, eds.), Plenum Press, New York, 1980, pp. 687–713. R. N. Mohapatra and G. 
Senjanovi[ć]{}, Phys. Rev. Lett. **44** (1980), 912. A. de Gouvêa, hep-ph/0501039. J. C. Pati and A. Salam, Phys. Rev. [**D10**]{}, 275 (1974); R. N. Mohapatra and J. C. Pati, Phys. Rev. [**D 11**]{}, 566, 2558 (1975); G. Senjanović and R. N. Mohapatra, Phys. Rev. [**D 12**]{}, 1502 (1975). G. Lazarides, Q. Shafi and C. Wetterich, Nucl. Phys. [**B181**]{}, 287 (1981); R. N. Mohapatra and G. Senjanović, Phys. Rev. [**D 23**]{}, 165 (1981). R. N. Mohapatra and R. E. Marshak, Proceedings of the [*Orbis Scientiae, January, 1980*]{}, p. 277 (Plenum Press, ed. B. Kursunoglu et al.); J. Schechter and J. W. F. Valle, Phys. Rev. D [**22**]{}, 2227 (1980). T. P. Cheng and L. F. Li, Phys. Rev. D [**22**]{}, 2860 (1980). R. N. Mohapatra and P. B. Pal, [*Massive neutrinos in physics and astrophysics, vol. 1, 1990*]{}, p. 127 and 128; Eq. 7.19; E. Ma and U. Sarkar, Phys. Rev. Lett. [**80**]{}, 5716 (1998). E. Ma, Phys. Rev. Lett. [**81**]{}, 1171 (1998). For recent reviews, see S. F. King, Rept. Prog. Phys.  [**67**]{}, 107 (2004); G. Altarelli and F. Feruglio, hep-ph/0405048; New J. Phys. [**6**]{}, 106 (2004). M. Raidal, hep-ph/0404046; H. Minakata and A. Y. Smirnov, hep-ph/0405088; P. H. Frampton and R. N. Mohapatra, hep-ph/0407139; W. Rodejohann, Phys. Rev. D [**69**]{}, 033005 (2004); J. Ferrandis and S. Pakvasa, hep-ph/0412038; S. Antusch, S. F. King and R. N. Mohapatra, arXiv:hep-ph/0504007; S. K. Kang, C. S. Kim and J. Lee, hep-ph/0501029; N. Li and B. Q. Ma, hep-ph/0501226; K. Cheung, S. K. Kang, C. S. Kim and J. Lee, hep-ph/0503122; T. Ohlsson, hep-ph/0506094. Leptonic symmetries $L_e-L_\mu \pm L_\tau$ and $L_e \pm L_\mu - L_\tau$ of neutrino masses were discussed early on in S. T. Petcov, Phys. Lett. [**B 110**]{}, 245 (1982); the specific combination $L_e-L_\mu-L_\tau$ was discussed in R. Barbieri, L. Hall, D. Smith, A. Strumia and N. Weiner, JHEP [**12**]{}, 017 (1998); A. Joshipura and S. Rindani, Eur. Phys. J. [**C14**]{}, 85 (2000); R. N. Mohapatra, A.
Perez-Lorenzana, C. A. de S. Pires, Phys. Lett. [**B474**]{}, 355 (2000); T. Kitabayashi and M. Yasue, Phys. Rev. [**D63**]{}, 095002 (2001); Phys. Lett. [**B508**]{}, 85 (2001); Phys. Lett. [**B524**]{}, 308 (2002) \[hep-ph/0110303\]; L. Lavoura, Phys. Rev. [**D62**]{}, 093011 (2000); W. Grimus and L. Lavoura, Phys. Rev. [**D62**]{}, 093012 (2000); J. High Energy Phys. 09, 007 (2000); J. High Energy Phys. 07, 045 (2001); R. N. Mohapatra, Phys. Rev. [**D64**]{}, 091301 (2001) \[hep-ph/0107264\]; K. S. Babu and R. N. Mohapatra, Phys. Lett. [**B532**]{}, 77 (2002); H. S. Goh, R. N. Mohapatra and S. P. Ng, Phys. Lett. [**B542**]{}, 116 (2002) \[hep-ph/0205131\]; D. A. Dicus, H.-J. He, J. N. Ng, Phys. Lett. [**B536**]{}, 83 (2002); Q. Shafi and Z. Tavartkiladze, Phys. Lett. [**B482**]{}, 1451 (2000); S. T. Petcov and W. Rodejohann, hep-ph/0409135. K. S. Babu, C. N. Leung, and J. Pantaleone, Phys. Lett. **B319** (1993), 191–198 \[hep-ph/9309223\]. K. R. S. Balaji, A. S. Dighe, R. N. Mohapatra, and M. K. Parida, Phys. Rev. Lett. **84** (2000), 5034–5037 \[hep-ph/0001310\]. S. Antusch and M. Ratz, JHEP [**0211**]{}, 010 (2002) \[arXiv:hep-ph/0208136\]. R. N. Mohapatra, M. K. Parida and G. Rajasekaran, Phys. Rev. D [**69**]{}, 053007 (2004) \[hep-ph/0301234\]. C. Hagedorn, J. Kersten and M. Lindner, Phys. Lett. B [**597**]{}, 63 (2004) \[arXiv:hep-ph/0406103\]. C. H. Albright, K. S. Babu and S. M. Barr, Phys. Rev. Lett.  [**81**]{}, 1167 (1998) \[hep-ph/9802314\]. A. Zee, Phys. Lett. [**B93**]{}, 389 (1980). T. P. Cheng and L. F. Li, Phys. Rev. [**D 22**]{}, 2860 (1980). K. S. Babu, Phys. Lett. [**B 203**]{}, 132 (1988). D. Chang and R. N. Mohapatra, Phys. Rev. Lett. [**58**]{},1600 (1987) R. N. Mohapatra, Phys. Rev. Lett. [**56**]{}, 561 (1986). R. N. Mohapatra and J. W. F. Valle, Phys. Rev. [**D 34**]{}, 1642 (1986); E. Witten, Nucl. Phys. [**B 268**]{}, 79 (1986). J. Gluza, Acta Phys. Polon. B33 (2002) 1735. T. Appelquist and R. Shrock, Phys. Lett. 
B [**548**]{}, 204 (2002). T. Appelquist and R. Shrock, Phys. Rev. Lett. [**90**]{}, 201801 (2003). Z. Chacko, L. Hall, S. Oliver and M. Perelstein, hep-ph/0405067. P. Langacker, Phys. Rev. D [**58**]{}, 093017 (1998). A. Ilakovac and A. Pilaftsis, Nucl. Phys. B [**437**]{}, 491 (1995). G. Barenboim, G. C. Branco, A. de Gouvêa and M. N. Rebelo, Phys. Rev. D [**64**]{}, 073005 (2001). see, for example, M. Frigerio and A.Yu. Smirnov, Nucl. Phys. B [**640**]{}, 233 (2002); Phys. Rev. D [**67**]{}, 013007 (2003). V. Barger, S.L. Glashow, P. Langacker and D. Marfatia, Phys. Lett. B [**540**]{}, 247 (2002). S. Pascoli, S. T. Petcov and W. Rodejohann, Phys. Lett. B [**549**]{}, 177 (2002). A. Broncano, M. B. Gavela, E. Jenkins, Nucl. Phys. [**B672**]{}, 163 (2003). P. H. Frampton, S. T. Petcov and W. Rodejohann, Nucl. Phys. B [**687**]{}, 31 (2004) \[hep-ph/0401206\]. for recent studies see, for example, G. Altarelli, F. Feruglio and I. Masina, Nucl. Phys. B [**689**]{}, 157 (2004); A. Romanino, hep-ph/0402258; and references therein. S. Antusch and S. F. King, Phys. Lett. B [**591**]{} (2004) 104, hep-ph/0403053. L.J. Hall, H. Murayama and N. Weiner, Phys. Rev. Lett.  [**84**]{}, 2572 (2000); N. Haba and H. Murayama, Phys. Rev. D [**63**]{}, 053010 (2001). A. de Gouvêa and H. Murayama, Phys. Lett. B [**573**]{}, 94 (2003). A. de Gouvêa, Phys. Rev. D [**69**]{}, 093007 (2004) \[arXiv:hep-ph/0401220\]. F. Vissani, hep-ph/9708483; V. Barger, S. Pakvasa, T. Weiler and K. Whisnant, Phys. Lett. B [**437**]{}, 107 (1998); A. Baltz, A.S. Goldhaber and M. Goldhaber, Phys. Rev. Lett. [**81**]{} 5730 (1998); G. Altarelli and F. Feruglio, Phys. Lett. B [**439**]{}, 112 (1998); M. Jezabek and Y. Sumino, Phys. Lett. B [**440**]{}, 327 (1998); D. V. Ahluwalia, Mod. Phys. Lett. [**A13**]{}, 2249 (1998). S. Lavignac, I. Masina and C.A. Savoy, Nucl. Phys. [**B 633**]{} (2002) 139, hep-ph/0202086. T. Fukuyama and H. Nishiura, hep-ph/9702253; R. N. Mohapatra and S. Nussinov, Phys. Rev. 
[**D60**]{} (1999) 013002; E. Ma and M. Raidal, Phys. Rev. Lett. [**87**]{} (2001) 011802; C. S. Lam, hep-ph/0104116; T. Kitabayashi and M. Yasue, Phys. Rev. [**D67**]{}, 015006 (2003); W. Grimus and L. Lavoura, hep-ph/0305046; hep-ph/0309050; Y. Koide, Phys. Rev. [**D69**]{} (2004) 093001; A. Ghosal, Mod. Phys. Lett. [**A 19**]{} (2004) 2579. R. N. Mohapatra, SLAC Summer Institute lecture; http://www-conf.slac.stanford.edu/ssi/2004; hep-ph/0408187; JHEP [**10**]{}, 027 (2004); W. Grimus, A. S. Joshipura, S. Kaneko, L. Lavoura, H. Sawanaka, M. Tanimoto, hep-ph/0408123; R. N. Mohapatra and W. Rodejohann, Phys. Rev. D [**72**]{}, 053001 (2005) \[arXiv:hep-ph/0507312\]. P. F. Harrison, D. H. Perkins and W. G. Scott, Phys. Lett. B [**458**]{}, 79 (1999); Phys. Lett. B [**530**]{}, 167 (2002); P. F. Harrison and W. G. Scott, Phys. Lett. B [**535**]{}, 163 (2002); Phys. Lett. B [**557**]{}, 76 (2003); Z. Z. Xing, Phys. Lett. B [**533**]{}, 85 (2002). X. G. He and A. Zee, Phys. Lett. B [**560**]{}, 87 (2003); E. Ma, Phys. Rev. Lett.  [**90**]{}, 221802 (2003); Phys. Lett. B [**583**]{}, 157 (2004); C. I. Low and R. R. Volkas, Phys. Rev. D [**68**]{}, 033007 (2003); S. H. Chang, S. K. Kang and K. Siyeon, Phys. Lett. B [**597**]{}, 78 (2004); E. Ma, Phys. Rev. D [**70**]{}, 031901 (2004); F. Plentinger and W. Rodejohann, Phys. Lett. B [**625**]{}, 264 (2005); P. F. Harrison and W. G. Scott, hep-ph/0508012; S. Luo and Z. Z. Xing, hep-ph/0509065; W. Grimus and L. Lavoura, hep-ph/0509239. Originally, a very similar form of $U$, which is however incompatible with recent data, was proposed in L. Wolfenstein, Phys. Rev. D [**18**]{}, 958 (1978). G. Altarelli, F. Feruglio, hep-ph/0504165; E. Ma, hep-ph/0505209; S. F. King, JHEP [**0508**]{}, 105 (2005); I. de Medeiros Varzielas and G. G. Ross, hep-ph/0507176; K. S. Babu and X. G. He, hep-ph/0507217. P. Langacker [*et al.*]{}, Nucl. Phys. B [**282**]{} (1987) 589. W. Rodejohann, J. Phys. G [**28**]{}, 1477 (2002). I-H. Lee, Phys. Lett.
B [**138**]{}, 121 (1984); Nucl. Phys. B [**246**]{}, 120 (1984). J. Ellis and M. Raidal, Nucl. Phys. [**B643**]{} (2002) 229 \[hep-ph/0206174\]. S. Pascoli, S.T. Petcov and C.E. Yaguna, Phys. Lett. [**B564**]{} (2003) 241 \[hep-ph/0301095\]. S.M. Bilenky, S. Pascoli and S.T. Petcov, [Phys. Rev.]{} [**D64**]{} (2001) 053010. S.T. Petcov and A.Yu. Smirnov, Phys. Lett. **B322** (1994) 109. F. Feruglio, A. Strumia and F. Vissani, [Nucl. Phys.]{} [**B 637**]{} (2002) 345 \[Addendum-ibid.  [**B 659**]{} (2003) 359\]. S. Pascoli and S.T. Petcov, hep-ph/0308034. S.M. Bilenky *et al.*, [Phys. Lett.]{} [**B465**]{} (1999) 193. V. Barger and K. Whisnant, [Phys. Lett.]{} **B456** (1999) 194; H. Minakata and O. Yasuda, [Nucl. Phys.]{} **B523** (1998) 597; T. Fukuyama *et al.*, [Phys. Rev.]{} **D57** (1998) 5844 and [Mod. Phys. Lett.]{} [**A17**]{} (2002) 2597; P. Osland and G. Vigdel, [Phys. Lett.]{} **B520** (2001) 128; D. Falcone and F. Tramontano, [*Phys. Rev.*]{} **D64** (2001) 077302; T. Hambye, [Eur. Phys. J. direct]{} [**C4**]{} (2002) 13; F. Vissani, [JHEP]{} **06** (1999) 022; M. Czakon *et al.*, [Phys. Lett.]{} **B465** (1999) 211, hep-ph/0003161 and [Phys. Rev. ]{} [**D65**]{} (2002) 053008; H. V. Klapdor-Kleingrothaus, H. Pas and A. Yu. Smirnov, [Phys. Rev.]{} **D63** (2001) 073005; H. Minakata and H. Sugiyama, [Phys. Lett.]{} **B526** (2002) 335; N. Haba, N. Nakamura and T. Suzuki, hep-ph/0205141. W. Rodejohann, [Nucl. Phys.]{} B [**597**]{} (2001) 110, and hep-ph/0203214. S. Pascoli and S.T. Petcov, [Phys. Lett.]{} B [**580**]{} (2004) 280. A. Bandyopadhyay [*et al.*]{}, [Phys. Lett.]{} B [**583**]{} (2004) 134. G.L. Fogli *et al.*, [Phys. Rev.]{} [**D67**]{} (2003) 093006. G. L. Fogli, E. Lisi, A. Marrone, D. Montanino, A. Palazzo and A. M. Rotunno, Phys. Rev. D [**69**]{}, 017301 (2004) \[hep-ph/0308055\]. S. Pascoli and S.T. Petcov, [Phys. Lett.]{} [**B544**]{} (2002) 239. A. Bandyopadhyay [*et al.*]{}, hep-ph/0406328. H. Murayama and C. Pe[ñ]{}a-Garay, [Phys. 
Rev. ]{} [**D69**]{} (2004) 031301. S. Pascoli, S.T. Petcov and L. Wolfenstein, [Phys. Lett.]{} [**B524**]{} (2002) 319; S. Pascoli and S. T. Petcov, Phys. Atom. Nucl.  [**66**]{}, 444 (2003) \[Yad. Fiz.  [**66**]{}, 472 (2003)\] \[hep-ph/0111203\]. V. A. Rodin [*et al.*]{}, [Phys. Rev.]{} [**C68**]{} (2003) 044302. S. Pascoli, S.T. Petcov and W. Rodejohann, [Phys. Lett.]{} [**B558**]{} (2003) 141. K.S. Babu and R. N. Mohapatra, [Phys. Rev. Lett.]{} [**75**]{} (1995) 2276; M. Hirsch, H.V. Klapdor-Kleingrothaus and S.G. Kovalenko, [*Phys. Lett.*]{} [**B398**]{} (1997) 311; [*ibid.*]{} [**B459**]{} (1999) 450. S. Pascoli, S. T. Petcov and W. Rodejohann, Phys. Rev. D [**68**]{}, 093007 (2003) \[hep-ph/0302054\]. S.F. King, JHEP 0209 (2002) 011. C. Jarlskog, Z. Phys. C [**29**]{}, 491 (1985); Phys. Rev. D [**35**]{}, 1685 (1987). P. I. Krastev and S. T. Petcov, Phys. Lett. B [**205**]{} (1988) 84. J. F. Nieves and P. B. Pal, Phys. Rev. D [**36**]{}, 315 (1987), and Phys. Rev. D [**64**]{}, 076005 (2001). G. C. Branco, L. Lavoura and M. N. Rebelo, Phys. Lett. B [**180**]{}, 264 (1986). J. A. Aguilar-Saavedra and G. C. Branco, Phys. Rev. D [**62**]{}, 096009 (2000). For $\mu \rightarrow e \gamma$: R. Bolton et al., Phys. Rev. [**D 38**]{} (1988) 2077; M.L. Brooks et al. \[MEGA Collaboration\], Phys. Rev. Lett. [**83**]{} (1999) 1521; For $\tau \rightarrow \mu \gamma$: S. Ahmed et al. \[CLEO Collaboration\], Phys. Rev. [**D 61**]{} (2000) 071101, hep-; For $d_e$: E.D. Commins, S.B. Ross, D. Demille, B.C. Regan, Phys. Rev. [**A 50**]{} (1994) 2; For $d_\mu$: CERN-Mainz-Daresbury Collaboration, Nucl. Phys. [**B 150**]{} (1979) 1. For $\mu \rightarrow e \gamma$: L.M. Barkov et al., proposal for an experiment at PSI, http://meg.web.psi.ch; For $\tau \rightarrow \mu \gamma$: D.F. Carvalho, J.R. Ellis, M.E. Gomez, S. Lola and J.C. Romao, hep-ph/0206148; For $d_e$: S.K. Lamoreaux, nucl-ex/0109014. J. Aysto et al., hep-ph/0109217; Y.K. Semertzidis, hep-ex/0401016. For $d_\mu$: R.
Carey et al., Letter of Intent to BNL (2000); Y.K. Semertzidis et al., hep-ph/0012087; J. Aysto et al., hep-ph/0109217; T. P. Cheng and L.-F. Li, Phys. Rev. Lett. [**45**]{} (1980) 1908. T. Appelquist, M. Piai, and R. Shrock, Phys. Lett. B [**593**]{}, 175 (2004). T. Appelquist, M. Piai, and R. Shrock, Phys. Lett. B [**595**]{}, 442 (2004). F. Gabbiani, E. Gabrielli, A. Masiero, L. Silvestrini, Nucl. Phys. [**B 477**]{} (1996) 321, hep-ph/9604387. For a recent re-analysis see: I. Masina and C.A. Savoy, Nucl. Phys. [**B 661**]{} (2003) 365, hep-ph/0211283. F. Borzumati and A. Masiero, Phys. Rev. Lett. [**57**]{} (1986) 961. R. Barbieri and L.J. Hall, Phys. Lett. [**B 338**]{} (1994) 212; R. Barbieri, L.J. Hall and A. Strumia, Nucl. Phys. [**B 445**]{} (1995) 219, hep-ph/9501334. R. Barbieri, S. Ferrara, C.A. Savoy, Phys. Lett. [**B 119**]{} (1982) 343; A. Chamseddine, R. Arnowitt, P. Nath, Phys. Rev. Lett. [**49**]{} (1982) 970; L. Hall, J. Lykken, S. Weinberg, Phys. Rev. [**D 27**]{} (1983) 2359; A. Chamseddine, R. Arnowitt, P. Nath, N=1 Supergravity, World Scientific, Singapore (1984); N. Ohta, Prog. Theor. Phys. 70 (1983) 542. J. Hisano, T. Moroi, K. Tobe and M. Yamaguchi, Phys. Rev. D [**53**]{} (1996) 2442 \[arXiv:hep-ph/9510309\]; S. F. King and M. Oliveira, Phys. Rev. D [**60**]{} (1999) 035003 \[arXiv:hep-ph/9804283\]. S. T. Petcov, S. Profumo, Y. Takanishi and C. E. Yaguna, Nucl. Phys. B [**676**]{}, 453 (2004) \[hep-ph/0306195\]. S. Lavignac, I. Masina and C.A. Savoy, Phys. Lett. [**B 520**]{} (2001) 269, hep-ph/0106245; I. Masina, in Proceedings of SUSY02 [*Supersymmetry and unification of fundamental interactions*]{}, vol.1 331, Hamburg (2002), hep-ph/0210125. See for instance: J. Sato, K. Tobe and T. Yanagida, Phys. Lett. [**B 498**]{} (2001) 189, hep-ph/0010348; T. Blazek and S.F. King, Nucl. Phys. [**B 662**]{} (2003) 359, hep-ph/0211368; M. Ciuchini, A. Masiero, L. Silvestrini, S.K. Vempati, O. Vives, Phys. Rev. Lett.
[**92**]{} (2004) 071801, hep-ph/0307191; S.M. Barr, Phys. Lett. [**B 578**]{} (2004) 394, hep-ph/0307372; M.-C. Chen and K. T. Mahanthappa, hep-ph/0409096; hep-ph/0409165. J.A. Casas and A. Ibarra, Nucl. Phys. [**B 618**]{} (2001) 171, hep-ph/0103065. W. Buchmüller, D. Delepine and F. Vissani, Phys. Lett. [**B 459**]{} (1999) 171, hep-ph/9904219; J. Sato and K. Tobe, Phys. Rev. [**D 63**]{} (2001) 116010; K.S. Babu, Ts. Enkhbat, I. Gogoladze, Nucl. Phys. [**B 678**]{} (2004) 233, hep-ph/0308093; I. Masina and C.A. Savoy, hep-ph/0501166. J. Ellis, M.E. Gomez, G.K. Leontaris, S. Lola and D.V. Nanopoulos, Eur. Phys. J. [**C14**]{} (2000) 319; J. Hisano and K. Tobe, Phys. Lett. [**B 510**]{} (2001) 197, hep-ph/0102315. A. Masiero, S. Profumo, S.K. Vempati, C.E. Yaguna, hep-ph/0401138. T. Blazek and S. F. King, Phys. Lett. B [**518**]{} (2001) 109 \[arXiv:hep-ph/0105005\]. A.Yu. Smirnov, Phys. Rev. [**D 48**]{} (1993) 3264, hep-ph/9304205; G. Altarelli, F. Feruglio, I. Masina, Phys. Lett. [**B 472**]{} (2000) 382, hep-ph/9907532. A. Masiero, S.K. Vempati, O. Vives, Nucl. Phys. [**B 649**]{} (2003) 189, hep-ph/0209303. J. R. Ellis, J. Hisano, S. Lola and M. Raidal, Nucl. Phys. B [**621**]{}, 208 (2002) \[arXiv:hep-ph/0109125\]; J. Ellis, J. Hisano, M. Raidal, Y. Shimizu, Phys. Lett. [**B 528**]{} (2002) 86, hep-ph/0111324. I. Masina, Nucl. Phys. [**B 671**]{} (2003) 432, hep-ph/0304299. Y. Farzan and M.E. Peskin, hep-ph/0405214. A. Romanino and A. Strumia, Nucl. Phys. [**B 622**]{} (2002) 73, hep-ph/0108275. I. Masina and C.A. Savoy, Phys. Lett. [**B579**]{} (2004) 99, hep-ph/0309067. K. S. Babu, B. Dutta and R. N. Mohapatra, Phys. Rev. D [**67**]{}, 076006 (2003); B. Dutta and R. N. Mohapatra, Phys. Rev. [**D68**]{}, 113008 (2003). A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. [**5**]{} (1967) 32 \[JETP Lett. [**5**]{} (1967) 24\]. M. Fukugita and T. Yanagida, Phys. Lett. B [**174**]{}, 45 (1986). G. ’t Hooft, Phys. Rev. Lett.  [**37**]{}, 8 (1976). M. Flanz, E.
A. Paschos and U. Sarkar, Phys. Lett. B [**345**]{} (1995) 248 \[Erratum-ibid. B [**382**]{} (1996) 447\]; L. Covi, E. Roulet and F. Vissani, Phys. Lett. B [**384**]{} (1996) 169. A. Pilaftsis, Phys. Rev. D [**56**]{} (1997) 5431; Nucl. Phys. B [**504**]{} (1997) 61. A. Pilaftsis, Int. J. Mod. Phys. A [**14**]{}, 1811 (1999). A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B [**692**]{}, 303 (2004) \[arXiv:hep-ph/0309342\]. A. Pilaftsis, Phys. Rev. Lett. [**95**]{}, 081602 (2005). A. Pilaftsis and T. E. J. Underwood, arXiv:hep-ph/0506107. E. J. Chun, Phys. Rev. D [**69**]{}, 117303 (2004); arXiv:hep-ph/0508050. R. Gonzalez Felipe, F. R. Joaquim and B. M. Nobre, Phys. Rev. D [**70**]{}, 085009 (2004); G. C. Branco, R. Gonzalez Felipe, F. R. Joaquim and B. M. Nobre, arXiv:hep-ph/0507092. W. Buchmüller and M. Plümacher, Int. J. Mod. Phys. A [**15**]{} (2000) 5047 G. F. Giudice [*et al.*]{}, Nucl. Phys. B [**685**]{}, 89 (2004) \[arXiv:hep-ph/0310123\]. V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov, Phys. Lett. B [**155**]{}, 36 (1985); J. A. Harvey and M. S. Turner, Phys. Rev. D [**42**]{}, 3344 (1990). W. Buchmüller, P. Di Bari and M. Plümacher, hep-ph/0401240. W. Buchmüller, P. Di Bari and M. Plümacher, Nucl. Phys. B [**643**]{} (2002) 367 \[hep-ph/0205349\]. J. R. Ellis, J. Hisano, S. Lola and M. Raidal, Nucl. Phys. B [**621**]{}, 208 (2002) \[arXiv:hep-ph/0109125\]. S. Davidson and A. Ibarra, JHEP [**0109**]{}, 013 (2001) \[arXiv:hep-ph/0104076\]. M. Fujii, K. Hamaguchi and T. Yanagida, Phys. Rev. D [**65**]{}, 115012 (2002) \[arXiv:hep-ph/0202210\]. S. Davidson and A. Ibarra, Phys. Lett. B [**535**]{} (2002) 25 \[hep-ph/0202239\]. W. Buchmüller, P. Di Bari and M. Plümacher, Nucl. Phys. B [**665**]{} (2003) 445 \[hep-ph/0302092\]. T. Hambye, Y. Lin, A. Notari, M. Papucci and A. Strumia, hep-ph/0312203. S. Davidson, JHEP [**0303**]{} (2003) 037 \[hep-ph/0302075\]. G. C. Branco, T. Morozumi, B. M. Nobre and M. N. Rebelo, Nucl. Phys. 
B [**617**]{} (2001) 475 \[hep-ph/0107164\]; see also M. N. Rebelo, Phys. Rev. D [**67**]{}, 013008 (2003) \[hep-ph/0207236\]. S. Davidson and R. Kitano, hep-ph/0312007. P. H. Frampton, S. L. Glashow and T. Yanagida, Phys. Lett. B [**548**]{}, 119 (2002) \[hep-ph/0208157\]; T. Endoh, S. Kaneko, S. K. Kang, T. Morozumi and M. Tanimoto, Phys. Rev. Lett.  [**89**]{}, 231601 (2002) \[hep-ph/0209020\]. R. Kuchimanchi and R. N. Mohapatra, Phys. Rev. D [**66**]{}, 051301 (2002) \[hep-ph/0207110\]; M. Raidal and A. Strumia, Phys. Lett. B [**553**]{}, 72 (2003) \[hep-ph/0210021\]; B. Dutta and R. N. Mohapatra, Phys. Rev. D [**68**]{}, 056006 (2003) \[hep-ph/0305059\]; A. Ibarra and G. G. Ross, hep-ph/0312138; S. F. King, Phys. Rev. D [**67**]{}, 113010 (2003) \[hep-ph/0211228\]. G. C. Branco, R. Gonzalez Felipe, F. R. Joaquim and M. N. Rebelo, Nucl. Phys. B [**640**]{} (2002) 202 \[hep-ph/0202030\]; H. B. Nielsen and Y. Takanishi, Nucl. Phys. B [**636**]{}, 305 (2002) \[arXiv:hep-ph/0204027\]; M. S. Berger and K. Siyeon, Phys. Rev. D [**65**]{} (2002) 053019 \[hep-ph/0110001\]; D. Falcone and F. Tramontano, Phys. Rev. D [**63**]{} (2001) 073007 \[hep-ph/0011053\]. S. Kaneko and M. Tanimoto, Phys. Lett. B [**551**]{} (2003) 127 \[hep-ph/0210155\]; S. Kaneko, M. Katsumata and M. Tanimoto, JHEP [**0307**]{} (2003) 025 \[hep-ph/0305014\]; L. Velasco-Sevilla, JHEP [**10**]{} (2003) 035, \[hep-ph/0307071\]; V. Barger, D. A. Dicus, H. J. He and T. Li, Phys. Lett. B583 (2004) 173 \[hep-ph/0310278\]; W. Rodejohann, Eur. Phys. J. C [**32**]{}, 235 (2004); B. Dutta and R. N. Mohapatra,hep-ph/0307163; R. N. Mohapatra, S. Nasri and H. B. Yu, Phys. Lett. B [**615**]{}, 231 (2005) \[arXiv:hep-ph/0502026\]. S. Antusch and S. F. King, arXiv:hep-ph/0508044. M. Y. Khlopov and A. D. Linde, Phys. Lett. B [**138**]{}, 265 (1984);\ J. R. Ellis, J. E. Kim and D. V. Nanopoulos, Phys. Lett. B [**145**]{}, 181 (1984);\ M. Kawasaki and T. Moroi, Prog. Theor. Phys.  
[**93**]{}, 879 (1995) \[arXiv:hep-ph/9403364\];\ M. Kawasaki, K. Kohri and T. Moroi, Phys. Rev. D [**63**]{}, 103502 (2001) \[arXiv:hep-ph/0012279\]. T. Moroi, H. Murayama and M. Yamaguchi, Phys. Lett. B [**303**]{}, 289 (1993);\ A. de Gouvêa, T. Moroi and H. Murayama, Phys. Rev. D [**56**]{}, 1281 (1997) \[arXiv:hep-ph/9701244\]. R. H. Cyburt, J. R. Ellis, B. D. Fields and K. A. Olive, Phys. Rev. D [**67**]{}, 103521 (2003) \[arXiv:astro-ph/0211258\];\ M. Kawasaki, K. Kohri and T. Moroi, arXiv:astro-ph/0408426. H. Pagels and J. R. Primack, Phys. Rev. Lett.  [**48**]{}, 223 (1982). M. Ibe, R. Kitano, H. Murayama and T. Yanagida, Phys. Rev. D [**70**]{}, 075012 (2004) \[arXiv:hep-ph/0403198\]. H. P. Nilles, M. Peloso and L. Sorbo, Phys. Rev. Lett.  [**87**]{}, 051302 (2001) \[arXiv:hep-ph/0102264\];\ H. P. Nilles, K. A. Olive and M. Peloso, Phys. Lett. B [**522**]{}, 304 (2001) \[arXiv:hep-ph/0107212\]. M. Bolz, A. Brandenburg and W. Buchmüller, Nucl. Phys. B [**606**]{}, 518 (2001) \[arXiv:hep-ph/0012052\]. G. Lazarides and Q. Shafi, Phys. Lett. B [**258**]{} (1991) 305;\ K. Kumekawa, T. Moroi and T. Yanagida, Prog. Theor. Phys.  [**92**]{}, 437 (1994) \[arXiv:hep-ph/9405337\];\ G. Lazarides, R. K. Schaefer and Q. Shafi, Phys. Rev. D [**56**]{}, 1324 (1997) \[arXiv:hep-ph/9608256\];\ G. Lazarides, Springer Tracts Mod. Phys.  [**163**]{}, 227 (2000) \[arXiv:hep-ph/9904428\];\ G. F. Giudice, M. Peloso, A. Riotto and I. Tkachev, JHEP [**9908**]{}, 014 (1999) \[arXiv:hep-ph/9905242\];\ T. Asaka, K. Hamaguchi, M. Kawasaki and T. Yanagida, Phys. Lett. B [**464**]{}, 12 (1999) \[arXiv:hep-ph/9906366\];\ T. Asaka, K. Hamaguchi, M. Kawasaki and T. Yanagida, Phys. Rev. D [**61**]{}, 083512 (2000) \[arXiv:hep-ph/9907559\];\ M. Kawasaki, M. Yamaguchi and T. Yanagida, Phys. Rev. D [**63**]{}, 103514 (2001) \[arXiv:hep-ph/0011104\]. H. Murayama, H. Suzuki, T. Yanagida and J. Yokoyama, Phys. Rev. Lett.  [**70**]{}, 1912 (1993);\ H. Murayama and T. Yanagida, Phys. Lett. 
B [**322**]{}, 349 (1994) \[arXiv:hep-ph/9310297\];\ K. Hamaguchi, H. Murayama and T. Yanagida, Phys. Rev. D [**65**]{} (2002) 043512 \[hep-ph/0109030\];\ for a review see, e.g., K. Hamaguchi, arXiv:hep-ph/0212305. M. Fujii, M. Ibe and T. Yanagida, Phys. Lett. B [**579**]{}, 6 (2004) \[arXiv:hep-ph/0310142\];\ J. L. Feng, S. Su and F. Takayama, Phys. Rev. D [**70**]{}, 075019 (2004);\ L. Roszkowski and R. Ruiz de Austri, arXiv:hep-ph/0408227. W. Buchmüller, K. Hamaguchi and M. Ratz, Phys. Lett. B [**574**]{}, 156 (2003) \[arXiv:hep-ph/0307181\];\ W. Buchmüller, K. Hamaguchi, O. Lebedev and M. Ratz, Nucl. Phys. B [**699**]{}, 292 (2004) \[arXiv:hep-th/0404168\];\ R. Kallosh and A. Linde, JHEP [**0412**]{}, 004 (2004) \[arXiv:hep-th/0411011\];\ W. Buchmüller, K. Hamaguchi, O. Lebedev and M. Ratz, JCAP [**0501**]{}, 004 (2005) \[arXiv:hep-th/0411109\]. M. Bolz, W. Buchmüller and M. Plümacher, Phys. Lett. B [**443**]{}, 209 (1998) \[arXiv:hep-ph/9809381\];\ M. Fujii and T. Yanagida, Phys. Lett. B [**549**]{}, 273 (2002) \[arXiv:hep-ph/0208191\];\ M. Fujii, M. Ibe and T. Yanagida, Phys. Rev. D [**69**]{}, 015006 (2004) \[arXiv:hep-ph/0309064\];\ J. R. Ellis, D. V. Nanopoulos and S. Sarkar, Nucl. Phys. B [**259**]{}, 175 (1985). W. Buchmüller, K. Hamaguchi, M. Ratz and T. Yanagida, Phys. Lett. B [**588**]{}, 90 (2004) \[arXiv:hep-ph/0402179\];\ G. Weiglein [*et al.*]{}, arXiv:hep-ph/0410364. P. J. O’Donnell and U. Sarkar, Phys. Rev. D [**49**]{}, 2118 (1994) \[hep-ph/9307279\]. G. Lazarides and Q. Shafi, Phys. Rev. D [**58**]{}, 071702 (1998). T. Hambye and G. Senjanovic, Phys. Lett. B [**582**]{}, 73 (2004). S. Antusch and S. F. King, Phys. Lett. B [**597**]{} (2004) 199, hep-ph/0405093. A. S. Joshipura and E. A. Paschos, hep-ph/9906498; A. S. Joshipura, E. A. Paschos and W. Rodejohann, Nucl. Phys. B [**611**]{}, 227 (2001). A. S. Joshipura, E. A. Paschos and W. Rodejohann, JHEP [**0108**]{}, 029 (2001); W. Rodejohann, Phys. Lett. B [**542**]{}, 100 (2002). S. Nasri, J. 
Schechter and S. Moussa, hep-ph/0402176. W. Rodejohann, Phys. Rev. D [**70**]{}, 073010 (2004) \[hep-ph/0403236\]; P. h. Gu and X. j. Bi, hep-ph/0405092; N. Sahu and S. Uma Sankar, arXiv:hep-ph/0406065. W. l. Guo, hep-ph/0406268. E. J. Chun and S. K. Kang, Phys. Rev. D [**63**]{}, 097902 (2001). T. Dent, G. Lazarides and R. Ruiz de Austri, hep-ph/0312033. E. Ma and U. Sarkar, Phys. Rev. Lett.  [**80**]{}, 5716 (1998). T. Hambye, E. Ma and U. Sarkar, Nucl. Phys. B [**602**]{}, 23 (2001). G. Lazarides, Phys. Lett. B [**452**]{}, 227 (1999). R. N. Mohapatra, A. Perez-Lorenzana and C. A. de Sousa Pires, Phys. Lett. B [**474**]{}, 355 (2000). K. Dick, M. Lindner, M. Ratz and D. Wright, Phys. Rev. Lett.  [**84**]{}, 4039 (2000) \[hep-ph/9907562\]. B. A. Campbell, S. Davidson, J. R. Ellis and K. A. Olive, Phys. Lett. B [**297**]{}, 118 (1992) \[arXiv:hep-ph/9302221\]. N. Arkani-Hamed, L. J. Hall, H. Murayama, D. R. Smith and N. Weiner, Phys. Rev. D [**64**]{}, 115011 (2001) \[hep-ph/0006312\]. F. Borzumati and Y. Nomura, Phys. Rev. D [**64**]{}, 053005 (2001) \[hep-ph/0007018\]. H. Murayama and A. Pierce, Phys. Rev. Lett.  [**89**]{}, 271601 (2002) \[hep-ph/0206177\]. M. Boz and N. K. Pak, Eur. Phys. J. C [**37**]{}, 507 (2004). W. Buchmüller, P. Di Bari and M. Plümacher, Phys. Lett. B [**547**]{} (2002) 128. C. H. Albright and S. M. Barr, hep-ph/0312224. G. D’Ambrosio, G. F. Giudice and M. Raidal, Phys. Lett. B [**575**]{} (2003) 75; Y. Grossman, T. Kashti, Y. Nir and E. Roulet, Phys. Rev. Lett.  [**91**]{} (2003) 251801. T. Hambye, J. March-Russell and S. M. West, hep-ph/0403183. R. N. Mohapatra and X. M. Zhang, Phys. Rev. [**D 46**]{}, 5331 (1992); L. Boubekeur, T. Hambye and G. Senjanović, Phys. Rev. Lett.  [**93**]{} (2004) 111601 \[arXiv:hep-ph/0404038\]; N. Sahu and U. Yajnik, hep-ph/0410075; S. F. King and T. Yanagida, hep-ph/0411030. K. S. Babu and R. N. Mohapatra, Phys. Lett. B [**267**]{}, 400 (1991). S. F. King, Phys. Lett. 
B [**439**]{}, 350 (1998) \[hep-ph/9806440\]; S. F. King, Nucl. Phys. B [**562**]{}, 57 (1999) \[hep-ph/9904210\]; S. F. King, Nucl. Phys. B [**576**]{} (2000) 85 \[hep-ph/9912492\]; S. F. King, JHEP [**0209**]{}, 011 (2002) \[hep-ph/0204360\]; for a review see S. Antusch and S. F. King, New J. Phys. [**6**]{} (2004) 110 \[hep-ph/0405272\]. E. K. Akhmedov, M. Frigerio and A. Y. Smirnov, JHEP [**0309**]{}, 021 (2003) \[arXiv:hep-ph/0305322\]. S. F. King and G. G. Ross, Phys. Lett. B [**520**]{} (2001) 243 \[arXiv:hep-ph/0108112\]; S. F. King and G. G. Ross, Phys. Lett. B [**574**]{} (2003) 239 \[arXiv:hep-ph/0307190\]. S. Antusch and S. F. King, arXiv:hep-ph/0507333. M. Hirsch and S. F. King, Phys. Rev. D [**64**]{} (2001) 113005 \[arXiv:hep-ph/0107014\]. H. Murayama, H. Suzuki, T. Yanagida and J. Yokoyama, Phys. Rev. Lett. [**70**]{} (1993) 1912; J. R. Ellis, M. Raidal and T. Yanagida, Phys. Lett. B [**581**]{}, 9 (2004) \[hep-ph/0303242\]. S. Antusch, M. Bastero-Gil, S. F. King and Q. Shafi, hep-ph/0411298. S. Antusch and S. F. King, Nucl. Phys. B [**705**]{} (2005) 239, hep-ph/0402121. For a review, see S. Raby, Particle Data Group book: Phys. Rev. D [**66**]{}, 010001 (2002). H. Murayama and A. Pierce, Phys. Rev. [**D65**]{}, 055009 (2002). K. S. Babu and R. N. Mohapatra, Phys. Rev. Lett. [**70**]{}, 2845 (1993). D. G. Lee and R. N. Mohapatra, Phys. Rev. [**D 51**]{}, 1353 (1995). K. Matsuda, Y. Koide and T. Fukuyama, Phys. Rev. D [**64**]{}, 053015 (2001) \[hep-ph/0010026\]; K. Matsuda, hep-ph/0401154; K. Matsuda, Y. Koide, T. Fukuyama and H. Nishiura, Phys. Rev. D [**65**]{}, 033008 (2002); T. Fukuyama and N. Okada, JHEP [**0211**]{}, 011 (2002); B. Bajc, G. Senjanovic and F. Vissani, hep-ph/0402140; B. Dutta, Y. Mimura and R. N. Mohapatra, Phys. Rev. [**D69**]{}, 115014 (2004); Phys. Lett. [**B603**]{}, 35 (2004); Phys. Rev. Lett. [**94**]{}, 091804 (2005); hep-ph/0507319; T. Fukuyama, A. Ilakovac, T. Kikuchi, S. Meljanac and N. 
Okada, hep-ph/0401213, hep-ph/0405300; B. Bajc, A. Melfo, G. Senjanovic and F. Vissani, Phys. Rev. D [**70**]{}, 035007 (2004); C. S. Aulakh and A. Girdhar, hep-ph/0204097; S. Bertolini and M. Malinsky, Phys. Rev. D [**72**]{}, 055021 (2005) \[arXiv:hep-ph/0504241\]; B. Bajc, G. Senjanovic and F. Vissani, Phys. Rev. Lett.  [**90**]{}, 051802 (2003); H. S. Goh, R. N. Mohapatra and S. P. Ng, Phys. Lett. B [**570**]{}, 215 (2003) \[hep-ph/0303055\]; H. S. Goh, R. N. Mohapatra and S. P. Ng, Phys. Rev. [**D68**]{}, 115008 (2003) \[hep-ph/0308197\]. K. S. Babu, J. C. Pati and F. Wilczek, hep-ph/9812538, Nucl. Phys. [**B566**]{}, 33 (2000); C. Albright and S. M. Barr, Phys. Rev. Lett. [**85**]{}, 244 (2001); T. Blazek, S. Raby and K. Tobe, Phys. Rev. [**D62**]{}, 055001 (2000); Z. Berezhiani and A. Rossi, Nucl. Phys. [**B594**]{}, 113 (2001); R. Kitano and Y. Mimura, Phys. Rev. [**D63**]{}, 016008 (2001); for a recent review, see C. Albright, hep-ph/0212090. M.-C. Chen and K. T. Mahanthappa, Int. J. Mod. Phys. A [**18**]{}, 5819 (2003); AIP Conf. Proc.  [**721**]{}, 269 (2004). M.-C. Chen and K. T. Mahanthappa, hep-ph/0411158. G. C. Branco, P. A. Parada, M. N. Rebelo, hep-ph/0307119. Y. Achiman, Phys. Lett. [**B599**]{}, 75 (2004). P. H. Chankowski and Z. Pluciennik, Phys. Lett. B [**316**]{}, 312 (1993) \[arXiv:hep-ph/9306333\]. S. Antusch, M. Drees, J. Kersten, M. Lindner and M. Ratz, Phys. Lett. B [**519**]{}, 238 (2001) \[arXiv:hep-ph/0108005\]. S. Antusch, M. Drees, J. Kersten, M. Lindner and M. Ratz, Phys. Lett. B [**525**]{}, 130 (2002) \[arXiv:hep-ph/0110366\]. S. Antusch and M. Ratz, JHEP **07** (2002), 059 \[arXiv:hep-ph/0203027\]. P. H. Chankowski, W. Krolikowski, and S. Pokorski, Phys. Lett. **B473** (2000), 109 \[arXiv:hep-ph/9910231\]. J. A. Casas, J. R. Espinosa, A. Ibarra, and I. Navarro, Nucl. Phys. **B573** (2000), 652 \[arXiv:hep-ph/9910420\]. S. Antusch, J. Kersten, M. Lindner, and M. Ratz, Nucl. Phys. **B674** (2003), 401 \[arXiv:hep-ph/0305273\]. J. 
w. Mei and Z. z. Xing, Phys. Rev. D [**69**]{}, 073003 (2004) \[arXiv:hep-ph/0312167\]. S. Luo, J. w. Mei and Z. z. Xing, Phys. Rev. D [**72**]{}, 053014 (2005) \[arXiv:hep-ph/0507065\]. J. R. Ellis and S. Lola, Phys. Lett. B [**458**]{}, 310 (1999) \[arXiv:hep-ph/9904279\]. J. A. Casas, J. R. Espinosa, A. Ibarra and I. Navarro, Nucl. Phys. B [**556**]{}, 3 (1999) \[arXiv:hep-ph/9904395\]. J. A. Casas, J. R. Espinosa, A. Ibarra and I. Navarro, Nucl. Phys. B [**569**]{}, 82 (2000) \[arXiv:hep-ph/9905381\]. R. Adhikari, E. Ma and G. Rajasekaran, Phys. Lett. B [**486**]{}, 134 (2000) \[arXiv:hep-ph/0004197\]. A. S. Joshipura and S. D. Rindani, Phys. Lett. B [**494**]{}, 114 (2000) \[arXiv:hep-ph/0007334\]. E. J. Chun, Phys. Lett. B [**505**]{}, 155 (2001) \[arXiv:hep-ph/0101170\]. A. S. Joshipura, S. D. Rindani and N. N. Singh, Nucl. Phys. B [**660**]{}, 362 (2003) \[arXiv:hep-ph/0211378\]. A. S. Joshipura and S. D. Rindani, Phys. Rev. D [**67**]{}, 073009 (2003) \[arXiv:hep-ph/0211404\]. N. N. Singh and M. K. Das, (2004), hep-ph/0407206. N. Haba and N. Okamura, Eur. Phys. J. C [**14**]{}, 347 (2000) \[arXiv:hep-ph/9906481\]. T. Miura, E. Takasugi and M. Yoshimura, Prog. Theor. Phys. [**104**]{}, 1173 (2000) \[arXiv:hep-ph/0007066\]. N. Haba, Y. Matsui, and N. Okamura, Eur. Phys. J. **C17**, 513 (2000) \[arXiv:hep-ph/0005075\]; N. Haba, Y. Matsui, N. Okamura, and M. Sugiura, Prog. Theor. Phys.  [**103**]{}, 145 (2000) \[arXiv:hep-ph/9908429\]. P. H. Chankowski and S. Pokorski, Int. J. Mod. Phys. A [**17**]{}, 575 (2002) \[arXiv:hep-ph/0110249\]. T. Miura, T. Shindou, and E. Takasugi, Phys. Rev. **D66** (2002), 093002 \[arXiv:hep-ph/0206207\]. C. W. Chiang, Phys. Rev. D [**63**]{}, 076009 (2001) \[arXiv:hep-ph/0011195\]. M. Lindner, M. Ratz and M. A. Schmidt, arXiv:hep-ph/0506280. S. F. King and N. N. Singh, Nucl. Phys. **B591** (2000), 3–25 \[arXiv:hep-ph/0006229\]. S. Antusch, J. Kersten, M. Lindner, and M. Ratz, Phys. Lett. 
**B538** (2002), 87 \[arXiv:hep-ph/0203233\]. S. Antusch, J. Kersten, M. Lindner, and M. Ratz, Phys. Lett. **B544** (2002), 1 \[arXiv:hep-ph/0206078\]. J.-w. Mei and Z.-z. Xing, Phys. Rev. **D70** (2004), 053002 \[arXiv:hep-ph/0404081\]. J. Ellis, A. Hektor, M. Kadastik, K. Kannike and M. Raidal, arXiv:hep-ph/0506122. S. Antusch, J. Kersten, M. Lindner, M. Ratz and M. A. Schmidt, JHEP [**0503**]{}, 024 (2005) \[arXiv:hep-ph/0501272\]. J. w. Mei, Phys. Rev. D [**71**]{}, 073012 (2005) \[arXiv:hep-ph/0502015\]. G. Dutta, arXiv:hep-ph/0203222. G. Bhattacharyya, A. Raychaudhuri and A. Sil, Phys. Rev. D [**67**]{} (2003), 073004 \[arXiv:hep-ph/0211074\]. T. Miura, T. Shindou and E. Takasugi, Phys. Rev. D [**68**]{} (2003), 093009 \[arXiv:hep-ph/0308109\]. T. Shindou and E. Takasugi, Phys. Rev. D [**70**]{} (2004), 013005 \[arXiv:hep-ph/0402106\]. J. A. Casas, J. R. Espinosa and I. Navarro, Phys. Rev. Lett.  [**89**]{} (2002), 161801 \[arXiv:hep-ph/0206276\]. J. A. Casas, J. R. Espinosa, and I. Navarro, JHEP **09** (2003), 048 \[arXiv:hep-ph/0306243\]. P. H. Chankowski, A. Ioannisian, S. Pokorski, and J. W. F. Valle, Phys. Rev. Lett. **86** (2001), 3488 \[arXiv:hep-ph/0011150\]. M.-C. Chen and K. T. Mahanthappa, Int. J. Mod. Phys. A [**16**]{}, 3923 (2001) \[arXiv:hep-ph/0102215\]. K. S. Babu, E. Ma and J. W. F. Valle, Phys. Lett. B [**552**]{} (2003), 207 \[arXiv:hep-ph/0206292\]. N. Haba, Y. Matsui, N. Okamura and T. Suzuki, Phys. Lett. B [**489**]{} (2000), 184 \[arXiv:hep-ph/0005064\]. T. K. Kuo, S. H. Chiu and G. H. Wu, Eur. Phys. J. C [**21**]{} (2001), 281 \[arXiv:hep-ph/0011058\]. R. Gonzalez Felipe and F. R. Joaquim, JHEP [**0109**]{} (2001), 015 \[arXiv:hep-ph/0106226\]. M. K. Parida, C. R. Das and G. Rajasekaran, Pramana [**62**]{}, 647 (2004) \[arXiv:hep-ph/0203097\]. K. R. S. Balaji, A. S. Dighe, R. N. Mohapatra, and M. K. Parida, Phys. Lett. **B481** (2000), 33 \[arXiv:hep-ph/0002177\]. K. R. S. Balaji, R. N. Mohapatra, M. K. Parida and E. A. Paschos, Phys. 
Rev. D [**63**]{}, 113002 (2001) \[arXiv:hep-ph/0011263\]. K. S. Babu and R. N. Mohapatra, Phys. Lett. **B532** (2002), 77 \[arXiv:hep-ph/0201176\]. S. Antusch, P. Huber, J. Kersten, T. Schwetz and W. Winter, Phys. Rev. D [**70**]{} (2004) 097302, hep-ph/0404268. W. Grimus and L. Lavoura, arXiv:hep-ph/0410279. R. Barbieri, P. Creminelli, A. Strumia and N. Tetradis, Nucl. Phys. B [**575**]{}, 61 (2000) \[hep-ph/9911315v3\]. W. Buchmüller and M. Plümacher, Phys. Lett. B [**511**]{}, 74 (2001) \[hep-ph/0104189\]. D. W. Liu et al. \[Super-Kamiokande collaboration\], hep-ex/0402015. Z. Daraktchieva et al. \[MUNU collaboration\], Phys. Lett. [**B 564**]{}, 190 (2003); Phys. Lett. [**B 615**]{}, 153 (2005). H. B. Li et al. \[Texono collaboration\], Phys. Rev. Lett. [**90**]{} (2003). H. O. Back et al., Phys. Lett. B [**563**]{}, 35 (2003). J. F. Beacom and P. Vogel, Phys. Rev. Lett. [**83**]{}, 5222 (1999). A. S. Joshipura and S. Mohanty, Phys. Rev. D [**66**]{}, 012003 (2002). W. Grimus et al., Nucl. Phys. B [**648**]{}, 376 (2003). A. J. Grifols, E. Masso and S. Mohanty, hep-ph/0401144. J. N. Bahcall and M. H. Pinsonneault, astro-ph/0402114. L. Wolfenstein, Phys. Rev. [**D 17**]{} (1978) 2369. M. Guzzo, A. Masiero and S. T. Petcov, Phys. Lett. [**B260**]{} (1991) 154. N. Fornengo, M. Maltoni, R. Tomas Bayo, J. W. F. Valle, Phys. Rev. [**D 65**]{}, 013010 (2002). A. Friedland, C. Lunardini and M. Maltoni, Phys. Rev. D [**70**]{}, 111301 (2004); Phys. Rev. D [**72**]{}, 053009 (2005) \[hep-ph/0506143\]. M. M. Guzzo, P. C. de Holanda, O. L. G. Peres, hep-ph/0403134. A. Friedland, C. Lunardini and C. Peña-Garay, Phys. Lett. B [**594**]{}, 347 (2004) \[arXiv:hep-ph/0402266\]. S. Bergmann, Y. Grossman, D. M. Pierce, Phys. Rev. [**D61**]{}, 053005 (2000). S. Bergmann, M. M. Guzzo, P. C. de Holanda, P. I. Krastev, H. Nunokawa, Phys. Rev. [**D 62**]{}, 073001 (2000). Z. Berezhiani and A. Rossi, Phys. Lett. [**B 535**]{}, 207 (2002). S. Davidson, C. Peña-Garay, N. Rius, A. 
Santamaria, JHEP [**0303**]{} (2003) 011. M. Cirelli, G. Marandella, A. Strumia and F. Vissani, hep-ph/0403158. T. D. Lee and C. N. Yang, Phys. Rev. [**104**]{}, 254 (1956), A. Salam, Nuovo Cim. [**5**]{}, 299 (1957), V. Kobzarev, L. Okun, and I. Pomeranchuk, Sov. J. Nucl. Phys. [**3**]{}, 837 (1966). L. B. Okun, Sov. Phys. JETP [**52**]{}, 351 (1980), S. I. Blinnikov and M. Yu. Khlopov, Sov. Astron. Jour. [**60**]{}, 632 (1983), B. Holdom, Phys. Lett. [**B166**]{}, 196 (1985), S. L. Glashow, Phys. Lett. [**B167**]{}, 35 (1986), E. D. Carlson and S. L. Glashow, Phys. Lett. [**B193**]{}, 168 (1987), M. Yu. Khlopov [*et al.*]{}, Sov. Astron. Jour. [**68**]{}, 42 (1991), E. Kolb, D. Seckel and M. Turner, Nature [**314**]{}, 415 (1985), Z. K. Silagadze, Mod. Phys. Lett. A [**14**]{}, 2321 (1999) and Acta Phys. Polon. B [**32**]{}, 99 (2001). R. Foot, H. Lew and R. R. Volkas, Phys. Lett. [**B271**]{}, 67 (1991) and Mod. Phys. Lett. A [**7**]{}, 2567 (1992), R. Foot, Mod. Phys. Lett. A [**9**]{}, 169 (1994), R. Foot and R. R. Volkas, Phys. Rev. D [**52**]{}, 6595 (1995). Z. G. Berezhiani and R. N. Mohapatra, Phys. Rev. D [**52**]{}, 6607 (1995), Z. G. Berezhiani, A. D. Dolgov and R. N. Mohapatra, Phys. Lett. [**B375**]{}, 26 (1996), Z. G. Berezhiani, Acta Phys. Polonica [**B27**]{}, 1503 (1996), R. N. Mohapatra and V. L. Teplitz, Astrophys. J. [**478**]{}, 29 (1997), Z. Berezhiani, D. Comelli and F. L. Villante, Phys. Lett. [**B503**]{}, 362 (2001). K. Benakli and A. Y. Smirnov, Phys. Rev. Lett. [**79**]{}, 4314 (1997); Z. Chacko and R. N. Mohapatra, Phys. Rev. D [**61**]{}, 053002 (2000) \[arXiv:hep-ph/9905388\]; M. Frank, M. Sher and I. Turan, arXiv:hep-ph/0412090; hep-ph/0503084. R. N. Mohapatra and A. Perez-Lorenzana, Nucl. Phys. B [**576**]{}, 466 (2000). V. Berezinsky, M. Narayan and F. Vissani, hep-ph/0401029. G. Cacciapaglia, M. Cirelli and A. Romanino, Phys. Rev. D [**68**]{}, 033013 (2003); for a review, see R. Volkas, Prog. in Part. and Nucl. Phys. [**48**]{}, 161 (2002). S. M. Bilenky, C. Giunti and W. 
Grimus, Eur. Phys. J. C [**1**]{}, 247 (1998). A. Strumia, Phys. Lett. B [**539**]{}, 91 (2002). S. M. Bilenky, S. Pascoli and S. T. Petcov, Phys. Rev. D [**64**]{}, 113003 (2001) \[arXiv:hep-ph/0104218\]; R. N. Mohapatra, S. Nasri and H. B. Yu, Phys. Rev. D [**72**]{}, 033007 (2005). S. W. Allen, R. W. Schmidt and S. L. Bridle, Mon. Not. Roy. Astron. Soc. [**346**]{}, 593 (2003). R. N. Mohapatra and S. Nasri, hep-ph/0407194. V. Barger, J. P. Kneller, P. Langacker, D. Marfatia and G. Steigman, Phys. Lett. B [**569**]{}, 123 (2003) \[arXiv:hep-ph/0306061\]. G. Gelmini, S. Palomares-Ruiz and S. Pascoli, Phys. Rev. Lett. [**93**]{}, 081302 (2004) \[arXiv:astro-ph/0403323\]. V. S. Berezinsky and A. Vilenkin, Phys. Rev. D [**62**]{}, 083512 (2000). M. Maltoni, T. Schwetz, M. A. Tortola and J. W. F. Valle, hep-ph/0305312, hep-ph/0405172. M. C. Gonzalez-Garcia, M. Maltoni, C. Pena-Garay, Phys. Rev. [**D64**]{}, 093001 (2001); M. Maltoni, T. Schwetz, M. Tórtola and J. W. F. Valle, Nucl. Phys. [**B643**]{}, 321 (2002); H. Päs, L. Song, T. Weiler, Phys. Rev. [**D67**]{}, 073019 (2003). A. Donini, M. Lusignoli, D. Meloni, Nucl. Phys. [**B624**]{}, 405 (2002). S. Pakvasa, P. Roy, Phys. Lett. [**B535**]{}, 181 (2002). R. E. Shrock, Phys. Lett. [**B96**]{}, 159 (1980); Phys. Rev. [**D24**]{}, 1232 (1981). R. E. Shrock, Phys. Rev. [**D24**]{}, 1275 (1981); Phys. Lett. [**B112**]{}, 382 (1982). R. Abela et al., Phys. Lett. [**B105**]{}, 263 (1981); F. Calaprice et al., Phys. Lett. [**B106**]{}, 175 (1981); R. Minehart et al., Phys. Rev. Lett. [**52**]{}, 804 (1984); M. Daum et al., Phys. Rev. [**D36**]{}, 2624 (1987). Y. Asano et al., Phys. Lett. [**B104**]{}, 84 (1981); R. Hayano et al., Phys. Rev. Lett. [**49**]{}, 1305 (1982). D. Bryman et al., Phys. Rev. Lett. [**50**]{}, 1546 (1983); G. Azuelos et al., Phys. Rev. Lett. [**56**]{}, 2241 (1986); N. De Leener-Rosier et al., Phys. Rev. [**D43**]{}, 3611 (1991); D. Britton et al., Phys. Rev. [**D46**]{}, R885 (1992). T. 
Yamazaki, in the Neutrino-84 Conference. B. Armbruster et al., Phys. Lett. [**B348**]{}, 19 (1995). D. Bryman and T. Numao, Phys. Rev. [**D53**]{}, 558 (1996); J. Formaggio et al., Phys. Rev. Lett. [**84**]{}, 443 (2000); M. Daum et al., Phys. Rev. Lett. [**85**]{}, 1515 (2000); P. Astier, Phys. Lett. [**B527**]{}, 23 (2002). D. Britton et al., Phys. Rev. Lett. [**68**]{}, 3000 (1992). M. Nakagawa, H. Okonogi, S. Sakata, and A. Toyoda, Prog. Theor. Phys. [**30**]{}, 727 (1963). B. McKellar, Phys. Lett. [**B97**]{}, 93 (1980). I. Kobzarev, B. Martemyanov, L. Okun, and M. Schepkin, Yad. Fiz. [**32**]{}, 1590 (1980) \[Sov. J. Nucl. Phys. [**32**]{}, 823 (1980)\]. A. De Rujula, Nucl. Phys. [**B188**]{}, 414 (1981). J. Deutsch, M. Lebrun, and R. Prieels, Phys. Rev. [**D27**]{}, 1644 (1983). J. H. Missimer, R. N. Mohapatra, and N. C. Mukhopadhyay, Phys. Rev. D [**50**]{}, 2067 (1994). L. Littenberg and R. Shrock, Phys. Rev. Lett. [**68**]{}, 443 (1992). R. Appel et al., Phys. Rev. Lett. [**85**]{}, 2877 (2000). L. Littenberg and R. Shrock, Phys. Lett. [**B491**]{}, 285-290 (2000). C. Dib, V. Gribanov, S. Kovalenko and M. Schmidt, Phys. Lett. B [**493**]{}, 82 (2000). A. Belyaev et al., hep-ph/0107046. Two recent reviews on rare $K$ decays include L. Littenberg, hep-ex/0201026 and J. Rosner and B. Winstein, eds., [*Kaon Physics*]{} (University of Chicago Press, Chicago, 2001); A. Weir et al., Phys. Rev. D [**41**]{}, 1384 (1990). P. Avery et al., Phys. Lett. B [**223**]{}, 470 (1989). L. Littenberg and R. Shrock, Phys. Rev. [**D46**]{}, R892 (1992). D. Rajaram et al., Phys. Rev. Lett. [**94**]{}, 181801 (2005). R. E. Shrock, Phys. Lett. B [**96**]{}, 159 (1980); Phys. Rev. [**D24**]{}, 1232, 1275 (1981); Phys. Lett. B [**112**]{}, 382 (1982). M. Gronau, Phys. Rev. [**D28**]{}, 2762 (1983). M. Gronau, C. N. Leung, and J. L. Rosner, Phys. Rev. D [**29**]{}, 2539 (1984). Some early accelerator experiments include F. Bergsma et al., Phys. Lett. [**B128**]{}, 361 (1983); A. 
Cooper-Sarkar et al., Phys. Lett. [**B160**]{}, 267 (1985); J. Dorenbos et al., Phys. Lett. [**B166**]{}, 473 (1986); G. Bernardi et al., Phys. Lett. [**B166**]{}, 479 (1986); L. Oberauer et al., Phys. Lett. [**B198**]{}, 113 (1987); G. Bernardi et al., Phys. Lett. [**B203**]{}, 332 (1988). A more recent search is A. Vaitaitis et al., Phys. Rev. Lett. [**83**]{}, 4943 (1999). See [@pdg] for further references. A. Kusenko, S. Pascoli and D. Semikoz, arXiv:hep-ph/0405198. S. Dodelson, L. M. Widrow, Phys. Rev. Lett. [**72**]{}, 17 (1994). X. Shi, G. Fuller, Phys. Rev. Lett. [**82**]{}, 2832 (1999); K. Abazajian, G. Fuller, M. Patel, Phys. Rev. [**D64**]{}, 023501 (2001). A. D. Dolgov, S. H. Hansen, G. Raffelt, D. V. Semikoz, Nucl. Phys. [**B590**]{}, 562 (2000). A. Kusenko, G. Segre, Phys. Lett. [**B396**]{}, 197 (1997); G. Fuller, A. Kusenko, I. Mocioiu, S. Pascoli, Phys. Rev. [**D68**]{}, 103002 (2003). Y. Grossman and S. Rakshit, Phys. Rev. D [**69**]{}, 093002 (2004), hep-ph/0311310. Y. Grossman and H. E. Haber, Phys. Rev. D [**59**]{}, 093008 (1999) \[hep-ph/9810536\]. A. S. Joshipura and M. Nowakowski, Phys. Rev. D [**51**]{}, 2421 (1995) \[hep-ph/9408224\]. S. Davidson and M. Losada, JHEP [**0005**]{}, 021 (2000) \[hep-ph/0005080\]. S. Davidson and M. Losada, Phys. Rev. D [**65**]{}, 075025 (2002) \[hep-ph/0010325\]. R. N. Mohapatra, Prog. in Part. and Nucl. Phys. [**31**]{}, 39 (1993). T. Banks, Y. Grossman, E. Nardi and Y. Nir, Phys. Rev. D [**52**]{}, 5319 (1995) \[hep-ph/9505248\]; M. Nowakowski and A. Pilaftsis, Nucl. Phys. B [**461**]{}, 19 (1996) \[hep-ph/9508271\]. N. Arkani-Hamed, L. J. Hall, H. Murayama, D. R. Smith and N. Weiner, hep-ph/0007001. F. Borzumati, K. Hamaguchi, Y. Nomura and T. Yanagida, hep-ph/0012118. S. Abel, A. Dedes and K. Tamvakis, hep-ph/0402287. J. March-Russell and S. West, hep-ph/0403067. Y. Grossman and H. E. Haber, Phys. Rev. Lett. [**78**]{}, 3438 (1997) \[hep-ph/9702421\]. N. V. Krasnikov, Phys. Lett. B [**388**]{}, 783 (1996) \[hep-ph/9511464\]. 
N. Arkani-Hamed, H. C. Cheng, J. L. Feng and L. J. Hall, Phys. Rev. Lett. [**77**]{}, 1937 (1996) \[hep-ph/9603431\]. N. Arkani-Hamed, J. L. Feng, L. J. Hall and H. C. Cheng, Nucl. Phys. B [**505**]{}, 3 (1997) \[hep-ph/9704205\]. M. Dine, Y. Grossman and S. Thomas, eConf [**C010630**]{}, P332 (2001) \[Int. J. Mod. Phys. A [**18**]{}, 2757 (2003)\] \[hep-ph/0111154\]. For a recent discussion, see P. Langacker, hep-ph/0308033. M. Dine, V. Kaplunovsky, M. L. Mangano, C. Nappi and N. Seiberg, Nucl. Phys. B [**259**]{}, 549 (1985). J. D. Breit, B. A. Ovrut and G. C. Segre, Phys. Lett. B [**158**]{}, 33 (1985). E. Witten, Nucl. Phys. B [**268**]{}, 79 (1986). R. N. Mohapatra, see Ref. [@valle2]. R. N. Mohapatra and J. W. F. Valle, see Ref. [@valle2]. S. Nandi and U. Sarkar, Phys. Rev. Lett. [**56**]{}, 564 (1986). G. Cleaver, M. Cvetic, J. R. Espinosa, L. L. Everett and P. Langacker, Phys. Rev. D [**57**]{}, 2701 (1998) \[hep-ph/9705391\]. M. Cvetic and P. Langacker, Phys. Rev. D [**46**]{}, 2759 (1992) \[hep-th/9205029\]. D. Mochinaga, Phys. Lett. B [**312**]{}, 405 (1993). A. E. Faraggi and E. Halyo, Phys. Lett. B [**307**]{}, 311 (1993) \[hep-th/9303060\]. R. Blumenhagen, M. Cvetic, P. Langacker and G. Shiu, arXiv:hep-th/0502005. L. E. Ibanez, F. Marchesano and R. Rabadan, JHEP [**0111**]{}, 002 (2001). I. Antoniadis, E. Kiritsis, J. Rizos and T. N. Tomaras, Nucl. Phys. B [**660**]{}, 81 (2003). A. Font, L. E. Ibanez, F. Quevedo and A. Sierra, Nucl. Phys. B [**331**]{}, 421 (1990). C. Coriano and A. E. Faraggi, Phys. Lett. B [**581**]{}, 99 (2004). J. R. Ellis, G. K. Leontaris, S. Lola and D. V. Nanopoulos, Phys. Lett. B [**425**]{}, 86 (1998). J. R. Ellis, G. K. Leontaris, S. Lola and D. V. Nanopoulos, Eur. Phys. J. C [**9**]{}, 389 (1999). J. E. Kim, Phys. Lett. B [**591**]{}, 119 (2004) \[arXiv:hep-ph/0403196\]. T. Kobayashi, S. Raby and R. J. Zhang, Nucl. Phys. B [**704**]{}, 3 (2005) \[arXiv:hep-ph/0409098\]. J. Giedt, G. L. Kane, P. Langacker and B. D. 
Nelson, Phys. Rev. D [**71**]{}, 115013 (2005) \[arXiv:hep-th/0502032\]. S. Antusch, O. J. Eyton-Williams and S. F. King, JHEP [**0508**]{} (2005) 103 \[arXiv:hep-ph/0505140\]. P. Langacker and B. D. Nelson, Phys. Rev. D [**72**]{}, 053013 (2005) \[arXiv:hep-ph/0507063\]. M. Cvetic and P. Langacker, Phys. Rev. D [**54**]{}, 3570 (1996) \[hep-ph/9511378\]. C. T. Hill and E. H. Simmons, Phys. Rept.  [**381**]{}, 235 (2003) \[Erratum-ibid.  [**390**]{}, 553 (2004)\] \[hep-ph/0203079\]. M. Cvetic and P. Langacker, in [*Perspectives in Supersymmetry*]{} (World Scientific, Singapore, 1998), ed. G. L. Kane, p 312, hep-ph/9707451. J. Erler, P. Langacker and T. j. Li, Phys. Rev. D [**66**]{}, 015002 (2002) \[hep-ph/0205001\]. For recent reviews, see P. Langacker, Int. J. Mod. Phys. A [**18**]{}, 4015 (2003) \[hep-ph/0304053\], and hep-ph/0402203. J. Kang, P. Langacker, and T. Li, Phys. Rev. D [**71**]{}, 015012 (2005) \[arXiv:hep-ph/0411404\]. K. A. Olive, D. N. Schramm and G. Steigman, Nucl. Phys. B [**180**]{}, 497 (1981). V. Barger, P. Langacker and H. S. Lee, Phys. Rev. D [**67**]{}, 075009 (2003) \[hep-ph/0302066\]. S. Weinberg, Phys. Rev. D [**19**]{}, 1277 (1979); L. Susskind, [*ibid.*]{} D [**20**]{}, 2619 (1979). Earlier work on dynamical symmetry breaking of gauge symmetries includes R. Jackiw and K. Johnson, Phys. Rev. D [**8**]{}, 2386 (1973); J. Cornwall and R. Norton, [*ibid.*]{} D [**8**]{}, 3338 (1973); M. Weinstein, Phys. Rev. D [**7**]{}, 1854 (1973); S. Weinberg, [*ibid.*]{} D [**13**]{}, 974 (1976). S. Dimopoulos, L. Susskind, Nucl. Phys. [**B155**]{}, 23, (1979); E. Eichten, K. Lane, Phys. Lett. B [**90**]{}, 125 (1980). Some recent reviews are K. Lane, hep-ph/0202255; C. Hill and E. Simmons, Phys. Rep. [**381**]{}, 235 (2003); R. S. Chivukula, M. Narain, J. Womersley, in Ref. [@pdg]. B. Holdom, Phys. Lett. B [**150**]{} (1985) 301; K Yamawaki, M. Bando, K. Matumoto, Phys. Rev. Lett. [**56**]{} (1986) 1335; T. Appelquist, D. Karabali, L.C.R. 
Wijewardhana, Phys. Rev. Lett. [**57**]{} (1986) 957; T. Appelquist and L. C. R. Wijewardhana, Phys. Rev. D [**35**]{} (1987) 774. T. Appelquist, J. Terning, Phys. Rev. D [**50**]{}, 2116 (1994). T. Appelquist and R. Shrock, in [*Neutrino Factories and Superbeams, NuFact03*]{}, A.I.P. Conf. Proc. 721 (A.I.P., New York, 2004), p. 261. T. Appelquist, M. Piai, and R. Shrock, Phys. Rev. D [**69**]{}, 015002 (2004); T. Appelquist, N. Christensen, M. Piai, and R. Shrock, Phys. Rev. D [**70**]{}, 093010 (2004). J. C. Pati and A. Salam, Phys. Rev. D [**10**]{}, 275 (1974). E. Akhmedov, M. Lindner, E. Schnapka, J. Valle, Phys. Rev. D [**53**]{}, 2752 (1996). S. P. Martin, Phys. Rev. D [**44**]{}, 2892 (1991);\ S. Antusch, J. Kersten, M. Lindner and M. Ratz, Nucl. Phys. B [**658**]{}, 203 (2003) \[arXiv:hep-ph/0211385\]. T. Appelquist, N. Christensen, M. Piai, and R. Shrock, Phys. Rev. D [**70**]{}, 093010 (2004). T. Kaluza, Sitzungsber. d. Preuss. Akad. d. Wiss. Berlin (1921) 966; O. Klein, Z. Phys. [**37**]{} (1926) 895. I. Antoniadis, Phys. Lett. [**B246**]{} (1990) 377; J. D. Lykken, Phys. Rev. [**D54**]{} (1996) 3693. E. Witten, Nucl. Phys. [**B471**]{} (1996) 135; P. Ho$\check{{\rm r}}$ava and E. Witten, Nucl. Phys. [**B460**]{} (1996) 506; Nucl. Phys. [**B475**]{} (1996) 94. N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. [**B429**]{} (1998) 263; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. [**B436**]{} (1998) 257. K. R. Dienes, E. Dudas and T. Gherghetta, Phys. Lett. (1998) 55; Nucl. Phys. [**B537**]{} (1999) 47. G. F. Giudice, R. Rattazzi and J. D. Wells, Nucl. Phys. [**B544**]{} (1999) 3; T. Han, J. D. Lykken and R.-J. Zhang, Phys. Rev. [**D59**]{} (1999) 105006; J. L. Hewett, hep-ph/9811356; E. A. Mirabelli, M. Perelstein and M. E. Peskin, Phys. Rev. Lett. (1999) 2236; S. Nussinov and R. Shrock, Phys. Rev. [**D59**]{} (1999) 105002; T. G. Rizzo, Phys. Rev. [**D59**]{} (1999) 115010; P. Nath and M. Yamaguchi, Phys. Rev. 
[**D60**]{} (1999) 116006; A. Mück, A. Pilaftsis and R. Rückl, Phys. Rev. D [**65**]{} (2002) 085037; hep-ph/0312186. K.R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. (1999) 25. N. Arkani-Hamed, S. Dimopoulos, G. Dvali and J. March-Russell, hep-ph/9811448. G. Dvali and A. Yu. Smirnov, Nucl. Phys. [**B563**]{} (1999) 63. A. Pilaftsis, Phys. Rev. [**D60**]{} (1999) 105023. J. Scherk and J.H. Schwarz, Phys. Lett. [**B82**]{} (1979) 60; Nucl. Phys. [**B153**]{} (1979) 61; P. Fayet, Phys. Lett. (1985) 121; Nucl. Phys. [**B263**]{} (1986) 649. G. Bhattacharyya, H.-V. Klapdor–Kleingrothaus, H. Päs and A. Pilaftsis, Phys. Rev. [**D67**]{} (2003) 113001. R.N. Mohapatra, S. Nandi and A. Perez-Lorenzana, Phys. Lett. [**B466**]{} (1999) 115. R.N. Mohapatra and A. Perez-Lorenzana, Nucl. Phys. [**B576**]{} (2000) 466. R. Barbieri, P. Creminelli and A. Strumia, Nucl. Phys. (2000) 28; A. Ioannisian and J.W.F. Valle, Phys. Rev. (2001) 073002; D.O. Caldwell, R.N. Mohapatra and S.J. Yellin, Phys. Rev. [**D64**]{} (2001) 073001; K.R. Dienes and I. Sarcevic, Phys. Lett. [**B500**]{} (2001) 133; A. de Gouvêa, G.F. Giudice, A. Strumia and K. Tobe, Nucl. Phys. [**B623**]{} (2002) 395. A. Lukas, P. Ramond, A. Romanino and G.G. Ross, JHEP [**0104**]{} (2001) 010. H. Davoudiasl, P. Langacker and M. Perelstein, Phys. Rev. [**D65**]{} (2002) 105015. H. S. Goh and R. N. Mohapatra, Phys. Rev. D [**65**]{}, 085018 (2002) \[hep-ph/0110161\]. G. Mclaughlin and J. N. Ng, Phys. Rev. [**D 63**]{}, 053002 (2001). H. Yu, S.-P. Ng and R. N. Mohapatra, Phys.Rev.[**D70**]{} 057301 (2004). D.O. Caldwell, R.N. Mohapatra and S.J. Yellin, Phys. Rev. [**D64**]{} (2001) 073001. A. Ioannisian and A. Pilaftsis, Phys. Rev. [**D62**]{} (2000) 066001. A.E. Faraggi and M. Pospelov, Phys. Lett. [**B458**]{} (1999) 237; K. Agashe and G. H. Wu, Phys. Lett. [**B498**]{} (2001) 230; B. He, T.P. Cheng and L.F. Li, Phys. Lett. [**B553**]{} (2003) 277. Q.-H. Cao, S. Gopalakrishna and C.P. Yuan, hep-ph/0312339; G. 
Moreau, hep-ph/0407177; J. L. Hewett, P. Roy and S. Roy, Phys. Rev. D [**70**]{}, 051903 (2004). H. V. Klapdor-Kleingrothaus and U. Sarkar, Mod. Phys. Lett. [**A16**]{} (2001) 2469. R. N. Mohapatra, A. Perez-Lorenzana and C. A. de S. Pires, Phys. Lett. [**B491**]{} (2000) 143. S. J. Huber and Q. Shafi, Phys. Lett. [**B544**]{} (2002) 295. T. D. Lee and C. N. Yang, Phys. Rev. [**98**]{}, 1501 (1955). L. B. Okun, Sov. J. Nucl. Phys. [**10**]{}, 206 (1969); for a review see A. D. Dolgov, Phys. Rept. [**320**]{}, 1 (1999). E. G. Adelberger, B. R. Heckel and A. E. Nelson, hep-ph/0307284. A. S. Joshipura and S. Mohanty, Phys. Lett. [**B584**]{}, 103 (2004) \[hep-ph/0310210\]. J. A. Grifols and E. Masso, Phys. Lett. [**B 579**]{}, 123 (2004). R. Foot, Mod. Phys. Lett. [**A 6**]{}, 527 (1991); X.-G. He, G. C. Joshi, H. Lew and R. R. Volkas, Phys. Rev. [**D44**]{}, 2118 (1991); R. Foot et al., Phys. Rev. [**D 50**]{}, 4571 (1994). H. W. Zaglauer and K. H. Schwarzer, Z. Phys. [**C40**]{}, 273 (1988); A. Bueno, M. Campanelli and A. Rubbia, Nucl. Phys. [**B589**]{}, 577 (2000). K. Hagiwara [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**66**]{}, 010001 (2002). H. Murayama, hep-ph/0307127. For more recent (and more stringent) bounds, see A. de Gouvêa and C. Peña-Garay, hep-ph/0406301. Equivalent bounds in the atmospheric sector can be found in A. de Gouvêa, Nucl. Phys. Proc. Suppl. [**143**]{}, 167 (2005) \[arXiv:hep-ph/0408246\]; H. Minakata, H. Nunokawa, W. J. C. Teves and R. Zukanovich Funchal, Phys. Rev. D [**71**]{}, 013005 (2005) \[arXiv:hep-ph/0407326\]. H. Murayama and T. Yanagida, Phys. Lett. B [**520**]{}, 263 (2001) \[hep-ph/0010178\]. G. Barenboim, L. Borissov, J. Lykken and A. Y. Smirnov, JHEP [**0210**]{}, 001 (2002) \[hep-ph/0108199\]. A. Strumia, Phys. Lett. B [**539**]{}, 91 (2002) \[hep-ph/0201134\]. G. Barenboim, L. Borissov and J. Lykken, Phys. Lett. B [**534**]{}, 106 (2002) \[hep-ph/0201080\]. G. Barenboim, L. Borissov and J. Lykken, hep-ph/0212116. M. C. Gonzalez-Garcia, M. Maltoni and T. 
Schwetz, Phys. Rev. D [**68**]{}, 053007 (2003) \[hep-ph/0306226\]. V. Barger, D. Marfatia and K. Whisnant, Phys. Lett. B [**576**]{}, 303 (2003) \[hep-ph/0308299\]. O. W. Greenberg, Phys. Rev. Lett.  [**89**]{}, 231602 (2002) \[hep-ph/0201258\]. A. de Gouvêa, Phys. Rev. D [**66**]{}, 076005 (2002) \[hep-ph/0204077\]. A. V. Kostelecky and M. Mewes, hep-ph/0308300; S. Choubey and S. F. King, Phys. Lett. B [**586**]{}, 353 (2004) \[hep-ph/0311326\]. G. Barenboim and N. E. Mavromatos, hep-ph/0404014. S. Choubey and S. F. King, Phys. Lett. B [**586**]{} (2004) 353 \[arXiv:hep-ph/0311326\]. J. Ellis, J. Hagelin, D.V. Nanopoulos and M. Srednicki, Nucl. Phys. [**B241**]{}, 381 (1984). F. Benatti and R. Floreanini, Phys. Lett. B [**468**]{} (1999) 287 \[hep-ph/9910508\]. E. Lisi, A. Marrone and D. Montanino, Phys. Rev. Lett.  [**85**]{}, 1166 (2000) \[hep-ph/0002053\]. \[NuTeV Collaboration\] G. P. Zeller [*et al.*]{}, Phys. Rev. Lett.  [**88**]{}, 091802 (2002) \[hep-ex/0110059\]; Phys. Rev. D [**65**]{}, 111103 (2002) \[hep-ex/0203004\]; K. S. McFarland [*et al.*]{}, hep-ex/0205080; G. P. Zeller [*et al.*]{}, hep-ex/0207052. C. H. Llewellyn Smith, Nucl. Phys. B [**228**]{}, 205 (1983). M. S. Chanowitz, Phys. Rev. D [**66**]{}, 073002 (2002) \[hep-ph/0207123\]. The LEP Collaborations, the LEP Electroweak Working Group, and the SLD Heavy Flavor and Electroweak Groups, CERN-EP/2003-091, hep-ex/0312023. S. Davidson, S. Forte, P. Gambino, N. Rius and A. Strumia, JHEP [**0202**]{}, 037 (2002) \[hep-ph/0112302\]; S. Davidson, hep-ph/0209316; P. Gambino, hep-ph/0211009. B. A. Dobrescu and R. K. Ellis, hep-ph/0310154. K. P. O. Diener, S. Dittmaier and W. Hollik, hep-ph/0310364. P. Gambino, hep-ph/0311257. W. Loinaz and T. Takeuchi, Phys. Rev. D [**60**]{}, 115008 (1999) \[hep-ph/9903362\]. E. Ma, D. P. Roy and S. Roy, Phys. Lett. B [**525**]{}, 101 (2002) \[hep-ph/0110146\]; E. Ma and D. P. Roy, Phys. Rev. D [**65**]{}, 075021 (2002) \[hep-ph/0111385\]; Nucl. Phys. 
B [**644**]{}, 290 (2002) \[hep-ph/0206150\]. M. Gronau, C. N. Leung and J. L. Rosner, Phys. Rev. D [**29**]{}, 2539 (1984); J. Bernabeu, A. Santamaria, J. Vidal, A. Mendez and J. W. Valle, Phys. Lett. B [**187**]{}, 303 (1987); K. S. Babu, J. C. Pati and X. Zhang, Phys. Rev. D [**46**]{}, 2190 (1992); W. J. Marciano, Phys. Rev. D [**60**]{}, 093006 (1999) \[hep-ph/9903451\]; A. de Gouvêa, G. F. Giudice, A. Strumia and K. Tobe, Nucl. Phys. B [**623**]{}, 395 (2002) \[hep-ph/0107156\]; K. S. Babu and J. C. Pati, hep-ph/0203029. L. N. Chang, D. Ng and J. N. Ng, Phys. Rev. D [**50**]{}, 4589 (1994) \[hep-ph/9402259\]; W. Loinaz, N. Okamura, T. Takeuchi and L. C. R. Wijewardhana, Phys. Rev. D [**67**]{}, 073012 (2003) \[hep-ph/0210193\]; T. Takeuchi, hep-ph/0209109; T. Takeuchi, W. Loinaz, N. Okamura and L. C. R. Wijewardhana, hep-ph/0304203. W. Loinaz, N. Okamura, S. Rayyan, T. Takeuchi and L. C. R. Wijewardhana, Phys. Rev. D [**68**]{}, 073001 (2003) \[hep-ph/0304004\]. M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. [**65**]{}, 964 (1990); Phys. Rev. D [**46**]{}, 381 (1992), J. L. Hewett, T. Takeuchi and S. Thomas, hep-ph/9603391. S. L. Glashow, hep-ph/0301250. B. W. Lee, S. Pakvasa, R. E. Shrock and H. Sugawara, Phys. Rev. Lett.  [**38**]{}, 937 (1977). S. Ahmad [*et al.*]{}, Phys. Rev. D [**38**]{} (1988) 2102; F. Simkovic, V. E. Lyubovitskij, T. Gutsche, A. Faessler and S. Kovalenko, Phys. Lett. B [**544**]{}, 121 (2002) \[hep-ph/0112277\]; R. Kitano, M. Koike and Y. Okada, Phys. Rev. D [**66**]{}, 096002 (2002) \[hep-ph/0203110\]; R. Kitano, M. Koike, S. Komine and Y. Okada, Phys. Lett. B [**575**]{}, 300 (2003) \[hep-ph/0308021\]. L. Willmann [*et al.*]{}, Phys. Rev. Lett.  [**82**]{}, 49 (1999) \[hep-ex/9807011\]. A. Halprin, Phys. Rev. Lett.  [**48**]{} (1982) 1313; T. E. Clark and S. T. Love, Mod. Phys. Lett. A [**19**]{}, 297 (2004) \[hep-ph/0307264\]. W. Loinaz, N. Okamura, S. Rayyan, T. Takeuchi and L. C. R. Wijewardhana, hep-ph/0403306. S. 
Ritt \[MUEGAMMA Collaboration\], Nucl. Instrum. Meth. A [**494**]{} (2002) 520. See also the MEG Collaboration website at `http://meg.web.psi.ch/`. J. L. Popp \[MECO Collaboration\], Nucl. Instrum. Meth. A [**472**]{}, 354 (2000) \[hep-ex/0101017\]; M. Hebert \[MECO Collaboration\], Nucl. Phys. A [**721**]{}, 461 (2003). E. Sichtermann \[g-2 Collaboration\], eConf [**C030626**]{}, SABT03 (2003) \[hep-ex/0309008\]. E. Ma and D. P. Roy, Phys. Rev. D [**65**]{}, 075021 (2002) \[hep-ph/0111385\]; K. S. Babu and J. C. Pati, Phys. Rev. D [**68**]{}, 035004 (2003) \[hep-ph/0207289\]; T. Fukuyama, T. Kikuchi and N. Okada, Phys. Rev. D [**68**]{}, 033012 (2003) \[hep-ph/0304190\]. P. Langacker, AIP Conf. Proc.  [**698**]{}, 1 (2004) \[hep-ph/0308145\]. H. Abele [*et al.*]{}, Phys. Rev. Lett.  [**88**]{}, 211801 (2002) \[hep-ex/0206058\]; Eur. Phys. J. C [**33**]{}, 1 (2004) \[hep-ph/0312150\]. B. Tipton [*et al.*]{}, AIP Conf. Proc.  [**539**]{}, 286 (2000). [^1]: On leave of absence from Department of Physics, University of California, Berkeley, CA 94720. [^2]: The bulk of this report was finalized at the end of January 2005. We have not included the (sometimes substantial) progress that has been obtained in several areas of neutrino physics since then. [^3]: In the Dirac case the “right-handed” degrees of freedom are decoupled because of the smallness of the corresponding Yukawa couplings. However, for very small temperatures, i.e. long after BBN, it is no longer appropriate to describe neutrinos in terms of chiral states. This means that strictly speaking there is a regeneration, but this does not affect BBN (see, e.g., [@Dolgov:1994vu]). [^4]: Alternatively, the relation $\theta_{\odot} + \lambda = \pi/4$, nowadays known as quark–lepton–complementarity, can also be interpreted as such a connection [@cablampi4]. [^5]: Recently, encouraging results in what regards the problem of the calculation of the nuclear matrix elements have been obtained [@FesSimVogel03]. 
[^6]: A very pessimistic conclusion about the prospects to establish CP violation in the lepton sector due to Majorana CP violating phases from a measurement of $\langle m\rangle_{eff}$ and, e.g., of $m_0$, was reached in [@noMaj]. [^7]: We assume that the fields of massive Majorana neutrinos satisfy Majorana conditions which do not contain phase factors. [^8]: It can also happen in quite different theories, such as extended technicolor [@dml; @qdml]. [^9]: In the so-called ‘strong washout’ regime, $T_{RH}$ can be an order of magnitude smaller than $M_1$ [@Buchmuller:2004nz]. [^10]: This arises in the Casas-Ibarra seesaw parametrization. [^11]: An obvious exception is electroweak baryogenesis, where $B-L=0$ while $B=L\neq 0$ after the electroweak phase transition. [^12]: In contrast, the Yukawa coupling of the right-handed electron $e_R$ is large enough to equilibrate the $e_R$s before sphaleronic processes switch off [@Campbell:1992jd]. [^13]: Here we shall assume $M_{\nu}=M_{\nu}^{\mathrm{I}}$. We shall later comment also on the type II seesaw mechanism. [^14]: As of the end of summer, 2005.
--- abstract: 'We theoretically investigate the flow of the atomic excitations in a driven chiral-coupled atomic chain with nonreciprocal decay channels. This one-dimensional system allows infinite-range dipole-dipole interactions, and enables directional guided modes of scattered light. Under a weakly driven condition, we are able to simulate the transport properties of atomic excitations between the left and right parts of the chain. In the steady states, the transport is highly dependent on the equidistant positions of the ordered array, the excitation field detunings, and the directionality of such a chiral-coupled system. We discuss the parameter regimes which are resilient or sensitive to position fluctuations, and provide insights into precise and genuine state preparations. Furthermore, we study the effect of position fluctuations on the transport of excitations. Our results can shed light on deterministic state preparations and pave the way for many-body state manipulations in the driven and dissipative chiral-coupled systems.' address: 'Institute of Physics, Academia Sinica, Taipei 11529, Taiwan' author: - 'H. H. Jen' title: 'Selective transport of atomic excitations in a driven chiral-coupled atomic chain' --- Introduction ============ Manipulating light-matter interactions by precisely positioning quantum emitters has made progress in versatile platforms, including the photonic crystal waveguide [@Goban2015], optical microtraps [@Barredo2016; @Endres2016], and diamond nanophotonics systems [@Sipahigil2016]. This promises to tailor the properties of the quantum interface from the bottom up, which is in stark contrast to the atomic ensemble of many randomly distributed emitters [@Hammerer2010]. Recently, the one-dimensional (1D) atom-nanophotonic waveguide system [@Kien2005; @Kien2008; @Tudela2013; @Kumlin2018; @Chang2018] has presented another potential paradigm to engineer light-matter interactions.
This 1D coupled system features strong and infinite-range couplings in the resonant dipole-dipole interactions (RDDI) [@Stephen1964; @Lehmberg1970], which are difficult to reach in a free-space atomic system. Only recently was superradiance [@Dicke1954; @Gross1982] observed in two atomic clouds above the nanofibers [@Solano2017], demonstrating the infinite-range RDDI in the 1D atom-fiber coupled system. This system is also proposed to realize mesoscopic entangled states [@Tudela2013] and allow universal atomic dynamics [@Kumlin2018]. In addition to the advantage of strong coupling in the 1D atom-fiber or atom-waveguide systems, it can further be used to construct a chiral quantum network [@Stannigel2012; @Ramos2014; @Pichler2015; @Lodahl2017], which enables nonreciprocal decay channels and directional emissions. This chiral coupling breaks the time-reversal symmetry that is preserved in conventional light-matter interacting systems in free space, and emerges due to the locking of the transverse spin angular momentum and the light propagation direction [@Bliokh2014; @Bliokh2015]. The chirality can be engineered via either controlling the atomic internal states [@Mitsch2014] or applying external magnetic fields [@Luxmoore2013; @Sollner2015]. This 1D chiral-coupled system can be used as a complementary single-photon device, which is fundamental to the quantum internet [@Kimble2008], and can potentially operate CNOT gates [@Sollner2015], which are essential to quantum computation. In such a dynamical system of a chiral-coupled atomic chain, the steady-state preparations should highly depend on the positions of the atoms, the driven field detunings, and the directionality of the decay channels. However, the interplay or competition between these parameters is less explored. Here we investigate the selective and controllable transport of atomic excitations to locate the parameter regimes resilient to position fluctuations, which are advantageous to precise and genuine state preparations.
The effect of position fluctuations is also discussed, and we show that the system functions more stably when the fluctuations are below $1\%$. Our results demonstrate potentially deterministic state preparations in the driven and dissipative system and hold promise for manipulating many-body spin dynamics [@Hung2016]. ![Schematic driven chiral-coupled atomic chain. (a) A one-dimensional atom-fiber coupled system shows nonreciprocal decay channels with $\gamma_L\neq\gamma_R$ in $\hat z$. Using weak and uniform external fields $\Omega$ along $\hat x$ with a detuning $\delta$, we are able to drive two-level quantum emitters ($|g(e)\rangle$ for ground and excited states respectively) toward some collective atomic states. This scheme utilizes either the ordered atomic chain with equidistant separations or controllable $x_{i,j}$ between the $i$th and $j$th atoms in (b). A displaced atom in red represents a local disorder, which can further manipulate the atomic steady states via adjusting the collective dipole-dipole interactions in this driven chiral-coupled atomic system.[]{data-label="fig1"}](1.eps){width="8.0cm" height="6cm"} Effective chiral-coupled interactions ===================================== For atoms in free space, the spontaneous emission from an excited atom originates from system-reservoir interactions, and decays exponentially with a characteristic time constant. This decay rate represents an intrinsic property of the independent atoms, which can be hugely modified when atoms are close to each other within a transition wavelength. In this regime, strongly-coupled RDDI show up since light can rescatter multiple times before leaving the whole medium.
RDDI in a three-dimensional (3D) reservoir are responsible for the collective phenomena of superradiance [@Dicke1954; @Lehmberg1970; @Gross1982; @Mazets2007; @Jen2012; @Jen2015; @Sutherland2016_2; @Araujo2016; @Roof2016; @Jennewein2016; @Bromley2016; @Zhu2016; @Shahmoon2017] and subradiance [@Scully2015; @Guerin2016; @Facchinetti2016; @Jen2016_SR; @Jen2016_SR2; @Sutherland2016; @Bettles2016; @Jen2017_MP; @Jenkins2017; @Garcia2017; @Plankensteiner2017; @Jen2018_SR1; @Jen2018_SR2] in a dense atomic system. RDDI in a 3D reservoir are in general reciprocal couplings which preserve the time-reversal symmetry, and in the long range they behave asymptotically as the inverse of the mutual atomic separation (see appendix A). By contrast, RDDI in a 1D reservoir show infinite-range couplings in sinusoidal forms [@Tudela2013; @Kumlin2018], and can be further structured [@Scelle2013; @Chen2014; @Ramos2014] to show nonreciprocal decay channels in the atom-fiber or atom-waveguide coupled systems. Here we consider a driven chiral-coupled atomic chain as shown in figure \[fig1\], where the system dynamics can be described by the effective chiral master equation [@Pichler2015], $$\frac{d\rho}{dt}=-\frac{i}{\hbar}[H_S+H_L+H_R,\rho]+\mathcal{L}_L[\rho]+\mathcal{L}_R[\rho].$$ The external light-matter interaction is $$H_S\equiv-\hbar\sum_{\mu=1}^N\Omega\left(\sigma_\mu+\sigma_\mu^\dag\right)-\hbar\sum_{\mu=1}^N\delta_\mu\sigma_\mu^\dag\sigma_\mu,$$ which drives the system of $N$ two-level quantum emitters ($|g(e)\rangle$ for the ground and excited state respectively and $\sigma_\mu^\dag\equiv|e\rangle_\mu\langle g|$, $\sigma_\mu=(\sigma_\mu^\dag)^\dag$) with a uniform Rabi frequency $\Omega$ and spatially dependent detunings $\delta_\mu$. The coherent parts are $$H_L\equiv-\frac{i\hbar\gamma_L}{2}\sum_{\mu<\nu}^N\left(e^{ik|x_\mu-x_\nu|}\sigma_\mu^\dag\sigma_\nu-{\rm h.c.}\right),\quad H_R\equiv-\frac{i\hbar\gamma_R}{2}\sum_{\mu>\nu}^N\left(e^{ik|x_\mu-x_\nu|}\sigma_\mu^\dag\sigma_\nu-{\rm h.c.}\right),$$ which denote the collective energy shifts, and the Lindblad forms of $$\mathcal{L}_L[\rho]\equiv-\frac{\gamma_L}{2}\sum_{\mu,\nu}^N e^{-ik(x_\mu-x_\nu)}\left(\sigma_\mu^\dag\sigma_\nu\rho+\rho\sigma_\mu^\dag\sigma_\nu-2\sigma_\nu\rho\sigma_\mu^\dag\right),$$ $$\mathcal{L}_R[\rho]\equiv-\frac{\gamma_R}{2}\sum_{\mu,\nu}^N e^{ik(x_\mu-x_\nu)}\left(\sigma_\mu^\dag\sigma_\nu\rho+\rho\sigma_\mu^\dag\sigma_\nu-2\sigma_\nu\rho\sigma_\mu^\dag\right),$$ characterize the collective decay behaviors. $k=2\pi/\lambda$ is the wave vector for the transition wavelength $\lambda$, and the subscripts $L$ and $R$ respectively label the left- and right-propagating decay channels.
For a 1D atomic chain, we can order the atoms as $x_1<x_2<...<x_{N-1}<x_N$ for convenience. The above Lindblad forms do not include the non-guided decay or other non-radiative losses of the atoms, which would compromise the light detection efficiency via fibers or the fidelity of state preparations. Next we further assume a weakly driven system of $N$ atoms, such that the Hilbert space can be confined to the ground state $|g\rangle^{\otimes N}$ and the singly excited states $|\psi_\mu\rangle=\sigma_\mu^\dag|g\rangle^{\otimes N}$. This is similar to the coherent dipole model with weak laser excitations [@Sutherland2016_2] or the Green’s function approach in the low saturation regime [@Tudela2017], where the ground state population is much larger than that of the excited state, that is $\langle\sigma_\mu\sigma_\mu^\dag\rangle\approx 1\gg\langle\sigma^\dag_\mu\sigma_\mu\rangle$. In this limit, the state of the system can be expressed as $$|\Psi(t)\rangle=|g\rangle^{\otimes N}+\sum_{\mu=1}^N A_\mu(t)|\psi_\mu\rangle,$$ where the probability amplitude $A_\mu(t)$ can be obtained by $$\dot A_\mu(t)=i\Omega+\sum_{\nu=1}^N V_{\mu,\nu}A_\nu(t),$$ and the chiral-coupled interaction $V$ reads $$V_{\mu,\nu}=\begin{cases} -\gamma_R\, e^{ik|x_{\mu,\nu}|}, & \mu>\nu,\\ i\delta_\mu-\dfrac{\gamma_L+\gamma_R}{2}, & \mu=\nu,\\ -\gamma_L\, e^{ik|x_{\mu,\nu}|}, & \mu<\nu, \end{cases}$$ where $x_{\mu,\nu}\equiv x_\mu-x_\nu$. The above chiral-coupled interaction becomes reciprocal, that is $V_{\mu,\nu}=V_{\nu,\mu}$, only when $\gamma_L=\gamma_R$. For reciprocal interaction, the real and imaginary parts of the couplings, $-\gamma_{L(R)}\cos(k|x_{\mu,\nu}|)$ and $-\gamma_{L(R)}\sin(k|x_{\mu,\nu}|)$, respectively determine the collective decay rates and energy shifts. In general, $VV^\dag\neq V^\dag V$, so $V$ is not a normal matrix. Therefore, an eigen-decomposition does not apply here, and we solve for the system evolutions directly from $$\dot{\vec A}=i\Omega\vec 1+V\vec A,$$\[A\] where $\vec A\equiv (A_1(t),A_2(t),...,A_N(t))$, $\vec 1$ is a vector of ones, and the initial condition is $\vec A(t=0)=0$. Below we use equation (\[A\]) to investigate the transport properties of atomic excitations and state manipulations in a driven chiral-coupled atomic chain.
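The steady state of the weakly driven chain follows from setting $\dot{\vec A}=0$, i.e. solving the linear system $V\vec A=-i\Omega\vec 1$. A minimal numerical sketch is given below; the function names, the choice $\Omega=0.01\gamma$, and the equidistant-chain parametrization are illustrative assumptions, and the conventions for the phases and normalization of $V$ follow the effective model of this section (one possible convention, not a unique choice).

```python
import numpy as np

def chiral_V(N, xi, delta, gamma_L, gamma_R):
    """Chiral-coupled interaction matrix V for an equidistant chain.

    xi = k * (lattice spacing); delta is an array of detunings.
    The phase/normalization conventions are an assumption following
    the effective model sketched in section 2.
    """
    V = np.zeros((N, N), dtype=complex)
    for mu in range(N):
        for nu in range(N):
            if mu == nu:
                V[mu, nu] = 1j * delta[mu] - (gamma_L + gamma_R) / 2
            elif mu > nu:  # mediated by right-propagating photons
                V[mu, nu] = -gamma_R * np.exp(1j * xi * abs(mu - nu))
            else:          # mediated by left-propagating photons
                V[mu, nu] = -gamma_L * np.exp(1j * xi * abs(mu - nu))
    return V

def steady_state(N, xi, delta, D, Omega=0.01, gamma=1.0):
    """Steady-state amplitudes from 0 = i*Omega + V A, with D = (g_R - g_L)/gamma."""
    gamma_R = gamma * (1 + D) / 2
    gamma_L = gamma * (1 - D) / 2
    V = chiral_V(N, xi, np.full(N, float(delta)), gamma_L, gamma_R)
    return np.linalg.solve(V, -1j * Omega * np.ones(N))
```

A quick sanity check: in the cascaded case $D=1$ the first atom receives no guided feedback, so its amplitude $A_1(\infty)=-i\Omega/(i\delta_1-\gamma/2)$ is independent of $\xi$.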
Selective transport of atomic excitations ========================================= Here we quantify the transport of atomic excitations by the difference of excited state populations between the left and right sections of the atomic chain, $$T_p=\frac{\sum_{\mu=1}^{N/2}P_\mu-\sum_{\mu=N/2+1}^{N}P_\mu}{\sum_{\mu=1}^{N}P_\mu},\qquad T_p=\frac{\sum_{\mu=1}^{(N-1)/2}P_\mu-\sum_{\mu=(N+3)/2}^{N}P_\mu}{\sum_{\mu\neq(N+1)/2}P_\mu},$$ for even and odd $N$ respectively, where the excited state population is $P_\mu\equiv|A_\mu|^2$, and we have excluded the central atom (the $[(N+1)/2]$th one) for odd $N$. Positive or negative $T_p$ means that the atomic excitations accumulate toward the left or right parts of the chain respectively, from which we can analyze how the distributions of the excitations are manipulated and controlled by system parameters. $T_p$ can be used as an indicator of how efficiently the light is transferred to either direction of the chain, and can further apply to quantum links between multiple atomic chains. Two atoms --------- Before we study a longer atomic chain, it is helpful to first consider the case of $N=2$, where we can get some insight into how atomic excitations are distributed. The coupled equations from equation (\[A\]) are $$\dot A_1(t)=i\Omega+\left(i\delta_1-\frac{\gamma}{2}\right)A_1-\gamma_L e^{i\xi}A_2,$$ $$\dot A_2(t)=i\Omega-\gamma_R e^{i\xi}A_1+\left(i\delta_2-\frac{\gamma}{2}\right)A_2,$$ where the dimensionless $\xi\equiv k|x_1-x_2|$ sets the length scale for mutual separations. In the steady state, we obtain $$A_1(\infty)=\frac{-i\Omega\left[i\delta_2-\frac{\gamma}{2}+\gamma_L e^{i\xi}\right]}{\left(i\delta_1-\frac{\gamma}{2}\right)\left(i\delta_2-\frac{\gamma}{2}\right)-\gamma_L(\gamma-\gamma_L)e^{2i\xi}},$$ $$A_2(\infty)=\frac{-i\Omega\left[i\delta_1-\frac{\gamma}{2}+(\gamma-\gamma_L)e^{i\xi}\right]}{\left(i\delta_1-\frac{\gamma}{2}\right)\left(i\delta_2-\frac{\gamma}{2}\right)-\gamma_L(\gamma-\gamma_L)e^{2i\xi}},$$ where we have replaced $\gamma_R$ by $\gamma-\gamma_L$, with $\gamma$ indicating the overall decay rate of the system. The steady-state populations are thus determined by individual detunings, mutual distances, and the directionality factor $D\equiv (\gamma_R-\gamma_L)/\gamma$ [@Mitsch2014]. $D=\pm 1$ means the cascaded scheme [@Stannigel2012; @Gardiner1993; @Carmichael1993] where only the guided emission to the right or left is allowed, while $D=0$ represents the reciprocal case as in conventional light-matter interacting systems without the presence of nanofiber or waveguide. For $N=2$, we obtain $$T_p(N=2)=\frac{|A_1(\infty)|^2-|A_2(\infty)|^2}{|A_1(\infty)|^2+|A_2(\infty)|^2}.$$ In figure \[fig2\], we plot the distribution of atomic excitations for $N=2$.
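The two-atom steady state can also be evaluated numerically by solving the $2\times 2$ linear system directly. The sketch below (parameter names and the choice $\gamma=1$, $\Omega=0.01$ are illustrative; the phase convention of the coupling matrix is an assumption following section 2) computes $T_p$ for two atoms:

```python
import numpy as np

def two_atom_Tp(xi, d1, d2, D, gamma=1.0, Omega=0.01):
    """Transport T_p = (P1 - P2)/(P1 + P2) for two weakly driven atoms.

    Assumes the effective 2x2 chiral coupling matrix of the weakly
    driven model (a convention sketch for the phases, not unique).
    """
    gR, gL = gamma * (1 + D) / 2, gamma * (1 - D) / 2
    V = np.array([[1j * d1 - gamma / 2, -gL * np.exp(1j * xi)],
                  [-gR * np.exp(1j * xi), 1j * d2 - gamma / 2]])
    A = np.linalg.solve(V, [-1j * Omega, -1j * Omega])
    P1, P2 = abs(A[0]) ** 2, abs(A[1]) ** 2
    return (P1 - P2) / (P1 + P2)

# At xi = 0 with the left atom resonantly driven and D = 0, the right
# atom is completely suppressed, so T_p = 1, as noted in the text.
```

This routine also reproduces the mirror symmetry $T_p(\delta_\mu=-\delta,-\xi)=T_p(\delta_\mu=\delta,\xi)$, since flipping the signs of all detunings and of $\xi$ amounts to complex conjugation of the coupled equations.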
For the cascaded scheme of $D=1$ in figure \[fig2\](a), mirror symmetry about $\xi=0$ shows up between the cases of $\pm\delta_1$. Since $D=1$ means a unidirectional decay channel to the right, a preference of excitation transfer to the right (negative $T_p$) should cover most of the parameter regimes of $\xi$. In this cascaded scheme, a relatively sharp profile shows up around $\xi\sim\mp\pi/3$ for a positive $T_p$, while a flattened one appears for a negative $T_p$ at $\xi\sim\pm\pi/2$, respectively for $\delta_1=\mp 1\gamma$. The positive transport in this cascaded scheme manifests the dominance of the 1D RDDI over the directionality of the chiral-coupled system. Meanwhile, the flattened regions of $T_p$ indicate a response resilient to position fluctuations in $\xi$, which is preferable for precise and genuine state preparations. On the other hand, for reciprocal couplings of $D=0$ in figure \[fig2\](b), a similar asymmetric profile emerges and shows mirror reflection at $\xi=0$ with $\pm\delta_2$. Specifically at $\xi=0$, $T_p$ becomes one as long as the left atom is resonantly driven, which means a complete suppression of the atomic excitations on the right. This dispersion-like distribution of atomic excitations enables both positive and negative transport around $\xi=0$, allowing flexible steady-state preparations simply by manipulating either the excitation detunings or $\xi$. Under the resonance condition $\delta_{1(2)}=0$, $T_p$ is not defined since $A_{1(2)}(\infty)$ becomes infinite. This is due to the breakdown of the low saturation approximation in the model, which will be addressed at the end of the next subsection. Next we study a longer atomic chain and investigate the multi-atom effect on the transport property. ![Distribution of atomic excitations for $N=2$. (a) The cascaded case ($D=1$) with $\delta_2/\gamma=0$ and $\delta_1/\gamma=-1$ (red-$\diamond$), $0$ (solid-black), $1$ (blue-$\square$), respectively.
(b) The case for reciprocal couplings with $\gamma_L=\gamma_R$ ($D=0$) is shown with $\delta_1/\gamma=0$ and $\delta_2/\gamma=$ $-0.5$ (red-$\diamond$), $0.5$ (blue-$\square$), respectively.[]{data-label="fig2"}](2.eps){width="12.0cm" height="6cm"} Atomic chain ------------ For a longer atomic chain, the left- and right-propagating emissions can go through multiple scatterings of transmissions and reflections before leaving the whole array. As more atoms are included, the many-body coherences play important roles in determining the steady-state properties. Furthermore, with varying equidistant positions $\xi$, the chiral-coupled interactions can be modified significantly. Therefore, here we focus on the interplay between the number of atoms and their positions in the transport of atomic excitations. First we consider the cascaded case ($D=1$) with uniform detunings in figure \[fig3\]. On resonance, the transport profiles should be symmetric around $\xi=0$ or $\pi$ as shown in figure \[fig3\](a), similar to figure \[fig2\](b) for $N=2$. As $N$ increases, the width of the minimum $T_p(\xi=\pi)$ narrows, and positive $T_p$ becomes sharper near $\xi\sim\pi$. More ripples around $T_p\approx 0$ show up as $N$ increases, which indicates multiple interferences from these quantum emitters. This also appears in figure \[fig3\](b) with finite excitation detunings, where the minimum of $T_p$ shifts toward $\xi=2\pi$. This suggests adjustable transport of excitations by controlling external fields and locating the optimal $\xi$. The narrowing distribution of $T_p$ for larger $N$, however, restricts a genuine preparation of the states if the atomic chain undergoes significant position fluctuations. For example, for $N=10$ in figure \[fig3\](a), the full width at half minimum of $T_p(\xi=\pi)$ is $\Delta\xi\sim 1$, which provides an approximate tolerance of position fluctuations around $\pm 0.5/\pi$, i.e. $\sim\pm 15\%$ displacement around $\xi=\pi$.
Thus for an atomic chain of $N>10$ subject to more significant fluctuations $\gtrsim 15\%$, it becomes more demanding to stabilize the system and transfer the excitations with high fidelity. ![The transport $T_p$ of atomic excitations as a dependence of $N$ and $\xi$ for $D=1$. With uniform detunings (a) $\delta_\mu/\gamma=0$ and (b) $\delta_\mu/\gamma=1$, $T_p$ shows positive or negative flow of atomic excitations up to $N=20$. We note that $\xi\in[0,2\pi]$ is periodic by $2\pi$, and thus is equivalent to $\xi\in[-\pi,\pi]$.[]{data-label="fig3"}](3.eps){width="12.0cm" height="6cm"} ![The transport $T_p$ of atomic excitations for $N=10$ and $D=0.5$. Symmetric and asymmetric profiles of $T_p$ are plotted for various uniform detunings in (a) $\delta_\mu/\gamma=0$, (b) $\delta_\mu/\gamma=1$, and (c) $\delta_\mu/\gamma=2$, respectively.[]{data-label="fig4"}](4.eps){width="12.0cm" height="6cm"} On the other hand, for the case of nonreciprocal decay channels, we consider $D=0.5$ as an example. We show $T_p$ in figure \[fig4\] for $N=10$ with increasing detunings. Similar to the cascaded scheme of figure \[fig3\](a), in figure \[fig4\](a), the transport of atomic excitations is symmetric at $\xi=0$ or $\pi$ under the condition of resonantly driven fields. The small bump at $\xi=\pi$ emerges as long as $D<1$, and can be further enhanced as $D$ decreases. Other than the parameter regime of $\xi\sim\pi$, $T_p$ shows relatively flattened distributions with almost equal excitation populations between the left and right parts of the chain. By contrast, as the detunings increase in figures \[fig4\](b) and (c), the minimum of $T_p$ shifts toward $\xi=2\pi$. Significant positive $T_p$, exceeding $0.5$, emerges as well at $\xi\gtrsim 0$ in figure \[fig4\](c). Furthermore in figure \[fig4\](c), the width of the negative peak of $T_p$ widens as $\delta_\mu$ increases.
This shows the possibility, in the widened regions of $T_p$, to manipulate a genuine transport of atomic excitations in the nonreciprocal scheme. In addition, the red-detuned excitation fields hold a symmetric relation in the transport property, such that $T_p(\delta_\mu=-\delta, -\xi)=T_p(\delta_\mu=\delta, \xi)$. We note that as $D\rightarrow 0$ and $N\gg 1$, the time to reach the steady state is prolonged, even more so at $\xi\sim\pi$. In the special parameter regime of $D=0$, $\delta_\mu=0$, and $\xi=\pi$, the excited atoms under reciprocal couplings become decoherence-free, and thus are pumped by the external fields $\Omega$ indefinitely. This eventually violates the assumption of small excited state populations in our model in section 2. The validity of the weakly driven regime can be restored with a finite detuning $\delta_\mu=\delta$ satisfying $\Omega/\delta\ll 1$. Under this condition, the system evolves with a generalized Rabi frequency $\sim \delta$ and oscillates between the ground and the singly-excited states, and a steady state is never reached. Effect of position fluctuations of the atomic chain =================================================== ![The transport $T_p$ of atomic excitations for $N=10$ and $\delta_\mu=0$. (a) The cascaded case of $D=1$ and (b) the case with nonreciprocal decay channels of $D=0.5$, with $0.5\%$ (left) and $1\%$ (right) position fluctuations respectively. Shaded areas are filled between the upper and lower curves with $1\sigma$ standard deviation, and a solid line in black represents the mean value with ensemble averages.[]{data-label="fig5"}](5.eps){width="12.0cm" height="6cm"} Finally we study the effect of position fluctuations on the transfer of state excitations in the driven chiral-coupled atomic chain. On the experimental side, a precise positioning of the atoms is not easily fulfilled.
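The ensemble-averaged transport under position fluctuations can be modeled by adding Gaussian jitter to the equidistant positions before solving for the steady state. A minimal Monte Carlo sketch follows; the sampling scheme, trial number, and function names are illustrative assumptions, and the coupling-matrix convention follows the sketch of section 2.

```python
import numpy as np

def fluctuated_Tp(N, xi, D, sigma, delta=0.0, gamma=1.0, Omega=0.01,
                  trials=200, seed=0):
    """Mean and standard deviation of T_p with Gaussian position jitter.

    sigma is the fluctuation in units of the lattice spacing (0.01 = 1%).
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(trials):
        # jittered dimensionless positions k*x_mu around the ordered lattice
        x = xi * (np.arange(N) + sigma * rng.standard_normal(N))
        V = np.zeros((N, N), dtype=complex)
        for mu in range(N):
            for nu in range(N):
                if mu == nu:
                    V[mu, nu] = 1j * delta - gamma / 2
                else:
                    # direction set by the actual (jittered) positions
                    g = gamma * (1 + D) / 2 if x[mu] > x[nu] else gamma * (1 - D) / 2
                    V[mu, nu] = -g * np.exp(1j * abs(x[mu] - x[nu]))
        A = np.linalg.solve(V, -1j * Omega * np.ones(N))
        P = np.abs(A) ** 2
        h = N // 2  # the central atom is excluded automatically for odd N
        left, right = P[:h].sum(), P[-h:].sum()
        samples.append((left - right) / (left + right))
    return np.mean(samples), np.std(samples)
```

Since the chiral decay channel is chosen from the actual jittered positions, this sketch also captures the (rare) events in which neighboring atoms exchange their ordering.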
Whether the spatial variations of the atoms have a notable effect depends on the ratio of the deviation to the transition wavelength. Superconducting qubits, for example, are more resilient to such spatial fluctuations due to the long wavelength of the microwave transmission line, in contrast to the optical transitions of neutral atoms. In figure \[fig5\], we plot $T_p$ by introducing position fluctuations on each atom of the chain. As the fluctuation increases, $T_p$ smooths out especially close to $\xi\sim 2\pi$, compared to figures \[fig3\](a) and \[fig4\](a). $T_p$ is more sensitive to the spatial variation at large $\xi$ since the change of $e^{ik|x_\mu-x_\nu|}$ in the RDDI is more drastic under the same amount of position fluctuations. For the cascaded case of $D=1$ in figure \[fig5\](a) near $\xi\sim\pi$, the transport of the atomic excitations is less affected by the fluctuations, in contrast to the non-cascaded case of $D=0.5$ in figure \[fig5\](b). This is due to the fact that the non-cascaded scheme permits both left- and right-propagating decay channels, and therefore the driven atomic chain experiences the position fluctuations more significantly from both directions. This shows that the cascaded chiral-coupled atomic chain can better withstand the influences of atomic spatial fluctuations. Discussion and conclusion ========================= The efficiency and fidelity of state preparations can be reduced due to non-guided modes of light in the chiral-coupled systems. This limits the operation time in state manipulations, and thus the non-guided decay rate sets the overall timescale for genuinely controlled dynamical systems. The inefficiency can be overcome, for example in an atom-fiber system, by aligning the atoms close to the nanofiber with an optimal fiber radius [@Kien2005_2; @Chang2018] to raise the interaction probability.
Some recent progress has shown the potential of flexible control over the chiral-coupled systems, including superconducting qubits without external magnetic fields [@Hamann2018] and a bilayer atomic array in free space [@Grankin2018]. This shows rich opportunities in structuring the 1D reservoirs to manipulate the directionality of light coupling in the system. Moreover, subradiance dynamics can also emerge in such a chiral-coupled atomic chain [@Jen2018_chiral], which can potentially be used for quantum storage of light. With the scalability of atom-fiber or atom-waveguide platforms, the chiral-coupled systems can further stimulate applications in quantum information processing. In conclusion, we have investigated the distribution of the atomic excitations in a weakly driven chiral-coupled atomic chain. In this 1D system, we quantify the distributions by the transport of the excitations between the left and right parts of the chain. With the controllable parameters of the external field detunings, ordered array positions, and directionality of the decay channels, we are able to make a deterministic transfer of atomic excitations to both sides. With the advantage of tunable nonreciprocal couplings, the chiral-coupled system allows a selective transport of excitations in the optimal parameter regimes resilient to position fluctuations. Our results provide a fundamental study of the steady-state preparations in the driven and dissipative chiral-coupled systems, and are potentially applicable in many-body state manipulations. Resonant dipole-dipole interactions in one-dimensional reservoir ================================================================ General formalism in three-dimensional reservoir ------------------------------------------------ Here we review the results of resonant dipole-dipole interactions (RDDI) in a one-dimensional (1D) reservoir.
This can be directly obtained and extended from the RDDI in a three-dimensional (3D) reservoir [@Stephen1964; @Lehmberg1970]. The RDDI result from a system of many atoms interacting with quantized bosonic light modes. Due to the light rescattering events in the dissipation process of the system, the RDDI that feature long-range atom-atom interactions emerge as if the atoms in the whole medium were effectively pairs of resonantly induced dipoles. The RDDI characterize the coherent frequency shifts and collective decay rates of the atomic system, which can be expressed respectively as the imaginary and real parts of the coupling constant $J_{\mu,\nu}$, such that the evolution of any atomic observable $Q$ is governed by the Lindblad form, $$\dot Q(t)=i\sum_{\mu\neq\nu}{\rm Im}(J_{\mu,\nu})[\sigma_\mu^\dag\sigma_\nu,Q]+\mathcal{L}(Q),$$ $$\mathcal{L}(Q)=\sum_{\mu,\nu}{\rm Re}(J_{\mu,\nu})\left(\sigma_\mu^\dag Q\sigma_\nu-\frac{1}{2}\sigma_\mu^\dag\sigma_\nu Q-\frac{1}{2}Q\sigma_\mu^\dag\sigma_\nu\right),$$ where $\sigma_\mu^\dag\equiv|e\rangle_\mu\langle g|$ and $\sigma_\mu=(\sigma_\mu^\dag)^\dag$ are the raising and lowering operators respectively for the ground $|g\rangle$ and excited states $|e\rangle$. The above form can be obtained with the Born-Markov and secular approximations, which are sustained respectively when the response time of the reservoir is faster than that of the system and when the dynamical time scale of the system is longer than the time light travels throughout the whole medium. These conditions are satisfied when the macroscopic length scale of the medium is well below several meters for rubidium atoms (intrinsic decay time $\sim 26$ ns). The explicit form of $J_{\mu,\nu}$ is defined as $$J_{\mu,\nu}=\sum_q |g_q|^2\int_0^\infty dt'\, e^{i\mathbf{q}\cdot(\mathbf{r}_\mu-\mathbf{r}_\nu)}\left[e^{i(\omega_e-\omega_q)t'}+e^{-i(\omega_e+\omega_q)t'}\right]=\sum_q |g_q|^2 e^{i\mathbf{q}\cdot(\mathbf{r}_\mu-\mathbf{r}_\nu)}\left[\pi\delta(\omega_q-\omega_e)+\pi\delta(\omega_q+\omega_e)+i\mathcal{P}(\omega_e-\omega_q)^{-1}-i\mathcal{P}(\omega_q+\omega_e)^{-1}\right],$$\[J\] where the coupling constant is $g_q\equiv d/\hbar\sqrt{\hbar\omega_q/(2\epsilon_0V)}(\vec\epsilon_q\cdot\hat d)$ with dipole moment $d$ and its unit direction $\hat d$, field polarizations $\vec\epsilon_q$, and a quantization volume $V$.
$\mathcal{P}$ denotes the principal value of the integral. From equation (\[J\]), the coupling constant depends on the respective atomic positions ${\mathbf{r}}_{\mu(\nu)}$, and thus represents a long-range interaction. For a 3D reservoir, we allow continuous modes of the reservoir, that is $\sum_q\rightarrow\sum_{\vec\epsilon_q}\int_{-\infty}^\infty\frac{V}{(2\pi)^3}d^3q$ with two possible field polarizations $\vec\epsilon_q$. In spherical coordinates, we show $J_{\mu,\nu}$ explicitly [@Lehmberg1970], $${\rm Re}(2J_{\mu,\nu})=\frac{3\Gamma}{2}\left\{\left[1-(\hat p\cdot\hat r)^2\right]\frac{\sin\xi}{\xi}+\left[1-3(\hat p\cdot\hat r)^2\right]\left(\frac{\cos\xi}{\xi^2}-\frac{\sin\xi}{\xi^3}\right)\right\},$$\[F\] $${\rm Im}(J_{\mu,\nu})=\frac{3\Gamma}{4}\left\{-\left[1-(\hat p\cdot\hat r)^2\right]\frac{\cos\xi}{\xi}+\left[1-3(\hat p\cdot\hat r)^2\right]\left(\frac{\sin\xi}{\xi^2}+\frac{\cos\xi}{\xi^3}\right)\right\},$$\[G\] where $\hat p$ aligns with the excitation field polarization, $\hat r$ is the unit vector of ${\mathbf{r}}_\mu-{\mathbf{r}}_\nu$, the natural decay constant $\Gamma=d^2\omega_e^3/(3\pi\hbar\epsilon_0c^3)$, and the dimensionless $\xi\equiv k_L|{\mathbf{r}}_\mu-{\mathbf{r}}_\nu|$ with $k_L=\omega_e/c$. Results of RDDI in one-dimensional reservoir -------------------------------------------- Now it is straightforward to derive the RDDI in a 1D reservoir from equation (\[J\]). Since the reservoir allows only one dimension, we should replace $V$ in $g_q$ by $L$ as the length scale of the quantization volume. We then obtain $$J_{\mu,\nu}=\int_{-\infty}^\infty\frac{L\,dq}{2\pi}\,\bar g_q^2\, e^{iq(x_\mu-x_\nu)}\left[\pi\delta(\omega_q-\omega_e)+\pi\delta(\omega_q+\omega_e)+i\mathcal{P}(\omega_e-\omega_q)^{-1}-i\mathcal{P}(\omega_q+\omega_e)^{-1}\right],$$ where $\bar g_q^2$ $\equiv$ $(d/\hbar)^2[\hbar\omega_q/(2\epsilon_0 L)]$ and the term $(\vec\epsilon_q\cdot\hat d)$ becomes one due to the spin-momentum locking in the 1D reservoir. Next we let $x_{\mu,\nu}=x_\mu-x_\nu$ and drop $q$ in $\omega_q$ for brevity, and we obtain $$J_{\mu,\nu}=\int_0^\infty\frac{d\omega}{\pi}\,|\partial_\omega q(\omega)|\,\bar g_q^2 L\cos[q(\omega)x_{\mu,\nu}]\left[\pi\delta(\omega-\omega_e)+\pi\delta(\omega+\omega_e)+i\mathcal{P}(\omega_e-\omega)^{-1}-i\mathcal{P}(\omega+\omega_e)^{-1}\right].$$ Let $\Gamma_{1D}\equiv 2|\partial_\omega q(\omega)|_{\omega=\omega_e}\bar g_{k_L}^2L$, where we keep the dispersion relation of the 1D coupling constant, with $[\partial_\omega q(\omega)]^{-1}$ being the group velocity of light in the medium.
Finally we obtain $$\begin{aligned}
J_{\mu,\nu}&=\frac{\Gamma_{1D}}{2}\cos(k_L x_{\mu,\nu})-\frac{i\Gamma_{1D}}{2\pi}\,\mathcal{P}\int_{-\infty}^\infty\frac{\cos[q(\omega)x_{\mu,\nu}]}{\omega-\omega_e}d\omega,\nonumber\\
&=\frac{\Gamma_{1D}}{2}e^{ik_L|x_{\mu,\nu}|},\label{chiral1D}\end{aligned}$$ where the respective real and imaginary parts demonstrate the Kramers-Kronig relation [@Zangwill2013]. This work is supported by the Ministry of Science and Technology (MOST), Taiwan, under Grants No. MOST-106-2112-M-001-005-MY3 and No. 107-2811-M-001-1524. We thank Y.-C. Chen, G.-D. Lin, and M.-S. Chang for insightful discussions, and are also grateful for NCTS ECP1 (Experimental Collaboration Program). References {#references .unnumbered} ========== [99]{} Goban A, [*et al.*]{} 2015 [*Phys. Rev. Lett.*]{} [**115**]{}, 063601 Barredo D, de Léséleuc S, Lienhard V, Lahaye T, and Browaeys A 2016 [*Science*]{} [**354**]{}, 1021-1023 Endres M, [*et al.*]{} 2016 [*Science*]{} [**354**]{}, 1024-1027 Sipahigil A, [*et al.*]{} 2016 [*Science*]{} [**354**]{}, 847-850 Hammerer K, Sørensen A S, and Polzik E S 2010 [*Rev. Mod. Phys.*]{} [**82**]{}, 1041 Le Kien F, Gupta S D, Nayak K P, and Hakuta K 2005 [*Phys. Rev. A*]{} [**72**]{}, 063815 Le Kien F and Hakuta K 2008 [*Phys. Rev. A*]{} [**77**]{}, 013801 González-Tudela A and Porras D 2013 [*Phys. Rev. Lett.*]{} [**110**]{}, 080502 Kumlin J, Hofferberth S, and Büchler H P 2018 [*Phys. Rev. Lett.*]{} [**121**]{}, 013601 Chang D E, Douglas J S, González-Tudela A, Hung C L, and Kimble H J 2018 [*Rev. Mod. Phys.*]{} [**90**]{}, 031002 Stephen M J 1964 [*J. Chem. Phys.*]{} [**40**]{}, 669-673 Lehmberg R H 1970 [*Phys. Rev. A*]{} [**2**]{}, 883-888 Dicke R H 1954 [*Phys. Rev.*]{} [**93**]{}, 99-110 Gross M and Haroche S 1982 [*Phys. Rep.*]{} [**93**]{}, 301-396 Solano P, Barberis-Blostein P, Fatemi F K, Orozco L A, and Rolston S L 2017 [*Nat. Commun.*]{} [**8**]{}, 1857 Stannigel K, Rabl P, and Zoller P 2012 [*New J. Phys.*]{} [**14**]{}, 063014 Ramos T, Pichler H, Daley A J, and Zoller P 2014 [*Phys. Rev. Lett.*]{} [**113**]{}, 237203 Pichler H, Ramos T, Daley A J, and Zoller P 2015 [*Phys. Rev.
A*]{} [**91**]{}, 042116 Lodahl P, Mahmoodian S, Stobbe S, Rauschenbeutel A, Schneeweiss P, Volz J, Pichler H, and Zoller P 2017 [*Nature*]{} [**541**]{}, 473 Bliokh K Y, Bekshaev A Y, and Nori F 2014 [*Nat. Commun.*]{} [**5**]{}, 3300 Bliokh K Y and Nori F 2015 [*Phys. Rep.*]{} [**592**]{}, 1 Mitsch R, Sayrin C, Albrecht B, Schneeweiss P, and Rauschenbeutel A 2014 [*Nat. Commun.*]{} [**5**]{}, 5713 Luxmoore I J, [*et al.*]{} 2013 [*Phys. Rev. Lett.*]{} [**110**]{}, 037402 Söllner I, [*et al.*]{} 2015 [*Nat. Nanotechnol.*]{} [**10**]{}, 775 Kimble H J 2008 [*Nature*]{} [**453**]{}, 1023-1030 Hung C L, González-Tudela A, Cirac J I, and Kimble H J 2016 [*Proc. Natl Acad. Sci.*]{} [**113**]{}, E4946 Mazets I E and Kurizki G 2007 [*J. Phys. B: At. Mol. Opt. Phys.*]{} [**40**]{}, F105-F112 Jen H H 2012 [*Phys. Rev. A*]{} [**85**]{}, 013835 Jen H H 2015 [*Ann. Phys. (N.Y.)*]{} [**360**]{}, 556 Sutherland R T and Robicheaux F 2016 [*Phys. Rev. A*]{} [**93**]{}, 023407 Araújo M O, Krešić I, Kaiser R, and Guerin W 2016 [*Phys. Rev. Lett.*]{} [**117**]{}, 073002 Roof S J, Kemp K J, Havey M D, and Sokolov I M 2016 [*Phys. Rev. Lett.*]{} [**117**]{}, 073003 Jennewein S, [*et al.*]{} 2016 [*Phys. Rev. Lett.*]{} [**116**]{}, 233601 Bromley S L, [*et al.*]{} 2016 [*Nat. Commun.*]{} [**7**]{}, 11039 Zhu B, Cooper J, Ye J, and Rey A M 2016 [*Phys. Rev. A*]{} [**94**]{}, 023612 Shahmoon E, Wild D S, Lukin M D, and Yelin S F 2017 [*Phys. Rev. Lett.*]{} [**118**]{}, 113601 Scully M O 2015 [*Phys. Rev. Lett.*]{} [**115**]{}, 243602 Guerin W, Araújo M O, and Kaiser R 2016 [*Phys. Rev. Lett.*]{} [**116**]{}, 083601 Facchinetti G, Jenkins S D, and Ruostekoski J 2016 [*Phys. Rev. Lett.*]{} [**117**]{}, 243601 Jen H H, Chang M-S, and Chen Y-C 2016 [*Phys. Rev. A*]{} [**94**]{}, 013803 Jen H H 2016 [*Ann. Phys. (N.Y.)*]{} [**374**]{}, 27-34 Sutherland R T and Robicheaux F 2016 [*Phys. Rev. A*]{} [**94**]{}, 013847 Bettles R J, Gardiner S A, and Adams C S 2016 [*Phys. Rev.
A*]{} [**94**]{}, 043844 Jen H H 2017 [*Phys. Rev. A*]{} [**96**]{}, 023814 Jenkins S D, Ruostekoski J, Papasimakis N, Savo S, and Zheludev N I 2017 [*Phys. Rev. Lett.*]{} [**119**]{}, 053901 Asenjo-Garcia A, Moreno-Cardoner M, Albrecht A, Kimble H J, and Chang D E 2017 [*Phys. Rev. X*]{} [**7**]{}, 031024 Plankensteiner D, Sommer C, Ritsch H, and Genes C 2017 [*Phys. Rev. Lett.*]{} [**119**]{}, 093601 Jen H H, Chang M S, and Chen Y C 2018 [*Sci. Rep.*]{} [**8**]{}, 9570 Jen H H 2018 [*Sci. Rep.*]{} [**8**]{}, 7163 Scelle R, Rentrop T, Trautmann A, Schuster T, and Oberthaler M K 2013 [*Phys. Rev. Lett.*]{} [**111**]{}, 070401 Chen D, Meldgin C, and DeMarco B 2014 [*Phys. Rev. A*]{} [**90**]{}, 013602 González-Tudela A, Hood J D, Chang D E, and Kimble H J 2017 [*Phys. Rev. A*]{} [**95**]{}, 033818 Gardiner C W 1993 [*Phys. Rev. Lett.*]{} [**70**]{}, 2269 Carmichael H J 1993 [*Phys. Rev. Lett.*]{} [**70**]{}, 2273 Le Kien F, Gupta S D, Balykin V I, and Hakuta K 2005 [*Phys. Rev. A*]{} [**72**]{}, 032509 Hamann A R, Müller C, Jerger M, Zanner M, Combes J, Pletyukhov M, Weides M, Stace T M, and Fedorov A 2018 [*Phys. Rev. Lett.*]{} [**121**]{}, 123601 Grankin A, Guimond P O, Vasilyev D V, Vermersch B, and Zoller P 2018 [*Phys. Rev. A*]{} [**98**]{}, 043825 Jen H H, Chang M S, Lin G D, and Chen Y C 2018, in preparation Zangwill A 2013 [*Modern Electrodynamics*]{} (Cambridge University Press)
--- author: - Ivo Klemeš title: '**More symmetric polynomials related to $p$-norms.**' --- *McGill University, Montréal, Québec, H3A 0B9, Canada.* Email: klemes@math.mcgill.ca [*Abstract.*]{} It is known that the elementary symmetric polynomials $e_k(x)$ have the property that if $ x, y \in [0,\infty)^n$ and $e_k(x) \leq e_k(y)$ for all $k$, then $||x||_p \leq ||y||_p$ for all real $0\leq p \leq 1$, and moreover $||x||_p \geq ||y||_p$ for $1\leq p \leq 2$ provided $||x||_1 =||y||_1$. Previously the author proved this kind of property for $p>2$, for certain polynomials $F_{k,r}(x)$ which generalize the $e_k(x)$. In this paper we give two additional generalizations of this type, involving two other families of polynomials. When $x$ consists of the eigenvalues of a matrix $A$, we give a formula for the polynomials in terms of the entries of $A$, generalizing sums of principal $k \times k$ subdeterminants. [*A.M.S. Mathematics Subject Classifications:*]{} 47A30 (05E05, 15A15). [*Key words:*]{} inequality; p-norm; symmetric polynomial; determinant; matrix function. [*Date:*]{} 17 February 2013. **§1. Introduction** Let $P_r(s)= 1 + \frac{s^1}{1!} + \dots + \frac{s^r}{r!}$, the $r$th Taylor polynomial of the exponential function $e^s$. Let $F_{k,r}(x)$ denote the coefficient of $t^k$ in the product $$\label{gen1} \prod_{i=1}^n \ P_r(x_it) = \prod_{i=1}^n \left(1 + \frac{(x_it)^1}{1!} + \dots + \frac{(x_it)^r}{r!}\right) =1+\sum_{k=1}^{nr} F_{k,r}(x)t^k,$$ where $x:=(x_1,\dots,x_n)$ for some $n$. For example, when $r=1$ we have the elementary symmetric polynomials $e_k(x)=F_{k,1}(x)$ in $n$ variables. In [@k2] it was shown that the $F_{k,r}(x)$ can be used to obtain inequalities for the $p$-norms $||x||_p := (\frac{1}{n}\sum_{i=1}^n x_i^p)^{1/p}$ in certain intervals of the real number $p$ in the following sense: [**Theorem A.**]{} [*Let $x,y \in [0,\infty)^n$ and fix an integer $r \geq 1. 
$ Suppose that $$\label{F0} F_{k,r}(x) \leq F_{k,r}(y)$$ for all integers $k$ in the interval $r \leq k \leq nr$. Then $$\label{p0} ||x||_p \leq ||y||_p \ \ \ {\it whenever} \ \ \ 0 \leq p \leq 1.$$ If also $ {\displaystyle \sum_{i=1}^n x_i = \sum_{i=1}^n y_i }$, then $$\label{p1} ||x||_p \geq ||y||_p \ \ \ {\it whenever} \ \ \ 1 \leq p \leq r+1.$$* ]{}\ (By continuity in $p$, the $0$-norm is defined to be the geometric mean; $||x||_0:=(\prod x_i )^{1/n}$.) The $F_{k,r}(x)$, together with Theorem A, may be viewed as one possible way to generalize the well-known case $r=1$ of the theorem [@GK Ch. 4, p. 211-212, Lemma 11.1], which only gives information in the range $p<2$ and uses only the elementary symmetric polynomials. The purpose of this note is to give two other generalizations of the case $r=1$ having the same kinds of conclusions as Theorem A in the range $p>2$, but using two new families of symmetric polynomials, different from the above $F_{k,r}(x)$. One reason for seeking such results in the range $p>2$ is a certain open problem on the $p$-norms of the eigenvalues $\{x_i\}$ of a matrix $A=QQ^*$ where $Q$ is a $(0,1)$ “interval matrix". For this motivation we refer the reader to [@k1 Theorem 1.2] and [@k2 Example 1]. In this connection, one additional feature of the new polynomials is that they obey certain identities in terms of the power sum polynomials $p_m(x) = \sum_i x_i^m \ (m=1,2, \dots)$, similar to the Newton and “cycle index" identities for elementary symmetric polynomials. When the $x_i$ are the eigenvalues of a matrix $A$ these identities lead, via the Kronecker power $A^{\otimes k}$, to certain expressions for the new polynomials in terms of the entries of $A$. These expressions generalize the formula for $e_k(x)$ as the sum of principal $k \times k$ subdeterminants of $A$ (see (\[gA\]) to (\[Dr\]) in §6). **§2. Background to Theorem A.** We begin with a review of the proof of (\[p1\]) in the basic case $r=1$ in Theorem A.
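Before that, it may help to see Theorem A in action numerically. The sketch below (plain Python; the function names are ours) computes the $F_{k,r}(x)$ by multiplying out the truncated exponentials in (\[gen1\]), and checks the conclusions (\[p0\]) and (\[p1\]) for $r=1$ on the toy pair $x=(3,1)$, $y=(2,2)$, which has equal sums and $e_2(x)=3\leq 4=e_2(y)$.

```python
from math import factorial

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists in t."""
    c = [0.0]*(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i+j] += ai*bj
    return c

def F(x, r):
    """Coefficient list of prod_i P_r(x_i t); entry k is F_{k,r}(x)."""
    coeffs = [1.0]
    for xi in x:
        coeffs = poly_mul(coeffs, [xi**m/factorial(m) for m in range(r + 1)])
    return coeffs

def pnorm(x, p):
    """The normalized p-norm ||x||_p used in the paper."""
    return (sum(xi**p for xi in x)/len(x))**(1.0/p)

x, y = (3.0, 1.0), (2.0, 2.0)
print(F(x, 1))                          # [1.0, 4.0, 3.0] = (1, e_1, e_2)
print(pnorm(x, 1.5) >= pnorm(y, 1.5))   # True, as (p1) predicts for 1<=p<=2
print(pnorm(x, 0.5) <= pnorm(y, 0.5))   # True, as (p0) predicts for 0<=p<=1
```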
There is an integral formula (Mellin transform) for the power $a^p$ of a positive real number $a$: For any “suitable" function $\psi(t)$, it is easily seen that $$\label{r1} a^p = \frac{1}{C_p(\psi)}\int_0^{\infty} \psi(at) t^{-p}\frac{dt}{t}, \quad {\rm where} \quad C_p(\psi) = \int_0^{\infty} \psi(t) t^{-p}\frac{dt}{t}.$$ For $\psi$ to be “suitable", we mean that the above improper integral $C_p(\psi)$ should converge and be nonzero. For example, with $\psi(t)= t-\log(1 + t) \geq 0$, the integrals converge for $1<p<2$, and we have $C_p(\psi)>0$. The restriction $p<2$ is due to the requirement that the integrals (\[r1\]) converge as $t\to 0^+$; one sees that $t-\log(1 + t)$ decays like $t^2$. Similarly, as $t\to \infty$, the $t-\log(1 + t)$ grows like $t^1$, so that the restriction $p>1$ is needed. By applying the formula with $a=x_i$ and $a=y_i$ and summing over $i$, one sees that for the case $r=1$ of (\[p1\]) it is sufficient to assume $$\sum_i \bigg(x_it -\log(1 + x_it)\bigg) \geq \sum_i \bigg(y_it -\log(1 + y_it)\bigg)$$ for all $t>0$. For the latter, it is in turn sufficient to have the hypotheses in Theorem A; that $\sum x_i = \sum y_i$ and that each coefficient in $\prod_i(1 + x_it)$ increases when $x$ is replaced by $y$, which is exactly the hypothesis that $F_{k,1}(x) \leq F_{k,1}(y)$ where the $F_{k,1}$ are the elementary symmetric polynomials. Next, suppose that we want a formula for $a^p$ valid for some $p >2$. We could attempt to replace $t-\log(1 + t)$ by a function which decays faster as $t\to 0^+$. For example, the new function $\psi(t)=t-\log(1 + t + \frac{t^2}{2!})$ can be seen to decay like $t^3$, and thus yields a formula for $a^p$ in the range $1<p<3$. Extending this pattern, one sees that for any positive integer $r$ the function $\psi(t)=t-\log(1 + t + \frac{t^2}{2!}+\dots + \frac{t^r}{r!})$ is positive and decays like $t^{r+1}$, and thus gives $a^p$ in the range $1<p<r+1$. 
Unravelling the required inequalities, to obtain the result (\[p1\]) it is clearly sufficient to have $\sum x_i = \sum y_i$ and the polynomial inequalities $F_{k,r}(x)\leq F_{k,r}(y)$, as in Theorem A. There are other ways to modify the function $\psi(t)=t-\log(1 + t)$ to make it decay faster than $t^2$ as $t\to 0^+$ (while preserving some other useful aspects of the above proof). We now look at two other such modifications, thereby obtaining, after some manipulation, two new families of symmetric polynomials which can be used instead of the $F_{k,r}(x)$ in an analogous manner. **§3. The Polynomials $G_{k,r}(x)$.** Here the idea will be that instead of modifying $\log(1+t)$ from the “inside" as was done above to obtain Theorem A, we now modify it from the “outside" and see what is obtained. Thus, we will subtract the Taylor polynomial of $\log(1+t)$ of some given degree $r$. Fix an integer $r\geq 0$ and let $Q_r(t):=t-\frac{1}{2}t^2+\dots +(-1)^{r-1}\frac{1}{r}t^r$, the $r$th Taylor polynomial of $\log(1 + t)$. (For $r=0$ define $Q_0(t):=0$.) Define $$\label{psi1} \psi_r(t) := (-1)^r\bigg(\log(1 + t)- Q_r(t)\bigg).$$ Then $\psi_r(t) > 0$ when $t>0$, and $\psi_r(t) = \mathcal{O}(t^{r+1})$ as $t\to 0^+$. Also, for $r \geq 1$ we have $\psi_r(t) = \mathcal{O}(t^r)$ as $t\to \infty$, and for all $\epsilon > 0$ we have $\psi_0(t) = \mathcal{O}(t^\epsilon)$ as $t\to \infty$. The proof is a standard exercise in calculus: Differentiating and then using the formula for the sum of a geometric series we obtain\ ${{\displaystyle}\frac{d}{dt}\psi_r(t) = (-1)^r\bigg(\frac{1}{1+t}- 1+t-t^2+ \dots + (-1)^{r-1}t^{r-1}\bigg)}$\ ${{\displaystyle}=(-1)^r\frac{(-t)^{r}}{1+t} = \frac{t^{r}}{1+t}.}$\ This shows that $\psi_r'(t) > 0$ for $t>0$, and $\psi_r'(t)=\mathcal{O}(t^{r})$ as $t\to 0^+$. Since $\psi_r(0) =0$, the Mean Value Theorem implies that $\psi_r(t) > 0$ when $t>0$ and that $\psi_r(t) = \mathcal{O}(t^{r+1})$ as $t\to 0^+$. 
Finally, the assertions for the case $t\to \infty$ follow from the fact that the polynomial $Q_r(t)$ is of degree $r$, and from the growth properties of the logarithm. It follows that $a^p$ can be represented by (\[r1\]) using $\psi=\psi_r$ whenever $p$ is in the interval $r<p<r+1$, where the corresponding constant $C_p(\psi_r)$ is positive. Hence, we deduce: Let $x,y \in [0,\infty)^n$ and fix an integer $r\geq 0$. If $$\label{lem2} \sum_{i=1}^n \psi_r(x_it) \geq \sum_{i=1}^n \psi_r(y_it)$$ for all $t>0$, then $||x||_p \geq ||y||_p$ for all $p$ in the interval $ r \leq p \leq r+1$. Assume now the additional hypothesis $\sum_i x_i =\sum_i y_i$. Fixing the integer $r\geq 0$, our next goal is to find a set of polynomial inequalities of the form $G_{k,r}(x) \geq G_{k,r}(y)$, or perhaps the form $G_{k,r}(x) \leq G_{k,r}(y)$, which would imply (\[lem2\]). Moreover, let us agree that we want these polynomials $G_{k,r}(x)$ to have positive coefficients. Before stating our result for general $r$, let us explain what it is for the cases $r=0,1,2,3$ in turn. For $r=0$, we have $\sum_i\psi_0(x_it) = \sum_i \log(1+x_it)$. But $\log(1+x_it)$ does not have a Taylor series converging for all values of the variable $t$, so we cannot use the coefficient of $t^k$ as our choice of polynomial $G_{k,0}(x)$. A simple remedy is to exponentiate, obtaining $\prod_i (1+x_it)$, and then use the coefficient of $t^k$ to define the $G_{k,0}(x)$. Then, the hypothesis $G_{k,0}(x) \geq G_{k,0}(y)$ clearly implies (\[lem2\]) with $r=0$. (Here we did not need the assumption $\sum_i x_i =\sum_i y_i$.) For $r=1$, we have $\sum_i\psi_1(x_it) = -\sum_i \bigg(\log(1+x_it)-x_it\bigg)$. In this case, to get an entire function with positive coefficients, we first negate this, add $(\sum x_i)t$ and then exponentiate, obtaining $\prod(1+x_it)$. We denote the coefficient of $t^k$ by $G_{k,1}(x)$, which happens to be the same as $G_{k,0}(x)$. 
Note that to obtain inequality (\[lem2\]) for $r=1$, we now need the hypothesis to be $G_{k,1}(x) \leq G_{k,1}(y)$, because of the negation performed at the beginning. For $r=2$, we have $\sum_i\psi_2(x_it) = \sum_i \bigg(\log(1+x_it)-x_it +\frac{1}{2}x_i^2t^2 \bigg)$. To end up with positive coefficients after exponentiating, we again decide to get rid of the negative $-\sum_i x_it$ terms by adding on the term $(\sum x_i)t$. Hence we define $G_{k,2}(x)$ to be the coefficient of $t^k$ in the generating function $$\bigg(\prod_{i}(1 + x_it)\bigg) \exp\bigg(\frac{1}{2}\sum_i x_i^2t^2\bigg),$$ which we note is entire in $t$, whence its Taylor series converges to its value for every fixed $x$. Thus, the conditions $G_{k,2}(x) \geq G_{k,2}(y)$ and $\sum x_i=\sum y_i$ imply (\[lem2\]) with $r=2$. For $r=3$, we have $$\sum_i\psi_3(x_it) = -\sum_i \bigg(\log(1+x_it)-x_it + \frac{1}{2}x_i^2t^2 -\frac{1}{3}x_i^3t^3 \bigg).$$ As for $r=1$, we first remove the leading $-$ sign. Then add on the new terms $\big(\sum x_i\big)t +\frac{1}{3}\big(\sum x_i\big)^3t^3$ to get rid of negative coefficients. (Note that the polynomial $\big(\sum x_i\big)^3- \sum x_i^3$ has only positive coefficients.) Therefore we let $G_{k,3}(x)$ be the coefficient of $t^k$ in the generating function $$\bigg(\prod_{i}(1 + x_it)\bigg) \exp\bigg(\frac{1}{2}\sum_i x_i^2t^2 +\frac{1}{3}\bigg(\big(\sum_i x_i\big)^3- \sum_i x_i^3\bigg)t^3 \bigg).$$ Clearly, the conditions $G_{k,3}(x) \leq G_{k,3}(y)$ and $\sum x_i=\sum y_i$ imply (\[lem2\]) with $r=3$. We continue this pattern for the general case of $r\geq 0$: In the expression $$(-1)^r\sum_i\psi_r(x_it)=\sum_i \bigg(\log(1+x_it)-Q_r(x_it) \bigg),$$ the terms in $\sum_i -Q_r(x_it)$ with negative coefficients are precisely those of the form $-\frac{1}{m}\sum_i x_i^mt^m$ for odd $m \leq r$. So for each such odd $m$ we can add the extra term $ +\frac{1}{m}\bigg(\sum_i x_i\bigg)^mt^m$, to make all coefficients positive.
If $\sum_i x_i =\sum_i y_i$, then these extra terms are the same with $y_i$ as with $x_i$. Hence, we can use the exponential of the resulting expression as a generating function to define polynomials $G_{k,r}(x)$ having positive coefficients and the other desired properties. We now re-state the latter construction of the $G_{k,r}(x)$ using more formal notation. For notational efficiency, we observe that the odd power terms of a polynomial $f(t)$ may be written as $\frac{1}{2} (f(t)-f(-t))=: f^{-}(t)$. Thus $$Q_r^-(t) = \sum_{{\rm odd} \ m \leq r} \frac{1}{m}t^m.$$ Define the generating function $g_r(x,t)$ by $$\label{gr} g_r(x,t):=\bigg(\prod_{i=1}^n (1 + x_it)\bigg) \exp\bigg( - \sum_i Q_r(x_it)+Q_r^-\big(\sum_i x_it\big)\bigg) ,$$ where we recall that $Q_r(s) :=s-\frac{1}{2}s^2+\dots +(-1)^{r-1}\frac{1}{r}s^r$, the $r$th Taylor polynomial of $\log(1 + s)$. It is clear from (\[gr\]) that $g_r(x,t)$ is an entire function of $t$ for every fixed $x$. For any integers $r\geq 0, k\geq 1$, the polynomial $G_{k,r}(x)$ is the coefficient of $t^k$ in the power series expansion of $g_r(x,t)$. The preceding discussion has shown that the $G_{k,r}(x)$ are symmetric polynomials with positive coefficients, and have the following property: Let $x,y \in [0,\infty)^n$ and fix an integer $r \geq 0. $ Suppose that $\sum_i x_i = \sum_i y_i$ and that for all positive integers $k$, $$(-1)^r\bigg(G_{k,r}(x)- G_{k,r}(y)\bigg) \geq 0.$$ Then $||x||_p \geq ||y||_p$ for all real $p$ in the interval $ r \leq p \leq r+1$. In a later section (§5) we will express $G_{k,r}(x)$ in terms of the power sums $p_m=p_m(x) = \sum_{i=1}^n x_i^m$, $m=1,2,\dots$. 
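For concreteness, the coefficients $G_{k,r}(x)$ can be computed numerically from (\[gr\]) by building the power series of $\log g_r(x,t)$ and exponentiating it term by term. A sketch (plain Python; the names are ours), which recovers the elementary symmetric polynomials for $r=1$ and illustrates, for $r=2$, the inequalities $G_{k,2}(x)\geq G_{k,2}(y)$ on a majorizing pair with equal sums:

```python
def log_series(x, r, K):
    """Coefficients f_1..f_K of t in log g_r(x,t), read off from (gr):
    sum_i log(1+x_i t) - sum_i Q_r(x_i t) + Q_r^-(sum_i x_i t)."""
    s = sum(x)
    f = [0.0]*(K + 1)
    for m in range(1, K + 1):
        pm = sum(xi**m for xi in x)
        f[m] += (-1)**(m - 1)*pm/m          # from the logarithms
        if m <= r:
            f[m] -= (-1)**(m - 1)*pm/m      # minus sum_i Q_r(x_i t)
            if m % 2 == 1:
                f[m] += s**m/m              # plus Q_r^-( (sum_i x_i) t )
    return f

def G(x, r, K):
    """G_{0,r}(x), ..., G_{K,r}(x) via exp of the power series:
    k*g_k = sum_m m*f_m*g_{k-m}, the standard recursion for exp."""
    f = log_series(x, r, K)
    g = [1.0] + [0.0]*K
    for k in range(1, K + 1):
        g[k] = sum(m*f[m]*g[k - m] for m in range(1, k + 1))/k
    return g

x, y = (3.0, 1.0), (2.0, 2.0)   # equal sums; x majorizes y
print(G(x, 1, 2))               # [1.0, 4.0, 3.0]: r=0,1 give the e_k
print(all(gx >= gy for gx, gy in zip(G(x, 2, 6), G(y, 2, 6))))  # True
```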
To that end, we note the following alternative expression for the generating function $g_r(x,t)$ in the sense of formal power series, which is easily checked by taking logarithms in (\[gr\]) and expanding each $\log(1+x_it)$ in powers of $t$: $$\label{gr2} g_r(x,t) = \exp\bigg( \sum_{m\geq 1} (-1)^{m-1}\alpha_mt^m/m \bigg)$$ where $(\alpha_m)_{m=1}^\infty$ is the “modified" sequence of power sums given by: $$\label{ag} \alpha_m = \begin{cases} p_1^m& ,\ \ {\rm for} \ m \ {\rm odd,}\ 1\leq m\leq r,\\ 0& ,\ \ {\rm for} \ m \ {\rm even,} \ 1\leq m\leq r,\\ p_m& ,\ \ {\rm for}\ m\geq r+1. \end{cases}$$ Equivalently, if $r$ is odd, $(\alpha_m) = (p_1, 0, p_1^3, 0, \dots, 0,p_1^r,p_{r+1}, p_{r+2}, \dots)$, and if $r$ is even, $(\alpha_m) = (p_1, 0, p_1^3, 0, \dots, p_1^{r-1},0,p_{r+1}, p_{r+2}, \dots)$. Note that the special cases $r=0$ and $r=1$ give the “full" sequence of power sums ($\alpha_m = p_m, \forall m \geq 1 $). Then $g_r(x,t)$ is of course the familiar generating function of the elementary symmetric polynomials $e_k(x)$. In addition to the fact that $G_{k,r}(x)$ has positive coefficients, it may be interesting to note that $\pm G_{k,r}(x)$ is [*Schur convex*]{} in the sense of majorization theory [@MO]. (The $F_{k,r}(x)$ in our introduction also have such a property [@k2].) Specifically, for $r=0$ and all odd $r\geq 1$, $-G_{k,r}(x)$ is Schur convex for all $k$ (equivalently, $G_{k,r}(x)$ is [*Schur concave*]{}), and for all even $r\geq 2$, $G_{k,r}(x)$ is Schur convex for all $k$. As discussed above, for both $r=0,1$ the $G_{k,r}(x)$ are just the elementary symmetric polynomials, which are well-known examples of Schur concave functions. The standard proof in the latter case consists in verifying the Schur-Ostrowski criterion, which is what we will also do below to prove our assertion for all $r\geq 1$. In fact, we prove the following slightly stronger statement involving the generating function $g_r(x,t)$. Fix any integer $r\geq 1$ and indices $i \neq j$. 
Then $$\label{sch} (-1)^{r}\bigg(\frac{\partial g_r}{\partial x_i} - \frac{\partial g_r}{\partial x_j}\bigg)=(x_i -x_j)\gamma(x,t)$$ for some function $\gamma(x,t)$ (depending on $r,i,j$ ) having positive coefficients when expanded as a power series in all of its variables $x_1, \dots, x_n, t$. In particular, the coefficient of each $t^k$ in $\gamma(x,t)$ is a polynomial in $x_1, \dots, x_n$ having positive coefficients, and thus each $(-1)^{r}G_{k,r}(x)$ is Schur convex. To work out the left-hand side of (\[sch\]), we may let $i=1$ and $j=2$ by symmetry. Since $\log g_r = \sum_i \log(1+x_it) - \sum_i Q_r(x_it)+Q_r^-\big(\sum_i x_it\big)$, we obtain $$\frac{1}{g_r}\frac{\partial g_r}{\partial x_1} =\frac{t}{1+x_1t} - tQ_r'(x_1t) + t(Q_r^-)'\big(\sum_i x_it\big)$$ $$= t (-1)^r\frac{(x_1t)^{r}}{1+x_1t} + t(Q_r^-)'\big(\sum_i x_it\big)$$ where we have used the geometric series formula $\frac{1}{1+s}- Q_r'(s)= (-1)^r\frac{s^{r}}{1+s}$, as seen in the proof of Lemma 1. Similarly for $x_2$. Hence $$\frac{1}{g_r}\bigg(\frac{\partial g_r}{\partial x_1} - \frac{\partial g_r}{\partial x_2}\bigg) =t^{r+1} (-1)^r\bigg(\frac{x_1^r}{1+x_1t}-\frac{x_2^r}{1+x_2t}\bigg)$$ $$=t^{r+1} (-1)^r\frac{(x_1^r-x_2^r)+x_1x_2t(x_1^{r-1}-x_2^{r-1})}{(1+x_1t)(1+x_2t)}.$$ Multiplying this by $(-1)^rg_r/(x_1-x_2)$, it is now clear that $\gamma(x,t)$ has a power series with positive coefficients in all variables as claimed: The denominator $(1+x_1t)(1+x_2t)$ will be cancelled by the product $\prod_i(1+x_it)$ in (\[gr\]), each of the two terms of type $(x_1^m-x_2^m)/(x_1-x_2)$ simplifies to a sum with positive coefficients, and the remaining factor $\exp\bigg( - \sum_i Q_r(x_it)+Q_r^-\big(\sum_i x_it\big)\bigg)$ in $g_r$ is also a power series with positive coefficients in all variables (see (\[gr\])). **§4. The Polynomials $H_{k,r}(x)$.** In this variant of our topic, we modify the expression $t - \log(1 + t)$, discussed in §2, by simply replacing $t$ by $t^r$ where $r=2,3, \dots$. 
Fix any integer $r\geq 1$ and define $$\label{phi1} \phi_r(t) := t^r - \log(1 + t^r).$$ Then $\phi_r(t) > 0$ when $t>0$, $\phi_r(t) = \mathcal{O}(t^{2r})$ as $t\to 0^+$, and $\phi_r(t) = \mathcal{O}(t^r)$ as $t\to \infty$. The lemma follows from the case $r=1$ of Lemma 1 by substituting $t^r$ for $t$. It follows that $a^p$ can be represented by (\[r1\]) using $\psi=\phi_r$ whenever $p$ is in the interval $r<p<2r$, where the corresponding constant $C_p(\phi_r)$ is again positive. Hence, we deduce: Let $x,y \in [0,\infty)^n$ and fix an integer $r\geq 1$. If $$\label{lem4} \sum_{i=1}^n \phi_r(x_it) \geq \sum_{i=1}^n \phi_r(y_it)$$ for all $t>0$, then $||x||_p \geq ||y||_p$ for all $p$ in the interval $ r \leq p \leq 2r$. Next, assume the additional hypothesis $\sum_i x_i =\sum_i y_i$. Fixing the integer $r\geq 1$, we once again wish to obtain (\[lem4\]) as a consequence of some stronger family of symmetric polynomial inequalities, say of the form $H_{k,r}(x) \leq H_{k,r}(y)$ for some family $H$ having positive coefficients. By the idea already seen in §3, the following generating function $h_r(x,t)$ seems natural for this purpose: $$\label{hr} h_r(x,t) :=\bigg(\prod_{i}(1 + x_i^rt)\bigg) \exp\bigg(\bigg(\big(\sum_i x_i\big)^r- \sum_i x_i^r\bigg)t \bigg).$$ Note that here we have in effect replaced the $t^r$ by $t$, for the sake of simplicity of our generating function. As before, it is clear that $h_r(x,t)$ is entire in $t$ for every $x$. For any integers $r\geq 1, k\geq 1$, the polynomial $H_{k,r}(x)$ is the coefficient of $t^k$ in the power series expansion of $h_r(x,t)$. It is easily seen from the preceding discussion that the $H_{k,r}(x)$ are symmetric polynomials with positive coefficients, and have the following property: Let $x,y \in [0,\infty)^n$ and fix an integer $r \geq 1$. 
Suppose that $\sum_i x_i = \sum_i y_i$ and that for all positive integers $k$, $$H_{k,r}(x) \leq H_{k,r}(y).$$ Then $||x||_p \geq ||y||_p$ for all real $p$ in the interval $ r \leq p \leq 2r$. As in the previous section §3, for $r\geq 1$ we may express the generating function (\[hr\]) in terms of the power sums $p_m$ using formal power series: $$\label{hr2} h_r(x,t) = \exp\bigg( \sum_{m\geq 1} (-1)^{m-1}\alpha_mt^m/m \bigg)$$ where $(\alpha_m)_{m=1}^\infty$ is a modified sequence of power sums given by: $$\label{ah} \alpha_m = \begin{cases} p_1^r& ,\ \ {\rm for} \ m=1,\\ p_{mr}& ,\ \ {\rm for} \ m \geq 2. \end{cases}$$ Equivalently $(\alpha_m) = (p_1^r, p_{2r}, p_{3r}, p_{4r}, \dots)$. Also as in §3, the polynomials $H_{k,r}(x)$ turn out to have a Schur convexity property (Schur concavity, in fact), “inherited" from their generating function: Fix any integer $r\geq 1$ and indices $i \neq j$. Then $$\label{schh} \frac{\partial h_r}{\partial x_i} - \frac{\partial h_r}{\partial x_j}=-(x_i -x_j)\delta(x,t)$$ for some function $\delta(x,t)$ (depending on $r,i,j$ ) having positive coefficients when expanded as a power series in all of its variables $x_1, \dots, x_n, t$. In particular, the coefficient of each $t^k$ in $\delta(x,t)$ is a polynomial in $x_1, \dots, x_n$ having positive coefficients, and thus each $H_{k,r}(x)$ is Schur concave. As in the proof of Theorem 2, we may let $i=1$ and $j=2$ by symmetry. Since $\log h_r = \sum_i \log(1+x_i^rt) - \sum_i x_i^rt+\big(\sum_i x_i\big)^rt$, we obtain $$\frac{1}{h_r}\frac{\partial h_r}{\partial x_1} =\frac{rx_1^{r-1}t}{1+x_1^rt} - rx_1^{r-1}t + r\big(\sum_i x_i\big)^{r-1}t =\frac{-rx_1^{2r-1}t^2}{1+x_1^rt} + r\big(\sum_i x_i\big)^{r-1}t.$$ Similarly for $x_2$. 
Hence $$\frac{1}{h_r}\bigg(\frac{\partial h_r}{\partial x_1} - \frac{\partial h_r}{\partial x_2}\bigg) =-rt^{2}\bigg(\frac{x_1^{2r-1}}{1+x_1^rt}-\frac{x_2^{2r-1}}{1+x_2^rt}\bigg)$$ $$=-rt^{2}\frac{(x_1^{2r-1}-x_2^{2r-1})+x_1^rx_2^rt(x_1^{r-1}-x_2^{r-1})}{(1+x_1^rt)(1+x_2^rt)}.$$ Multiplying this by $h_r/(x_1-x_2)$, it is clear that $\delta(x,t)$ has a power series with positive coefficients in all variables as claimed, in view of (\[hr\]). **§5. Expressions in terms of power sums.** Our next aim is to express $G_{k,r}(x)$ and $H_{k,r}(x)$ in terms of the power sums $p_m(x) = \sum_{i=1}^n x_i^m$ where $m$ is a positive integer. This follows immediately from (\[gr2\]) and (\[hr2\]) by well-known formulas for the exponential of a power series and the (signed) cycle index polynomials $Z_k$, which we now recall for the reader’s convenience in the form of Lemma 5 below. Let $\alpha_1, \alpha_2, \dots $ be any formal commuting variables or “indeterminates". If $\lambda = (\lambda_j)= (\lambda_1 \leq \lambda_2 \leq \dots )$ is a partition of $k$ we define $\alpha_\lambda = \prod_j \alpha_{\lambda_j}$. Define the sign of $\lambda$ by ${\rm sgn}(\lambda) = \prod_j (-1)^{\lambda_j-1}$ ($ = \beta_{\lambda}$ for the special sequence $\beta_n = (-1)^{n-1}$). Let $n_m(\lambda) \geq 0$ denote the number of $j$’s such that $\lambda_j=m$, and let $\mathcal{P}_k$ denote the set of all partitions $\lambda$ of $k$. If $ \sigma $ is an element of $S_k$, the group of permutations of $\{1,\dots,k\}$, and if $(\lambda_j)=: \lambda(\sigma) $ are the lengths of the cycles in the disjoint cycle decomposition of $\sigma$, define $\alpha_\sigma = \alpha_{\lambda(\sigma)}$. Also, define $n_m(\sigma)=n_m(\lambda(\sigma))$, and ${\rm sgn}(\sigma) = {\rm sgn}(\lambda(\sigma))$, the usual sign of a permutation. [**\[“Exponential Formula"\]**]{} If $\alpha=(\alpha_m)_{m=1}^\infty$ is any sequence of indeterminates, then the following identity holds in the sense of formal power series in $t$.
$$\label{lem5} \exp\bigg( \sum_{m\geq 1} (-1)^{m-1}\alpha_mt^m/m \bigg) = 1+ \sum_{k\geq 1} Z_k(\alpha_1,\dots,\alpha_k)t^k,$$ where the polynomials $Z_k$ are given by: $$\label{lem5aa} Z_k(\alpha_1,\dots,\alpha_k) =\sum_{\lambda \in \mathcal{P}_k} {\rm sgn}(\lambda) \prod_{m=1}^k \frac{ (\alpha_{m}/m)^{n_m(\lambda)} }{n_m(\lambda)!}$$ $$\label{lem5a} =\frac{1}{k!} \sum_{\sigma \in S_k} {\rm sgn}(\sigma) \alpha_\sigma$$ $$\label{lem5b} = \frac{1}{k!}\left| \begin{array}{ccccccccc} \alpha_1 & 1 & 0 & 0 & \cdot & \cdot & 0\\ \alpha_2 & \alpha_1 & 2 & 0 & \cdot & \cdot & 0\\ \alpha_3 & \alpha_2 & \alpha_1 & 3 & \cdot & \cdot & 0\\ \alpha_4 & \alpha_3 & \alpha_2 & \alpha_1 & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & (k-1)\\ \alpha_k & \alpha_{k-1} & \cdot & \cdot & \cdot & \cdot & \alpha_1\\ \end{array} \right|.$$ A convenient reference for this result and further background is [@Stan1 Theorem 1.3.3]. The determinant (\[lem5b\]) is equivalent to a recursion (or “Newton identities") given in [@Stan2 Proposition 5.1.7]. (One can also recover (\[lem5a\]) and (\[lem5b\]) from [@Merris Eqn. (8.30) and (8.31)], which is the special case $\alpha_m=p_m , \forall m\geq 1$ and $Z_k = e_k , \forall k\geq 1$. This implies the case of arbitrary $\alpha_m$ by the algebraic independence of $p_1(x),\dots,p_k(x)$ for $n \geq k$.) Fix an integer $r\geq 0$. Then for all $k\geq 1$, $$\label{cor1} G_{k,r} = Z_k(\alpha_1,\dots,\alpha_k) =\frac{1}{k!} \sum_{\sigma \in S_k} {\rm sgn}(\sigma) \alpha_\sigma$$ where $(\alpha_m)_{m=1}^\infty$ is the “modified" sequence of power sums given by (\[ag\]). Let the subset $S(k,r) \subset S_k$ consist of those permutations $\sigma$ having no cycles of even length $\leq r$ (equivalently, $n_m(\sigma) = 0$ for all even $m \leq r$). 
Then (\[cor1\]) becomes $$\label{cor1a} G_{k,r} =\frac{1}{k!} \sum_{\sigma \in S(k,r)} {\rm sgn}(\sigma) p_1^{\sum_{j\leq r} jn_j(\sigma) } \prod_{j=r+1}^k p_j^{n_j(\sigma)} \ .$$ Fix an integer $r\geq 1$. Then for all $k\geq 1$, $$\label{cor2} H_{k,r} = Z_k(\alpha_1,\dots,\alpha_k) =\frac{1}{k!} \sum_{\sigma \in S_k} {\rm sgn}(\sigma) \alpha_\sigma$$ where $(\alpha_m)_{m=1}^\infty$ is the “modified" sequence of power sums given by (\[ah\]). Substituting (\[ah\]), this becomes $$\label{cor2a} H_{k,r} =\frac{1}{k!} \sum_{\sigma \in S_k} {\rm sgn}(\sigma) p_1^{rn_1(\sigma) } \prod_{j=2}^k p_{jr}^{n_j(\sigma)} \ .$$ The results (\[cor1a\]) and (\[cor2a\]) may also be expressed as follows. For each $\sigma \in S(k,r)$, let $\sigma_r$ denote the product of all cycles of length $\geq r+1$ in the disjoint cycle factorization of $\sigma$. By convention, an empty product is taken to mean the identity of the group $S_k$. (In terms of the mappings, $\sigma_r =\sigma$ on the $\sigma$-orbits of length $\geq r+1$, and $\sigma_r =$ the identity map on the $\sigma$-orbits of length $\leq r$.) Note that $ {\rm sgn}(\sigma_r)={\rm sgn}(\sigma)$ for $\sigma \in S(k,r)$. 
Then (\[cor1a\]) may be written as $$\label{cor1b} G_{k,r} =\frac{1}{k!}\sum_{\sigma \in S(k,r)}{\rm sgn}(\sigma_r)p_{\sigma_r} \ .$$ Regarding (\[cor2a\]), we may alternatively regard $H_{k,r}$ as the coefficient of $t^{(kr)}$ in the generating function $$h_r(x,t^r) = \exp \bigg( \sum_{j\geq 1} \alpha_j(-1)^{j-1}t^{jr}/j \bigg).$$ Applying the Exponential Formula (Lemma 5) to the latter and doing the arithmetic leads to $$\label{cor2b} H_{k,r} =\frac{(-1)^{k(r-1)}}{(kr)!}\sum_{\sigma \in T(k,r)} {\rm sgn}(\sigma) r^{L(\sigma)}p_{\sigma_r} \ ,$$ where $T(k,r) \subset S_{kr}$ consists of those permutations $\sigma$ of $\{1,\dots,kr\}$ whose disjoint cycles all have lengths divisible by $r$, $L(\sigma)$ denotes the number of disjoint cycles, and $\sigma_r$ is again the product of all cycles of length $> r$ in the disjoint cycle factorization of $\sigma$ (i.e. each $r$-cycle of $\sigma \in T(k,r)$ is replaced by $r$ $1$-cycles). [*Remark.*]{} Fix the integer $r \geq 1$. In the ring of symmetric functions (in infinitely many variables), we may define an algebraic homomorphism $\phi$ by arbitrarily defining the image of each power sum $p_m$, since these constitute a basis. Hence we may define a homomorphism by $\phi(p_m) = \alpha_m$ for all $m\geq 1$, where $\alpha_m$ is given by (\[ag\]). Then $\phi(e_k) = G_{k,r}$ for all $k \geq 1$, by Corollary 1. (Since $\phi(e_k)= \phi(Z_k(p_1,\dots,p_k)) =Z_k(\phi(p_1),\dots,\phi(p_k)) =Z_k(\alpha_1,\dots,\alpha_k) = G_{k,r}$.) Similarly, by Corollary 2, the $H_{k,r}$ are the images of the $e_k$ under the homomorphism $\psi$ defined by $p_m \mapsto \alpha_m$ where now $\alpha_m$ means (\[ah\]). We digress to mention the following question regarding $\psi$. Let $\mathcal{S}$ denote the set of all linear combinations, with positive real coefficients, of all Schur functions (see [@Sagan §4.4]). Is $\psi(\mathcal{S}) \subset \mathcal{S}$ ? 
(In particular, is each $H_{k,r}$ a linear combination, with positive real coefficients, of Schur functions ?) **§6. Expressions in terms of matrix entries.** Let $A=[a_{ij}]$ be a complex $n \times n$ matrix, and let $(x_1,\dots, x_n) = x$ be its eigenvalues. In this section we briefly consider the question of how to express $G_{k,r}(x)$ and $H_{k,r}(x)$ as polynomials in the entries $a_{ij}$ of $A$. This is possible of course for [*any*]{} symmetric polynomial $F(x)$, since by a fundamental result $F(x)$ may first be written as a polynomial in the power sums $p_k(x)$ (or, if we prefer, in the elementary symmetric polynomials $e_k(x)$), and these in turn have well-known polynomial expressions in terms of the entries $a_{ij}$; $ p_k(x) = {\rm Trace}(A^k) \ :=: \ <A^k>$ and $e_k(x)=$ sum of all principal $k \times k$ subdeterminants of $A$. We can therefore already give one answer quite explicitly using the polynomials $Z_k(\alpha)$ from Corollaries 1 or 2 of the previous section, replacing each occurrence of a $p_m$ by the trace $<A^m>$. For example we thus obtain: $$G_{5,3}(x) = Z_5(p_1,0,p_1^3,p_4,p_5) = Z_5(<A>,0,<A>^3,<A^4>,<A^5>)$$ $\displaystyle{=\frac{1}{5!} \sum_{\sigma \in S(5,3)} {\rm sgn}(\sigma) <A>^{n_1(\sigma)+3n_3(\sigma)}<A^4>^{n_4(\sigma)}<A^5>^{n_5(\sigma)} } $ $\displaystyle{=\frac{7}{40}<A>^5 - \frac{1}{4}<A><A^4> + \frac{1}{5}<A^5> .} $ However, we will now present another kind of expression which uses more explicitly the individual monomials (products) of the entries $a_{ij}$ of $A$. For example, $G_{k,1}(x) = e_k(x)$ is on the one hand given by $Z_k(<A>,<A^2>,\dots, <A^k>)$, but on the other hand $e_k(x)$ is also equal to the sum of all principal $k \times k$ subdeterminants of $A$, which can immediately be written in terms of products of $a_{ij}$ using the familiar expansion of a determinant as a sum over permutations. 
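The rational coefficients in this $G_{5,3}$ expression can be double-checked by brute force, evaluating $Z_5$ as the signed cycle-type sum over $S_5$ with $\alpha_1=p_1$, $\alpha_2=0$, $\alpha_3=p_1^3$, $\alpha_4=p_4$, $\alpha_5=p_5$. A sketch in exact arithmetic:

```python
import itertools
from fractions import Fraction
from collections import Counter

def cycle_counts(perm):
    """n_j(sigma): number of j-cycles of a permutation given by its images of 0..k-1."""
    k, seen = len(perm), set()
    counts = Counter()
    for i in range(k):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                L += 1
            counts[L] += 1
    return counts

def sign(counts):
    # sgn(sigma) = (-1)^(k - number of cycles)
    k = sum(j * c for j, c in counts.items())
    return (-1) ** (k - sum(counts.values()))

# Z_5 evaluated at (p1, 0, p1^3, p4, p5): collect the coefficients of the
# monomials p1^5, p1*p4 and p5 appearing in G_{5,3}.
coeff = Counter()
for perm in itertools.permutations(range(5)):
    c = cycle_counts(perm)
    if c[2] > 0:          # alpha_2 = 0 kills every term with a 2-cycle
        continue
    # alpha_1 = p1 and alpha_3 = p1^3 both contribute powers of p1
    mono = ('p1', c[1] + 3 * c[3], 'p4', c[4], 'p5', c[5])
    coeff[mono] += Fraction(sign(c), 120)

assert coeff[('p1', 5, 'p4', 0, 'p5', 0)] == Fraction(7, 40)
assert coeff[('p1', 1, 'p4', 1, 'p5', 0)] == Fraction(-1, 4)
assert coeff[('p1', 0, 'p4', 0, 'p5', 1)] == Fraction(1, 5)
```

The three assertions recover exactly the coefficients $\frac{7}{40}$, $-\frac14$ and $\frac15$ displayed above.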
We thus aim to generalize this latter type of expansion for any $G_{k,r}$ or $H_{k,r}$, and will do so essentially by imitating what happens in the special case of $e_k =G_{k,1}$. We need some of the machinery of representation theory. Following the sketch given in [@Stan2 Ch. 7, Appendix 2, pp. 444-445], or the more detailed account [@Merris Ch. 6], identify $A$ with the linear map $A:V\to V$ of the vector space $V={\mathbb C}^n$ (of column vectors) as usual, and consider the $k$-fold tensor product $V^{\otimes k} = V\otimes \dots \otimes V$. There is an action of $A$ which we denote by $A^{\otimes k}$ (the “Kronecker power") from $V^{\otimes k}$ to itself, characterized by $A^{\otimes k}(v_1 \otimes \dots \otimes v_k) = (Av_1) \otimes \dots \otimes (Av_k)$ for all elements of the form $v_1 \otimes \dots \otimes v_k$. Also, there is an action for any $\sigma \in S_k$, characterized by $\sigma(v_1 \otimes \dots \otimes v_k) = v_{\sigma^{-1}(1)} \otimes \dots \otimes v_{\sigma^{-1}(k)}$. The two actions commute; $\sigma A^{\otimes k} = A^{\otimes k}\sigma$ on $V^{\otimes k}$. The trace of $A^{\otimes k}\sigma$ on $V^{\otimes k}$ depends only on the conjugacy class of $\sigma$ and eigenvalues $x$ of $A$: $$\label{tr1} {\rm Trace} (A^{\otimes k}\sigma) = p_\sigma(x) = \prod_j p_{\lambda_j(\sigma)}(x),$$ where the $p_m$ are the power sum polynomials as before. (This can be seen by using a basis of $V$ which makes $A$ triangular.) By linearity, for any sequence of permutations $\sigma_s \in S_k$ and constants $c_s \in {\mathbb C}$ on some finite index set $S$ we have $$\label{tr2} {\rm Trace} \bigg(\sum_{s\in S} c_sA^{\otimes k}\sigma_s \bigg) = \sum_{s \in S}c_s p_{\sigma_s}(x).$$ On the other hand, we may also compute a trace on $V^{\otimes k}$ using the basis $\{u_{m(1)} \otimes \dots \otimes u_{m(k)} \}$ where $\{u_j\}_{j=1}^n$ is the standard basis of $V={\mathbb C}^n$, and $m \in \Gamma_{k,n} =$ set of all functions $m:\{1,\dots,k\} \to \{1,\dots,n\}$. 
This will yield the above trace (\[tr2\]) directly in terms of products of entries $a_{ij}$ of $A$. This computation is well-known, but we will now reproduce it here for convenience. We will essentially follow [@Merris Ch. 6 and 7]. For a given pair of sequences $(c_s, \sigma_s)_{s\in S} =:C$ as above, define a “matrix function" $d_C$ on complex $k \times k$ matrices $B=[B_{i,j}]$ by $$\label{dc} d_C(B) = \sum_{s\in S}c_s\prod_{j=1}^k B_{\sigma_s(j),j}$$ Now consider a fixed $m \in \Gamma_{k,n}$. Recalling that $u_j$ is the $j$th standard basis column vector in ${\mathbb C}^n$, denote $$u(m) := u_{m(1)}\otimes u_{m(2)}\otimes \dots \otimes u_{m(k)}.$$ Denote the entries of $A$ by $A(i,j):=a_{i,j}$ for the sake of better legibility. Let $\langle v, w_j\rangle$ denote the coefficient of $w_j$ in $v$ whenever $v$ is an element of a vector space of which $\{w_j\}$ is a basis. Then for each $\sigma \in S_k$ we find that $$\bigg< A^{\otimes k}\sigma \bigg(u(m)\bigg)\ , \ u(m) \bigg> =\prod_{j=1}^k A\bigg(m(\sigma(j)), m(j)\bigg) =:\prod_{j=1}^k A[m|m]_{\sigma(j),j}$$ where $A[m|m]$ denotes the $k \times k$ matrix whose $(i,j)$ entry is $A(m(i), m(j))$. We remark that $A[m|m]$ is a (principal) submatrix of $A$ only when the function $m$ is strictly increasing, otherwise it may be thought of as a “submatrix" which allows repetition and permutation of the original indices. Hence, for each fixed $m \in \Gamma_{k,n} $, $$\sum_{s\in S}c_s \bigg< A^{\otimes k}\sigma_s \bigg(u(m)\bigg)\ , \ u(m) \bigg> =d_C(A[m|m]).$$ Summing over all $m$ gives $${\rm Trace} \bigg(\sum_{s\in S} c_sA^{\otimes k}\sigma_s \bigg) = \sum_{m\in \Gamma_{k,n} } d_C(A[m|m]).$$ Hence by (\[tr2\]), $$\label{tr3} \sum_{s\in S}c_s p_{\sigma_s}(x)=\sum_{m\in \Gamma_{k,n} } d_C(A[m|m]) .$$ We call $C = (c_s,\sigma_s)_{s\in S}$ a [*class function*]{} on $S_k$ if the index set $S=S_k$, $\sigma_s = s$, and $c_s$ depends only on the conjugacy class of $s$.
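Identity (\[tr3\]) can be checked numerically. For a single permutation ($S=\{1\}$, $c_1=1$) it reads $p_\sigma(x) = \sum_{m\in \Gamma_{k,n}} \prod_{j=1}^k A[m|m]_{\sigma(j),j}$, and with the signed class function $c_\sigma = {\rm sgn}(\sigma)/k!$ it gives $e_k(x) = \frac{1}{k!}\sum_m \det(A[m|m])$, non-injective $m$ contributing zero determinants. A sketch:

```python
import itertools
import math
import numpy as np

def cycle_lengths(perm):
    """Cycle lengths of a permutation given by its images of 0..k-1."""
    seen, lens = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                L += 1
            lens.append(L)
    return lens

rng = np.random.default_rng(0)
n, k = 4, 3
A = rng.standard_normal((n, n))
x = np.linalg.eigvals(A)

# (tr3) for a single permutation: p_sigma(x) = sum_m prod_j A[m|m]_{sigma(j),j}
for perm in itertools.permutations(range(k)):
    rhs = sum(
        np.prod([A[m[perm[j]], m[j]] for j in range(k)])
        for m in itertools.product(range(n), repeat=k)
    )
    p_sigma = np.prod([np.sum(x ** L) for L in cycle_lengths(perm)])
    assert np.isclose(p_sigma, rhs)

# (tr3) with the signed class function: e_k(x) = (1/k!) sum_m det(A[m|m])
total = sum(
    np.linalg.det(A[np.ix_(m, m)])
    for m in itertools.product(range(n), repeat=k)
)
e_k = sum(np.prod(c) for c in itertools.combinations(x, k))
assert np.isclose(total / math.factorial(k), e_k)
```

The second check also confirms the earlier remark that $e_k(x)$ is the sum of the principal $k\times k$ subdeterminants, since repeated or permuted index choices contribute nothing new.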
When $C$ is a class function, the latter sum may be simplified further by noting that $d_C(A[m|m])=d_C(A[m'|m'])$ whenever $m$ and $m'$ are permutations of each other, i.e. when $m' = m\circ \tau$ for some $\tau \in S_k$. Let $(N_1,\dots, N_n) =:N$ denote an $n$-tuple of integers $N_i \geq 0$ with $\sum_i N_i = k$, and let $m^{(N)} \in \Gamma_{k,n}$ denote the unique nondecreasing function which takes on each value $i$ exactly $N_i$ times. Then there are exactly $k!/(N_1! \dots N_n!)$ distinct permutations of $m^{(N)}$ in $\Gamma_{k,n}$. Hence, whenever a matrix function $d_\gamma$ can be shown to be of the form $d_\gamma = d_C$ for some class function $C$, then we have $$\sum_{m\in \Gamma_{k,n} } d_\gamma(A[m|m]) =\sum_{\sum_i N_i = k } d_\gamma(A[m^{(N)}|m^{(N)}])\frac{k!}{N_1! \dots N_n!}.$$ For our cases of interest, it remains to apply (\[tr3\]) to the results (\[cor1b\]) or (\[cor2b\]). In the first case, one obtains $$\label{gA} G_{k,r}(x) = \frac{1}{k!}\sum_{m\in \Gamma_{k,n} } \delta_r(A[m|m]) =\sum_{\sum_i N_i = k } \delta_r(A[m^{(N)}|m^{(N)}])\frac{1}{N_1! \dots N_n!},$$ where $\delta_r(B)$ is a “modified determinant" defined for any $k \times k$ matrix $B$ by $$\label{dr} \delta_r(B) =\sum_{\sigma \in S(k,r)} {\rm sgn}(\sigma_r)\prod_{j=1}^k B_{\sigma_r(j),j} \ ,$$ which is easily seen to be of the form $d_C(B)$ for some class function $C$ on $S_k$. Similarly, from (\[cor2b\]) one obtains $$\label{hA} H_{k,r}(x) = \frac{1}{(kr)!}\sum_{m\in \Gamma_{kr,n} } D_r(A[m|m]) =\sum_{\sum_i N_i = kr } D_r(A[m^{(N)}|m^{(N)}])\frac{1}{N_1! \dots N_n!},$$ where $D_r(B)$ is defined for any $kr \times kr$ matrix $B$ as: $$\label{Dr} D_r(B) =(-1)^{k(r-1)}\sum_{\sigma \in T(k,r)} {\rm sgn}(\sigma)r^{L(\sigma)}\prod_{j=1}^{kr} B_{\sigma_r(j),j} \ ,$$ which can be checked to be $d_C(B)$ for some class function $C$ on $S_{kr}$. [MM]{} Translated from the Russian by A. Feinstein. Transl. of Math. Monographs, Vol. 18, American Math. Soc., Providence, R.I., 1969. 
(English) Algebra i Analiz 13 (2001), no. 1, pp. 39-59; translation in St. Petersburg Math. J. 13 (2002), no. 1, 27-40. Houston J. Math. Vol. 37, (2011), no. 1, pp. 285-295. Academic Press, New York-London, 1979. Algebra, Logic and Applications, 8. Gordon and Breach Science Publishers, Amsterdam, 1997. 2nd ed. Graduate Texts in Mathematics, 203. Springer-Verlag, New York, 2001. Vol. 1, 2nd ed. Cambridge Studies in Advanced Mathematics, 49. Cambridge University Press, Cambridge, 2012. Vol. 2, Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge, 1999.
---
abstract: '[ In this paper we study the asymptotic properties of Bayesian multiple testing procedures for a large class of Gaussian scale mixture priors. We study two types of multiple testing risks: a Bayesian risk proposed in [@MR2850212], where the data are assumed to come from a mixture of normals, and a frequentist risk similar to the one proposed by [@arias-castro2017]. Following the work of [@vanderpas2016], we give general conditions on the prior such that both risks can be bounded. For the Bayesian risk, the bound is almost sharp. This result shows that, under these conditions, the considered class of continuous priors can be competitive with the usual two-group model (e.g. spike and slab priors). We also show that if the non-zero components of the parameter are large enough, the minimax risk can be made asymptotically null. The separation rates obtained are consistent with the ones that could be guessed from the existing literature [see @vanderpas2017UQHS]. For both problems, we then give conditions under which an adaptive version of the result can be obtained. ]{}'
address: 'Université Paris-Est, Laboratoire d’Analyse et de Mathématiques Appliquées (UMR 8050) UPEM, UPEC, CNRS, F-94010, Créteil, France\'
author:
-
bibliography:
- 'sparseequidae.bib'
title: Risk quantification for the thresholding rule for multiple testing using Gaussian scale mixtures
---

Introduction
============

Multiple testing has become a topic of particular interest over the past decades. High-dimensional models are now quite common, for instance in genomics, bioinformatics, or even finance. In all these applications, it is now well known that multiple testing has to be accounted for [see for instance @MR1946571; @efron2002empirical; @MR2065197; @MR2109489; @MR2235051]. For high-dimensional models, [@MR2980074] and [@rossell2015non] proposed methods based on non-local priors.
Recently, the multiple testing problem has been particularly well studied when the number of true positives is supposed to be small with respect to the total number of tests. This problem is closely related to estimation under a sparsity assumption, which is now a well-known problem. Frequentist approaches such as the LASSO [@Tibshirani1996] are now well understood, and several Bayesian approaches have been developed. In particular, [@Castillo2012; @Castillo2015] studied in detail the spike and slab prior and obtained the minimax contraction rate for the posterior, together with some desirable properties for recovering the true model. Another celebrated class of priors is the so-called *one-class* priors (or one-group priors). Among others, the Horseshoe prior has been studied in [@Carvalho2010] and [@vanderPas2014], [@Ghosh2015] studied a generalization of the Horseshoe-type priors, and [@Rockova2015] proposed a continuous version of the spike and slab prior. Recently [@vanderpas2016] proposed some general conditions on the one-group prior to obtain the minimax posterior concentration rate. However these results are focused on estimation of the parameter rather than testing. We consider the idealistic Gaussian sequence model $$X_i = \theta_i + \epsilon_i , ~ i= 1, \dots,n ,\label{eq:model}$$ where the $\epsilon_i$ are independent and identically distributed $\mathcal{N}(0,v^2)$. In the sequel we shall fix $v^2$ to be $1$ for simplicity. We then consider a Gaussian scale mixture prior, where each coefficient $\theta_i$ receives a prior $$\theta_i | \sigma_i^2 \sim \mathcal{N}(0,\sigma_i^2), ~ \sigma_i^2 \sim \pi(\sigma^2_i), \label{eq:prior}$$ where $\pi$ is a density on the positive reals. Here the random parameter $\sigma_i^2$ accounts for the overall sparsity of the parameter while remaining flexible enough to detect large signals.
This class of priors is general enough to encompass most of the one-group priors introduced in the literature so far, and all those stated above in particular. The theoretical properties of these priors are now well known for estimation, and they are widely used in practice. For these priors we can easily derive the posterior distribution of the $\theta_i$ and $\sigma_i^2$: $$\begin{aligned} \pi(\sigma_i^2|X_i) \propto (1+\sigma_i^2)^{-1/2} e^{\frac{X_i^2}{2} \frac{\sigma_i^2}{1+\sigma_i^2}}\pi(\sigma_i^2), \\ \theta_i|\sigma_i^2,X_i \sim \mathcal{N}\left(X_i\frac{\sigma_i^2}{1+\sigma_i^2}, \frac{\sigma_i^2}{1+\sigma_i^2}\right).\end{aligned}$$ The parameter $\kappa_i = \frac{\sigma_i^2}{1+\sigma_i^2}$ is often called a shrinkage parameter and plays a crucial role in both estimation and testing for these models. In this paper we are interested in the problem of testing simultaneously multiple hypotheses on the parameter $\theta$. More precisely, we consider the following sequence of tests $$H_{0,i} : \theta_i = 0 \text{ versus } H_{1,i}: \theta_i \neq 0,~ i=1,\dots, n.$$ Denote by $S_0$ the unknown support of true *signals*, or true rejections, i.e. $S_0 = \{ i, \theta_i \neq 0 \}$, and by $S_0^c = \{i, \theta_i = 0\}$ the set of true nulls. Consider a sequence of tests $\Xi = (\xi_1, \dots \xi_n)$. A test $\xi_i$ for hypothesis $H_{0,i}$ versus $H_{1,i}$ is a statistic of the observed data ${\mathbf{X}^n}=(X_1, \dots, X_n)$ such that $\xi_i = 0$ if $H_{0,i}$ is accepted and $\xi_i = 1$ otherwise. A multiple testing procedure returns the subset of $\{1, \dots, n\}$ of null hypotheses that are rejected, representing the support of the *signals*. This is equivalent to returning the support of the vector $\Xi.$ A great advantage of the spike and slab approach for sparse models is that it produces exact zeros under the posterior, so one can easily derive inclusion probabilities and perform multiple testing.
The one-group prior models considered here, being continuous, do not share this property. Nonetheless, [@Polson2010], by analogy with the spike and slab approach, propose to call *signals* those parameters $\theta_i$ for which ${\mathbb{E}}(\kappa_i|X_i)\geq 1/2$. The authors investigated this thresholding rule for the Horseshoe prior and observed empirically that it behaved similarly to the spike and slab inclusion probability. Theoretical properties of this approach to multiple testing were later studied in [@Ghosh2015] for some specific priors and an additive loss function derived from [@MR2850212]. In the latter paper the authors derived the optimal risk for this additive loss. Here we will study optimality of such a multiple testing rule when only a small number $p_n \ll n$ of the parameters are true signals, for different risks. In the same spirit as [@vanderpas2016], which gave conditions on the prior to bound the posterior contraction rate and the convergence rate of a Bayesian estimator, in this paper we give conditions on the prior distribution under which we can control the type I and type II errors of each individual test. This allows us to give upper bounds on the Bayes risk as studied in [@Ghosh2015].
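The thresholding statistic ${\mathbb{E}}(\kappa_i|X_i)$ is straightforward to evaluate by one-dimensional quadrature. A sketch for the horseshoe prior, i.e. $\sigma_i \sim \text{half-Cauchy}(0,1)$ so that $u=\sigma_i^2$ has mixing density $1/(\pi\sqrt{u}(1+u))$; the substitution $u=t^2$ removes the integrable singularity at the origin:

```python
import numpy as np
from scipy.integrate import quad

def horseshoe_shrinkage(x):
    """E(kappa | X = x) under the horseshoe prior, after substituting u = t^2
    in the ratio of integrals defining the posterior mean of kappa."""
    g = lambda t: np.exp(0.5 * x * x * t * t / (1.0 + t * t))
    num, _ = quad(lambda t: t * t * (1.0 + t * t) ** -2.5 * g(t), 0, np.inf)
    den, _ = quad(lambda t: (1.0 + t * t) ** -1.5 * g(t), 0, np.inf)
    return num / den

for x in (0.5, 2.0, 6.0):
    print(f"x = {x:3.1f}   E(kappa|x) = {horseshoe_shrinkage(x):.3f}")
```

Small observations are shrunk (a weight close to the no-signal value $1/3$ at $x=0$), while large ones give a weight near one, so the rule ${\mathbb{E}}(\kappa_i|X_i)\geq 1/2$ declares them signals.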
The additive Bayes risk studied in [@MR2850212] and [@Ghosh2015] is a Bayesian integrated risk for a two-group prior $$\mu: \theta \sim (1-p)\delta_{0} + p \mathcal{N}(0,\psi^2),$$ which in turn gives the following marginal distribution for the data $$X_i \sim (1-p)\mathcal{N}(0,1) + p\mathcal{N}(0,\psi^2).$$ For a sequence of tests $\Xi = (\xi_1, \dots, \xi_n)$, [@MR2850212], following ideas that go back to [@MR0084952], propose the overall additive risk $$R^\mu(\Xi) = \sum_{i=1}^{n} (1-p){\mathbb{P}}_{\mathcal{N}(0,1)}(\xi_i = 1) + p{\mathbb{P}}_{\mathcal{N}(0,\psi^2)}(\xi_i = 0).$$ Similarly to the proposal of [@Carvalho2010] to take ${\mathbb{E}}(\kappa_i|X_i)$ as a proxy for the inclusion probability, we will consider tests of the form $\xi_i = {\textbf{1}}\{m_{X_i} > \alpha\}$ where $m_x = {\mathbb{E}}(\kappa_i|X=x)$ and $\alpha$ is some fixed threshold. Thus taking $\alpha=1/2$ results in the same procedure as the one studied in [@Carvalho2010] and [@Ghosh2015]. We also consider a minimax approach to the multiple testing problem. Minimax theory for testing was introduced by [@Ingster93]; however, the setting considered in that work does not seem suited here. We focus on another optimality criterion based on the separation rate of the test, that is, the rate at which parameters in the alternative can approach the null hypothesis and still be detected by the test. In single hypothesis testing, separation rates have been studied in [@baraud2002]. In a Bayesian setting, [@salomondTest] proposed a general way of constructing tests and controlling their separation rate. Recently [@fromont2016] proposed a version of the separation rate for the multiple testing setting. In a similar spirit, [@arias-castro2017] studied a multiple testing risk based on the False Discovery Rate and False Nondiscovery Rate. The False Discovery Rate (FDR) is a measure of the type I error in a multiple testing setting.
Let $\Xi = (\xi_1, \dots, \xi_n)$ be a sequence of tests and let $S_0$ be the set of *true* alternative hypotheses; the False Discovery Rate is defined as $$FDR_n(\Xi) = {\mathbb{E}}\left( \frac{\sum_{i\in S_0^c} \xi_i }{\sum_{i=1}^n \xi_i \vee 1 } \right).$$ Since the seminal work [@benjamini1995controlling] there has been an abundant literature on how to control the FDR for different types of models. However only a few results are concerned with the control of the power of such procedures. Following [@arias-castro2017] we study the False Nondiscovery Rate (FNR), which is the expectation of the proportion of false negative tests, defined as $$FNR_n(\Xi) = {\mathbb{E}}\left( \frac{\sum_{i\in S_0}(1- \xi_i) }{p_n} \right).$$ We consider the same risk as in [@arias-castro2017] or [@Rabinovich17] $$R^{sup}_n(\Xi) = \{FDR_n(\Xi) + FNR_n(\Xi)\}. \label{eq:intro:risk:sup}$$ In particular we will aim at bounding the supremum of this risk over a set of parameters $$\sup_{\theta, \theta \in T_n^c } R_n^{sup} (\Xi),$$ where $T_n$ is some neighborhood of $0$. Intuitively, $T_n$ is the set of parameters that cannot be detected by the test. We choose $T_n$ of the form $T_n:= \{ \theta, d_n(\theta,0)\leq \rho_n \}$, and thus $\rho_n$ is the detection boundary. The smaller $\rho_n$, the sharper the test. In this paper, we will show that under some conditions on the prior, the considered test has asymptotically null risk if $\rho_n \gtrsim \sqrt{2\log(n/p_n)}$ for $d_n$ the $l_\infty$ distance. It is well known that for our model, parameters below $\sqrt{2\log(n)}$ will be shrunk toward $0$, implying a substantial bias for estimation. However, it seems that parameters between $\sqrt{2\log(n/p_n)}$ and $\sqrt{2\log(n)}$ can still be detected by the test.
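A small Monte Carlo sketch of the risk $FDR_n + FNR_n$, using a hard threshold at $\sqrt{2\log n}$ as a crude stand-in for the posterior thresholding rule (the threshold and signal strength below are illustrative choices, not the procedure analyzed in this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_n, reps = 10_000, 50, 200
thresh = np.sqrt(2 * np.log(n))
signal = thresh + 2.0          # signals comfortably above the boundary

risk = 0.0
for _ in range(reps):
    theta = np.zeros(n)
    theta[:p_n] = signal        # S_0 = first p_n coordinates
    X = theta + rng.standard_normal(n)
    xi = np.abs(X) > thresh
    fdp = xi[p_n:].sum() / max(xi.sum(), 1)   # false discovery proportion
    fnp = (~xi[:p_n]).sum() / p_n             # false nondiscovery proportion
    risk += (fdp + fnp) / reps
print(f"estimated FDR + FNR: {risk:.3f}")
```

With signals this far above the boundary both terms are small; moving the signal strength down toward $\sqrt{2\log(n/p_n)}$ inflates the FNR, which is the regime the separation-rate results quantify.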
We also study empirical Bayes approaches to the testing problem, to adapt to the more realistic case where the number of signals $p_n$ is unknown, and give some conditions on the estimator $\hat{p}$ of $p_n$ such that both risks can be bounded. Other types of adaptive procedures have been proposed in the literature, such as empirical Bayes based on the marginal maximum likelihood estimator or fully Bayes procedures for the Horseshoe prior [see @vanderpas2017]. However, these procedures are difficult to adapt to the more general setup we consider here, and will thus not be treated. The paper is organized as follows: in Section \[sec:main:result\] we state the main results, providing conditions on the prior such that we can bound the type I and type II errors of each individual test. We then give upper bounds for the Bayes risk and an upper bound on the separation rate for the multiple testing risk. We give the adaptive version of the main results in Section \[sec:adaptive\]. Section \[sec:proofs\] is devoted to the proofs.

#### Notations

Throughout the rest of the paper we will denote by $a\wedge b$ and $a \vee b$ respectively the minimum and the maximum of two real numbers $a$ and $b$. The probability density function of a standard normal will be denoted $\phi$ and its cumulative distribution function $\Phi$. We will write $a\lesssim b$ if there exists an absolute constant $C$ such that $a \leq C b$, and $a \asymp b$ if $a\lesssim b $ and $b\lesssim a$. We will denote $\tau_n(p) = p/n$ and $\nu_n(p) = \sqrt{\log(n/p)}$.

Prior conditions {#sec:main:result}
================

Because scale mixture priors are continuous, we do not have access to inclusion probabilities to perform multiple testing. Because the $\theta_i$ are a posteriori independent, we can use the coordinatewise posterior.
In particular, we get that $$\begin{aligned} {\mathbb{E}}(\kappa_i|X_i = x) &= { \int_0^1 z (1-z)^{-3/2} e^{\frac{x^2}{2}z} \pi\left(\frac{z}{1-z}\right) dz \over \int_0^1 (1-z)^{-3/2} e^{\frac{x^2}{2}z} \pi\left(\frac{z}{1-z}\right) dz } \nonumber \\ &= { \int_0^\infty u(1+u)^{-3/2} e^{\frac{x^2}{2} \frac{u}{1+u}} \pi(u) du \over \int_0^\infty (1+u)^{-1/2} e^{\frac{x^2}{2} \frac{u}{1+u}} \pi(u) du}. \label{eq:Ekappa}\end{aligned}$$ From Tweedie’s formula [@Robbins1956], the posterior mean of $\theta_i$ given an observation $x_i$ is equal to $x_i + \frac{d}{dx}\log p(x_i)$, where $p$ is the marginal density of $x_i$. This in turn gives that the posterior mean of $\theta_i$ given $X_i$ is $\widehat{\theta}_i = {\mathbb{E}}(\kappa_i|X_i) X_i$. An advantage of scale mixtures over spike-and-slab types of priors is that the integrals in can be computed via integral approximation methods [see @vanderPas2014; @Carvalho2010, in the context of the horseshoe prior]. This is also a great advantage for multiple testing, as ${\mathbb{E}}(\kappa_i|X_i)$ plays a central role in the approach proposed in [@Carvalho2010]. In particular, because the parameters are a posteriori independent, we will be able to bound each individual type I and type II error. Our main results are conditions on the prior under which we can guarantee upper bounds on multiple testing risks. The conditions are similar to the ones proposed in [@vanderpas2016], under which the authors prove upper bounds on the contraction rates of the order of the minimax rate. We first state the conditions. Condition \[cond1\] is required to bound the type II error of each individual test. The remaining two are used to bound each individual type I error. The first condition involves a class of functions uniformly regularly varying at infinity, defined as follows.
A function $L$ is said to be uniformly regularly varying at infinity if there exist $R, u_0 > 0$ such that $$\forall a \in [1,2],~ \forall u\geq u_0, ~ \frac{1}{R} \leq \frac{L(au)}{L(u)} \leq R.$$ In particular, polynomial functions $L(u) = u^b$ and powers of logarithms $L(u) = \log^b(u)$ with $b\in {\mathbb{R}}$ are uniformly regularly varying at infinity. For more details on uniformly regularly varying functions, see [@vanderpas2016]. The only difference with their definition is that here we allow $u_0$ to be less than 1. We now present the first condition. \[cond1\] For some $b\geq 0,$ we can write $u\mapsto \pi(u) = L_n(u) e^{-bu},$ where $L_n$ is a uniformly regularly varying function at infinity for some $R, u_0> 0$ which do not depend on $n.$ Suppose further that there are constants [$C', b'>0,$]{} $K\geq 0,$ and $u_* \geq 1,$ such that $$\begin{aligned} C' \pi(u) \geq \tau_n(p)^{K} e^{-b'u} \quad \text{for all} \ u\geq u_*. \label{eq.assump_on_lb_Ln}\end{aligned}$$ Condition \[cond1\] is a sufficient condition on the tails of the prior to bound the type II error of each individual test. This condition states that the tails of $\pi$ should decay at most exponentially fast, up to a uniformly regularly varying function. This is consistent with the conditions on the slab distribution in [@Castillo2012]. The following two conditions ensure that the prior penalizes small signals enough, which will be necessary to bound the individual type I errors. Condition \[cond2\] requires that $\pi$ puts enough mass on a neighborhood of $0$, while Condition \[cond3\] describes the decay of $\pi$ away from $0$. \[cond2\] Suppose that there is a constant $c>0$ such that $\int_0^1 \pi(u) du \geq c.$ Define $s_n$ as $$\begin{aligned} s_n:= \tau_n(p) \nu_n(p)^2, \label{eq.sn_def}\end{aligned}$$ then, to get bounds on the different risks, we ask that $\pi$ satisfy the following condition.
\[cond3\] Assume that there is a constant $C,$ such that $$\begin{aligned} \int_{s_n}^\infty \Big( u \wedge \frac{\nu_n(p)^3}{\sqrt{u}} \Big) \pi(u) du + \nu_n(p) \int_1^{\nu_n(p)^2} \frac{\pi(u)}{\sqrt{u}} du \leq C s_n.\end{aligned}$$ We let Condition \[cond3\] depend on the constant $C$, as it will appear in the bounds on the different risks. In particular, it will appear that by tuning $C$ appropriately, or by taking $C = C_n \to 0$ with $n$, we can obtain almost sharp bounds, especially for the Bayes risk.

Deterministic $p$
=================

In this section we will assume that $p_n$ is known. Following the results of [@MR2850212], we assume that the prior $\mu$ considered for the Bayes risk $R^\mu$ satisfies the following assumption. The distribution $\mu$ is of the form $$\mu = (1-\frac{p_n}{n}) \mathcal{N}(0,1) + \frac{p_n}{n}\mathcal{N}(0,\psi_n),$$ \[ass:mu\] with ${p_n}/{n} = o(1)$ and $\psi_n = C_\psi^{-1} \log(n/p_n)(1+o(1))$. Under this assumption, [@MR2850212] showed that the Bayes Oracle $\Xi^*$ has the asymptotic risk $$R^\mu(\Xi^*) = p_n(2\Phi(\sqrt{C_\psi}) - 1)(1 + o(1)). \label{eq:RiskOracle}$$ In their work, [@Ghosh2015] proved that for a certain class of priors the optimal risk can be attained. The following theorem gives an upper bound on the risk that depends on the constants in Conditions \[cond1\]-\[cond3\]. \[thm:BayesRisk\] Work under model and assume that the prior is of the form and satisfies Conditions \[cond1\]-\[cond3\]($C$) with $p = p_n$. Assume that the distribution $\mu$ satisfies Assumption \[ass:mu\]. Consider the sequence of tests $\xi_i = {\textbf{1}}\left\{ {\mathbb{E}}(\kappa_i|X_i) > \alpha \right\}$ for any fixed $\alpha \in (0,1)$; then $$R^\mu(\xi_1, \dots \xi_n) \leq p_n\left( \frac{8\sqrt{\pi} C}{c \alpha} + 2\Phi\left( \sqrt{2K(u_0+1)C_\psi}\right)-1 \right)(1+o(1)). \label{eq:bound:thm:BayesRisk}$$ The proof of Theorem \[thm:BayesRisk\] is postponed to Section \[sec:proofs\].
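For the two-group model, the Bayes Oracle is the likelihood-ratio test, which rejects when $|X_i|$ exceeds a fixed cutoff. A numerical sketch (with illustrative values of $p$ and $\psi$, not the calibrated $\psi_n$ of Assumption \[ass:mu\]) checking that this cutoff minimizes the additive risk $R^\mu$ over all threshold tests:

```python
import numpy as np
from scipy.stats import norm

p, psi = 1e-3, 5.0   # illustrative mixture weight and alternative scale

def additive_risk(t):
    """R^mu of the threshold test |X| > t under the two-group marginal."""
    type1 = 2 * norm.sf(t)               # P_{N(0,1)}(|X| > t)
    type2 = 2 * norm.cdf(t / psi) - 1    # P_{N(0,psi^2)}(|X| <= t)
    return (1 - p) * type1 + p * type2

# Likelihood-ratio cutoff: reject when p*phi(x/psi)/psi > (1-p)*phi(x),
# i.e. when x^2 (1 - psi^{-2}) > 2 log((1-p) psi / p).
t_star = np.sqrt(2 * np.log((1 - p) * psi / p) / (1 - psi ** -2))
grid = np.linspace(0.1, 10.0, 2000)
assert additive_risk(t_star) <= additive_risk(grid).min() + 1e-12
print(f"oracle threshold ~ {t_star:.2f}, risk ~ {additive_risk(t_star):.2e}")
```

Since the Bayes test for an additive 0-1 loss is exactly this likelihood-ratio test, no threshold on the grid can beat $t^*$; the bounds of Theorem \[thm:BayesRisk\] quantify how close the continuous-prior thresholding rule comes to this benchmark.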
This theorem thus gives bounds on the Bayes risk for a wide class of priors, but it is slightly less sharp than the bounds obtained in [@Ghosh2015] for instance. In particular, when considering the horseshoe-type priors as in [@Ghosh2015] with $\alpha = 1/2$, the bound becomes $$R^\mu(\Xi) \leq p_n\left( 2\Phi\left( \sqrt{2C_\psi}\right)-1 \right)(1+o(1)),$$ to be compared with the oracle bound from . The suboptimal constant is an artifact of the proof of the general case. More precisely, because we do not impose any particular form on the prior $\pi$ near $0$, one cannot control the type II error precisely. We now go on and study the minimax risk. Under the same conditions, the following theorem gives an upper bound on the separation rate such that the minimax risk can be bounded. \[thm:seprate\] Work under model and assume that the prior is of the form and satisfies Conditions \[cond1\]-\[cond3\]($C$) with $p = p_n$. For all $v_n\to \infty$, let $\rho_n =C_1 + \sqrt{2K(u_0+1)\log(n/p_n)} + v_n$; then we have, for all $\lambda \in (0,1)$, $$\sup_{\theta, \min_{i\in S_0}|\theta_i| \geq \rho_n}R^{sup}_n(\Xi) \leq \left(\frac{1}{1+\frac{\lambda \alpha c}{8 C\sqrt{\pi}}} + \Phi(-v_n)\right)(1 + o(1))$$ The proof of Theorem \[thm:seprate\] is postponed to Section \[sec:proofs\]. We see that with a careful choice of the prior hyper-parameters, $R^{sup}_n$ can be made asymptotically small. The separation rate of the test $\rho_n$ is of the order of $\sqrt{\log(n/p_n)}$. This is surprising, as in estimation problems the *universal threshold* under which the parameters are shrunk toward $0$ is known to be $\sqrt{2\log(n)}$. However, here it seems that this shrinkage is not too severe for parameters between $\rho_n$, which is essentially of the order of $\sqrt{2K(u_0+1)\log(n/p_n)}$, and $\sqrt{2\log(n)}$, so that the proportion of false negatives is not too high.
Adaptive results {#sec:adaptive}
================

In this section we give an adaptive counterpart to the results presented in Section \[sec:main:result\]. Conditions \[cond1\]-\[cond3\] depend on the number $p_n$ of *true signals* in the parameter $\theta$. We give general conditions on an estimator ${\widehat{p}}$ of $p$ under which we can bound each risk. The assumptions are similar to the ones used in [@vanderpas2017] for instance. We then show that these conditions are satisfied for a standard estimator of $p$.

Assumptions and main results
----------------------------

We first give an assumption on ${\widehat{p}}$ in order to bound the Bayes risk $R^\mu_n$. Similarly to [@vanderpas2017; @vanderpas2017UQHS], we require that the estimator neither overestimates nor underestimates the number of true signals $p_n$. \[ass:adapt:Bayes\] Let $\mu$ be some probability measure satisfying Assumption \[ass:mu\]. For some absolute constants $C^u>0$, $c_d>0$ and $C_d\geq 0$ the estimator $\hat{p}$ of $p_n$ satisfies

- There exists $C^u$ such that ${\mathbb{P}}^n_\mu(\hat{p} \leq C^u p_n) =1 + o(\frac{p_n}{n})$

- There exist absolute constants $C_d\geq 0$, $c_d >0$ and $\zeta \geq 0$ such that ${\mathbb{P}}^n_\mu\left({\widehat{p}}\geq c_d p_n \left(\frac{n}{p_n}\right)^{-\zeta}e^{-C_d\sqrt{K\log(n/p_n)}} \right) = 1 + o(1)$

The condition here is stronger compared to the ones proposed in [@vanderpas2017; @vanderpas2017UQHS], as we require that the probability of overestimating $p_n$ be $o(p_n/n)$. This is due to the fact that when overestimating the number of true signals, the risk is of the order of $n$. Although this condition is stronger than the ones proposed in the literature, the simple estimator of $p_n$ proposed in [@vanderPas2014] for instance will satisfy Condition \[ass:adapt:Bayes\] under some additional assumptions on the number of true signals, similar to the ones considered in [@Ghosh2015].
The second part of the condition requires that the estimator ${\widehat{p}}$ does not underestimate the number of true signals. Note that this condition is quite flexible and can be made trivial with a proper choice of $\zeta$ when $p_n$ is of polynomial order in $n$. However, a larger $\zeta$ will deteriorate the bound on the risk. The following theorem gives an upper bound on the Bayes risk for the empirical Bayes method. \[thm:AdaptBayesRisk\] Work under model and assume that the prior is of the form and satisfies Conditions \[cond1\]-\[cond3\]($C$) with $p = \hat{p}$, and let $\hat{p}$ be an estimator of $p_n$ satisfying Assumption \[ass:adapt:Bayes\]. Assume that the distribution $\mu$ satisfies Assumption \[ass:mu\]. Consider the sequence of tests $\xi_i = {\textbf{1}}\left\{ {\mathbb{E}}(\kappa_i|X_i) > \alpha \right\}$ for any fixed $\alpha \in (0,1)$; then $$R^\mu(\xi_1^{\hat{p}}, \dots \xi_n^{\hat{p}}) \leq p_n\left( \frac{8\sqrt{\pi} C C^u}{c \alpha} + 2\Phi\left( \sqrt{2K(u_0+1)(1 + \zeta)C_\psi}\right)-1 \right)(1+o(1)). \label{eq:bound:thm:AdaptBayesRisk}$$ The proof of Theorem \[thm:AdaptBayesRisk\] is postponed to Appendix \[seq:proof:adaptive\]. We see that if Condition \[ass:adapt:Bayes\] is satisfied with $\zeta=0$, we get a bound similar to the one obtained in the non-adaptive case. When $p_n$ is of polynomial order in $n$, [@ghosh2016] have shown that the simple estimator considered in [@vanderPas2014] satisfies Condition \[ass:adapt:Bayes\] with $\zeta =0$ (see their Remark 4). We also propose an adaptive version of Theorem \[thm:seprate\], with, however, weaker conditions on the estimator of $p_n$. \[ass:adapt:Minimax\] For some sequence $\rho_n^0$, let $\theta$ be such that $\forall i \in S_0, |\theta_i| \geq \rho_n^0$.
For some absolute constants $C^u>0$, $c_d>0$ and $C_d\geq 0$ the estimator $\hat{p}$ of $p_n$ satisfies $${\mathbb{P}}_\theta^n \left( \gamma_n \leq {\widehat{p}}\leq C^u p_n \right) =1 + o(1) \label{eq:condition:adapt:minimax}$$ This condition is fairly similar to Condition \[ass:adapt:Bayes\]; here, however, we only require that the probability of overestimating the number of signals goes to $0$. Here again, by choosing $\gamma_n$ small enough, the assumption on the lower bound on ${\widehat{p}}$ in will be satisfied; however, this will impact the separation rate of the test. \[thm:adapt:seprate\] Work under model and assume that the prior is of the form and satisfies Conditions \[cond1\]-\[cond3\]($C$) with $p = \hat{p}$, and let $\hat{p}$ be an estimator of $p_n$ satisfying Condition \[ass:adapt:Minimax\] with $\rho_n^0$. For all $v_n\to \infty$, let $\rho_n = C_1 + \sqrt{2K(u_0+1)\log(n/\gamma_n)} + v_n$; then we have, for all $0 < \lambda < \Phi(v_n) $, $$\sup_{\theta, \min_{i\in S_0}|\theta_i| \geq \rho_n \vee \rho_n^0} R^{sup}_n(\Xi) \leq \left(\frac{1}{1+\frac{\lambda \alpha c}{8 C^uC\sqrt{\pi}}} + \Phi(-v_n)\right)(1 + o(1))$$ The proof of this theorem is postponed to appendix \[seq:proof:adaptive\]. The bounds are comparable to the non-adaptive ones. The separation rate is of the order of $\sqrt{\log(n/\gamma_n)}$, where $\gamma_n$ is a lower bound on the estimated number of true signals. Thus, for the simple estimator, given the results of [@vanderpas2017; @vanderpas2017UQHS], Condition \[ass:adapt:Minimax\] holds true for $\gamma_n = 1$ and the separation rate is of the same order as the contraction rate. Moreover, this rate is consistent with the ones that could be obtained using tests based on the confidence intervals obtained by [@vanderpas2017UQHS].
Proofs {#sec:proofs} ====== Proof of Theorem \[thm:BayesRisk\] ---------------------------------- ### Control of the type I error We first control the type I error ${\mathbb{P}}_0({\mathbb{E}}(\kappa_i|X_i) \geq \alpha )$, where $P_0$ is the probability distribution of a $\mathcal{N}(0,1)$. Following the proof of [@vanderpas2016], denote by $m_x = {\mathbb{E}}(\kappa_i|X_i = x)$; we have $$m_x = \frac{\int_0^1 z(1-z)^{-3/2} e^{\frac{x^2}{2}z} \pi\left( \frac{z}{1-z}\right) dz }{\int_0^1 (1-z)^{-3/2} e^{\frac{x^2}{2}z} \pi\left( \frac{z}{1-z}\right) dz } = \frac{\int_0^\infty u(1+u)^{-3/2} e^{\frac{x^2u}{2(1+u)}} \pi\left( u\right) du }{\int_0^\infty (1+u)^{-1/2} e^{\frac{x^2u}{2(1+u)}} \pi\left( u\right) du } \label{eq:proof:def:mx}$$ Following the proof of Lemma A.6 of [@vanderpas2016], working under Conditions \[cond1\] and \[cond2\] we have $$\begin{aligned} m_x &= \frac{\int_0^\infty u(1+u)^{-3/2} e^{\frac{x^2u}{2(1+u)}} \pi\left( u\right) du }{\int_0^\infty (1+u)^{-1/2} e^{\frac{x^2u}{2(1+u)}} \pi\left( u\right) du } \\ & \leq s_n + \frac{\sqrt{2}}{c} \int_{s_n}^\infty \frac{u}{(1+u)^{3/2}} e^{\frac{x^2}{2} \frac{u}{1+u}} \pi(u) du \\ &\leq s_n \left( 1 + \frac{\sqrt{2}}{c} C e^{x^2/4} \right) + \frac{\sqrt{2}}{c} \int_1^{\infty} \frac{u}{(1+u)^{3/2}} e^{\frac{x^2}{2} \frac{u}{1+u}} \pi(u) du \\ & \leq s_n \left( 1 + \frac{\sqrt{2}}{c} C e^{x^2/4} \right) + e^{\frac{x^2}{2}}\frac{\sqrt{2}}{c} \int_1^{\infty} \frac{\pi(u)}{\sqrt{u}} du \\ & \leq s_n \left( 1 + \frac{\sqrt{2}}{c} C e^{x^2/4} \right) + e^{\frac{x^2}{2}}\frac{\sqrt{2}}{c} 2Cs_n \frac{1}{\sqrt{\log(n/p_n)}},\end{aligned}$$ where the last line is obtained by splitting the integral $\int_1^\infty$ into $\int_1^{\log(n/p_n)}$ and $\int_{\log(n/p_n)}^\infty$ and using condition \[cond3\] on both. Using this inequality, we can thus control the type I error of the testing procedure.
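As a numerical illustration of $m_x$, the sketch below evaluates both integrals by quadrature for the concrete choice of the horseshoe local-variance density $\pi(u) = 1/(\pi\sqrt{u}(1+u))$ (an illustrative special case of our own choosing, not the general $\pi$ of the theorem):

```python
import numpy as np
from scipy.integrate import quad

def pi_hs(u):
    # Horseshoe local-variance density: lambda ~ C^+(0,1), u = lambda^2.
    return 1.0 / (np.pi * np.sqrt(u) * (1.0 + u))

def m_x(x, prior=pi_hs):
    # Posterior mean of the shrinkage coefficient kappa = u/(1+u) given X = x,
    # computed by numerical quadrature of the two integrals defining m_x.
    w = lambda u: np.exp(x * x * u / (2.0 * (1.0 + u))) * prior(u)
    num = quad(lambda u: u * (1.0 + u) ** -1.5 * w(u), 0.0, np.inf)[0]
    den = quad(lambda u: (1.0 + u) ** -0.5 * w(u), 0.0, np.inf)[0]
    return num / den
```

For this prior one finds $m_0 = 1/3$, while $m_x \to 1$ as $|x|$ grows, so the test $\xi = {\textbf{1}}\{m_X > \alpha\}$ rejects precisely for large observations.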
We have $$\begin{aligned} {\mathbb{P}}_0(m_X \geq \alpha) \leq& {\mathbb{P}}_0\left(s_n \left( 1 + \frac{\sqrt{2}}{c} C e^{X^2/4} \right) + e^{\frac{X^2}{2}}\frac{\sqrt{2}}{c} 2Cs_n \frac{1}{\sqrt{\log(n/p_n)}} \geq \alpha \right) \\ \leq& {\mathbb{P}}_0\left(s_n \left( 1 + \frac{\sqrt{2}}{c} C e^{X^2/4} \right) \geq \alpha \right) + \\ & {\mathbb{P}}_0\left(e^{\frac{X^2}{2}}\frac{\sqrt{2}}{c} 2Cs_n \frac{1}{\sqrt{\log(n/p_n)}} \geq \alpha \right) \\ =& P_1 + P_2\end{aligned}$$ We will control each term separately. The first term can be bounded by $$\begin{aligned} P_1 &= {\mathbb{P}}_0\left(X^2 \geq 4 \log\left( \frac{c}{Cs_n \sqrt{2}}(\alpha - s_n) \right) \right) \\ &= 4\sqrt{2 \pi} \left(\log\left( \frac{c}{Cs_n \sqrt{2}}(\alpha - s_n) \right) \right)^{-1/2} e^{-2 \log\left( \frac{c}{Cs_n \sqrt{2}}(\alpha - s_n) \right) }(1+o(1)) \\ &\lesssim C \frac{p_n^2}{n^2} \log(n/p_n)^2(1 + o(1)),\end{aligned}$$ where we have used the fact that $\int_z^\infty x^{s-1} e^{-x/a} dx = az^{s-1}e^{-z/a}(1+o(1))$ as $z\to \infty$. We now bound the second term using the same arguments $$\begin{aligned} P_2 &={\mathbb{P}}_0 \left(X^2 \geq 2 \log\left( \frac{\alpha c}{2\sqrt{2} C} \frac{\sqrt{\log(n/p_n)}}{s_n} \right) \right) \\ & = 2 \sqrt{2 \pi} \log\left( \frac{\alpha c}{2\sqrt{2} C} \frac{\sqrt{\log(n/p_n)}}{s_n} \right)^{-1/2} e^{-\log\left( \frac{\alpha c}{2\sqrt{2} C} \frac{\sqrt{\log(n/p_n)}}{s_n} \right)}(1 + o(1)) \\ &= 8 \sqrt{\pi} \frac{C}{\alpha c} \frac{p_n}{n} (1+o(1)).\end{aligned}$$ We thus have $${\mathbb{P}}_0(m_X \geq \alpha) \leq 8 \sqrt{\pi} \frac{C}{\alpha c} \frac{p_n}{n}(1 + o(1)) \label{eq:proof:BayesRisk:t1}$$ ### Control of the type II error We now control the type II error for both risks, i.e. either ${\mathbb{P}}_{\mathcal{N}(0,\psi)}(\xi = 0)$ or ${\mathbb{P}}_{\mathcal{N}(\theta,1)}(\xi = 0)$ depending on the risk considered, when the prior satisfies condition \[cond1\].
The following lemma gives a lower bound on $m_x$ for $x$ large enough, which will prove useful to bound the type II error of each individual test. Let $m_x$ be defined as . If $\pi$ satisfies condition \[cond1\], there exists a constant $C_1$ depending only on $\alpha$, $u_*$ and $u_0$ such that for all $|x| \geq C_1 + \sqrt{2K(1+u_0)\log(n/p)}$ we have $ m_x \geq \alpha. $ \[lem:proof:bound:mxCondition1\] The proof of this lemma is postponed to appendix \[sec:appendix:proof:lem:bound:mxCondition1\]. Given this lemma, we bound the individual type II error. Let $t_{2,i} = {\mathbb{P}}_{1+\psi_n}(\xi_i = 0 )$ and define $T_n:= C_1 + \sqrt{2K(1+u_0)\log(n/p_n)}$; then, by the preceding lemma, $$\begin{aligned} t_{2,i} &= {\mathbb{P}}_{1+\psi_n}(m_X \leq \alpha \cap |X| \leq T_n) \\ &\leq {\mathbb{P}}_{1+\psi_n}(|X| \leq T_n) \\ &= 2\Phi\left( \frac{T_n}{\sqrt{1+\psi_n^2}} \right) -1 \\ &= \left(2\Phi\left( \sqrt{2K(u_0 + 1)C_\psi} \right) -1\right)(1+o(1)).\end{aligned}$$ This concludes the proof of Theorem \[thm:BayesRisk\]. Proof of Theorem \[thm:seprate\] -------------------------------- ### False discovery rate Recall that for a collection of tests $\xi_1, \dots, \xi_n$ and a true parameter $\theta$ with support $S_0$, the False Discovery Rate is defined by $$FDR_n(\theta) = {\mathbb{E}}_\theta \left( \frac{\sum_{i \in S_0^c} \xi_i }{\max(\sum_{i=1}^n \xi_i,1)} \right).$$ First note that using Lemma \[lem:proof:bound:mxCondition1\], under the assumption that $\min_{i\in S_0} |\theta_i| >T_n + v_n$ we have $$t_{2,i} \leq \Phi(T_n - \theta_i) - \Phi(-T_n - \theta_i) \leq \Phi\left(T_n - \min_{i \in S_0}(|\theta_i|)\right) = o(1)$$ We now show that the number of true discoveries $\sum_{i \in S_0} \xi_i$ is at least a fraction of $p_n$ with probability going to $1$.
Let $a_n = \Phi(T_n - \min_{i\in S_0}|\theta_i|)$, and let $t_n = \log(1/a_n)$; then using the Chernoff bound we have $$\begin{aligned} {\mathbb{P}}_\theta(\sum_{i\in S_0} \xi_i &\leq \lambda p_n ) \leq e^{\lambda t_n p_n}{\mathbb{E}}(e^{-t_n\sum_{i\in S_0} \xi_i }) \\ & \leq e^{-(1-\lambda) t_n p_n + \sum_{i \in S_0} \log(1 + t_{2,i}(e^{t_n} - 1))} \\ & \leq e^{-p_n((1-\lambda) t_n - \log(2 - a_n))} = o(1)\end{aligned}$$ We thus have the following upper bound for the false discovery rate, provided that $\min_{i \in S_0}(|\theta_i|)> T_n + v_n$, $$\begin{aligned} FDR_n(\theta) &= {\mathbb{E}}_\theta \left( \frac{\sum_{i \in S_0^c} \xi_i }{\max(\sum_{i=1}^n \xi_i,1)} {\textbf{1}}_{\sum_{i=1}^n \xi_i > \lambda p_n} \right) + o(1) \nonumber \\ & \leq \frac{(n-p_n)t_1}{(n-p_n)t_1 + \lambda p_n} + o(1)\label{eq:boundfdr:jensen} \\ & \leq \frac{8 \sqrt{\pi} \frac{C}{\alpha c} (1 + o(1))}{(8 \sqrt{\pi} \frac{C}{\alpha c} + \lambda)(1 + o(1))} + o(1)\nonumber \\ & = \frac{1}{1 + \frac{\lambda{\alpha c}}{8 \sqrt{\pi} C} (1 + o(1))}+ o(1) \nonumber\end{aligned}$$ where  is obtained using Jensen's inequality, given that the function $x \mapsto \frac{x}{x+a}$ is concave, and $t_1$ denotes the individual type I error, bounded in . Thus, by choosing $C$ small enough, we obtain an arbitrarily small bound on the FDR. ### False Nondiscovery rate Using the same method, we also bound the False Nondiscovery Rate (FNR), defined as $$FNR_n(\theta) = \frac{{\mathbb{E}}_\theta\left( \sum_{i\in S_0} (1-\xi_i) \right)}{p_n}$$ Using the preceding results we have $$\begin{aligned} FNR_n(\theta) &= \frac{1}{p_n}\sum_{i\in S_0} t_{2,i} \\ &\leq \Phi(T_n - \min_{i\in S_0} |\theta_i|) + o(1)\end{aligned}$$ which ends the proof. Discussion ========== We expanded the class of Gaussian scale mixture priors with theoretical guarantees for the multiple testing problem under two different types of risks. The conditions are similar to the ones considered in [@vanderpas2016], which indicates that these priors are well suited both for estimation and testing.
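The FDR and FNR bounds above can be checked in simulation. The sketch below uses a hard threshold test $\xi_i = {\textbf{1}}\{|X_i| > \sqrt{2\log(n/p_n)}\}$ as a computationally cheap stand-in for $\xi_i = {\textbf{1}}\{m_{X_i} > \alpha\}$, which also rejects exactly for large $|X_i|$; the sample sizes and signal strength are illustrative assumptions of our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def fdr_fnr(theta, threshold, reps=200):
    # Monte Carlo estimates of FDR_n and FNR_n for the test xi_i = 1{|X_i| > threshold}.
    n = len(theta)
    s0 = theta != 0
    fdp, fnp = [], []
    for _ in range(reps):
        xi = np.abs(theta + rng.standard_normal(n)) > threshold
        fdp.append(np.sum(xi & ~s0) / max(np.sum(xi), 1))
        fnp.append(np.sum(~xi & s0) / max(np.sum(s0), 1))
    return float(np.mean(fdp)), float(np.mean(fnp))

n, p_n = 5_000, 25
theta = np.zeros(n)
theta[:p_n] = np.sqrt(2.0 * np.log(n / p_n)) + 2.0  # signals above the separation rate
fdr, fnr = fdr_fnr(theta, threshold=np.sqrt(2.0 * np.log(n / p_n)))
```

With signals a fixed amount $v_n$ above the separation rate, both error rates stay bounded away from $1$, in line with the theorem.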
The key to the bounds obtained seems to be the tails of the prior distribution. More precisely, the tails of the local variance should be at least as heavy as an exponential to allow for the detection of signals. In light of the results in [@Ghosh2015], it seems that the bounds obtained for the Bayes risk could be sharpened. However, this will require some more restrictive conditions on the prior distribution. To the best of our knowledge, there are no existing bounds on the minimax risk for any Bayesian multiple testing procedure. The bounds proposed here are competitive with the ones found in [@arias-castro2017] or [@Rabinovich17]. Although aimed at a different problem, [@vanderpas2017UQHS] showed that honest uncertainty quantification cannot be attained for true signals below a similar threshold; we thus believe that these bounds are of the correct order. We only considered the simple Gaussian sequence model here, but the results obtained for the Gaussian scale mixture priors are encouraging, and we could thus hope to extend them to more complex high dimensional models. This would be of great interest since these priors are computationally attractive [see @MR3620452], in contrast to spike-and-slab priors. Technical Lemmas ================ \[lem:upperbound:Lau\] Let $L$ be a uniformly regularly varying function at infinity with constants $u_0$ and $R$. For all $u>u_0$ and $a>1$ we have $$\frac{L(u)}{L(au)} \leq (2a)^{\log_2(R)}$$ \[lem:shiftedLtilda\] Let $L$ be a uniformly regularly varying function at infinity with constants $u_0$ and $R$. Then $\tilde{L}(\cdot) = L(\cdot-1)$ is also uniformly regularly varying at infinity with constants $u_0+1$ and $R^3$. Proof of Lemma \[lem:proof:bound:mxCondition1\] {#sec:appendix:proof:lem:bound:mxCondition1} =============================================== The proof is adapted from the proof of Theorem 2.1 in [@vanderpas2016].
Given the definition of $m_x$ we have $$1 - m_x = \frac{ \int_1^\infty e^{-\frac{x^2}{2v}} v^{-3/2} \pi(v-1)dv }{ \int_1^\infty e^{-\frac{x^2}{2v}} v^{-1/2} \pi(v-1)dv } = \frac{N_n}{D_n}.$$ Denote $I(a,b) = \int_a^b e^{-\frac{x^2}{2v}} v^{-3/2} \pi(v-1)dv/D_n$; we immediately have $$I\left( \frac{1+u_0}{1-\alpha}, + \infty \right) \leq \frac{1-\alpha}{1+u_0}.$$ We now control the remaining term $I\left( 1,\frac{1+u_0}{1-\alpha} \right)$. Given that $\pi$ satisfies condition \[cond1\] and taking $C_1>2u_0$, we have the following lower bound on $D_n$ $$\begin{aligned} D_n &\geq \int_{|x|/2}^{|x|} e^{-\frac{x^2}{2v}} v^{-1/2} \pi(v-1)dv \nonumber \\ & \geq e^{b - |x|(1+b)} |x|^{1/2} \int_{1/2}^1 L_n(|x|z-1) dz \nonumber \\ & \geq \frac{1}{2R^3} e^{b - |x|(1+b)} |x|^{1/2} L_n(|x|/2 - 1), \label{eq:proofs:lowerbound:Dn}\end{aligned}$$ where the last inequality follows from Lemma \[lem:shiftedLtilda\]. Now consider $I\left( u_0+1,\frac{1+u_0}{1-\alpha} \right)$; taking $C_1$ large enough, we have $$\begin{aligned} I\left( u_0+1,\frac{1+u_0}{1-\alpha} \right) &= \frac{\int_{u_0+1}^{\frac{u_0+1}{1-\alpha}} e^{-\frac{x^2}{2v}} v^{-3/2} \pi(v-1)dv}{D_n} \nonumber \\ &\lesssim e^{-\frac{x^2(1-\alpha)}{2(u_0+1)} + |x|(b+1) } |x|^{-1/2} \int_{u_0+1}^{\frac{u_0+1}{1-\alpha}} \frac{L_n(v-1)}{L_n(\frac{|x|}{2} - 1)} dv \nonumber \\ &\lesssim e^{-\frac{x^2(1-\alpha)}{2(u_0+1)} + |x|(b+1)} |x|^{3\log_2(R)-1/2} \label{eq:proof:Imiddle}\\ &\leq \frac{1-\alpha}{2}\frac{u_0}{u_0+1}, \nonumber\end{aligned}$$ where the inequality follows from Lemmas \[lem:upperbound:Lau\] and \[lem:shiftedLtilda\] and $|x|>C_1$, for $C_1$ a sufficiently large constant that only depends on $R$, $b$, $u_0$ and $\alpha$. We now consider the last term $I(1,u_0+1)$.
From equation  and condition \[cond1\] we get $$D_n \geq \frac{1}{2R^3} e^{b + b' - |x|(1+b+b'/2)} |x|^{1/2} \left(\frac{p_n}{n} \right)^K.$$ For the numerator we have, given that $\pi$ is a density $$\int_1^{u_0+1} e^{-\frac{x^2}{2v}} v^{-1/2} \pi(v-1)dv \leq e^{-\frac{x^2}{2(u_0+1)}},$$ and thus $$\begin{aligned} I(1,u_0+1) &\leq 2R^3 e^{-\frac{x^2}{2(u_0+1)} + |x|(1+b+b'/2) -b - b' } |x|^{-1/2} \left( \frac{n}{p_n} \right)^K \\ &\leq \frac{1-\alpha}{2}\frac{u_0}{1+u_0}, \end{aligned}$$ given that $|x| \geq u_* + C_1 + \sqrt{2K(1+u_0)\log(n/p_n)}$, where $C_1$ is a large enough constant that only depends on $R$, $b$, $b'$, and $u_0$. This completes the proof. Proofs for the adaptive results {#seq:proof:adaptive} =============================== Bayes Risk ---------- ### False positives The number of false positives is given by $\sum_{i=1}^n {\textbf{1}}_{\xi_i = 1 \cap v_i = 0} $. $$\begin{aligned} {\mathbb{E}}\left( \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 1 \cap v_i = 0}\right) &= {\mathbb{E}}\left({\textbf{1}}_{\hat{p} \leq C^u p_n} \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 1 \cap v_i = 0}\right) + o(p_n) \\ &\leq \sum_{i=1}^n {\mathbb{E}}\left( {\textbf{1}}_{\hat{p} \leq C^u p_n} {\textbf{1}}_{\xi_i^p = 1} {\textbf{1}}_{v_i = 0} \right) + o(p_n)\end{aligned}$$ Given that $\{ \xi_i^p = 1 \} = \{m_x^p > \alpha\}$, we have, for all $p \leq C^u p_n$, using Conditions \[cond1\] and \[cond2\] $$\begin{aligned} {\textbf{1}}\left\{\xi_i^p\right\} &= {\textbf{1}}\left\{m_x^p > \alpha\right\} \\ & \leq {\textbf{1}}\left\{ \frac{p}{n} \log\left(\frac{n}{p} \right) \left( 1+ \frac{C\sqrt{2}}{c} e^{x^2/4} \right) + \frac{p}{n} \log\left(\frac{n}{p} \right)^{1/2} \frac{2C\sqrt{2}}{c} e^{x^2/2} > \alpha \right\} \\ & \leq {\textbf{1}}\left\{ \frac{C^up_n}{n} \log\left(\frac{n}{C^up_n} \right) \left( 1+ \frac{C\sqrt{2}}{c} e^{x^2/4} \right) + \frac{C^up_n}{n} \log\left(\frac{n}{C^up_n} \right)^{1/2} \frac{2C\sqrt{2}}{c} e^{x^2/2} > \alpha \right\}\end{aligned}$$ We can thus
conclude that $${\mathbb{E}}\left( \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 1 \cap v_i = 0}\right) \leq 8 \sqrt{\pi} \frac{C}{\alpha c} C^u p_n(1 + o(1)),$$ using the same arguments as before. ### False negatives $$\begin{aligned} {\mathbb{E}}\left( \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 0 \cap v_i = 1}\right) &= {\mathbb{E}}\left({\textbf{1}}_{\hat{p} \geq c_d p_n \left(\frac{n}{p_n}\right)^{-\zeta}e^{-C_d\sqrt{\log(n/p_n)}}} \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 0 \cap v_i = 1}\right) + o(p_n) \\ &\leq \sum_{i=1}^n {\mathbb{E}}\left( {\textbf{1}}_{\hat{p} \geq c_d p_n \left(\frac{n}{p_n}\right)^{-\zeta}e^{-C_d\sqrt{\log(n/p_n)}}} {\textbf{1}}_{\xi_i^{\hat{p}} = 0} {\textbf{1}}_{v_i = 1} \right) + o(p_n)\end{aligned}$$ Let $T_n = C + \sqrt{2K(u_0+1) (1+\zeta)\log(n/p_n)}$ for some $\zeta \geq 0$. Given the preceding results, if $|x| > T_n$, we have, for some absolute constants $C_1$, $C_2$ and $C_3$, $$\begin{aligned} 1-m_x^p &\leq \frac{C_1}{|x|}\left( C_2 + e^{(1 + \frac{b + b'}{2})|x| - \frac{x^2}{2u_0} + K\log(n/p) + \frac{1}{2}\log(|x|)} \right) \\ & \leq \frac{C_1}{|x|} \left( C_2 + C_3 \left(\frac{p_n}{p}\right)^K \left(\frac{p_n}{n}\right)^{K\zeta} e^{-KC_d \sqrt{\log(n/p_n)}} \right).
\end{aligned}$$ For all $|x| > T_n$ $$\begin{aligned} {\textbf{1}}\{\xi_i^p = 0\} & = {\textbf{1}}\{1-m^p_x \geq 1- \alpha\} \\ & \leq {\textbf{1}}\left\{ \frac{C_1}{|x|} \left( C_2 + C_3 \left(\frac{p_n}{p}\right)^K \left(\frac{p_n}{n}\right)^{K\zeta} e^{-KC_d \sqrt{\log(n/p_n)}} \right) > 1-\alpha \right\} \end{aligned}$$ Thus $$\begin{aligned} {\mathbb{E}}\left( {\textbf{1}}_{\hat{p} \geq c_d p_n \left(\frac{n}{p_n}\right)^{-\zeta}e^{-C_d\sqrt{\log(n/p_n)}}} {\textbf{1}}_{\xi_i^{\hat{p}} = 0} {\textbf{1}}_{v_i = 1} \right) \leq& {\mathbb{E}}({\textbf{1}}_{|x| \leq T_n} {\textbf{1}}_{v_i = 1}) + \\ &{\mathbb{E}}( {\textbf{1}}_{|x|> T_n} {\textbf{1}}_{\frac{C_1 C_3 c_d}{|x|}\geq 1-\alpha} {\textbf{1}}_{v_i = 1})\end{aligned}$$ Using the same approach as we did for the deterministic $p$ we get $${\mathbb{E}}\left( \sum_{i=1}^n {\textbf{1}}_{\xi_i^{\hat{p}} = 0 \cap v_i = 1}\right) \leq p_n\left(2 \Phi\left( \sqrt{2K(u_0+1)(1+\zeta)C_\psi} \right) - 1 \right) (1 + o(1)).$$ Minimax risk ------------ One can easily adapt the proof of Lemma \[lem:proof:bound:mxCondition1\] and show that if the prior distribution satisfies condition \[cond1\] with $p = {\widehat{p}}$ and if ${\widehat{p}}$ satisfies Condition \[ass:adapt:Minimax\], then there exists a constant $C_1$ depending only on $u_*$, $u_0$ and $\alpha$ such that for all $|x| \geq C_1 + \sqrt{2K(u_0+1)\log(n/\gamma_n)}$ we have $m_x \geq \alpha$. We prove Theorem \[thm:adapt:seprate\] following the same lines as for the proof of Theorem \[thm:seprate\].
--- abstract: 'We determine the modular Hamiltonian of chiral fermions on the torus, for an arbitrary set of disjoint intervals at generic temperature. We find that, in addition to a local Unruh-like term, each point is non-locally coupled to an infinite but discrete set of other points, even for a single interval. These accumulate near the boundaries of the intervals, where the coupling becomes increasingly redshifted. Remarkably, in the presence of a zero mode, this set of points “condenses” within the interval at low temperatures, yielding continuous non-locality.' author: - Pascal Fries - 'Ignacio A. Reyes' bibliography: - 'Refs.bib' title: The entanglement spectrum of chiral fermions on the torus --- Introduction ============ Amongst the predictions stemming from the interplay between Quantum Field Theory (QFT) and the causal structure of spacetime, one of the most robust is the celebrated Unruh effect: An accelerated observer in the vacuum measures a thermal bath, with a temperature proportional to its proper acceleration [@Fulling:1972md; @Davies:1974th; @Unruh:1976db]. Intimately connected with the thermodynamics of black holes via Hawking radiation, this lies at the heart of our current understanding of the quantum nature of gravity [@Hawking:1978jn]. Therefore, it is natural to explore its generalisations and investigate it further. In recent years, these phenomena have been extended into the framework of quantum information theory. There, this temperature is understood as arising from the entanglement structure of the vacuum. Starting from a state $\rho$ and some entangling subregion $V$, one defines the reduced density matrix $\rho_V$ by tracing out the complement of $V$. 
Then, just as the entanglement entropy $S_V=-{\operatorname{Tr}}[\rho_V \log \rho_V]$ generalises the thermal entropy, the usual Hamiltonian is an instance of the more general concept of a *modular* (or entanglement) Hamiltonian $\mathcal{K}_V$ defined via $$\begin{aligned} \label{} \rho_V:=\frac{e^{-\mathcal{K}_V}}{\text{tr}\, e^{-\mathcal{K}_V} }\end{aligned}$$ Originally introduced within algebraic QFT [@Haag:1992hx], the modular Hamiltonian has aroused much interest across a wide community due to its close connection to quantum information measures. In the context of many-body quantum systems, the spectrum of this operator is known as the “entanglement spectrum” and has been proposed as a fingerprint of topological order [@Li:2008kjh; @Chandran:2011sdf; @Dalmonte:2017bzm] and investigated in lattice models [@Peschel:2009zgv; @Eisler:2018zsf; @Zhu:2018wsx; @Parisen:2018gdj; @Luitz:2014zgv], as well as tensor networks [@Cirac:2011iuz; @Hsieh:2014jba; @Vanderstraeten:2017unh]. In QFT, it is fundamental for the study of relative entropy [@Sarosi:2017rsq; @Casini:2017roe] and its many applications to energy and information inequalities [@Casini:2008cr; @Blanco:2013lea; @Faulkner:2016mzt]. In the context of the AdS/CFT correspondence, it is instrumental in the program of reconstructing a gravitational bulk from the holographic data [@Casini:2011kv; @Blanco:2013joa; @Jafferis:2014lza; @Jafferis:2015del; @Lashkari:2013koa; @Koeller:2017njr; @Chen:2018rgz; @Belin:2018juv; @Abt:2018zif; @Jefferson:2018ksk]. However, the modular Hamiltonian is known in only a handful of cases. The result is universal and local for the vacuum of any QFT reduced to Rindler space [@Unruh:1976db; @Bisognano:1976za] and hence for any CFT vacuum on the plane reduced to a ball [@Casini:2011kv]. For any CFT$_2$, the same applies for a single interval, for the vacuum on the cylinder or a thermal state on the real line [@Hartman:2015apr; @Cardy:2016fqc].
More generically, modular flows can be non-local, as is the case for multiple intervals in the vacuum of chiral fermions on the plane or the cylinder [@Casini:2009vk; @Klich:2015ina] and scalars on the plane [@Arias:2018tmw]. The exact nature of the transition from locality to non-locality, however, is not fully understood, and remains an active topic of research. In this paper we report progress regarding this problem, by providing a new entry to this list. We show that the chiral fermion on the torus (finite temperature on the circle) is a solvable model that undergoes such a transition between locality and non-locality. We compute the modular Hamiltonian by restating the problem as a singular integral equation, which in turn we solve via residue analysis. Let us quickly quote our main result. For generic temperature, the modular Hamiltonian takes the form $$\mathcal{K}_{\text{loc}} + \mathcal{K}_{\text{bi-loc}}.$$ The local flow is of the standard Rindler form , with entanglement temperature given in . The novel result is the second term, given in and depicted in Fig. \[fig:mu-roots\], involving bi-local couplings between a discrete but infinite set of other points within the subregion. In the low temperature limit, the sector with a zero mode experiences a “condensation” of these points, resulting in a completely non-local flow. The resolvent ============= We start by introducing the resolvent method, following [@Casini:2009vk; @Klich:2017qmt; @Arias:2018tmw]. For any spatial region $V$, the reduced density matrix $\rho_V$ is defined so as to reproduce expectation values of local observables supported on $V$. Now, for free fermions, Wick’s theorem implies that it is sufficient that $\rho_V$ reproduces the equal-time Green’s function $${\operatorname{Tr}}[\rho_V \psi^{}(x)\psi^\dagger(y)] = \langle \psi^{}(x)\psi^\dagger(y) \rangle =: G(x,y)$$ for $x,y\in V$.
This requirement fixes the modular Hamiltonian to be a quadratic operator given by [@Peschel:2003xzh] $$\label{rho} \mathcal K_V = \int_V \!{{{{\mathrm{d}^{} \!}} x \,}} \int_V \!{{{{\mathrm{d}^{} \!}} y \,}} K_V(x,y) \psi^\dag(x) \psi(y)$$ with kernel $K_V = -\log[G|_V^{-1} - 1]$. This is specific to the free fermion. Here $G|_V$ refers to the propagator as the *kernel of an operator* acting on functions with support on $V$. As shown in [@Casini:2009vk] the modular Hamiltonian can be rewritten as $$\label{eq:mod-ham-int} K_V = -\int_{1/2}^\infty {{{{\mathrm{d}^{} \!}} \xi \,}} [R_V(\xi)+R_V(-\xi)]$$ in terms of the *resolvent* of the propagator, $$\label{eq:def-resolvent} R_V(\xi) := (G|_V + \xi - 1/2)^{-1}.$$ A derivation of  is provided in the supplemental material. In essence, it is the operator version of $$\log X = \frac{1}{2\pi i} \oint_\gamma {{{{\mathrm{d}^{} \!}} z \,}} \frac{\log z}{z-X}$$ for a suitable choice of contour $\gamma$. In , the inverse of an operator is understood in the sense of a kernel, $$\int_V \!{{{{\mathrm{d}^{} \!}} z \,}} R_V(\xi;x,z) \Big[G(z,y)+(\xi-1/2)\delta(z,y)\Big] = \delta(x-y).$$ Thus, given the propagator $G$ of the global state and the entangling region $V$, this equation completely determines the resolvent $R_V$ and hence the modular Hamiltonian via . To obtain the resolvent, let us first do the redefinition $$\label{eq:resolvent-ansatz} R_V(\xi;x,y) = \frac{\delta(x-y)}{\xi-1/2} - \frac {F_V(\xi;x,y)}{(\xi-1/2)^2}.$$ The convenience of this is that the first term of  will cancel the RHS of the previous equation, translating  into a singular integral equation $$\begin{aligned} &0 = G(x,y) - F_V(\xi;x,y) \nonumber \\ &\hspace{2.3cm} - \frac 1{\xi-1/2} \int_V {{{{\mathrm{d}^{} \!}} z \,}} G(x,z)F_V(\xi;z,y). \label{eq:singular-int-eq}\end{aligned}$$ All previous considerations hold for free fermions on a generic Riemann surface.
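The resolvent representation can be sanity-checked numerically in a lattice regularization. The sketch below builds the correlation matrix of free fermions on a ring with a hypothetical half-filled dispersion (our own stand-in, not the continuum chiral fermion of this paper), restricts it to a region $V$, and compares the spectrum of $K_V = -\log[G|_V^{-1}-1]$ with the resolvent integral, eigenvalue by eigenvalue:

```python
import numpy as np
from scipy.integrate import quad

N = 40
sites = np.arange(N)
U = np.exp(2j * np.pi * np.outer(sites, sites) / N) / np.sqrt(N)  # plane waves
occ = np.cos(2 * np.pi * sites / N) < 0      # filled modes (hypothetical dispersion)
C = (U * occ) @ U.conj().T                   # C(x,y) = <psi^dag(x) psi(y)>
G = np.eye(N) - C                            # G(x,y) = <psi(x) psi^dag(y)>
V = np.arange(8, 20)                         # entangling region
g = np.linalg.eigvalsh(G[np.ix_(V, V)])      # spectrum of G restricted to V
g = g[(g > 1e-3) & (g < 1 - 1e-3)]           # keep numerically safe eigenvalues

def k_eig_resolvent(gv):
    # Eigenvalue of K_V from the integral over the resolvent R_V(xi),
    # with R_V(xi) = (g + xi - 1/2)^(-1) acting on an eigenvector of G|_V.
    f = lambda xi: 1.0 / (gv + xi - 0.5) + 1.0 / (gv - xi - 0.5)
    return -quad(f, 0.5, np.inf)[0]

k_direct = np.log(g / (1.0 - g))             # spectrum of -log(G|_V^(-1) - 1)
k_resolvent = np.array([k_eig_resolvent(x) for x in g])
```

For a scalar eigenvalue $g\in(0,1)$, the integral evaluates to $\log[g/(1-g)]$, so the two computations of the entanglement spectrum agree.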
The simplest case is the plane, where the solution of  is a standard result [@Muskhelishvili:1958zgf], which was used by [@Casini:2009vk] to derive the corresponding modular Hamiltonian. They found that for multiple intervals, it consists of a local and a bi-local term. The former can be written as $$\label{eq:unruh} \mathcal K = \int_V {{{{\mathrm{d}^{} \!}} x \,}} \beta(x) T(x)$$ in terms of the stress tensor $T = \frac{{{\mathrm i}}}{2} [\psi^\dag \partial_x \psi-\psi \partial_x \psi^\dag]$, where $\beta(x)$ is known as the *entanglement temperature*. On the other hand, the bi-local term couples the field between different intervals. Let us now proceed to the case of a chiral fermion on the torus. As is customary, we take the periods to be $1,\tau$ with $\Im(\tau) > 0$, such that the nome $q:= {{\mathrm e}}^{{{\mathrm i}}\pi\tau}$ is inside the unit disk. We restrict to purely imaginary modulus $\tau=i\beta$, where $\beta$ is the inverse temperature – the general case can be recovered by analytic continuation. For simplicity, we move to radial coordinates $w = {{\mathrm e}}^{{{\mathrm i}}\pi z}$. Since we are dealing with fermions, the correlator $G(u,v)$ with $u = {{\mathrm e}}^{{{\mathrm i}}\pi x}$ and $v = {{\mathrm e}}^{{{\mathrm i}}\pi y}$ is either periodic (Ramond; R) or anti-periodic (Neveu-Schwarz; NS) with respect to either of the two periods of the torus. We shall restrict to the “thermal” case, with NS periodicity with respect to $\tau$. Combining this with the requirement to reproduce the UV correlator $G^{\text{UV}}(x,y) = [2\pi{{\mathrm i}}(x-y)]^{-1}$ at short distances fully determines the standard Green’s functions [@DiFrancesco:1997nk] $$\label{eq:correlator} G^\nu(u,v) = \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}{{\mathrm e}}^\epsilon|q)} \frac{\vartheta_\nu(uv^{-1}|q)}{\vartheta_\nu(1|q)},$$ where $\eta(q)$ and $\vartheta_\nu(z|q)$ are the Dedekind eta and Jacobi theta functions (see supplemental material).
Here, the superscript $$\nu = 2,3 = (\text{R},\text{NS}), (\text{NS},\text{NS})$$ labels the different spin-structures, and we introduced a regulator $\epsilon$ in order to treat the distribution $G^\nu$ as a function. The sign of $\epsilon$ depends on the chirality—without loss of generality, we choose $\epsilon>0$. With the notation settled, we now go back to the integral equation . In radial coordinates, it reads $$\begin{aligned} &0 = G^\nu(u,v) - F^\nu_V(\xi;u,v) \nonumber \\ &\hspace{1.3cm} - \frac 1{\xi-1/2} \frac 1{{{\mathrm i}}\pi} \int_A \frac{{{\mathrm{d}^{} \!}}w}w G^\nu(u,w)F^\nu_V(\xi;w,v) \label{eq:singular-int-eq-rad}\end{aligned}$$ with $A := {{\mathrm e}}^{{{\mathrm i}}\pi V}$ being the entangling region. The key observation of this paper is that  resembles the result of a contour integral, involving simple poles and branch cuts. Thus the strategy for solving  is to recast it as a contour integral. To this end, we start by listing a set of sufficient properties that $F^\nu_V$ must possess in order to solve this equation: **(A) Periodicities.** First, it must have the same periodicities in the $w$ argument as $G^\nu$, such that $G^\nu F^\nu_V$ is well defined on the torus. The reason is that the contour integral of a doubly periodic function along the boundary $\gamma$ of any fundamental region vanishes (see Fig. \[fig:gamma\]): $$\label{eq:gamma} 0 = \frac 1{{{\mathrm i}}\pi} \oint_\gamma \frac{{{\mathrm{d}^{} \!}}w}w G^\nu(u,w)F^\nu_V(\xi;w,v).$$ Our aim is now to rewrite this in the form of . **(B) Location of poles and branch cuts.** The next property we demand is that $F^\nu_V$ have a simple pole $F^\nu_V(u,v) \sim 1/2(uv^{-1}-1)$ at $u \to v$, together with a branch cut along the entangling region $A$, which we specify below. Everywhere else it must be analytic. Note that, similarly to $G^\nu$, we need to introduce a regulator $\epsilon'>0$ for the pole of $F^\nu_V$. ![ The complex plane analysis in the argument.
The black solid line is the entangling region—here for simplicity two intervals. The blue line represents the contour of integration $\gamma$ in , which leads to the residues evaluated along the green dotted curves.[]{data-label="fig:gamma"}](torus_contour.png){width="7.0cm"} If these conditions are met, a simple residue analysis shows that  reduces to $$\begin{aligned} 0 &= G^\nu(u,v{{\mathrm e}}^{-\epsilon'}) - F^\nu_V(\xi;u{{\mathrm e}}^\epsilon,v{{\mathrm e}}^{-\epsilon'}) \nonumber \\ &\hspace{.3cm} - \frac 1{\xi-1/2} \frac 1{{{\mathrm i}}\pi} \int_{A^\circlearrowleft}\!\! \frac{{{\mathrm{d}^{} \!}}w}w G^\nu(u{{\mathrm e}}^\epsilon,w)F^\nu_V(\xi;w,v{{\mathrm e}}^{-\epsilon'}),\label{eq:snug} \end{aligned}$$ where we made the regulators explicit and $A^\circlearrowleft$ denotes a snug path around the cut on $A$ as depicted in Fig. \[fig:gamma\]. **(C) Residues.** This last integral decomposes into three contributions: one along $A$ just inside the unit circle, one along $A$ just outside the unit circle, and contributions from the boundary points $\alpha_n=e^{i\pi a_n},\beta_n=e^{i\pi b_n}$ of $A$ as can be seen from Fig. \[fig:gamma\]. Our final requirements on $F_V^\nu$ are that the residues at ${{\partial^{} \!}}A$ vanish, while $F^\nu_V$ has to have a multiplicative branch cut along $A$: at every point, the ratio of the function just above and below the cut is a fixed number $$\label{eq:branch-cut} \frac{F^\nu_V(u{{\mathrm e}}^{-\epsilon''},v)}{F^\nu_V(u{{\mathrm e}}^{+\epsilon''},v)} = \frac{\xi+1/2}{\xi-1/2} =: {{\mathrm e}}^{2\pi h}.$$ The solution to in the plane is familiar: $F(z)=z^m$ with $m\not\in \mathbb Z$ possesses such a cut. Below we find the analogue of this on the torus. If properties **(A),(B),(C)** are satisfied, it is easy to show that such an $F_V$ indeed solves the problem: our contour equation becomes exactly the original singular integral equation . 
The requirement that the residues on ${{\partial^{} \!}}A$ vanish is equivalent to demanding that the modular flow behaves like Rindler space in the vicinity of ${{\partial^{} \!}}A$. This is analogous to the derivation of the black hole temperature by the smoothness condition at the horizon. In the supplemental material, we explicitly derive $F^\nu_V$ satisfying all of the above assumptions. The general procedure is as follows: 1. Start with the standard solution for the requirement of a multiplicative branch cut  on the cylinder [@Klich:2015ina]. 2. Average over all fundamental domains in the direction of $\tau$. This yields a quasiperiodic function. 3. Multiply with a slightly modified form of the Green’s function  to turn the quasiperiodicity into a periodicity and introduce the correct pole. We are now in position to state one of the main results of this paper: the resolvent for a finite union of disjoint intervals on the torus, $V=\cup_{n=1}^N (a_n,b_n)$. The exact expression lives in the complex plane, but is vastly simplified along $A$. Introducing the shorthand notation $$\label{eq:def-lambda} \lambda := \bigg[\prod_{n=1}^N \frac{\alpha_n}{\beta_n}\bigg]^{{{\mathrm i}}h} \!\!\!\!= {{\mathrm e}}^{\pi h L},$$ where $L$ is the total length of $V$, our result is $$\begin{aligned} F^\nu_V(\xi;u,v) &= \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}{{\mathrm e}}^{\epsilon'}|q)} \frac{\vartheta_\nu(\lambda uv^{-1}|q)}{\vartheta_\nu(\lambda|q)} \nonumber \\ &\hspace{2.5cm} \times {{\mathrm e}}^{-2\pi h} \bigg[\frac{\Omega_V(u)}{\Omega_V(v)}\bigg]^{{{\mathrm i}}h} \label{eq:solution}\end{aligned}$$ with $h$ defined in , and $$\label{eq:def-omega} \Omega_V(w) := - \prod_{n=1}^N \frac{\vartheta_1(w\alpha_n^{-1}|q)}{\vartheta_1(w\beta_n^{-1}|q)}.$$ Some comments are in order. The term in the second line of is the complex power of a quotient, which introduces the required branch cut along $A$. 
This function is quasi-periodic, acquiring a factor of $\lambda^2$ when translated into the next fundamental domain. The first factor resembles the propagator  and introduces the desired pole, as described above. Additionally, the extra factor of $\lambda$ in the argument of $\vartheta_\nu$ is there to precisely cancel the quasi-periodicity of the second term. This allows the product $G^\nu F^\nu_V$ to be exactly doubly periodic, as required. Modular Hamiltonian =================== Finally, now that we have found the resolvent $R^\nu_V$, we can go back to  to obtain the modular Hamiltonian $K^\nu_V$. First, note that the leading divergence of $F^\nu_V(u,v) \sim 1/2(uv^{-1}{{\mathrm e}}^{\epsilon'}-1)$ at $u \to v$ can be rewritten as a Cauchy principal value $$\label{eq:cpval} \frac 12 \frac 1{uv^{-1}{{\mathrm e}}^{\epsilon'}-1} = \frac{\delta(x-y)}2 + {\mathcal P}\frac 12\frac 1{uv^{-1}-1}.$$ For the sake of readability, we shall keep ${\mathcal P}$ implicit for the rest of this paper. Equation  implies that the $\delta$-terms from  drop out in , yielding $$\label{eq:mod-ham-int-2} K^\nu_V = \int_{1/2}^\infty \frac{{{{{\mathrm{d}^{} \!}} \xi \,}}}{(\xi-1/2)^2} \Big[F^\nu_V(\xi) + F^\nu_V(-\xi)\Big].$$ The main characteristic of  is that the integrand is highly oscillatory and divergent around $\xi=1/2$. Indeed, notice that when $\xi\to 1/2$ the prefactor in  diverges quadratically while $F(\xi)$ vanishes linearly but oscillates wildly due to the last factor in . However, this behaviour is well understood in the theory of distributions, and in this sense the expression  is well defined and closely related to the Dirac delta. In the supplemental material, we evaluate  analytically. Here we will simply quote the result, but the main steps in the derivation are the following: 1. Change variables to isolate all the infinite poles along the negative axis, which then lie in successive fundamental domains. 2.
Regularize  by placing a contour that includes increasingly many poles, and express it by residues. 3. Use the quasiperiodicities of $\vartheta_\nu$ to bring every pole to the fundamental region, expressing  as a highly oscillatory function with a divergent prefactor. 4. Remove the regulator, leading to standard Dirichlet kernel representations of the periodic/antiperiodic Dirac delta. The final expression for the modular Hamiltonian depends on the spin sector. Let us focus on the results for a single interval. Both sectors $\nu=2,3$ have a local and a bi-local term. The local term is identical in both cases and takes the form $$\label{eq:k-loc-23} K_{\text{loc}}(x,y) = \beta(x) [{{\mathrm i}}\partial_x+ f(x)]\delta (x-y),$$ with the entanglement temperature $$\label{eq:beta} \beta(x) = \frac{2\pi\beta}{2\pi + \beta\partial_x\log\Omega_V({{\mathrm e}}^{{{\mathrm i}}\pi x})},$$ where $\Omega_V$ is as defined in  and the function $f(x)$ is fixed by requiring that $K_{\text{loc}}$ is hermitian. Note that the expression  is equivalent to the more familiar Rindler-like representation . The bi-local term represents the central result of this paper and shows a novel feature: In both sectors, it involves a coupling between an infinite but discrete set of points, and is given by $$\begin{aligned} &K^\pm_{\text{bi-loc}}(x,y) =\frac{{{\mathrm i}}\pi}{L\sinh \pi\mu(x,y)} \nonumber\\ &\hspace{1.5cm}\times \!\!\sum_{k\in {{\mathbb Z}}\setminus\{0\}}\!\!\! (\pm 1)^k \delta(x-y+\beta\mu(x,y)-k), \label{eq:k-biloc}\end{aligned}$$ where the sign $\pm$ corresponds to $\nu=\genfrac{}{}{0pt}{1}{2}{3}$. Here, we used the function $$\label{eq:def-mu} \mu(x,y) =\frac{1}{2\pi L} \log \frac{\Omega_V({{\mathrm e}}^{{{\mathrm i}}\pi x})}{\Omega_V({{\mathrm e}}^{{{\mathrm i}}\pi y})},$$ which will play an important role in the analysis below.
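The Rindler-like behaviour of the entanglement temperature near the entangling points can be illustrated numerically. The sketch below evaluates $\beta(x)$ for a single interval, using as an illustrative assumption the low-temperature ($q\to 0$) truncation $\Omega_V({{\mathrm e}}^{{{\mathrm i}}\pi x}) \approx \sin(\pi(x-a))/\sin(\pi(b-x))$ in place of the full theta-function expression:

```python
import math

a, b = 0.0, 0.5      # single interval V = (a, b)
beta_T = 0.7         # global inverse temperature beta

def log_Omega(x):
    # q -> 0 truncation of Omega_V for a single interval (assumption)
    return math.log(math.sin(math.pi*(x - a)) / math.sin(math.pi*(b - x)))

def beta_local(x, dx=1e-6):
    # entanglement temperature beta(x), derivative by central differences
    dlog = (log_Omega(x + dx) - log_Omega(x - dx)) / (2*dx)
    return 2*math.pi*beta_T / (2*math.pi + beta_T*dlog)

# near the endpoint a, beta(x) ~ 2*pi*(x - a): a local Rindler horizon
x_near = a + 1e-4
# at the centre, beta(x) stays below the global inverse temperature
x_mid = 0.5*(a + b)
```

Close to $a$ the derivative term dominates the denominator, so $\beta(x)$ vanishes linearly, which is the smoothness condition behind the residue requirement at ${{\partial^{} \!}}A$.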
Note that $K^\pm_{\text{bi-loc}}$ couples pairs $(x,y)$ which are solutions of $$\label{eq:mu-roots} x-y+\beta\mu(x,y)-k=0, \quad k \in{{\mathbb Z}}\setminus\{0\}.$$ Since $\mu(x,y)$ is monotonic in $y$ and diverges at the endpoints, eq.  possesses a unique solution for every $k$, as shown in Fig. \[fig:mu-roots\]. Solutions accumulate near the endpoints. In the next section, we analyse the above expressions and discuss their physical meaning. A summary of the results is presented in table \[tab:summary\]. ![For finite $\beta$ (black solid) the point at the centre is bi-locally coupled to an infinite set $x_k(x)$ (black dots), solutions to for a single interval. For large $\beta$ (blue dashed), the solutions distribute densely, whereas for $\beta\to 0$ (green dotted) they all localise at the endpoints. The strength $\alpha(x,x_k)$ of the coupling (red, dot-dashed) decays towards the endpoints.[]{data-label="fig:mu-roots"}](mu_roots3.png){width="7.3cm"} Discussion ========== In this paper we computed the modular Hamiltonian of chiral fermions in a thermal state on the circle, reduced to an arbitrary set of disjoint intervals. Our main result is that for arbitrary temperature, the modular Hamiltonian contains a local term, as well as an infinite number of bi-local contributions, even for a single interval. Let us now analyse the bi-local terms in more detail. Inserting the kernel  back into , the bi-local modular Hamiltonian reads $$\label{eq:Kpmbiloc} \mathcal{K}^\pm_{\text{bi-loc}} = \sum_{k\neq 0} (\pm 1)^k \int_V \!{{{{\mathrm{d}^{} \!}} x \,}} \alpha(x,x_k) \psi^\dag(x) \psi(x_k(x)).$$ As depicted in Fig. \[fig:mu-roots\], the $x_k(x)$ are an infinite set of points within the interval, solutions to equation . 
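The roots coupled by the bi-local term are easy to obtain numerically. The sketch below finds them by bisection for a single interval, again using the low-temperature truncation $\Omega_V({{\mathrm e}}^{{{\mathrm i}}\pi x})\approx\sin(\pi(x-a))/\sin(\pi(b-x))$ as an illustrative stand-in for the full theta-function expression:

```python
import math

a, b = 0.0, 0.5          # single interval V = (a, b)
L = b - a
beta = 1.5               # inverse temperature

def mu(x, y):
    # mu(x, y) in the q -> 0 truncation of Omega_V (illustrative only)
    def log_omega(t):
        return math.log(math.sin(math.pi*(t - a)) / math.sin(math.pi*(b - t)))
    return (log_omega(x) - log_omega(y)) / (2*math.pi*L)

def x_k(x, k, tol=1e-13):
    # unique root y of  x - y + beta*mu(x, y) - k = 0, found by bisection;
    # the left-hand side decreases monotonically from +inf to -inf in y
    lo, hi = a + 1e-14, b - 1e-14
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if x - mid + beta*mu(x, mid) - k > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

x = 0.25                              # the centre of the interval
roots = [x_k(x, k) for k in range(1, 6)]
# successive roots approach a at the rate exp(-2*pi*L/beta) per step
```

Printing the roots shows the exponential accumulation towards the endpoint: each step towards larger $k$ shrinks the distance to $a$ by the same multiplicative factor.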
The bi-local coupling $\alpha(x,x_k)$ has dimensions of energy and is given by $$\begin{aligned} \alpha(x,y)=\frac{i\pi}{L\sinh \pi\mu(x,y)} \frac{1}{|1-\beta \partial_y\mu(x,y)|}\, .\nonumber\end{aligned}$$ Although determining the exact location of the $x_k$ is difficult, two properties are simple to extract: - The infinitely many $x_k$ accumulate near the endpoints of the interval. Indeed, since $\mu$ diverges there, there is an infinite number of solutions near the boundaries, located at $$\begin{aligned} \label{x_k} x_k = a + e^{-2\pi L k/\beta}, \quad \text{as } k \to \infty\end{aligned}$$ and similarly near $b$. - Their contributions vanish as they approach the endpoints. Using , the coupling in  goes as $$\begin{aligned} \label{alphaxxk} |\alpha(x,x_k)| \xrightarrow{k\to\infty} \frac{4\pi^2}\beta (x_k-a)^{1+1/2L}\, .\end{aligned}$$ The energy scale of $\alpha(x,x_k)$ is set by the temperature $\beta^{-1}$, whereas the fall-off is determined by the length of the interval $L$. Interestingly, the strength of the non-local couplings appears to be “redshifted” due to their proximity to the local Rindler horizons located at the endpoints. As a next step, let us see how to recover the known results at very high [@Cardy:2016fqc] and low [@Klich:2015ina] temperatures. We start with the high temperature limit $\beta\to 0$. One easily sees from  that the local term goes as the inverse temperature, $\beta(x)\sim\beta$, as expected. On the other hand, as depicted in Fig. \[fig:mu-roots\], the bi-local contributions  all approach the endpoints, where they vanish exponentially. Moving now to the low temperature limit $\beta\to\infty$, the entanglement temperature  approaches the well known result for the cylinder [@Klich:2015ina] $$\label{eq:beta-large-beta} \lim_{\beta\to\infty} \beta(x) = \frac{2\pi}{\partial_x \log \frac{\sin(\pi(x-a))}{\sin(\pi(b-x))}}.$$ The bi-local contributions however behave remarkably. As can be understood from Fig.
\[fig:mu-roots\], as we lower the temperature, the curve gets increasingly steep. Thus, the solutions to  form a partition of the interval which becomes denser and denser in the limit $\beta\to\infty$. Now, recall that the modular Hamiltonian must always be thought of as a distribution, i.e. as integrated against regular test functions. In this limiting procedure, the solutions to  “condense” in the interval, and it can be shown that the sequence of Dirac deltas in  reproduces precisely the definition of a Riemann integral. Indeed, one can show that in this sense  becomes completely non-local $$\label{eq:k-nonloc-large-beta} \lim_{\beta\to \infty} K^+_{\text{bi-loc}}(x,y) = \frac{{{\mathrm i}}\pi}{L\sinh\pi\mu(x,y)},$$ in agreement with [@Klich:2015ina], whereas $\lim_{\beta\to\infty} K^-_{\text{bi-loc}} = 0$ due to the oscillating $(-1)^k$. The previous analysis provides a new insight into the structure of fermionic entanglement: At any finite temperature, non-locality couples a given point only to an infinite but discrete set of other points. The characteristic scale needed to resolve this discreteness goes as $1/\beta$. Hence, continuous non-locality emerges strictly in the limit of zero temperature. We summarize the structure of the modular Hamiltonian in table \[tab:summary\].

  $\nu$   $\beta\to\infty$          $\beta$ finite                         $\beta\to 0$
  ------- ------------------------- -------------------------------------- -------------------------------------------
  2       local + cont. non-local   $K_{\text{loc}}+K^+_{\text{bi-loc}}$   $\beta{{\mathrm i}}\partial_x\delta(x-y)$
  3       local                     $K_{\text{loc}}+K^-_{\text{bi-loc}}$   $\beta{{\mathrm i}}\partial_x\delta(x-y)$

  : Summary of our results for the modular Hamiltonian in different spin sectors. The definitions for $K_{\text{loc}}$ and $K^\pm_{\text{bi-loc}}$ are in –.
The local and non-local terms at low temperature ($\beta\to\infty$) are given in  and .[]{data-label="tab:summary"} For multiple intervals, the only difference is that  now possesses one solution *per interval* for a given $k$, including the non-trivial ($x\neq y$) solutions for $k=0$. In the low temperature limit, these extra terms yield precisely the bi-local terms of [@Casini:2009vk; @Klich:2015ina]. During the final stage of this project, related results were independently reported in [@Hollands:2019hje] and [@Blanco:2019xwi]. Eqs. (145) and (146) of [@Hollands:2019hje] give the modular flow of the correlator. The generator of this flow corresponds to the expectation value of our result for the modular Hamiltonian. Finally, the versatility of the resolvent method has allowed us to compute the associated entanglement entropy [@Fries:2019acy], and can also be used to study other quantities related to the entanglement spectrum. Acknowledgments =============== We are very grateful to D. Blanco and G. Pérez-Nadal for collaboration in the initial stages of this project. We thank J. Camps, B. Czech, M. Heller, H. Hinrichsen, C. Northe and G. Wong for helpful discussions on the subject. PF is financially supported by the DFG project DFG HI 744/9-1. The Gravity, Quantum Fields and Information group at AEI is generously supported by the Alexander von Humboldt Foundation and the Federal Ministry for Education and Research through the Sofja Kovalevskaja Award. IR also acknowledges the hospitality of Perimeter Institute, where part of this work was done.
Supplemental material ===================== Conventions ----------- We work with the Dedekind eta and Jacobi theta functions defined as: $$\begin{aligned} \eta(q^2) &:= q^{1/12} \prod_{k\geq 1}(1-q^{2k}), \\ \vartheta_3(w|q) &:= \sum_{k\in{{\mathbb Z}}} q^{k^2} w^{2k}, \\ \vartheta_4(w|q) &:= \vartheta_3({{\mathrm i}}w|q), \\ \vartheta_2(w|q) &:= q^{1/4} w \vartheta_3(\sqrt q w|q), \quad \text{and} \\ \vartheta_1(w|q) &:= -{{\mathrm i}}q^{1/4} w \vartheta_3({{\mathrm i}}\sqrt q w|q).\end{aligned}$$ The resolvent from the propagator --------------------------------- Let us give a quick derivation of . The basic idea is that given a holomorphic function $f(z)$ and an operator $\mathcal{G}$, we can determine $f(\mathcal{G})$ by using Cauchy’s integral formula for each of the eigenvalues of $\mathcal{G}$. In our case, we wish to find $K=-\log [G^{-1}-1]$. Since the spectrum of $K$ is real, this equation tells us that the spectrum of the propagator $G$ is a subset of the open interval $(0,1)$. Consider now a specific eigenvalue $g$ of $G$. We can use Cauchy’s integral formula to find $$\begin{aligned} \log [g^{-1}-1] &=\log [1-g] - \log g\\ &=\frac 1{2\pi {{\mathrm i}}} \oint_{\gamma_g} \!{{{{\mathrm{d}^{} \!}} z \,}} \bigg[\frac 1{z-1+g} - \frac 1{z-g}\bigg] \log z, \end{aligned}$$ where $\gamma_g$ is a contour that encircles both $g$ and $1-g$. Notice that $\log z $ possesses a branch cut, which we choose to place along the negative real axis – the contour $\gamma_g$ must not intersect this cut. Then, since the function is holomorphic everywhere else in the plane, we can freely deform $\gamma_g$ such that integration only has to be done along the branch cut and a circle at infinity. 
The latter contribution however vanishes since the integrand is bounded by $z^{-2} \log z$ for large $z$, and we obtain $$\frac 1{2\pi{{\mathrm i}}} \int_{-\infty}^0 \!{{{{\mathrm{d}^{} \!}} z \,}} \bigg[\frac 1{z-1+g} - \frac 1{z-g}\bigg] \\ \times \Big[\log^+ z - \log^- z\Big],$$ with $\log^\pm$ referring to the values just above and below the branch cut. After evaluating the last bracket to $2\pi{{\mathrm i}}$, we can change variables to $\xi = 1/2 -z$ to find $$\log [g^{-1}-1] = \int_{1/2}^\infty \!{{{{\mathrm{d}^{} \!}} \xi \,}} \bigg[\frac 1{g-\xi-1/2}+\frac 1{g+\xi-1/2}\bigg].$$ As this formula is valid for every eigenvalue $g$ of $G$, it also holds as an operator statement, hence we arrive at . Deriving the resolvent ---------------------- In this section, we derive the solution $F^\nu_V$ (given in ) to the singular integral equation . Let us start with the functions [@Klich:2015ina] $$\label{eq:circle-omega} \omega_n(w) := \frac{\sin (\pi(a_n-z))}{\sin (\pi(b_n-z))} = \frac{\beta_n}{\alpha_n} \frac{\alpha_n^2-w^2}{\beta_n^2-w^2},$$ where $w = {{\mathrm e}}^{{{\mathrm i}}\pi z}$; these functions provide the correct branch cut on the cylinder. Choosing the branch cut of the logarithm along the negative real line, we see that $$\label{eq:branch-cut-circle} \prod_{n=1}^N \frac{\omega_n^{{{\mathrm i}}h}(w {{\mathrm e}}^{-\epsilon''})} {\omega_n^{{{\mathrm i}}h}(w {{\mathrm e}}^{\epsilon''})} = {{\mathrm e}}^{2\pi h}$$ for $w \in A$. Note that  is not well defined on the torus since it transforms non-trivially under $w \to qw$. We shall remedy this by defining $$\label{eq:big-omega} \log \Omega_n(w) := \sum_{k\in{{\mathbb Z}}} \big[\log(\omega_n(q^k w)) - \log(\omega_n(q^k))\big],$$ where the second term in the brackets is to ensure absolute convergence. We made the logarithms explicit in order not to break the behaviour  at the branch cut.
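The cylinder branch-cut condition can be verified numerically with principal-value powers. The intervals below are arbitrary illustrative choices, and the point $w$ lies on the first of them; only the interval containing $w$ contributes a jump:

```python
import cmath
import math

h = 0.4
intervals = [(0.10, 0.60), (0.70, 0.95)]   # arbitrary (a_n, b_n)

def omega(a, b, w):
    # omega_n(w) in its rational form in w
    alpha = cmath.exp(1j * math.pi * a)
    beta = cmath.exp(1j * math.pi * b)
    return (beta / alpha) * (alpha**2 - w**2) / (beta**2 - w**2)

x = 0.30                                   # a point inside the first interval
w = cmath.exp(1j * math.pi * x)
eps = 1e-7

num = den = complex(1.0, 0.0)
for a, b in intervals:
    num *= omega(a, b, w * math.exp(-eps)) ** (1j * h)   # just inside |w| = 1
    den *= omega(a, b, w * math.exp(+eps)) ** (1j * h)   # just outside
ratio = num / den
# ratio ≈ exp(2*pi*h): omega_1 is negative real on the cut, so its
# principal power jumps there, while omega_2 stays positive and smooth
```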
At first sight, $\Omega(w)$ seems doubly-periodic: by construction, $\omega$ is periodic with respect to the spatial circle, and now we sum over all translations along imaginary time. However, $\Omega(w)$ has a non-vanishing residue within each fundamental region due to the branch-cut, and thus cannot be elliptic. Instead, it turns out to be quasi-periodic, as is seen by putting a cutoff in the sum , and then computing $\Omega(q w)$. Then, the series acquires a prefactor originating in $$\lim_{k \to \pm \infty} \omega_n(q^k w) = \bigg[\frac{\beta_n}{\alpha_n}\bigg]^{\mp 1}.$$ This yields the quasi-periodicity $$\prod_{n=1}^N \Omega_n^{{{\mathrm i}}h}(q w) = \lambda^2 \prod_{n=1}^N \Omega_n^{{{\mathrm i}}h}(w),$$ where $\lambda$ is defined in . To cancel off the quasi-periodicity and to introduce the desired pole, we multiply with a combination of theta functions. We find $$\begin{aligned} \label{eq:solution-cpl-plane} F^\nu_V(\xi;u,v) &= \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}{{\mathrm e}}^{\epsilon'}|q)} \frac{\vartheta_\nu(\lambda uv^{-1}|q)}{\vartheta_\nu(\lambda|q)} \nonumber \\ &\hspace{2.5cm} \times \prod_{n=1}^N\frac{\Omega_n^{{{\mathrm i}}h}(u {{\mathrm e}}^{\epsilon''})} {\Omega_n^{{{\mathrm i}}h}(v{{\mathrm e}}^{-\epsilon''})},\end{aligned}$$ where we made our choice of branches in $\Omega^{{{\mathrm i}}h}_n$ explicit (our choice is such that the residue evaluation in the main body of the paper does not cross the branch cut). Eq.  now solves . Finally, let us rewrite this in terms of more familiar elliptic functions. 
Note that  implies $$\prod_{n=1}^N\frac{\Omega_n^{{{\mathrm i}}h}(u {{\mathrm e}}^{\epsilon''})} {\Omega_n^{{{\mathrm i}}h}(v{{\mathrm e}}^{-\epsilon''})} = {{\mathrm e}}^{-2\pi h} \prod_{n=1}^N\frac{\Omega_n^{{{\mathrm i}}h}(u {{\mathrm e}}^{\epsilon''})} {\Omega_n^{{{\mathrm i}}h}(v{{\mathrm e}}^{\epsilon''})}$$ and, now that the numerator and denominator are on the same side of the branch cut, we can move the product into the complex power to find $$\prod_{n=1}^N\frac{\Omega_n^{{{\mathrm i}}h}(u {{\mathrm e}}^{\epsilon''})}{\Omega_n^{{{\mathrm i}}h}(v{{\mathrm e}}^{-\epsilon''})} = {{\mathrm e}}^{-2\pi h} \bigg[\prod_{n=1}^N \prod_{k \in {{\mathbb Z}}} \frac{\omega_n(q^k u {{\mathrm e}}^{\epsilon''})} {\omega_n(q^k v {{\mathrm e}}^{\epsilon''})}\bigg]^{{{\mathrm i}}h}$$ for $u,v \in A$. After some algebra and an application of the Jacobi triple product [@Whittaker:2009zvh], this simplifies the solution  to  with $\Omega_V$ from . Deriving the modular Hamiltonian -------------------------------- In this section, we provide the main steps to evaluate the integral expression  for the modular Hamiltonians. We restrict to purely imaginary $\tau = {{\mathrm i}}\beta$—the general case can be restored by analytic continuation.
Let us first change the variable of integration from $\xi$ to $$\Lambda := \lambda^2 = {{\mathrm e}}^{2\pi L h} = \bigg[\frac{\xi+1/2}{\xi-1/2}\bigg]^L,$$ such that  turns into $$K^\nu_V = \frac 1L \int_0^\infty \frac{{{{{\mathrm{d}^{} \!}} \Lambda \,}}}\Lambda \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \frac{\vartheta_\nu(\sqrt\Lambda uv^{-1}|q)}{\vartheta_\nu(\sqrt\Lambda|q)} \Lambda^{{{\mathrm i}}\mu},$$ where we use the shorthand notation $$\mu := \frac 1{2\pi L} \log \frac{\Omega_V(u)}{\Omega_V(v)}.$$ To evaluate the above integral, note the following two facts: - Since we merged the two occurrences of $F^\nu_V(\pm\xi)$ in  into a single integral, integration has to be done symmetrically with respect to $\xi \leftrightarrow -\xi$, i.e., $\Lambda \leftrightarrow \Lambda^{-1}$. - The integrand is oscillatory for $\Lambda \to 0, \infty$. This requires that we introduce a symmetric regulator $r_\epsilon(\Lambda) = r_\epsilon(\Lambda^{-1})$ to tame the integral, allowing us to evaluate it via standard complex analysis methods. We choose $$\label{eq:def-reg} r_\epsilon(\Lambda) := \frac{(1+\epsilon)^2}{(\Lambda+\epsilon)(\Lambda^{-1}+\epsilon)}$$ to obtain $$\begin{aligned} \label{eq:mod-ham-int-3} K^\nu_V &= \lim_{\epsilon \searrow 0} \frac 1L \int_0^\infty {{{{\mathrm{d}^{} \!}} \Lambda \,}} \frac{(1+\epsilon)(1+\epsilon^{-1})}{(\Lambda+\epsilon)(\Lambda+\epsilon^{-1})} \nonumber \\ &\hspace{1.5cm}\times \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \frac{\vartheta_\nu(\sqrt\Lambda uv^{-1}|q)}{\vartheta_\nu(\sqrt\Lambda|q)} \Lambda^{{{\mathrm i}}\mu}.\end{aligned}$$ The integral  can now be evaluated using contour integration.
To this end, consider the integral $$\begin{aligned} \label{eq:pacman} I^\nu_\epsilon &:= \frac 1L \oint_\gamma {{{{\mathrm{d}^{} \!}} \Lambda \,}} \frac{(1+\epsilon)(1+\epsilon^{-1})}{(\Lambda+\epsilon)(\Lambda+\epsilon^{-1})} \nonumber\\ &\hspace{2cm}\times \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \frac{\vartheta_\nu(\sqrt\Lambda uv^{-1}|q)}{\vartheta_\nu(\sqrt\Lambda|q)} \Lambda^{{{\mathrm i}}\mu},\end{aligned}$$ where the contour $\gamma$ is as depicted in Fig. \[fig:pacman\]. ![Contour for the integral . The integral along the blue solid line is equal to the sum of all residues at $\Lambda \to -q^{2k+1}$ (black dots) and at $\Lambda \to -\epsilon^{\pm 1}$ (black crosses). The contour avoids the branch cut along the positive real axis (green dashed line).[]{data-label="fig:pacman"}](pacman_contour.png){width="7.1cm"} The circular contributions vanish due to the falloff of the regulator. Choosing the branch cut of $\Lambda^{{{\mathrm i}}\mu}$ along the positive real axis, we see that the two remaining horizontal contributions yield two almost identical terms, differing only by a global prefactor of $-{{\mathrm e}}^{-2\pi \mu}$. We thus find $$\label{eq:mod-ham-i} \lim_{\epsilon \searrow 0} I^\nu_\epsilon = (1-{{\mathrm e}}^{-2\pi \mu}) K^\nu_V.$$ By Cauchy’s theorem, $I^\nu_\epsilon$ can also be expressed as a sum over residues, yielding a series expression for $K^\nu_V$. We shall do this explicitly for $\nu = 3$ and briefly mention the differences for $\nu = 2,4$ at the end. The poles of the integrand are of two types (see Fig. \[fig:pacman\]): Two come from the regulator , located at $\Lambda\to-\epsilon$ and $\Lambda\to-\epsilon^{-1}$. The other (infinitely many) poles come from the poles of the ‘propagator-like’ term.
As can be seen from either the Laurent expansion of this term (see section below) or directly from the Jacobi triple product, we have the leading divergences $$\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \frac{\vartheta_3(\sqrt\Lambda uv^{-1}|q)}{\vartheta_3(\sqrt\Lambda|q)} \sim -\frac{(q^{-1}uv^{-1})^{-2k-1}}{\Lambda + q^{2k+1}}$$ at $\Lambda \to -q^{2k+1}$ for $k\in {{\mathbb Z}}$. Keeping in mind that the negative sign of poles always has to be written as ${{\mathrm e}}^{+{{\mathrm i}}\pi}$ due to our choice of branch cut, this yields $$\begin{aligned} K^3_V &= \frac{2\pi{{\mathrm i}}}L \frac 1{{{\mathrm e}}^{\pi \mu}-{{\mathrm e}}^{-\pi \mu}} \lim_{\epsilon \searrow 0} \bigg[\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \nonumber\\ &\hspace{.8cm}\times \bigg( \frac{\vartheta_4(\sqrt\epsilon uv^{-1}|q)}{\vartheta_4(\sqrt\epsilon|q)} \epsilon^{{{\mathrm i}}\mu} - (\epsilon \to \epsilon^{-1})\bigg) \nonumber\\ &\hspace{2cm}+ \sum_{k\in{{\mathbb Z}}} \frac{(uv^{-1}q^{-{{\mathrm i}}\mu})^{-2k-1}}{(q^{2k+1}-\epsilon)(q^{-2k-1}-\epsilon)} \bigg]. \label{eq:mod-ham-series}\end{aligned}$$ Let us have a look at the series in the last line: Using the Laurent expansions below, this can be rewritten as $$\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1} q^{-i\mu} |q)} \frac{\vartheta_4(\sqrt\epsilon uv^{-1} q^{-i\mu} |q)}{\vartheta_4(\sqrt\epsilon|q)} - (\epsilon\rightarrow \epsilon^{-1}).$$ We choose the cutoff to be $\epsilon = q^{2m}$ with very large $m\in\mathbb Z$ to avoid the poles at $q^{2k+1}$, so that we only deal with simple poles. 
Then, putting everything together into  and using the quasiperiodicities of $\vartheta_4$, one finds $$K^3_V(x,y) = \lim_{m \to \infty} P(x,y) \sin \big(2m\pi(x-y+\beta \mu)\big)$$ with $$\begin{aligned} \label{} P(x,y) &= \frac{2\pi}{L\sinh \pi\mu(x,y)} \bigg[ \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}|q)} \frac{\vartheta_4(uv^{-1}|q)}{\vartheta_4(1|q)} \nonumber\\ &\hspace{1.5cm}- \frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(uv^{-1}q^{-i\mu}|q)} \frac{\vartheta_4( uv^{-1}q^{-i\mu}|q)}{\vartheta_4(1|q)} \bigg].\end{aligned}$$ As already stated above, this limit must be understood in the sense of distributions. We see that $K^3_V$ contains essentially two factors: the term involving the sine function is highly oscillatory for $m\to\infty$, except at solutions of $$\label{eq:mu-roots-2} x-y+\beta\mu(x,y) = k\in\mathbb Z.$$ As a distribution, it vanishes when integrated against any regular test function. However, the remaining factor $P(x,y)$ is not regular since it has poles, and thus we must examine its behaviour in their vicinity, which will lead to finite contributions. These poles coincide precisely with the solutions to , which are of two kinds: the trivial solution $x=y$ will lead to a local term, while the other solutions $x\neq y$ will give bi-local contributions. Let us start with the latter. 
Close to these solutions, a straightforward calculation shows that $$P(x,y) \sim \frac{{{\mathrm i}}\pi}{L\sinh \pi\mu(x,y)} \frac 1{\sin (\pi [x-y+\beta\mu(x,y)])}.$$ Combined with the oscillatory term, we recognize the Dirichlet kernel [@Rudin:1976ksh] representation of the anti-periodic Dirac delta $$\label{eq:dirichlet} \lim_{m\to\infty} \frac{\sin 2m\pi z}{\sin\pi z} = \sum_{k\in {{\mathbb Z}}} (-1)^k \delta(z-k),$$ yielding the final expression for the modular Hamiltonian for $x\neq y$, $$\label{eq:biloc} \frac{{{\mathrm i}}\pi}{L\sinh\pi\mu} \sum_{k\in {{\mathbb Z}}} (-1)^k \delta(x-y+\beta\mu(x,y)-k).$$ Now we turn to the solution $x=y$, which is special as it leads to a second order pole in $P$. Similarly to before, in the vicinity of that solution, $P(x,y)$ takes the form $$-\frac{{{\mathrm i}}\beta}L \frac 1{x-y}\frac 1{\sin (\pi[x-y+\beta\mu(x,y)])},$$ which together with the oscillatory term leads to $$\label{eq:sing-frac} -\frac{{{\mathrm i}}\beta}L \frac{\delta(x-y+\beta \mu(x,y))}{x-y}.$$ Note that we did not need to consider the terms with $k\neq 0$ as in  since we only deal with the solution $x=y$. As a last step, we use the methods from [@Casini:2009vk] to rewrite the singular fraction  as $$\label{eq:loc} \frac{\beta}{L} \frac{[{{\mathrm i}}\partial_x + f(x)] \delta(x-y)}{1+\beta (\partial_x\mu)(y,y)},$$ where $f(x)$ is fixed by hermiticity. Now we focus on the case of a single interval. Again we begin by considering the bi-local terms. Since $\mu(x,y)$ is monotonically increasing with respect to $x$ in the interval, eq.  has a unique solution for each $k\in{{\mathbb Z}}$. In particular, note that for $k=0$, the solution is $x=y$. Since we have already considered this contribution separately in , we can explicitly exclude it from the series . The final expression for the modular Hamiltonian for a single interval is then given by the sum of  and . Finally, replacing  in , the bi-local modular Hamiltonian takes the form .
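The Dirichlet-kernel limit used above can be made concrete numerically: integrating the kernel against a smooth, compactly supported test function reproduces the alternating sum of point evaluations. The bump function below is an arbitrary illustrative choice, selected to vanish at the integration boundary:

```python
import math

m = 100                      # kernel order; the limit is m -> infinity

def f(z):
    # smooth bump supported on |z| < 1.4 (an arbitrary test function)
    s = (z / 1.4)**2
    return math.exp(-1.0 / (1.0 - s)) if s < 1.0 else 0.0

def kernel(z):
    # Dirichlet-type kernel sin(2*m*pi*z)/sin(pi*z)
    return math.sin(2*m*math.pi*z) / math.sin(math.pi*z)

# midpoint rule over (-1.5, 1.5); the grid offset avoids the removable
# singularities of the kernel at integer z
n = 60000
hstep = 3.0 / n
integral = sum(kernel(-1.5 + (i + 0.5)*hstep) * f(-1.5 + (i + 0.5)*hstep)
               for i in range(n)) * hstep
expected = f(0.0) - f(1.0) - f(-1.0)   # sum over k of (-1)^k f(k)
# integral ≈ expected: the kernel acts as the anti-periodic delta comb
```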
An analogous calculation holds for $\nu=2$, with one small adjustment: Since the poles of the Laurent expansion are instead located at $-q^{2k}$, we obtain the periodic version of the Dirichlet kernel in . The rest of the calculation is identical. Laurent expansion ================= To better understand the location and behaviour of the poles of the “propagator-like” terms in , we derived their Laurent expansions. The coefficients may be computed as contour integrals which vastly simplify due to the quasi-periodicities of the theta functions. In the fundamental domain $|q|^{1/2} < |w| < |q|^{-1/2}$, the result then takes the form of Lambert series $$\begin{aligned} &\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(w|q)}\frac{\vartheta_3(\lambda w|q)} {\vartheta_3(\lambda|q)} = \frac 1{w-w^{-1}} \\ &\hspace{3cm}+ \sum_{\substack{k\geq 1\\k\text{ odd}}} \left[\frac{w^kq^k} {\lambda^{-2}+q^k}-\frac{w^{-k}q^k}{\lambda^2+q^k}\right], \\ &\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(w|q)}\frac{\vartheta_4(\lambda w|q)} {\vartheta_4(\lambda|q)} = \frac 1{w-w^{-1}} \\ &\hspace{3cm}- \sum_{\substack{k\geq 1\\k\text{ odd}}} \left[\frac{w^kq^k} {\lambda^{-2}-q^k}-\frac{w^{-k}q^k}{\lambda^2-q^k}\right], \\ &\frac{\eta^3(q^2)}{{{\mathrm i}}\vartheta_1(w|q)}\frac{\vartheta_2(\lambda w|q)} {\vartheta_2(\lambda|q)} = \frac 12\frac{w+w^{-1}}{w-w^{-1}} + \frac 12 \frac{\lambda^2-1}{\lambda^2+1} \\ &\hspace{3cm}+ \sum_{\substack{k\geq 2\\k\text{ even}}} \left[\frac{w^kq^k} {\lambda^{-2}+q^k}-\frac{w^{-k}q^k}{\lambda^2+q^k}\right].\end{aligned}$$
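These expansions are straightforward to verify numerically. The sketch below implements the theta and eta functions of the Conventions section as truncated series and checks the $\vartheta_3$ expansion at an arbitrary point of the annulus (all parameter values are illustrative):

```python
import cmath
import math

q, lam = 0.2, 1.3                      # nome and lambda, arbitrary values
w = cmath.exp(0.37j * math.pi)         # |q|^(1/2) < |w| < |q|^(-1/2)
KMAX = 40

def theta3(z, kmax=KMAX):
    # theta_3(z|q) = sum_k q^(k^2) z^(2k), truncated
    return sum(q**(k*k) * z**(2*k) for k in range(-kmax, kmax + 1))

def theta1(z, kmax=KMAX):
    # theta_1(z|q) = -i q^(1/4) z theta_3(i sqrt(q) z | q)
    return -1j * q**0.25 * z * theta3(1j * (q**0.5) * z, kmax)

def eta3(kmax=200):
    # eta(q^2)^3 with eta(q^2) = q^(1/12) prod_k (1 - q^(2k))
    prod = 1.0
    for k in range(1, kmax + 1):
        prod *= 1.0 - q**(2*k)
    return (q**(1.0/12.0) * prod)**3

lhs = eta3() / (1j * theta1(w)) * theta3(lam * w) / theta3(lam)
rhs = 1.0 / (w - 1.0/w) + sum(
    (w**k * q**k / (lam**-2 + q**k)) - (w**-k * q**k / (lam**2 + q**k))
    for k in range(1, 82, 2))
# lhs ≈ rhs, confirming the first Lambert-series expansion
```

Expanding both sides to first order in $q$ already fixes all conventions: the $O(q)$ term of either side is $q(\lambda^2 w - \lambda^{-2}w^{-1})$.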
--- author: - | [^1]\ I.N.F.N. Padova (Italy)\ E-mail: title: 'Search for Sterile Neutrinos at OPERA and other Long–Baseline Experiments' --- Introduction {#sect-1} ============ The present scenario of the Standard Model (SM) for particle physics, being, so to say, suspended by the discovery of the Higgs boson, is desperately looking for new experimental inputs to provide a more comfortable and conformable theory. From another point of view, experiments on neutrinos have been so far an outstanding source of novelty and unprecedented results. In the last two decades several results were obtained from the study of neutrinos, whether from atmospheric, solar and reactor sources or, more recently, from accelerator–based beams. Almost all these results contributed to strengthening a wonderful fulfilment of the flavour–SM. On the other hand, some of the neutrino results (fortunately) concern anomalies that do not fit in the standard scenario. In particular, three different kinds of experiments hint at the possible existence of at least a 4$^{\rm{th}}$ and *sterile* (i.e. not coupled to the weak interaction) neutrino. The excess of [$\nu_{e}$]{}([$\overline{\nu}_{e}$]{}) observed by the LSND [@lsnd] and MiniBooNE [@miniB] collaborations and the so-called reactor [@reactor] and Gallium [@gallium1; @gallium2] neutrino anomalies can be coherently interpreted as due to the existence of a fourth sterile neutrino with a mass at the eV scale. There are presently many proposals and experiments [@next-prop] that tentatively address the issue, and in the next few years results are expected to confirm/disprove the previous anomalies. However, in the author's opinion, none of them will be able to establish the existence/interpretation of the results in terms of sterile neutrinos.
In fact, any extension of the 3–flavour model, which is on the contrary perfectly adequate to the “standard” results, suffers from strong internal tensions between the interpretation of [$\nu_{e}$]{}([$\overline{\nu}_{e}$]{}) appearance/disappearance and the corresponding, required, appearance/disappearance of the other flavours, [$\nu_{\mu}$]{}and [$\nu_{\tau}$]{} [@tension]. Therefore it is imperative to search for anomalies related to the appearance/disappearance of [$\nu_{\mu}$]{}and [$\nu_{\tau}$]{}neutrinos. Following this approach, investigations are pursued with accelerator neutrino beams, even if only (unfortunately) at Long–Baseline (LBL). In this paper we will present the very recent, preliminary result on the [$\nu_{\tau}$]{}non–standard appearance/disappearance, following the observation of the 5$^{\rm{th}}$ [$\nu_{\tau}$]{}candidate by the OPERA experiment [@opera-5th]. After a description of the oscillation constraints at LBL (section \[sect-2\]), we will focus on the OPERA analyses, also including some very recent results in the [$\nu_\mu \rightarrow \nu_e$]{}channel (section \[sect-3\]). In section \[sect-4\], the MINOS and SuperK analyses will be briefly described. Conclusions and perspectives will be finally drawn (section \[sect-5\]). The Long-Baseline scenario for sterile neutrinos {#sect-2} ================================================ The presence of an additional sterile state can be expressed in the extended PMNS [@pmns] mixing matrix ($U_{\alpha i}$ with $\alpha = e, \mu, \tau, s$, and $i = 1,\ldots,4$). In this model, called “3+1”, the neutrino mass eigenstates $\nu_1,\ldots,\nu_4$ are labeled such that the first three states are mostly made of active flavour states and contribute to the “standard” three flavour oscillations with the squared mass differences $\Delta\, m_{21}^2 \sim 7.5\times 10^{-5}~{\rm eV^2}$ and $|\Delta\, m_{31}^2| \sim 2.4\times 10^{-3}~{\rm eV^2}$, where $\Delta\, m_{ij}^2 = m^2_i - m^2_j$.
The fourth mass eigenstate, which is mostly sterile, is assumed to be much heavier than the others, $0.1~{\rm eV^2}\lesssim \Delta\, m_{41}^2 \lesssim 10~{\rm eV^2}.$ The opposite case in hierarchy, i.e. negative values of $\Delta\, m_{41}^2$, produces a similar phenomenology from the oscillation point of view but is disfavored by cosmological results on the sum of neutrino masses [@cosmo-data]. In a Short-Baseline experiment the oscillation effects due to $\Delta\, m^2_{21}$ and $\Delta\, m^2_{31}$ can be neglected since $L/E\sim 1$ km/GeV. Therefore the oscillation probability depends only on $\Delta\, m^2_{41}$ and $U_{\alpha 4}$ with $\alpha = e,\mu,\tau$. In particular the survival probability of muon neutrinos can be given by an effective two–flavour oscillation formula. Differently, when $L/E \gg 1$ km/GeV, which is the case for Long-Baseline experiments, the two–flavour oscillation is not a good approximation. In the case of the CNGS beam, when studying the [$\nu_{\tau}$]{}oscillation rate the only valid approximations correspond to neglecting the solar–driven term, i.e. $\Delta\, m_{21}^2 \sim 0$, and to discarding the [$\nu_{e}$]{}component of the beam. However, when the [$\nu_\mu \rightarrow \nu_e$]{}channel is studied the intrinsic [$\nu_{e}$]{}beam–component becomes a non–negligible factor [@palazzo].
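As an illustration of the different regimes, one can compare the atmospheric and sterile-driven oscillation phases. The snippet below assumes the nominal CNGS parameters (a baseline of about 730 km from CERN to Gran Sasso and a mean beam energy of about 17 GeV, quoted here for illustration):

```python
import math

# nominal CNGS values (assumed here for illustration)
L_km = 730.0          # CERN -> Gran Sasso baseline
E_GeV = 17.0          # mean nu_mu energy of the CNGS beam

dm2_31 = 2.4e-3       # eV^2, atmospheric splitting
dm2_41 = 1.0          # eV^2, a representative sterile splitting

def Delta(dm2):
    # oscillation phase Delta_ij = 1.27 * dm2 * L / E
    return 1.27 * dm2 * L_km / E_GeV

d31, d41 = Delta(dm2_31), Delta(dm2_41)
# d31 ≈ 0.1: the standard oscillation is still developing, while
# d41 ≈ 55: the sterile-driven term oscillates many times over the
# beam energy spread, so sin(d41) averages to ~0 and sin^2(d41/2) to ~1/2
```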
Considering the ([$\nu_{\mu}$]{}, [$\nu_{\tau}$]{}, $\nu_s$) triplet, together with the above two approximations, the most general [${\ensuremath{\nu_{\mu}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} oscillation probability can be written as: $$\begin{aligned} {P_{\nu_\mu\to\nu_{\tau}} = {4 |U_{\mu 3}|^2|U_{\tau 3}|^2\sin^2\frac{\Delta_{31}}{2}} + {4 |U_{\mu 4}|^2|U_{\tau 4}|^2\sin^2\frac{\Delta_{41}}{2}}}\\ { +\, { 2\Re[U^*_{\mu 4}U_{\tau 4}U_{\mu 3}U_{\tau 3}^*]\sin\Delta_{31}\sin\Delta_{41}}}\\ { -\, { 4\Im[U^*_{\mu 4}U_{\tau 4}U_{\mu 3}U_{\tau 3}^*]\sin^2\frac{\Delta_{31}}{2}\sin\Delta_{41}}}\\ { +\, { 8\Re[U^*_{\mu 4}U_{\tau 4}U_{\mu 3}U_{\tau 3}^*]\sin^2\frac{\Delta_{31}}{2}\sin^2\frac{\Delta_{41}}{2}}}\\ {+\, {4 \Im[U^*_{\mu 4}U_{\tau 4}U_{\mu 3}U_{\tau 3}^*]\sin\Delta_{31}\sin^2\frac{\Delta_{41}}{2}}},\end{aligned}$$ using the definition $\Delta_{ij}=1.27\; \Delta\, m^2_{ij}\; L/E$ ([*i,j=1,2,3,4*]{}), with $\Delta\, m^2_{ij}$ expressed in eV$^2$, $L$ in km and $E$ in GeV. The first term corresponds to the standard oscillation, the second to the pure exotic oscillation, while the remaining four terms describe the interference between the standard and sterile neutrinos.
By defining $C=2|U_{\mu3}||U_{\tau3}|$, $\phi_{\mu\tau}=\mathrm{Arg}(U^{*}_{\mu 4}U_{\tau 4}U_{\mu 3}U^{*}_{\tau 3})$ and $\sin\, 2\theta_{\mu\tau}=2|U_{\mu4}||U_{\tau4}|$, the expression can be re–written as: $$\begin{aligned} {P(Energy) = { C^2 \sin^2\frac{\Delta_{31}}{2}} + {\sin^2\, 2\theta_{\mu\tau}\sin^2\frac{\Delta_{41}}{2}}}\\ { +\, { \frac{1}{2} C \sin\, 2\theta_{\mu\tau}\cos\phi_{\mu\tau}\sin\Delta_{31}\sin\Delta_{41}}}\\ { -\, { C\sin2\theta_{\mu\tau}\sin\phi_{\mu\tau}\sin^2\frac{\Delta_{31}}{2}\sin\Delta_{41}}}\\ { +\, { 2\, C\sin2\theta_{\mu\tau}\cos\phi_{\mu\tau}\sin^2\frac{\Delta_{31}}{2}\sin^2\frac{\Delta_{41}}{2}}}\\ {+\, {C\sin 2\theta_{\mu\tau}\sin\phi_{\mu\tau}\sin\Delta_{31}\sin^2\frac{\Delta_{41}}{2}}},\end{aligned}$$ where interesting dependencies emerge, namely on the sign of $\Delta\, m^2_{31}$ (3$^{rd}$ and 6$^{th}$ terms) and on the CP–violating phase $\phi_{\mu\tau}$ (4$^{th}$ and 6$^{th}$ terms). Finally, since at LBL the fast $\Delta_{41}$ oscillation averages out, i.e. $\sin\Delta_{41}\approx 0$ and $\sin^2\frac{\Delta_{41}}{2} \approx \frac{1}{2}$, the following expression is obtained: $$\begin{aligned} {P(Energy) \simeq { C^2 \sin^2\frac{\Delta_{31}}{2}} + {\frac{1}{2}\sin^22\theta_{\mu\tau}}}\\ { +\, { C\sin2\theta_{\mu\tau}\cos\phi_{\mu\tau}\sin^2\frac{\Delta_{31}}{2}}}\\ {+\, {\frac{1}{2}C\sin\, 2\theta_{\mu\tau}\sin\phi_{\mu\tau}\sin\Delta_{31}}}.\end{aligned}$$ This formula indicates that we are sensitive to the effective sterile mixing angle $\theta_{\mu\tau}$, to the mass hierarchy (MH: Normal, NH, or Inverted, IH) and to the CP–violating phase. The method carried out by the OPERA collaboration is applied independently to the NH and IH cases. The likelihood is maximized over the CP–violating phase $\phi_{\mu\tau}$ and over the two effective mixing angles of the 3$^{rd}$ and 4$^{th}$ mass states with [$\nu_{\mu}$]{} and [$\nu_{\tau}$]{}, i.e. the variables $C$ and $\theta_{\mu\tau}$, respectively.
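The averaging step can be verified numerically. The sketch below uses arbitrary illustrative parameter values (not OPERA fit results) and compares the full probability, averaged over many $\Delta_{41}$ periods, with the simplified LBL formula:

```python
import numpy as np

# Illustrative (non-fitted) parameters: C = 2|U_mu3||U_tau3|,
# theta = theta_mu-tau, phi = phi_mu-tau; CNGS-like L (km) and E (GeV).
C, theta, phi = 1.0, 0.15, 0.8
L, E, dm31_sq = 730.0, 17.0, 2.4e-3
D31 = 1.27 * dm31_sq * L / E                   # Delta_31
s31, S31 = np.sin(D31 / 2) ** 2, np.sin(D31)   # sin^2(D31/2), sin(D31)
s2t = np.sin(2 * theta)

def p_full(D41):
    """Full nu_mu -> nu_tau probability as a function of Delta_41."""
    s41, S41 = np.sin(D41 / 2) ** 2, np.sin(D41)
    return (C**2 * s31 + s2t**2 * s41
            + 0.5 * C * s2t * np.cos(phi) * S31 * S41
            - C * s2t * np.sin(phi) * s31 * S41
            + 2 * C * s2t * np.cos(phi) * s31 * s41
            + C * s2t * np.sin(phi) * S31 * s41)

# Simplified LBL formula: <sin Delta_41> = 0, <sin^2(Delta_41/2)> = 1/2.
p_lbl = (C**2 * s31 + 0.5 * s2t**2
         + C * s2t * np.cos(phi) * s31
         + 0.5 * C * s2t * np.sin(phi) * S31)

# Averaging the full formula over many fast Delta_41 cycles reproduces it.
grid = np.linspace(0.0, 2000 * np.pi, 400000, endpoint=False)
print(p_full(grid).mean(), p_lbl)
```

Since the full probability is linear in $\sin\Delta_{41}$ and $\sin^2\frac{\Delta_{41}}{2}$, the term-by-term averages are exact, and the two numbers printed agree to machine precision.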
OPERA preliminary results on sterile neutrinos from [$\nu_{\tau}$]{} and [$\nu_{e}$]{} {#sect-3}
=====================================================================================

Results on sterile limits based on four [$\nu_{\tau}$]{} candidates have been published very recently by OPERA [@opera-4th]. We report here the preliminary updated analysis based on the just discovered 5$^{\rm{th}}$ candidate [@opera-5th]. In figure \[fig1\] the expected number of [$\nu_{\tau}$]{} candidates in the presence of a 4$^{\rm{th}}$ neutrino state is shown, as a function of $\sin^2\, 2\theta_{\mu\tau}$ and $\Delta\, m^2_{41}$.

![The expected increase/decrease of events for the 3+1 model, as a function of $\Delta\, m^2_{41}$ and the effective mixing angle, $\theta_{\mu\tau}$, of [$\nu_{\mu}$]{} and [$\nu_{\tau}$]{} with the 4$^{\rm{th}}$ neutrino state, after maximization over the other parameters.[]{data-label="fig1"}](nexp-NH.pdf "fig:"){width=".5\textwidth"} ![The expected increase/decrease of events for the 3+1 model, as a function of $\Delta\, m^2_{41}$ and the effective mixing angle, $\theta_{\mu\tau}$, of [$\nu_{\mu}$]{} and [$\nu_{\tau}$]{} with the 4$^{\rm{th}}$ neutrino state, after maximization over the other parameters.[]{data-label="fig1"}](nexp-IH.pdf "fig:"){width=".5\textwidth"}

From the figure it is evident that the OPERA sensitivity is limited to the region ($\sin^2\, 2\theta_{\mu\tau}\gtrsim 0.1$, $\Delta\, m^2_{41}\gtrsim 0.01$ eV$^2$). The OPERA analysis was therefore twofold. In the first case the $\Delta\, m^2_{41}~>~1$ eV$^2$ region was considered, where almost no correlation with the effective mixing angle is present. The exclusion limit in the plane of the phase vs the mixing angle can then be extracted (figure \[fig2\]). When marginalizing over the phase, the limit $\sin^2\, 2\theta_{\mu\tau}< 0.11$ at 90% C.L. is obtained (preliminary). ![90% C.L.
exclusion limits in the $\phi_{\mu\tau}$ vs $\sin^2\, 2\theta_{\mu\tau}$ parameter space for normal (NH, dashed red) and inverted (IH, solid blue) hierarchies, assuming $\Delta\, m^2_{41}>1$ eV$^2$. Bands are drawn to indicate the excluded regions.[]{data-label="fig2"}](plotphi_1dof.pdf){width=".6\textwidth"}

To extend the search for a possible fourth, sterile, neutrino down to small $\Delta\, m^2_{41}$ values, a second kind of analysis has been performed by OPERA using the GLoBES software, which takes into account the non-zero $\Delta\, m^2_{21}$ value and also matter effects, the Earth density being approximated by a constant value estimated with the PREM shell model. This time the $\Delta\, m^2_{31}$ parameter has been profiled out (see [@opera-4th] for more details and references). In figure \[fig3\] the preliminary 90% C.L. exclusion plot is reported in the $\Delta\, m^2_{41}$ vs $\sin^2\, 2\theta_{\mu\tau}$ parameter space. The most stringent limits from direct searches for [${\ensuremath{\nu_{\mu}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} oscillations at short baselines, obtained by the NOMAD [@nomad] and CHORUS [@chorus] experiments, are also shown.

![OPERA preliminary 90% C.L. exclusion limits in the $\Delta\, m^2_{41}$ vs $\sin^2\, 2\theta_{\mu\tau}$ parameter space for the normal (NH, red) and inverted (IH, blue) hierarchy of the three standard neutrino masses. The exclusion plots by NOMAD [@nomad] and CHORUS [@chorus] are also shown. Bands are drawn to indicate the excluded regions.[]{data-label="fig3"}](exclusion "fig:"){width=".7\textwidth"}

Another analysis, on the [$\nu_\mu \rightarrow \nu_e$]{} search, is in progress in OPERA. The 2013 analysis will be updated using the full data set and a much less approximate treatment than previously adopted. The preliminary selection is shown in table \[tab1\]. The exclusion region will be set in the $\Delta\, m^2_{41}$ vs $\sin^2\, 2\theta_{\mu e}$ plane.
                                      all energy range   $E<20$ GeV
------------------------------------- ------------------ ------------
[$\nu_{e}$]{} candidates (30% data)   19                 4
[$\nu_{e}$]{} candidates (all data)   52                 9

: The preliminary selected [$\nu_{e}$]{} candidates by OPERA.[]{data-label="tab1"}

MINOS and SuperK analyses {#sect-4}
=========================

The MINOS and SuperK collaborations have also studied the [$\nu_\mu \rightarrow \nu_\mu$]{} and [${\ensuremath{\nu_{e}}\xspace}\rightarrow {\ensuremath{\nu_{e}}\xspace}$]{} oscillations in detail, to exclude extra contributions from [$\nu_\mu \rightarrow \nu_s$]{} oscillations. Recent results have been presented by MINOS, which makes use of the NuMI beam at FNAL [@minos], and by SuperK, using the atmospheric flux [@superk]. MINOS is also analyzing the [$\overline{\nu}\ $]{} running–mode data sample, and its updated analysis of [$\nu_\mu \rightarrow \nu_e$]{} will soon be released. The SuperK analysis is twofold, considering either $|U_{e4}|=0$ with matter effects, or the full PMNS matrix while discarding the matter effects. In the latter case a strong limit is obtained, $|U_{\mu4}|<0.04$ at 90% C.L. for $\Delta\, m^2_{41} > 0.1$ eV$^2$, for a total exposure of 282 kton–year.

Conclusions and Perspectives {#sect-5}
============================

The long–standing issue of the existence of sterile neutrino states at the eV mass scale can receive relevant new inputs from the accelerator Long–Baseline experiments, like OPERA, MINOS and SuperK. On the one hand, owing to the large $L/E$ values, LBL experiments can only look at the averaged oscillation pattern (the data lack any oscillatory behaviour). On the other hand, the non–negligible interference between flavours gives rise to dependencies on the mass hierarchy and on the CP–violating phase. New results were recently published by the three collaborations, either on [$\nu_\mu \rightarrow \nu_\mu$]{} disappearance or on [${\ensuremath{\nu_{\mu}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} appearance.
All the results put stringent exclusion limits on the effective mixing angles between [$\nu_{\mu}$]{}/[$\nu_{\tau}$]{} and [$\nu_s$]{}, thus increasing the tension with the positive results on [$\nu_{e}$]{} appearance/disappearance. With regard to [$\nu_{e}$]{}, OPERA and MINOS$+$ will soon release reliable results based on their large data sets, properly taking into account the extended $3+1$ scenario. If a sterile neutrino exists at the eV mass scale, the present situation points towards a rather low effective mixing angle, of the order of 1%, between the sterile and the standard neutrino flavours. Therefore, any experiment/proposal aiming to provide new results must reach a sensitivity at that level. The sterile neutrino story has so far developed either by trying to establish the hints (each at the 2–3 $\sigma$ level) of [$\nu_{e}$]{} appearance/disappearance, or by looking at flavour–connected channels, like [$\nu_{\mu}$]{} disappearance. Within the next 2–3 years, experiments at reactors and with radioactive sources can confirm or disprove the [$\nu_{e}$]{} anomalies, while there is presently no experiment [@nessie], other than the LBL ones, able to probe the interference at the level of 1% mixing between sterile and [*muon/tau*]{} neutrino states. The LBL experiments, however, have no possibility to observe the oscillation pattern. Therefore, new dedicated experiments should be designed and approved, should the current reactor/source investigations provide positive results.

[99]{} A. Aguilar et al. (LSND Collaboration), *Evidence for neutrino oscillations from the observation of $\overline\nu_e$ appearance in a $\overline\nu_{\mu}$ beam*, *Phys. Rev.* [**D 64**]{} (2001) 112007 \[[hep-ex/0104049]{}\]. A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), *Improved Search for $\overline\nu_{\mu}\rightarrow \overline \nu_e$ Oscillations in the MiniBooNE Experiment*, *Phys. Rev. Lett.* [**110**]{} (2013) 161801 \[[arXiv:1303.2588]{}\]. G.
Mention et al., *The reactor antineutrino anomaly*, *Phys. Rev.* [**D 83**]{} (2011) 073006 \[[arXiv:1101.2755]{}\]. M.A. Acero, C. Giunti and M. Laveder, *Limits on [$\nu_{e}$]{} and [$\overline{\nu}_{e}$]{} disappearance from Gallium and reactor experiments*, *Phys. Rev.* [**D 78**]{} (2008) 073009 \[[arXiv:0711.4222]{}\]. C. Giunti and M. Laveder, *Statistical significance of the Gallium anomaly*, *Phys. Rev.* [**C 83**]{} (2011) 065504 \[[arXiv:1006.3244]{}\]. See e.g. the proceedings of this conference related to the Neutrino afternoon sessions of July 23$^{\rm{rd}}$ and 24$^{\rm{th}}$. J. Kopp, P. A. N. Machado, M. Maltoni, and T. Schwetz, *Sterile Neutrino Oscillations: The Global Picture*, *J. High Energy Phys.* [**05**]{} (2013) 050 \[[arXiv:1303.3011]{}\];\ T. Schwetz, *Status of sterile neutrino oscillations*, *Nucl. Phys.* [**B 235–236**]{} (2013) 229–235;\ C. Giunti, M. Laveder, Y. F. Li, Q. Y. Liu, and H. W. Long, *A Pragmatic View of Short–Baseline Neutrino Oscillations*, *Phys. Rev.* [**D 86**]{} (2012) 113014. G. Sirri (for the OPERA Collaboration), *Results from the OPERA experiment in the CNGS beam*, these proceedings; N. Agafonova et al. (OPERA Collaboration), *Discovery of tau neutrino appearance in the CNGS neutrino beam with the OPERA experiment*, *Phys. Rev. Lett.* [**115**]{} (2015) 121802 \[[arXiv:1507.01417]{}\]. B. Pontecorvo, *Sov. Phys. JETP* [**26**]{} (1968) 984; Z. Maki, M. Nakagawa, and S. Sakata, *Prog. Theor. Phys.* [**28**]{} (1962) 870. P.A.R. Ade et al. (Planck Collaboration), submitted to *A&A* (2015), arXiv:1502.01589v2. A. Palazzo, *Consistent analysis of the [$\nu_\mu \rightarrow \nu_e$]{} sterile neutrino searches*, *Phys. Rev.* [**D 91**]{} (2015) 091301(R) \[[arXiv:1503.03966]{}\]. N. Agafonova et al. (OPERA Collaboration), *Limits on muon–neutrino to tau–neutrino oscillations induced by a sterile neutrino state obtained by OPERA at the CNGS beam*, *JHEP* [**06**]{} (2015) 069 \[[arXiv:1503.01876]{}\]. P. Astier et al.
(NOMAD Collaboration), *Final NOMAD results on [${\ensuremath{\nu_{\mu}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} and [${\ensuremath{\nu_{e}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} oscillations including a new search for [$\nu_{\tau}$]{} appearance using hadronic $\tau$ decays*, *Nucl. Phys.* [**B 611**]{} (2001) 3 \[[hep-ex/0106102]{}\]. E. Eskut et al. (CHORUS Collaboration), *Final results on [${\ensuremath{\nu_{\mu}}\xspace}\rightarrow {\ensuremath{\nu_{\tau}}\xspace}$]{} oscillation from the CHORUS experiment*, *Nucl. Phys.* [**B 793**]{} (2008) 326 \[[arXiv:0710.3361]{}\]. A.B. Sousa (for the MINOS and MINOS$+$ Collaborations), *First MINOS$+$ Data and New Results from MINOS*, in proceedings of *Neutrino 2014 conference*, [arXiv:1502.07715]{}; A. Timmons (MINOS Collaboration), *Searching for Sterile Neutrinos at MINOS*, in proceedings of *NuPhys2014 conference*, [arXiv:1504.04046]{}. K. Abe et al. (Super–Kamiokande Collaboration), *Limits on sterile neutrino mixing using atmospheric neutrinos in Super–Kamiokande*, *Phys. Rev.* [**D 91**]{} (2015) 052019. A. Anokhina et al. (NESSiE Collaboration), *Search for Sterile Neutrinos in the Muon Neutrino Disappearance Mode at FNAL*, arXiv:1503.07471v2. The proposal has not been accepted by FNAL.

[^1]: On behalf of the OPERA Collaboration.
--- abstract: 'We prove that ${\mathcal Z}$-stable, simple, separable, nuclear, non-unital $\mathrm{C}^*$-algebras have nuclear dimension at most $1$. This completes the equivalence between finite nuclear dimension and $\mathcal{Z}$-stability for simple, separable, nuclear, non-elementary $\mathrm{C}^*$-algebras.' address: - '-Jorge Castillejos, Department of Mathematics, KU Leuven, Celestijnenlaan 200b, 3001 Leuven, Belgium' - '-Samuel Evington, Institute of Mathematics, Polish Academy of Sciences, ul. [Ś]{}niadeckich 8, 00-656 Warszawa, Poland' author: - Jorge Castillejos - Samuel Evington title: 'Nuclear dimension of simple stably projectionless $\mathrm{C}^*$-algebras' --- Introduction {#introduction .unnumbered} ============ The Elliott Classification Programme, a 40-year endeavour involving generations of researchers, asks the following question: when are K-theory and traces a complete invariant for simple, separable, nuclear C$^*$-algebras? Fundamentally, there are two cases to consider: the *unital* case and the *stably projectionless* case. (This dichotomy follows from Brown’s Theorem ([@Br77]) and is discussed further below.) Recall that a C$^*$-algebra $A$ is said to be stably projectionless if there are no non-zero projections in the matrix amplification $M_n(A)$ for any $n \in \mathbb{N}$. Stably projectionless, simple, separable, nuclear C$^*$-algebras arise naturally as crossed products ([@KK96]), and can also be constructed using inductive limits with a wide variety of K-theoretic and tracial invariants occurring ([@Bl80; @Ra02; @Ts05; @Ja13; @Go16; @Go17; @EGLN17; @EGLN:gtr1]). In the unital case, a definitive answer for when K-theory and traces form a complete invariant is now known ([@Ki95; @Phi00; @GLN15; @EGLN15; @TWW17; @Wi14]). Firstly, Rosenberg and Schochet’s universal coefficient theorem ([@RS:UCT]) must hold for the C$^*$-algebras concerned. Secondly, the C$^*$-algebras must have *finite nuclear dimension* ([@WZ10]). 
This second condition has a geometric flavour and generalises the notion of finite covering dimension for topological spaces. Recent results ([@Go16; @E17; @EGLN17; @Go17]) are now converging on a similar classification result in the stably projectionless case; the state of the art will be discussed below. A major programme of research now focuses on providing methods for verifying finite nuclear dimension in practice. In the unital setting, a recent result of the authors together with Tikuisis, White and Winter ([@CETWW]) shows that finite nuclear dimension can be accessed through the tensorial absorption condition known as *${\mathcal Z}$-stability*, where ${\mathcal Z}$ is the *Jiang–Su algebra* (discussed in more detail below). In concrete examples, it can be very hard to prove directly that a C$^*$-algebra has finite nuclear dimension. The strategy of verifying ${\mathcal Z}$-stability instead has recently been used to prove that certain unital, simple, separable, nuclear C$^*$-algebras coming from dynamical systems are classifiable ([@CJKMST17; @KS18]). However, since this strategy relies on [@CETWW], it has until now only been available in the unital setting. In this paper, we consider and overcome the conceptual and technical challenges unique to the non-unital setting, allowing us to prove the following: \[thm:NewMain2\] Let $A$ be a simple, separable, nuclear, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra. Then $A$ has nuclear dimension at most 1. For the following reasons, the non-unital case is harder than the unital case and needs new methods. Obviously, we cannot just unitise our C$^*$-algebras because this breaks both simplicity and ${\mathcal Z}$-stability. A more fundamental issue is that non-unital, simple C$^*$-algebras need not actually be *algebraically simple*. There can be non-trivial (non-closed) ideals. Examples of such ideals are the domains of *unbounded traces*, which may now exist and must therefore be taken into account.
Furthermore, [@CETWW] builds on the foundations of [@BBSTWW], which has a global assumption of unitality and makes explicit use of the unit at a number of critical points in the argument (an example is the $2 \times 2$ matrix trick of [@BBSTWW Section 2], which is inspired by ideas of Connes). To understand how we circumvent the issues associated to unbounded traces, it will be helpful to first discuss the folklore result, alluded to above, that Brown’s Theorem ([@Br77]) implies a dichotomy for simple C$^*$-algebras between the unital and the stably projectionless case. Writing ${\mathbb{K}}$ for the C$^*$-algebra of compact operators (on a separable, infinite-dimensional Hilbert space), recall that C$^*$-algebras $A,B$ are *stably isomorphic* if $A \otimes {\mathbb{K}}\cong B \otimes {\mathbb{K}}$. Suppose now that $A$ is a simple, separable C$^*$-algebra that is not stably projectionless. Then there exists a non-zero projection $p \in A \otimes {\mathbb{K}}$, and so the hereditary subalgebra $p(A \otimes {\mathbb{K}})p$ is unital. By [@Br77 Theorem 2.8], $p(A \otimes {\mathbb{K}})p$ is stably isomorphic to $A \otimes {\mathbb{K}}$, and hence stably isomorphic to $A$ (see Section \[sec:Reductions\] for more details). Crucial to proving Theorem \[thm:NewMain2\] in general is the observation that the hypotheses and the conclusion depend only on the stable isomorphism class of $A$.[^1] Hence, by [@CETWW Theorem B] and the folklore result above based on Brown’s Theorem, it suffices to prove Theorem \[thm:NewMain2\] in the stably projectionless case. However, this folklore reduction is not enough for us. We go a step further and pass to a hereditary subalgebra $A_0 \subseteq A \otimes {\mathbb{K}}$ on which all tracial functionals are bounded and the set of tracial states $T(A_0)$ is weak$^*$ compact. 
The existence of such a hereditary subalgebra follows from the Cuntz semigroup computation of [@ERS11] for ${\mathcal Z}$-stable $\mathrm{C}^*$-algebras, and Brown’s Theorem assures us that $A_0$ is stably isomorphic to $A$. This second reduction puts us in a position where a similar proof strategy to that of [@BBSTWW] can be implemented, and where the key new ingredient from [@CETWW], *complemented partitions of unity* (CPoU), is also available. Of course, we still have to deal with the global assumption of unitality in [@BBSTWW]. A key tool in this endeavour is our unitisation lemma for order zero maps into ultrapowers (Lemma \[lem:Unitisation\]), which allows us to assume the domains of certain maps are unital in places where simplicity and ${\mathcal Z}$-stability are only really needed on the codomain side in [@BBSTWW].

We now turn to the broader context of Theorem \[thm:NewMain2\] and its applications. As alluded to above, *nuclear dimension* for C$^*$-algebras is a non-commutative dimension theory that reduces to the covering dimension of the spectrum in the commutative case. *Finite nuclear dimension* has proven to be a technically useful strengthening of nuclearity, which is both necessary for classification ([@Vi99; @Ro03; @To08; @GK14]) and a vital ingredient of the most recent classification theorems ([@Ki95; @Phi00; @GLN15; @EGLN15; @TWW17; @Wi14]). The *Jiang–Su algebra* ${\mathcal Z}$ ([@JS99]) is a simple $\mathrm{C}^*$-algebra, which plays a fundamental role in the classification of simple C$^*$-algebras since $A$ and $A \otimes {\mathcal Z}$ have the same K-theory and traces under mild hypotheses. A C$^*$-algebra $A$ is said to be *${\mathcal Z}$-stable* if $A \cong A \otimes {\mathcal Z}$. Moreover, any C$^*$-algebra can be ${\mathcal Z}$-stabilised by tensoring with the Jiang–Su algebra because ${\mathcal Z}\cong {\mathcal Z}\otimes {\mathcal Z}$.
In many ways, the Jiang–Su algebra is the $\mathrm{C}^*$-algebraic analogue of the hyperfinite $\mathrm{II}_1$ factor $\mathcal{R}$ ([@MvN43]), with ${\mathcal Z}$-stability analogous to the McDuff property ([@McD70]). Combining Theorem \[thm:NewMain2\] with the main results of [@Wi12] and [@Ti14], we arrive at the following relationship between finite nuclear dimension and ${\mathcal Z}$-stability, which was conjectured by Toms–Winter ([@To09]). \[thm:NewMain\] Let $A$ be a non-elementary, simple, separable, nuclear $\mathrm{C}^*$-algebra. The following are equivalent: (i) $A$ has finite nuclear dimension; (ii) $A$ is ${\mathcal Z}$-stable. One striking consequence of Theorems \[thm:NewMain2\] and \[thm:NewMain\] is that nuclear dimension can only attain three different values in the simple setting. \[cor:NewTrichotomy\] The nuclear dimension of a simple $\mathrm{C}^*$-algebra is $0, \, 1$ or $\infty$. This is in stark contrast to the commutative case, where all non-negative integers can occur. Moreover, we remark that the $\mathrm{C}^*$-algebras of nuclear dimension zero are known to be precisely the approximately finite dimensional $\mathrm{C}^*$-algebras ([@WZ10 Remark 2.2.(iii)]).[^2] Whilst Corollary \[cor:NewTrichotomy\] is interesting in its own right, we believe the main applications of our results will be in classification of simple, stably projectionless C$^*$-algebras. Theorem \[thm:NewMain2\] opens up a new pathway to proving that concrete examples of stably projectionless, simple, separable, nuclear C$^*$-algebras, such as C$^*$-algebras coming from flows on C$^*$-algebras or from actions of more general locally compact groups, have finite nuclear dimension: it now suffices to verify ${\mathcal Z}$-stability. We end this introduction with a discussion of the state of the art for the classification of simple, stably projectionless C$^*$-algebras. As mentioned above, there has been impressive progress in recent years ([@Go16; @Go17; @E17]). 
As in the unital case, the classification is via a functor constructed from the K-theory and the tracial data of the $\mathrm{C}^*$-algebra; this functor is called the *Elliott invariant* and is typically denoted $\mathrm{Ell}(\cdot)$ (see [@Go17 Definition 2.9] for a precise definition). By combining Theorem \[thm:NewMain2\] with [@Go17 Theorem 1.2], one obtains a classification of simple, separable, nuclear $\mathrm{C}^*$-algebras in the UCT class that tensorially absorb the C$^*$-algebra $\mathcal{Z}_0$ – a stably projectionless analogue of the Jiang–Su algebra introduced in [@Go17 Definition 8.1]. \[cor:Newclassification\] Let $A$ and $B$ be simple, separable, nuclear $\mathrm{C}^*$-algebras which satisfy the UCT. Then $$A \otimes \mathcal{Z}_0 \cong B \otimes \mathcal{Z}_0 \text{ if and only if } \mathrm{Ell}(A \otimes \mathcal{Z}_0) \cong \mathrm{Ell}(B \otimes \mathcal{Z}_0). \notag$$ Corollary \[cor:Newclassification\] reduces to the celebrated Kirchberg-Phillips classification ([@Ki95; @Phi00]) in the traceless case and is otherwise a result about stably projectionless C$^*$-algebras. For these C$^*$-algebras, the difference between $\mathcal{Z}_0$-stability and $\mathcal{Z}$-stability, roughly speaking, comes down to how complex the interaction between the K-theory and traces is allowed to be; see [@Go17] for more details. Structure of Paper {#structure-of-paper .unnumbered} ------------------ Section 1 reviews the necessary preliminary material as appropriate to the non-unital setting. Section 2 is concerned with the invariance of $\mathrm{C}^*$-algebraic properties under stable isomorphism and the reduction argument outlined above. The next three sections generalise the necessary technical machinery from [@BBSTWW] and [@CETWW]. Section 3 concerns the existence of an order zero embedding $\Phi:A \rightarrow A_\omega$ with appropriate finite dimensional approximations. 
Section 4 contains the aforementioned unitisation lemma for order zero maps into ultrapowers. Section 5 is devoted to a uniqueness theorem for maps into ultrapowers, which we shall use to compare (unitisations of) $\Phi$ and the canonical embedding $A \rightarrow A_\omega$. Theorem \[thm:NewMain2\] and its corollaries are proved in Section 6, with analogous results for decomposition rank (a forerunner of nuclear dimension) proved in Section 7. Since some preliminary lemmas from [@BBSTWW] are stated only in the unital case, we include an appendix with their non-unital versions. Acknowledgements {#acknowledgements .unnumbered} ---------------- Part of this work was undertaken during a visit of JC to IMPAN. JC thanks SE and IMPAN for their hospitality. SE would like to thank George Elliott for his helpful comments on this research during SE’s secondment at the Fields Institute, which was supported by the EU RISE Network *Quantum Dynamics*. The authors would also like to thank Jamie Gabe, Gábor Szabó, Stefaan Vaes and Stuart White for useful comments on an earlier version of this paper. Preliminaries ============= In this section, we recall the most important definitions and results that will be used in the sequel, and we introduce the notation used in this paper. We write $\mathbb{K}$ to denote the $\mathrm{C}^*$-algebra of compact operators (on a separable, infinite-dimensional Hilbert space). Given a $\mathrm{C}^*$-algebra $A$, we write $A_+$ for the positive elements of $A$ and $A_{+,1}$ for the positive contractions; we write $\mathrm{Ped}(A)$ for the Pedersen ideal of $A$, which is the minimal dense ideal of $A$ (see [@Ped79 Section 5.6]); and we write $A^\sim$ for the unitisation of $A$. Our convention is that, if $A$ is already unital, then we adjoin a new unit, so $A^\sim \cong A \oplus \mathbb{C}$ as $\mathrm{C}^*$-algebras. For $S \subseteq A$ self-adjoint, we set $S^\perp := \lbrace a \in A: ab = ba = 0, \forall b \in S\rbrace$. 
For $\epsilon>0$ and $a,b \in A$, the notation $a \approx_{\epsilon} b$ means $\|a - b\|< \epsilon$. For $a,b \in A$ with $b$ self-adjoint, we write $a \vartriangleleft b$ to mean that $ab= ba = a$. We use the common abbreviation c.p.c. for completely positive and contractive maps between $\mathrm{C}^*$-algebras. A c.p.c. map $\phi:A \rightarrow B$ is *order zero* if it preserves orthogonality in the sense that, for $a,b \in A_+$, $\phi(a)\phi(b) = 0$ whenever $ab = 0$. Following [@WZ10 Definition 2.1], a $\mathrm{C}^*$-algebra $A$ has *nuclear dimension at most $n$*, if there is a net $(F_i, \psi_i,\phi_i)_{i \in I}$, where $F_i$ is a finite dimensional $\mathrm{C}^*$-algebra, $\psi_i: A \to F_i$ is a c.p.c. map, and $\phi_i: F_i \to A$ is a c.p. map, such that $\phi_i \circ \psi_i (a) \to a$ for all $a \in A$ and, moreover, each $F_i$ decomposes into $n+1$ ideals $F_i = F_i^{(0)} \oplus \cdots \oplus F_i^{(n)}$ for which the restrictions $\phi_i|_{F_i^{(k)}}$ are c.p.c. order zero. The *nuclear dimension* of $A$, denoted by $\dim_{\mathrm{nuc}} A$, is defined to be the smallest such $n$ (and to be $\infty$, if no such $n$ exists). The *decomposition rank*, a forerunner of nuclear dimension, is obtained if one additionally requires $\phi_i$ to be a c.p.c. map [@KW04 Definition 3.1]. We shall denote the decomposition rank of a $\mathrm{C}^*$-algebra $A$ by $\mathrm{dr}(A)$. By a *trace* on a $\mathrm{C}^*$-algebra we will typically mean a tracial state, i.e. a positive linear functional $\tau:A \rightarrow \mathbb{C}$ of operator norm $1$ such that $\tau(ab)=\tau(ba)$ for all $a, b \in A$. We write $T(A)$ for the set of tracial states on $A$ endowed with the weak$^*$-topology. More general notions of traces are discussed in Section \[sec:GenTraces\] below. By a *cone* we will mean a convex subset $C$ of a locally convex space that satisfies $C + C \subseteq C$, $\lambda C \subset C$ for $\lambda > 0$, and $C \cap (-C) = \lbrace 0 \rbrace$. 
A *base* for a cone $C$ is a closed, convex, and bounded subset $X$ such that for any non-zero $c \in C$ there exist unique $\lambda > 0$ and $x \in X$ such that $c = \lambda x$. By [@Alf71 Theorem II.2.6], a cone is locally compact if and only if it has a compact base. A map $f: C \rightarrow D$ between cones is *linear* if $f(\lambda x + \mu y) = \lambda f(x) + \mu f(y)$ for $\lambda, \mu \geq 0$ and $x,y \in C$. If $X$ is a compact base for the cone $C$, then any continuous affine map $X \rightarrow D$ extends uniquely to a continuous linear map $C \rightarrow D$.

Generalised Traces {#sec:GenTraces}
------------------

In this preliminary section, we briefly discuss the generalisations of traces that arise in the general theory of $\mathrm{C}^*$-algebras. A *quasitrace*[^3] on a $\mathrm{C}^*$-algebra $A$ is a function $\tau:A_+ \rightarrow [0,\infty]$ with $\tau(0) = 0$ such that

(i) $\tau(a^*a) = \tau(aa^*)$ for all $a \in A$,

(ii) $\tau(a + b) = \tau(a) + \tau(b)$ for all commuting elements $a,b \in A_+$,

(iii) $\tau$ extends to a function $\tau_2:M_2(A)_+ \rightarrow [0,\infty]$ for which (i) and (ii) hold.

The quasitrace $\tau$ is *additive* if (ii) holds for all $a,b \in A_+$.[^4] Setting $\mathrm{Dom}_{1/2}(\tau) := \lbrace a \in A: \tau(a^*a) < \infty \rbrace$, we say that $\tau$ is *densely-defined* if $\mathrm{Dom}_{1/2}(\tau)$ is dense in $A$, and that $\tau$ is *bounded* if $\mathrm{Dom}_{1/2}(\tau) = A$. We write $Q\widetilde{T}(A)$ for the cone of densely-defined, lower-semicontinuous quasitraces; $\widetilde{T}(A)$ for the cone of densely-defined, lower-semicontinuous, additive quasitraces; and $\widetilde{T}_b(A)$ for the cone of bounded, additive quasitraces. The topology on these cones is given by pointwise convergence on $\mathrm{Ped}(A)$. Since the traces on a $\mathrm{C}^*$-algebra will play a crucial role in the arguments of this paper, the following existence theorem of Blackadar–Cuntz is fundamental.
Let $A$ be a simple $\mathrm{C}^*$-algebra such that $A \otimes {\mathbb{K}}$ contains no infinite projections. Then $Q\widetilde{T}(A) \neq 0$. It is an open question whether $Q\widetilde{T}(A) = \widetilde{T}(A)$ in general. However, when $A$ is exact, this is a famous result of Haagerup; see [@Ha14] for the unital case and [@BK04 Remark 2.29(i)] for how to deduce the general case from [@Ha14]. Every $\tau \in Q\widetilde{T}(A)$ has a unique extension to a densely-defined, lower-semicontinuous quasitrace on $A \otimes {\mathbb{K}}$, which is additive whenever $\tau$ is additive [@BK04 Remark 2.27(viii)]. Therefore, we have canonical isomorphisms $Q\widetilde{T}(A) \cong Q\widetilde{T}(A \otimes {\mathbb{K}})$ and $\widetilde{T}(A) \cong \widetilde{T}(A \otimes {\mathbb{K}})$, which we treat as identifications. Furthermore, every $\tau \in \widetilde{T}_b(A)$ has a unique extension to a positive linear functional on $A$, which we also denote $\tau$, satisfying the trace condition $\tau(ab) = \tau(ba)$ for all $a,b \in A$. Let $a,b \in A_+$. If there exists a sequence $(x_n)_{n\in{\mathbb{N}}}$ in $A$ such that $b = \sum_{n=1}^\infty x_n^*x_n$ and $\sum_{n=1}^\infty x_nx_n^* \leq a$, then $b$ is said to be *Cuntz–Pedersen subequivalent* to $a$; see [@Cu79]. Our notation for this subequivalence will be $b \preccurlyeq a$. The following proposition is proven by the same method as [@Cu79 Proposition 4.7]. For the benefit of the reader, we give full details. \[prop:ExtendingTraces\] Let $A$ be a simple, separable $\mathrm{C}^*$-algebra and $B \subseteq A$ a non-zero hereditary subalgebra. The restriction map $\rho:\widetilde{T}(A) \rightarrow \widetilde{T}(B)$ is a linear homeomorphism of cones. Since $\mathrm{Ped}(B) \subseteq \mathrm{Ped}(A)$, the restriction of a densely-defined quasitrace on $A$ is a densely-defined quasitrace on $B$. Restriction also preserves additivity and lower-semicontinuity. Hence, $\rho$ is well defined. 
Continuity of $\rho$ follows immediately from the fact that $\mathrm{Ped}(B) \subseteq \mathrm{Ped}(A)$, and it is clear that $\rho$ is linear. We now turn to proving that $\rho$ is surjective. Let $\sigma \in \widetilde{T}(B)$. Define $\tau:A_+ \rightarrow [0,\infty]$ by $\tau(a) := \sup \lbrace \sigma(b): b \in B_+, b \preccurlyeq a \rbrace$. The following properties of $\tau$ are easy to verify: $$\begin{aligned} \tau(0) &= 0,\\ \tau(a^*a) &= \tau(aa^*), &a \in A,\\ \tau(\lambda a) &= \lambda \tau(a), &\lambda \geq 0, \ a \in A_+,\\ \tau(a_1 + a_2) &\geq \tau(a_1) + \tau(a_2), &a_1,a_2 \in A_+. \end{aligned}$$ Let $a_1,a_2 \in A_+$. Suppose $b \in B_+$ and $b \preccurlyeq a_1 + a_2$. By [@Pe69 Corollary 1.2], there exist $b_1,b_2 \in A_+$ with $b = b_1 + b_2$ such that $b_1 \preccurlyeq a_1$ and $b_2 \preccurlyeq a_2$. Since $B$ is a hereditary subalgebra, $b_1,b_2 \in B_+$. Hence, $$\sigma(b) = \sigma(b_1) + \sigma(b_2) \leq \tau(a_1) + \tau(a_2).$$ Taking the supremum, we get $\tau(a_1 + a_2) \leq \tau(a_1) + \tau(a_2)$, and therefore $\tau(a_1 + a_2) = \tau(a_1) + \tau(a_2)$. This completes the proof that $\tau$ is an additive quasitrace. Since $B$ is a hereditary subalgebra of $A$, the restriction of the Cuntz–Pedersen subequivalence relation on $A$ to $B$ is the same as the Cuntz–Pedersen subequivalence relation on $B$. It follows that $\tau|_{B_+} = \sigma$. As $\sigma$ is densely defined on $B$ and $A$ is simple, $\tau$ is densely defined. Let $\widetilde{\tau}(a) := \sup_{\epsilon > 0} \tau((a - \epsilon)_+)$ be the *lower-semicontinuous regularisation* of $\tau$ (see [@BK04 Remark 2.27(iv)] and [@ERS11 Lemma 3.1]). Then $\widetilde{\tau}$ is a densely-defined, lower-semicontinuous, additive quasitrace on $A$, and we still have $\widetilde{\tau}|_{B_+} = \sigma$ because $\sigma$ is lower-semicontinuous. Therefore, $\rho$ is surjective. We now prove that $\rho$ is injective. Let $\sigma$, $\tau$, and $\widetilde{\tau}$ be as above.
Suppose $\psi \in \widetilde{T}(A)$ also satisfies $\psi|_B = \sigma$. Since $\psi(b) \leq \psi(a)$ whenever $b \preccurlyeq a$, we must have $\tau \leq \psi$. Since taking lower-semicontinuous regularisations is order-preserving, we have $\widetilde{\tau} \leq \psi$. By [@ERS11 Proposition 3.2], there exists $\varphi \in \widetilde{T}(A)$ such that $\psi = \widetilde{\tau} + \varphi$. However, $\psi|_{B_+} = \widetilde{\tau}|_{B_+} = \sigma$ and so $\varphi$ vanishes on $B_+$. Since $A$ is simple, it follows that $\varphi = 0$ and so $\psi = \widetilde{\tau}$. Therefore, $\rho$ is injective. Finally, we prove that $\rho$ is a homeomorphism. Fix $b \in \mathrm{Ped}(B) \setminus \lbrace 0 \rbrace$. Note that $b$ is also in $\mathrm{Ped}(A)$ and is full in both $A$ and $B$ by simplicity. Set $X_A := \lbrace \tau \in \widetilde{T}(A): \tau(b) = 1\rbrace$ and $X_B := \lbrace \tau \in \widetilde{T}(B): \tau(b) = 1\rbrace$. By [@TT15 Proposition 3.4], $X_A$ is a compact base for the cone $\widetilde{T}(A)$ and $X_B$ is a compact base for the cone $\widetilde{T}(B)$. Since $b \in B$, we have that $\rho(X_A) = X_B$. Hence, $\rho$ defines a continuous, affine bijection from $X_A$ to $X_B$. Since $X_A$ and $X_B$ are compact Hausdorff spaces, $\rho$ in fact defines an affine homeomorphism between compact bases for the cones $\widetilde{T}(A)$ and $\widetilde{T}(B)$. Therefore, $\rho$ is a linear homeomorphism of the cones $\widetilde{T}(A)$ and $\widetilde{T}(B)$. Strict Comparison {#sec:CuntzComparison} ----------------- We first recall the definition of *Cuntz subequivalence*. Let $A$ be a $\mathrm{C}^*$-algebra and $a,b \in A_+$. Then $a \precsim b$ if and only if there exists a sequence $(x_n)_{n\in{\mathbb{N}}}$ in $A$ such that $$\lim_{n\rightarrow \infty} \|x_n^*bx_n - a\| = 0.$$ If $a \precsim b$ and $b \precsim a$, then $a$ is said to be *Cuntz equivalent* to $b$. We shall write $[a]$ for the Cuntz equivalence class of $a$.
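As a point of orientation (a standard example, recorded here only as an illustration and not needed in the sequel), in a matrix algebra Cuntz subequivalence reduces to comparison of ranks:

```latex
% In A = M_n(\mathbb{C}), for positive matrices a and b:
a \precsim b \iff \operatorname{rank}(a) \leq \operatorname{rank}(b).
% If rank(a) <= rank(b), one conjugates b onto the support of a using the
% spectral decompositions to produce x with x^*bx close to a; conversely,
% rank(x^*bx) <= rank(b) and rank is lower semicontinuous, so the limit
% a = lim x_n^* b x_n cannot have larger rank than b.
```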
The *Cuntz semigroup* $\mathrm{Cu}(A)$ is the ordered abelian semigroup obtained by considering the Cuntz equivalence classes of positive elements in $A \otimes {\mathbb{K}}$ under orthogonal addition and the order induced by Cuntz subequivalence; see [@CEI08]. If one only considers the Cuntz equivalence classes of positive elements in $\bigcup_{k=1}^\infty M_k(A)$, then one obtains the *classical Cuntz semigroup* $W(A)$; see [@To11]. Informally, a $\mathrm{C}^*$-algebra $A$ has *strict comparison* if traces determine the Cuntz comparison theory. In order to formalise this notion, we need to recall the *rank function* associated to a lower-semicontinuous quasitrace. Suppose $\tau:A_+ \rightarrow [0,\infty]$ is a lower-semicontinuous quasitrace. Then the rank function $d_\tau:(A \otimes {\mathbb{K}})_+ \rightarrow [0,\infty]$ is given by $$d_\tau(a) = \lim_{n\rightarrow\infty} \tau(a^{1/n}),$$ where we have made use of the unique extension of $\tau$ to $A\otimes{\mathbb{K}}$. We have $d_\tau(a) \leq d_\tau(b)$ whenever $a,b \in (A \otimes {\mathbb{K}})_+$ satisfy $a \precsim b$ by [@BH82 Theorem II.2.2]. Strict comparison can be viewed as a partial converse. Since we will be adapting the methods of [@BBSTWW], we shall be working with the same definition of strict comparison that is used there. \[defn:StrictComp\] A $\mathrm C^*$-algebra $A$ has *strict comparison (of positive elements, with respect to bounded traces)* if $$\begin{aligned} \label{eq:StrictComp} (\forall \tau\in T(A),\ d_\tau(a) < d_\tau(b))\Longrightarrow a\precsim b \end{aligned}$$ for $k\in\mathbb{N}$ and $a,b\in M_k(A)_+$. We alert the reader to two facts about this definition. Firstly, it only concerns positive elements in $\bigcup_{k=1}^\infty M_k(A)$, so it is a property of the classical Cuntz semigroup $W(A)$. Secondly, the hypothesis $d_\tau(a) < d_\tau(b)$ is only required to hold for tracial states $\tau$, rather than for all densely-defined, lower-semicontinuous quasitraces.
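In the finite-dimensional case (again recorded only as an illustration), the rank function earns its name: for $A = M_k(\mathbb{C})$ with its unique tracial state $\mathrm{tr}_k$,

```latex
d_{\mathrm{tr}_k}(a)
  = \lim_{n\rightarrow\infty} \mathrm{tr}_k(a^{1/n})
  = \frac{\operatorname{rank}(a)}{k},
  \qquad a \in M_k(\mathbb{C})_+,
% since a^{1/n} increases to the support projection of a, and the
% normalised trace of that projection is rank(a)/k.
```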
In light of the potential confusion that could arise from the variety of definitions of strict comparison that appear in the literature, we include a proof that ${\mathcal Z}$-stability implies strict comparison in the sense of Definition \[defn:StrictComp\] for the benefit of the reader. The key ingredient in the proof is that $W(A)$ is almost unperforated whenever $A$ is ${\mathcal Z}$-stable, which is due to R[ø]{}rdam [@Ro04]. \[prop:ZstableToSC\] Let $A$ be a simple, separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(A) = \widetilde{T}_b(A) \neq 0$. Then $A$ has strict comparison of positive elements with respect to bounded traces. As $A$ is ${\mathcal Z}$-stable, so is $A \otimes {\mathbb{K}}$. Hence, by [@Ro04 Theorem 4.5], $\mathrm{Cu}(A) = W(A \otimes {\mathbb{K}})$ is almost unperforated. Applying [@ERS11 Proposition 6.2] together with [@ERS11 Proposition 4.4], we find that $A$ has strict comparison in the following sense: for all $a,b \in (A\otimes{\mathbb{K}})_+$ with $[a] \leq \infty[b]$ in $\mathrm{Cu}(A)$, if $d_\tau(a) < d_\tau(b)$ for all lower-semicontinuous quasitraces $\tau$ with $d_\tau(b) = 1$, then $[a] \leq [b]$ in $\mathrm{Cu}(A)$. We show that under our hypothesis on $A$ this implies that $A$ has strict comparison in the sense of Definition \[defn:StrictComp\]. Consider $a,b \in M_k(A)_+$, let $\epsilon > 0$, and let $f_\epsilon:[0,1]\rightarrow[0,1]$ be the function that is $0$ on $[0,\epsilon]$, affine on $[\epsilon,2\epsilon]$, and $1$ on $[2\epsilon,1]$. Since $M_k(A)$ is simple, there exists $n \in \mathbb{N}$ such that $[f_\epsilon(a)] \leq n[b] \leq \infty[b]$ in $\mathrm{Cu}(A)$ by [@Bl06 Corollary II.5.2.12]. As $\epsilon$ is arbitrary, we have $[a] \leq \infty[b]$. Since $Q\widetilde{T}(A) = \widetilde{T}_b(A)$, if $d_\tau(a) < d_\tau(b)$ for all $\tau \in T(A)$, then $a \precsim b$ in $A\otimes {\mathbb{K}}$.
As $M_k(A)$ is a hereditary subalgebra of $A\otimes{\mathbb{K}}$, we have $a \precsim b$ in $M_k(A)$ by [@KR00 Lemma 2.2(iii)]. \[rem:ZstableToSC\] By replacing $M_k(A)$ with $A \otimes {\mathbb{K}}$ in the proof of Proposition \[prop:ZstableToSC\], we see that (\[eq:StrictComp\]) holds for all $a,b \in (A \otimes {\mathbb{K}})_+$. Therefore, $A$ also has strict comparison by traces in the sense of [@NgR16 Definition 3.1] under the hypotheses of Proposition \[prop:ZstableToSC\]. Ultraproducts and Kirchberg’s Epsilon Test ------------------------------------------ Let $\omega$ be a free ultrafilter on $\mathbb{N}$, which we regard as fixed for the entirety of the paper. The *ultraproduct* $\prod_{n\rightarrow\omega} A_n$ of a sequence of $\mathrm{C}^*$-algebras $(A_n)_{n\in{\mathbb{N}}}$ is defined by $$\prod_{n\rightarrow\omega} A_n := \frac{\prod_{n\in{\mathbb{N}}} A_n}{\lbrace (a_n)_{n\in{\mathbb{N}}} \in \prod_{n\in{\mathbb{N}}} A_n: \lim\limits_{n\rightarrow\omega} \|a_n\| = 0\rbrace}.$$ The *ultrapower* $A_\omega$ of a $\mathrm{C}^*$-algebra $A$ is the ultraproduct of the constant sequence $(A_n)_{n\in{\mathbb{N}}}$ with $A_n = A$ for all $n\in{\mathbb{N}}$. We identify $A$ with the subalgebra of $A_\omega$ given by constant sequences $(a)_{n\in{\mathbb{N}}}$. Every sequence $(\tau_n)_{n\in{\mathbb{N}}}$ with $\tau_n \in T(A_n)$ defines a tracial state on the ultraproduct $\prod_{n\rightarrow\omega} A_n$ via $(a_n) \mapsto \lim_{n\rightarrow\omega} \tau_n(a_n)$. Tracial states of this form are known as *limit traces*. The set of all limit traces will be denoted by $T_\omega(\prod_{n\rightarrow\omega} A_n)$. Not all traces on an ultraproduct are limit traces, but we have the following density result due to Ng–Robert [@NgR16 Theorem 1.2] (generalising an earlier result of Ozawa [@Oz13 Theorem 8]).
\[thm:NoSillyTraces\] Let $(A_n)_{n\in{\mathbb{N}}}$ be a sequence of simple, separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebras with $Q\widetilde{T}(A_n) = \widetilde{T}_b(A_n)$ for all $n \in {\mathbb{N}}$. Then $T_\omega(\prod_{n\rightarrow\omega} A_n)$ is weak$^*$-dense in $T(\prod_{n\rightarrow\omega} A_n)$. By Proposition \[prop:ZstableToSC\] and Remark \[rem:ZstableToSC\], each $A_n$ has strict comparison by traces in the sense of [@NgR16 Definition 3.1]. The result now follows by [@NgR16 Theorem 1.2]. We shall also need uniform tracial ultraproducts. Recall that any trace $\tau \in T(A)$ defines a 2-seminorm $\|a\|_{2,\tau} := \tau(a^*a)^{1/2}$. The uniform 2-seminorm is then defined by $$\|a\|_{2,T(A)} := \sup_{\tau \in T(A)} \|a\|_{2,\tau} = \sup_{\tau \in T(A)} \tau(a^*a)^{1/2}.$$ We can then define the *uniform tracial ultraproduct* of a sequence of $\mathrm{C}^*$-algebras $(A_n)_{n\in{\mathbb{N}}}$ by $$\prod^{n\rightarrow\omega} A_n := \frac{\prod_{n\in{\mathbb{N}}} A_n}{\lbrace (a_n)_{n\in{\mathbb{N}}} \in \prod_{n\in{\mathbb{N}}} A_n: \lim\limits_{n\rightarrow\omega} \|a_n\|_{2,T(A_n)} = 0\rbrace}.$$ The *uniform tracial ultrapower* $A^\omega$ of a $\mathrm{C}^*$-algebra $A$, which can be defined as the uniform tracial ultraproduct of the constant sequence $(A_n)_{n\in{\mathbb{N}}}$ with $A_n = A$ for all $n\in{\mathbb{N}}$, was introduced in [@CETWW]. We identify $A$ with the subalgebra of $A^\omega$ given by constant sequences $(a)_{n\in{\mathbb{N}}}$. Since $\|a\|_{2,T(A)} \leq \|a\|$ for all $a \in A$, there exists a canonical surjection from the ultraproduct to the uniform tracial ultraproduct. 
The kernel of this $^*$-homomorphism is the *trace kernel ideal* given by $$\begin{aligned} J_{(A_n)} &:= \lbrace (a_n)_{n\in{\mathbb{N}}} \in \prod_{n\rightarrow\omega} A_n: \lim_{n\rightarrow\omega} \|a_n\|_{2,T(A_n)} = 0\rbrace \notag \\ &= \lbrace x \in \prod_{n\rightarrow\omega} A_n: \|x\|_{2,\tau} = 0 \ \text{for all} \ \tau \in T_\omega(\prod_{n\rightarrow\omega}A_n) \rbrace.\end{aligned}$$ It follows that limit traces also induce traces on the uniform tracial ultraproduct. In the ultrapower case, we therefore use a unified notation $T_\omega(A)$ for the limit traces on $A_\omega$ or the induced traces on $A^\omega$. Important tools for working with ultrapowers are re-indexing arguments, which allow one to find elements of the ultrapower exactly satisfying some given condition provided one can find elements of the ultrapower which approximately satisfy the condition for any given tolerance. A precise and very general formulation of such re-indexing arguments is *Kirchberg’s Epsilon Test*, which we state below. \[epstest\] Let $X_1,X_2,\dots$ be a sequence of non-empty sets, and for each $k,n\in{\mathbb{N}}$, let $f^{(k)}_n:X_n\rightarrow [0,\infty)$ be a function. Define $f^{(k)}_\omega:\prod_{n=1}^\infty X_n\to[0,\infty]$ by $f^{(k)}_\omega((s_n)_{n=1}^\infty):=\lim_{n\rightarrow\omega}f^{(k)}_n(s_n)$ for $(s_n)\in\prod_{n=1}^\infty X_n$. Suppose that for all $m\in{\mathbb{N}}$ and $\epsilon>0$, there exists $(s_n)_{n=1}^\infty\in \prod_{n=1}^\infty X_n$ with $f^{(k)}_\omega((s_n))<\epsilon$ for $k=1,\dots,m$. Then there exists $(t_n)_{n=1}^\infty\in \prod_{n=1}^\infty X_n$ such that $f^{(k)}_\omega((t_n))=0$ for all $k\in{\mathbb{N}}$. Stable Rank One --------------- A unital $\mathrm{C}^*$-algebra $A$ is said to have *stable rank one* if the invertible elements form a dense subset. In this paper, we shall make use of the following non-unital generalisation. \[def:StableRankOneInUnitisation\] Let $A$ be a $\mathrm{C}^*$-algebra.
We say that $A$ has *stable rank one in $A^\sim$* if every element of $A$ is a limit of invertible elements in $A^\sim$. In the unital case, $A^\sim \cong A \oplus \mathbb{C}$, so $A$ has stable rank one in $A^\sim$ if and only if $A$ has stable rank one. In the non-unital case, $A$ having stable rank one in $A^\sim$ is weaker than requiring that $A^\sim$ itself has stable rank one; see [@Rob16 Example 3.4]. A related notion is Robert’s *almost stable rank one* [@Rob16 Definition 3.1], which requires that, for all hereditary subalgebras $B \subseteq A$, $B$ has stable rank one in $B^\sim$. Robert proved the following. \[thm:ZstableProjectionless\] Let $A$ be a $\mathcal{Z}$-stable, projectionless $\mathrm{C}^*$-algebra. Then $A$ has almost stable rank one. In particular, $A$ has stable rank one in $A^\sim$. We now prove that having stable rank one in the unitisation passes to ultraproducts. We employ the notation $[(a_n)]$ for the element of the ultraproduct defined by the bounded sequence $(a_n)$. First, let us record that taking unitisations commutes with taking the ultraproduct. The proof of this lemma is straightforward and we omit it. \[lem:UnitisationUltraproduct\] Let $(A_n)_{n\in{\mathbb{N}}}$ be a sequence of $\mathrm{C}^*$-algebras. The canonical inclusion $\prod_{n \rightarrow \omega} A_n \rightarrow \prod_{n \rightarrow \omega} A_n^\sim$ extends to an isomorphism $$\left(\prod_{n \rightarrow \omega} A_n\right)^\sim \cong \prod_{n \rightarrow \omega} A_n^\sim.$$ We now proceed to show that having stable rank one in the unitisation passes to ultraproducts. \[prop:SR1inUnitisationUltrapowers\] Let $(A_n)$ be a sequence of $\mathrm{C}^*$-algebras. Suppose for each $n \in \mathbb{N}$, $A_n$ has stable rank one in $A_n^\sim$. Then $A_\omega := \prod_{n \rightarrow \omega} A_n$ has stable rank one in $A_\omega^\sim$. Let $x \in A_\omega$ and say $x = [(a_n)]$. 
Since $A_n$ has stable rank one in $A_n^\sim$, by [@BBSTWW Lemma 1.20] for each $n \in \mathbb{N}$ there is a unitary $u_n \in A_n^\sim$ such that $a_n \approx_{1/n} u_n|a_n|$. We then have $x = [(u_n)]|x| \in \prod_{n \rightarrow \omega} A_n^\sim$. By [@BBSTWW Lemma 1.20] once more, $x$ is a norm limit of invertible elements in $\prod_{n \rightarrow \omega} A_n^\sim$. By Lemma \[lem:UnitisationUltraproduct\], $\prod_{n \rightarrow \omega} A_n^\sim$ is just $A_\omega^\sim$. Complemented Partitions of Unity -------------------------------- The key technical tool in [@CETWW] was the complemented partitions of unity technique, which enabled Theorem \[thm:NewMain2\] to be proven in the unital case. This property is best formulated in terms of the tracial ultrapower $A^\omega$ of a separable $\mathrm{C}^*$-algebra with $T(A)$ non-empty and compact. These assumptions imply that $A^\omega$ is unital, with any sequential approximate identity representing the unit [@CETWW Proposition 1.11]. We refer to [@CETWW Definition G] for a detailed explanation of the ideas behind this definition. \[defn:CPOU\] Let $A$ be a separable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(A) = \widetilde{T}_b(A) \neq 0$ and $T(A)$ compact. We say that $A$ has *complemented partitions of unity* (CPoU) if for every $\|\cdot\|_{2,T_\omega(A)}$-separable subset $S$ of $A^\omega$, every family $a_1,\dots,a_k \in (A^\omega)_+$, and any scalar $$\label{eq:CPoUTraceIneq1} \delta>\sup_{\tau\in T_\omega(A)}\min_{i=1,\dots,k}\tau(a_i),$$ there exist orthogonal projections $p_1,\dots,p_k\in A^\omega\cap S'$ such that $$p_1+\cdots+p_k = 1_{A^\omega} \ \text{and}\ \tau(a_ip_i)\leq \delta\tau(p_i), \qquad \tau\in T_\omega(A), \ i=1,\dots,k.$$ The following theorem gives sufficient conditions for a $\mathrm{C}^*$-algebra to have complemented partitions of unity.
Although it is not necessary for our purposes, we note that the hypothesis of ${\mathcal Z}$-stability can be weakened to *uniform property $\Gamma$*; see [@CETWW Section 2] for more details. Let $A$ be a separable, nuclear, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(A) = \widetilde{T}_b(A) \neq 0$ and $T(A)$ compact. Then $A$ has complemented partitions of unity. Reductions {#sec:Reductions} ========== In this section, we show how Brown’s Theorem [@Br77 Theorem 2.8] can be used to reduce the task of proving Theorem \[thm:NewMain2\] in general to proving it for unital $\mathrm{C}^*$-algebras and for stably projectionless $\mathrm{C}^*$-algebras with a compact trace space. We begin with the general statement of Brown’s Theorem. \[thm:Brown\] Let $B$ be a full hereditary subalgebra of a $\mathrm{C}^*$-algebra $A$. Suppose both $A$ and $B$ are $\sigma$-unital. Then $B$ is stably isomorphic to $A$. In our applications, we shall be working with $\mathrm{C}^*$-algebras that are simple and separable. Hence, the fullness and $\sigma$-unitality conditions will be satisfied. We shall therefore use the following form of Brown’s Theorem. \[cor:Brown\] Let $B$ be a non-zero hereditary subalgebra of a simple, separable $\mathrm{C}^*$-algebra $A$. Then $B$ is stably isomorphic to $A$. The utility of Brown’s Theorem for this paper derives from the fact that the hypotheses and conclusion of Theorem \[thm:NewMain2\] are invariant under stable isomorphism. We state this formally below. \[thm:StableIsoInvariants\] Let $A$ be a $\mathrm{C}^*$-algebra. Then 1. $A$ is simple if and only if $A \otimes \mathbb{K}$ is simple, 2. $A$ is separable if and only if $A \otimes \mathbb{K}$ is separable, 3. $A$ is nuclear if and only if $A \otimes \mathbb{K}$ is nuclear, 4. ${\mathrm{dr}}(A) = {\mathrm{dr}}(A \otimes \mathbb{K})$, 5. $\mathrm{dim}_{\mathrm{nuc}}(A) = \mathrm{dim}_{\mathrm{nuc}}(A \otimes \mathbb{K})$, 6.
$A$ is separable and ${\mathcal Z}$-stable if and only if $A \otimes \mathbb{K}$ is separable and ${\mathcal Z}$-stable. Properties (i)–(iii) are well known; see for example [@Bl06 Chapter IV.3]. Part (iv) is [@KW04 Corollary 3.9]. Part (v) is [@WZ10 Corollary 2.8]. Part (vi) is [@TW07 Corollary 3.2]. Next, we recall that a $\mathrm{C}^*$-algebra $A$ is *stably projectionless* if there are no non-zero projections in $A \otimes \mathbb{K}$. By definition, this property is preserved under stable isomorphism. Stably projectionless $\mathrm{C}^*$-algebras can be viewed as highly non-unital $\mathrm{C}^*$-algebras. Indeed, the following folklore result establishes a dichotomy for simple, separable $\mathrm{C}^*$-algebras. \[prop:Dichotomy\] Let $A$ be a non-zero, simple, separable $\mathrm{C}^*$-algebra. Then exactly one of the following holds. - $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra. - $A$ is stably projectionless. Let $A$ be a simple, separable $\mathrm{C}^*$-algebra. Then $A \otimes {\mathbb{K}}$ is simple and separable by Proposition \[thm:StableIsoInvariants\]. Suppose that $A$ is not stably projectionless. Then there exists a non-zero projection $p \in A \otimes {\mathbb{K}}$. Set $B := p(A \otimes {\mathbb{K}})p$. Then $B$ is a unital $\mathrm{C}^*$-algebra with unit $1_B = p$. Moreover, $B$ is a non-zero hereditary subalgebra of $A \otimes {\mathbb{K}}$. Therefore, $B$ is stably isomorphic to $A \otimes {\mathbb{K}}$ by Corollary \[cor:Brown\], and hence is stably isomorphic to $A$. Now suppose that $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra $B$. Then there exists an isomorphism $\phi:B \otimes {\mathbb{K}}\rightarrow A \otimes {\mathbb{K}}$. Writing $1_B$ for the unit of $B$ and $e_{ij}$ for the matrix units of ${\mathbb{K}}$, we have that $\phi(1_B \otimes e_{ii})$ is a non-zero projection in $A \otimes {\mathbb{K}}$. Hence, $A$ cannot be stably projectionless.
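For illustration (a standard example, included here as an aside), $C_0(\mathbb{R})$ is stably projectionless; it is of course not simple, so it lies outside the dichotomy above, but it shows how connectedness obstructs projections:

```latex
% A projection p in C_0(\mathbb{R}) \otimes \mathbb{K}
% \cong C_0(\mathbb{R}, \mathbb{K}) is a norm-continuous,
% projection-valued function vanishing at infinity.
p \in C_0(\mathbb{R},\mathbb{K}) \text{ a projection}
  \implies t \mapsto \operatorname{rank} p(t) \text{ is locally constant}
  \implies p = 0,
% since norm-close projections in \mathbb{K} have equal (finite) rank,
% \mathbb{R} is connected, and \|p(t)\| \in \{0,1\} must vanish at infinity.
```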
This dichotomy justifies the terminology *stably unital* for the non–stably projectionless, simple, separable $\mathrm{C}^*$-algebras. The stably unital case of Theorem \[thm:NewMain2\] follows immediately from [@CETWW Theorem B] together with Propositions \[prop:Dichotomy\] and \[thm:StableIsoInvariants\]. The stably projectionless case on the other hand requires a further reduction and a technical modification of the methods of [@BBSTWW]. The purpose of the additional reduction is to pass to the case where the trace space is compact, and it is based on the following folklore result. \[lem:HerditartyContRankFunction\] Let $A$ be a simple, separable $\mathrm{C}^*$-algebra with $\widetilde{T}(A) \neq 0$. Let $A_0 := \overline{a(A \otimes {\mathbb{K}})a}$ be the hereditary subalgebra generated by a non-zero positive contraction $a \in (A \otimes {\mathbb{K}})_{+,1}$ for which the function $\tau \mapsto d_\tau(a)$ is continuous and finite-valued. Then $\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ is compact. By Proposition \[prop:ExtendingTraces\], the restriction map $\rho:\widetilde{T}(A \otimes {\mathbb{K}}) \rightarrow \widetilde{T}(A_0)$ is a linear homeomorphism. Let $\sigma \in \widetilde{T}(A_0)$. Then $\sigma$ has an extension $\tau := \rho^{-1}(\sigma) \in \widetilde{T}(A \otimes {\mathbb{K}})$. By [@Ti14 Proposition 2.4], we have $\|\sigma\|_{A_0^*} = d_{\tau}(a) < \infty$. Therefore, $\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$. Since $\sigma \mapsto d_{\rho^{-1}(\sigma)}(a)$ is continuous, $T(A_0)$ is a weak$^*$-closed subset of the unit ball of $A_0^*$. Therefore, $T(A_0)$ is compact. We now explain how the results of [@ERS11] can be used to prove the existence of positive contractions with continuous rank functions under suitable hypotheses. \[prop:ExistenceRankFunction\] Let $A$ be a simple, separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(A) = \widetilde{T}(A) \neq 0$.
Let $f:\widetilde{T}(A) \rightarrow [0,\infty)$ be a strictly positive, continuous, linear function. Then there exists a non-zero positive contraction $a \in (A \otimes {\mathbb{K}})_{+,1}$ with $d_\tau(a) = f(\tau)$ for all $\tau \in \widetilde{T}(A)$. Following [@ERS11 Section 4.1], we write $F(\mathrm{Cu}(A))$ for the space of functionals on the Cuntz semigroup of $A$. In [@ERS11 Theorem 4.4], it is shown that all functionals on the Cuntz semigroup are of the form $d_\tau$ for some lower-semicontinuous quasitrace $\tau$ on $A$. However, we alert the reader to the fact that quasitraces are not assumed to be densely defined in [@ERS11]. Since $A$ is simple, this means that either $\tau \in Q\widetilde{T}(A)$ or $\tau$ is the *trivial quasitrace*, which satisfies $\tau(0) = 0$ and is infinite otherwise. We now consider the topology on $F(\mathrm{Cu}(A))$, defined in general in [@ERS11 Section 4.1], and its relation to the topology on $\widetilde{T}(A)$, which is given by pointwise convergence on $\mathrm{Ped}(A)$. By [@ERS11 Theorem 4.4] and our assumption that all quasitraces are additive, the topology on $F(\mathrm{Cu}(A))$ agrees with the topology on the set $\widehat{T}(A)$ of (not-necessarily-densely-defined) lower-semicontinuous, additive quasitraces defined in [@ERS11 Section 3.2].[^5] This topology is shown to be compact and Hausdorff in [@ERS11 Theorem 3.7]. By [@ERS11 Theorem 3.10], the restriction of this topology to $\widetilde{T}(A)$ is pointwise convergence on $\mathrm{Ped}(A)$. Since $\widehat{T}(A) \setminus \widetilde{T}(A)$ is just one point, it follows that the topology on $\widehat{T}(A)$ is simply the one-point compactification of the topology on $\widetilde{T}(A)$. By [@TT15 Proposition 3.4], the cone $\widetilde{T}(A)$ has a compact base $K$. Since $f$ is strictly positive and continuous, $\inf_{\tau \in K} f(\tau) > 0$.
Hence, we may extend $f$ to the one-point compactification of $\widetilde{T}(A)$ by setting $f(\infty) = \infty$, and the resulting map is still continuous. It follows that $f$ defines an element of the dual cone $L(F(\mathrm{Cu}(A)))$; see [@ERS11 Section 5.1]. Therefore, as $A$ is ${\mathcal Z}$-stable, there exists $a \in (A \otimes {\mathbb{K}})_{+,1}$ such that $f(\tau) = d_\tau(a)$ for all $\tau \in \widetilde{T}(A)$ by [@ERS11 Theorem 6.6]. We end this section with the following summary of all the reductions. \[thm:simple.reduction\] Let $A$ be a non-zero, simple, separable, exact $\mathrm{C}^*$-algebra. Then one of the following holds. - (a) $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra, - (b) $A$ is stably isomorphic to a stably projectionless $\mathrm{C}^*$-algebra $A_0$ with $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ is compact. Suppose (a) does not hold. Then $A$ is stably projectionless by Proposition \[prop:Dichotomy\]. By Theorem \[thm:TracialStatesExist\], $Q\widetilde{T}(A) \neq 0$. Since $A$ is exact, the non-unital version of Haagerup’s Theorem gives $Q\widetilde{T}(A) = \widetilde{T}(A) \neq 0$; see [@Ha14] and [@BK04 Remark 2.29(i)]. Since $\widetilde{T}(A)$ is a cone with a compact base, there exists a strictly positive, continuous, linear function $f:\widetilde{T}(A) \rightarrow [0,\infty)$. By Proposition \[prop:ExistenceRankFunction\], there is a positive contraction $a \in (A \otimes {\mathbb{K}})_{+,1}$ such that $f(\tau) = d_\tau(a)$ for all $\tau \in \widetilde{T}(A)$. Set $A_0 := \overline{a(A \otimes {\mathbb{K}})a}$. By Lemma \[lem:HerditartyContRankFunction\], $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ is compact. By Corollary \[cor:Brown\], $A$ is stably isomorphic to $A_0$. Hence, $A_0$ is stably projectionless. Existence ========= Let $A$ be a separable, nuclear $\mathrm{C}^*$-algebra with complemented partitions of unity (CPoU).
In this section, we will construct a sequence of maps $A \xrightarrow{\theta_n} F_n \xrightarrow{\eta_n} A$, where $F_n$ are finite dimensional $\mathrm{C}^*$-algebras, $\theta_n$ are c.p.c. maps and $\eta_n$ are c.p.c. order zero maps, which induces a $^*$-homomorphism $A \rightarrow A^\omega$ that agrees with the diagonal inclusion $A \rightarrow A^\omega$. We will do this in two steps. First, we will fix a trace $\tau$ and produce maps $A \to F \to A$ that approximate the identity map on $A$ in $\|\cdot \|_{2, \tau}$-norm. We shall then construct the required sequence of maps using complemented partitions of unity (CPoU). The following lemma will be deduced from [@CETWW Lemma 5.1], but it can also be proved by directly applying the methods of [@BCW16 Lemma 2.5]. \[lem:preexistence\] Let $A$ be a separable, nuclear $\mathrm{C}^*$-algebra and let $\tau \in T(A)$. For any finite subset $\mathcal{F} \subseteq A$ and $\epsilon >0$ there exist a finite dimensional $\mathrm{C}^*$-algebra $F$, a c.p.c. map $\theta: A \to F$, and a c.p.c. order zero map $\eta: F \to A$ such that $$\begin{aligned} \label{eq:OneTraceMap1} \|\theta(a)\theta(b)\| &< \epsilon \quad && \text{for }a,b \in \mathcal{F} \text{ such that }ab=0\text{, and} \\ \label{eq:OneTraceMap2} \|\eta \circ \theta(a)-a\|_{2,\tau} &< \epsilon &&\text{for }a \in \mathcal{F}. \end{aligned}$$ If all traces are quasidiagonal,[^6] one can additionally arrange that $$\begin{aligned} \| \theta(ab) - \theta(a)\theta(b) \| < \epsilon, && a,b \in \mathcal{F}. \end{aligned}$$ The trace $\tau$ extends to a trace on $A^\sim$. By [@CETWW Lemma 5.1] applied to $A^\sim$, there exist a finite dimensional $F$, a c.p.c. map $\tilde \theta: A^\sim \to F$, and a c.p.c. 
order zero map $\tilde \eta : F \to A^\sim$ such that $$\begin{aligned} \|\tilde\theta(a)\tilde\theta(b)\| &< \frac{\epsilon}{2} \quad && \text{for }a,b \in \mathcal{F} \text{ satisfying }ab=0\text{, and} \\ \|\tilde\eta \circ \tilde\theta(a)-a\|_{2,\tau} &< \frac{\epsilon}{2} &&\text{for }a \in \mathcal{F}. \end{aligned}$$ Let $(e_n)_{n\in \mathbb{N}}$ be an approximate identity of $A$. Then $e_n \nearrow 1_{A^\sim}$ in $\|\cdot\|_{2,\tau}$. Hence, the c.p.c. maps $\hat\eta_n: F \to A$ given by $\hat\eta_n(x)= e_n \tilde\eta(x)e_n$ converge to $\tilde \eta$ in the point-$\|\cdot\|_{2,\tau}$ topology. The sequence $\hat\eta_n$ is asymptotically order zero in $\|\cdot\|_{2,\tau}$. Since $F$ is finite dimensional, we can make use of order zero lifting to obtain a sequence of c.p.c. order zero maps $\eta_n:F \rightarrow A$ converging to $\tilde \eta$ in the point-$\|\cdot\|_{2,\tau}$ topology.[^7] Set $\theta := \tilde \theta |_A$. Choose $n \in {\mathbb{N}}$ such that $\|\eta_n(\theta(a)) - \tilde{\eta}(\theta(a))\|_{2,\tau} < \tfrac{\epsilon}{2}$ for all $a \in \mathcal{F}$, and set $\eta := \eta_n$. We then have $$\begin{aligned} \|\eta \circ \theta(a) - a \|_{2, \tau} & < \| \tilde{\eta} \circ \theta (a) - a \|_{2,\tau} + \frac{\epsilon}{2} < \epsilon, && a \in \mathcal{F}. \end{aligned}$$ If all traces are quasidiagonal, the map $\tilde \theta$ given by [@CETWW Lemma 5.1] is approximately a $^*$-homomorphism. Hence, so is $\theta$. With the previous lemma in hand, we can now utilise complemented partitions of unity (CPoU) to prove the following. \[lem:NiceFactoring\] Let $A$ be a separable, nuclear $\mathrm{C}^*$-algebra with $Q\widetilde{T}(A) = \widetilde{T}_b(A) \neq 0$ and $T(A)$ compact. Suppose $A$ has CPoU. Then there exists a sequence of c.p.c. maps $\phi_n:A \to A$ which factor through finite dimensional algebras $F_n$ as $$\label{eq:NiceFactoring1} \xymatrix{A\ar[dr]_{\theta_n}\ar[rr]^{\phi_n}&&A\\&F_n \ar[ur]_{\eta_n}}$$ with $\theta_n$ c.p.c. and $\eta_n$ c.p.c.
order zero, in such a way that the induced map $(\theta_n)_{n=1}^\infty:A \to \prod_\omega F_n$ is order zero, and the induced map $\overline{\Phi}=(\phi_n)_{n=1}^\infty:A\rightarrow A^\omega$ agrees with the diagonal inclusion $A \to A^\omega$. If all traces on $A$ are quasidiagonal, then we may arrange that the induced map $(\theta_n)_{n=1}^\infty:A \to \prod_\omega F_n$ is a $^*$-homomorphism. As in [@CETWW Lemma 5.2], by a standard application of Kirchberg’s Epsilon Test, it suffices to show that for a finite set $\mathcal F \subseteq A$ and a tolerance $\epsilon>0$, there is a sequence $(F_n, \theta_n, \eta_n)$ such that $\theta_n: A \to F_n$ is approximately order zero (or approximately multiplicative if all traces are quasidiagonal), $\eta_n: F_n \to A$ is an order zero map, and the induced map $\overline{\Phi}_\epsilon= (\eta_n \circ \theta_n)_{n=1}^\infty: A \to A^\omega$ satisfies $\| a - \overline{\Phi}_\epsilon (a) \|_{2, T_\omega (A)} < \epsilon$ for all $a \in \mathcal{F}$. In fact, we will arrange for all the $F_n$ to be the same finite dimensional algebra $F$, and all the $\theta_n$ to be the same map $\theta$. Let $\mathcal{F} \subseteq A$ be a finite subset and $\epsilon> 0$. By Lemma \[lem:preexistence\], for any $\tau \in T(A)$ there exist a finite dimensional $\mathrm{C}^*$-algebra $F_\tau$, a c.p.c. map $\theta_\tau: A \to F_\tau$, and an order zero map $\eta_\tau: F_\tau \to A$ such that $$\begin{aligned} \| \theta_\tau(a) \theta_\tau(b) \| &< \epsilon, && a,b \in \mathcal{F} \text{ such that } ab = 0,\\ \| \eta_\tau \circ \theta_\tau(x) - x \|_{2, \tau}^2 &< \frac{\epsilon^2}{|\mathcal{F}|}, && x \in \mathcal{F}. \end{aligned}$$ Set $a_\tau := \sum\limits_{x \in \mathcal{F}} | x - \eta_\tau \circ \theta_\tau (x) |^2$. By compactness of $T(A)$, there exist $\tau_1, \ldots, \tau_k \in T(A)$ such that for all $\tau \in T(A)$ there is some $\tau_i$ such that $\tau(a_{\tau_i}) < \epsilon^2$.
By CPoU, there exist pairwise orthogonal projections $p_1, \ldots, p_k \in A^\omega \cap A'$ adding up to $1_{A^\omega}$ such that $\tau(a_{\tau_i} p_i ) \leq \epsilon^2 \tau(p_i)$ for all $\tau \in T_\omega (A)$. Set $F := \bigoplus_{i=1}^k F_{\tau_i}$, and define $\theta: A \to F$ and $\eta: F \to A^\omega$ by $$\theta(a) := ( \theta_{\tau_1}(a), \ldots, \theta_{\tau_k}(a)), \quad \eta(x_1, \ldots, x_k) := \sum_{i=1}^k \eta_{\tau_i}(x_i) p_i,$$ where $a \in A$ and $x_i \in F_{\tau_i}$. By construction (see [@CETWW Lemma 5.2, Equation (5.16)]), we obtain $$\| a - \eta \circ \theta (a) \|_{2, T_\omega(A)} < \epsilon, \qquad a \in \mathcal{F}.$$ By [@Wi09 Proposition 1.2.4], $\eta: F \to A^\omega$ can be lifted to a sequence of order zero maps $\eta_n: F \to A$. Thus $(F, \theta, \eta_n)$ is the required sequence. Finally, if all traces are quasidiagonal, the map $\theta$ is approximately multiplicative by Lemma \[lem:preexistence\]. Combining the previous argument with Kirchberg’s Epsilon Test yields that the induced map $(\theta_n): A \to \prod_\omega F_n$ is a $^*$-homomorphism. Unitisation =========== In this section, we prove that a c.p.c. order zero map $\phi:A \rightarrow B_\omega$ from a separable $\mathrm{C}^*$-algebra into a $\mathrm{C}^*$-ultrapower extends to a c.p.c. order zero map $\phi^\sim:A^\sim \rightarrow B_\omega$. Moreover, under appropriate conditions, Dini’s Theorem can be used to construct an extension for which the tracial behaviour of $\phi^\sim(1_{A^\sim})$ is determined by $\phi$. These results were inspired by the structure theory for order zero maps developed in [@WZ09] and the existence of *supporting order zero maps* proved in [@BBSTWW Lemma 1.14]. We begin with a technical lemma. \[lem:unitising-orderzero\] Let $\phi:A \rightarrow B$ be a c.p.c. order zero map between $\mathrm{C}^*$-algebras.
Suppose that $h \in B$ is a positive contraction such that $$\label{eqn:OrderZeroIndentity1} \phi(a)\phi(b) = h\phi(ab), \qquad \qquad a,b \in A_+.$$ Then the map $\phi^\sim:A^\sim \rightarrow B$ defined by $\phi^\sim(a+\lambda 1_{A^\sim}) := \phi(a) + \lambda h$ is c.p.c. order zero. By [@WZ09 Corollary 4.1], there exists a $^*$-homomorphism $\pi:C_0(0,1] \otimes A \rightarrow B$ such that $\phi(a) = \pi(t \otimes a)$ for all $a \in A$, where $t$ denotes the canonical generator of the cone. In terms of $\pi$, equation (\[eqn:OrderZeroIndentity1\]) gives $h\pi(t \otimes ab) = \pi(t^2 \otimes ab)$ for $a,b \in A_+$, from which we deduce that $$h\pi(t \otimes a) = \pi(t^2 \otimes a), \qquad \qquad a \in A, \label{eqn:hAction1}$$ since $(A_+)^2 = A_+$ and $A_+$ spans $A$. It then follows that $h^n\pi(t^m \otimes a) = \pi(t^{n+m} \otimes a)$ for $a \in A$ and for all $n,m \in {\mathbb{N}}_{\geq 1}$, from which we obtain $$g(h)\pi(f \otimes a) = \pi(gf \otimes a), \qquad \qquad a \in A, f,g \in C_0(0,1]\label{eqn:hAction3},$$ since $\mathrm{span}\lbrace t^n:n \in {\mathbb{N}}_{\geq 1}\rbrace$ is dense in $C_0(0,1]$. Taking adjoints, we also obtain $\pi(f \otimes a)g(h) = \pi(fg \otimes a)$ for all $a \in A, f,g \in C_0(0,1]$. We now define a map $\pi^\sim:C_0(0,1] \odot A^\sim \rightarrow B$ from the algebraic tensor product by setting $\pi^\sim(f \otimes (a + \lambda 1_{A^\sim})) := \pi(f \otimes a) + \lambda f(h)$ on elementary tensors. A straightforward computation using (\[eqn:hAction3\]) and its adjoint shows that $\pi^\sim$ is a $^*$-homomorphism. Hence, $\pi^\sim$ extends to a map $C_0(0,1] \otimes A^\sim \rightarrow B$. Finally, define $\phi^\sim:A^\sim \rightarrow B$ by $\phi^\sim(x) := \pi^\sim(t \otimes x)$. Then $\phi^\sim$ is a c.p.c. order zero map and $\phi^\sim(a + \lambda 1_{A^\sim}) = \phi(a) + \lambda h$ as required. We now prove the unitisation lemma for order zero maps.
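Before doing so, let us spell out, for the reader's convenience, the "straightforward computation" from the proof of Lemma \[lem:unitising-orderzero\]: for $f,g \in C_0(0,1]$, $a,b \in A$ and $\lambda, \mu \in \mathbb{C}$, $$\begin{aligned} \pi^\sim\big(f \otimes (a+\lambda 1_{A^\sim})\big)\,\pi^\sim\big(g \otimes (b+\mu 1_{A^\sim})\big) &= \big(\pi(f \otimes a)+\lambda f(h)\big)\big(\pi(g \otimes b)+\mu g(h)\big)\\ &= \pi(fg \otimes ab) + \mu\,\pi(fg \otimes a) + \lambda\,\pi(fg \otimes b) + \lambda\mu\,(fg)(h)\\ &= \pi^\sim\big(fg \otimes (a+\lambda 1_{A^\sim})(b+\mu 1_{A^\sim})\big), \end{aligned}$$ where the second equality uses that $\pi$ is multiplicative together with (\[eqn:hAction3\]) and its adjoint; self-adjointness of $\pi^\sim$ is checked similarly.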
\[lem:Unitisation\] Let $A$, $B$ be $\mathrm{C}^*$-algebras with $A$ separable and let $\phi:A \rightarrow B_\omega$ be a c.p.c. order zero map. - There exists a c.p.c. order zero map $\phi^\sim:A^\sim \rightarrow B_\omega$ which extends $\phi$. - Suppose now that $T(B)$ is compact and non-empty. Let $(e_n)_{n\in\mathbb{N}}$ be an approximate unit for $A$ and suppose the function $$\begin{aligned} \theta:\overline{T_\omega(B)}^{w*} &\rightarrow [0,1]\\ \tau &\mapsto \lim_{n \rightarrow \infty} \tau(\phi(e_n)) \end{aligned}$$ is continuous. Then there exists a c.p.c. order zero map $\phi^\sim:A^\sim \rightarrow B_\omega$ which extends $\phi$ and satisfies $\tau(\phi^\sim(1_{A^\sim})) = \theta(\tau)$ for all $\tau \in \overline{T_\omega(B)}^{w*}$. \(a) Let $(e_n)_{n\in\mathbb{N}}$ be an approximate unit for $A$. By [@WZ09 Corollary 4.1], there exists a $^*$-homomorphism $\pi:C_0(0,1] \otimes A \rightarrow B_\omega$ such that $\phi(a) = \pi(t \otimes a)$ for all $a \in A$, where $t$ denotes the canonical generator of the cone. For any $a, b \in A_+$, we have $$\begin{aligned} \lim_{n\rightarrow\infty} \phi(e_n)\phi(ab) &= \lim_{n\rightarrow\infty} \pi(t^2 \otimes e_nab) \notag \\ &= \pi(t^2 \otimes ab) \notag \\ &= \phi(a)\phi(b). \label{eqn:CorrectInTheLimit} \end{aligned}$$ We shall now prove the existence of a positive contraction $h \in B_\omega$ such that (\[eqn:OrderZeroIndentity1\]) holds for all $a,b \in A_+$ by an application of Kirchberg’s Epsilon Test (Lemma \[epstest\]). Let $X_n := B_{+,1}$ for all $n \in \mathbb{N}$. Let $\phi_n:A_+\rightarrow B_+$ be a sequence of functions such that $(\phi_n(a))_{n \in {\mathbb{N}}}$ is a representative for $\phi(a)$ for all $a \in A_+$. Fix a dense sequence $(a_n)_{n\in \mathbb{N}}$ in $A_+$. 
Define $f^{(r,s)}_n:X_n \rightarrow [0,\infty]$ for $r,s \in \mathbb{N}$ by $$f^{(r,s)}_n(x) := \|x\phi_n(a_ra_s) - \phi_n(a_r)\phi_n(a_s)\|.$$ Then define $f^{(r,s)}_\omega:\prod_{n=1}^\infty X_n \rightarrow [0,\infty]$ by $(x_n)_{n\in\mathbb{N}} \mapsto \lim_{n\rightarrow\omega} f^{(r,s)}_n(x_n).$ Let $m \in \mathbb{N}$ and $\epsilon > 0$. By (\[eqn:CorrectInTheLimit\]), there is $k \in \mathbb{N}$ such that $$\begin{aligned} \|\phi(e_k)\phi(a_ra_s) - \phi(a_r)\phi(a_s)\| &< \epsilon, &1 \leq r,s \leq m. \end{aligned}$$ Let $x = (x_n)_{n \in \mathbb{N}}$ be a sequence of positive contractions in $B$ representing $\phi(e_k)$. Then $f^{(r,s)}_\omega(x) < \epsilon$ whenever $1 \leq r,s \leq m$. By Kirchberg’s Epsilon Test, there exists a sequence of positive contractions $y = (y_n)_{n\in\mathbb{N}}$ in $B$ such that $f^{(r,s)}_\omega(y) = 0$ for all $r,s \in \mathbb{N}$. Let $h$ be the positive contraction in $B_\omega$ represented by $(y_n)_{n\in\mathbb{N}}$. Then $h$ satisfies (\[eqn:OrderZeroIndentity1\]) for all $a,b \in \lbrace a_n: n \in \mathbb{N} \rbrace$. By density, $h$ satisfies (\[eqn:OrderZeroIndentity1\]) for all $a,b \in A_+$. The result now follows by Lemma \[lem:unitising-orderzero\]. \(b) By Dini’s Theorem, $\tau(\phi(e_n)) \nearrow \theta(\tau)$ uniformly for $\tau \in \overline{T_\omega(B)}^{w*}$.[^8] For each $l \in \mathbb{N}$, set $$\label{lem:unitisation.gamma.eq} \gamma_l := \max_{\tau \in \overline{T_\omega(B)}^{w*}} (\theta(\tau) - \tau(\phi(e_l))).$$ Then $\gamma_l \geq 0$ as $\tau(\phi(e_n))$ increases with $n$, and $\lim_{l\rightarrow\infty} \gamma_l = 0$ as the convergence is uniform. We shall now prove the existence of a positive contraction $h \in B_\omega$ such that (\[eqn:OrderZeroIndentity1\]) holds for all $a,b \in A_+$ and that $$\tau(h) = \lim_{n \rightarrow \infty} \tau(\phi(e_n)), \qquad \tau \in \overline{T_\omega(B)}^{w*}. \label{eqn:hTrace}$$ Once again, we use Kirchberg’s Epsilon Test (Lemma \[epstest\]). 
Let $X_n$, $\phi_n$, $f^{(r,s)}_n$, and $f^{(r,s)}_\omega$ be as in (a). Define $g^{(l,+)}_n,g^{(l,-)}_n:X_n \rightarrow [0,\infty]$ for $l \in \mathbb{N}$ by $$\begin{aligned} g^{(l,+)}_n(x) &:= \max\left(\sup_{\tau \in T(B)} (\tau(x) - \tau(\phi_n(e_l))) - \gamma_l, 0 \right),\\ g^{(l,-)}_n(x) &:= \max\left(\sup_{\tau \in T(B)} (\tau(\phi_n(e_l)) - \tau(x)), 0 \right). \end{aligned}$$ Then define $g^{(l,+)}_\omega,g^{(l,-)}_\omega:\prod_{n=1}^\infty X_n \rightarrow [0,\infty]$ by $(x_n)_{n\in\mathbb{N}} \mapsto \lim_{n\rightarrow\omega} g^{(l,+)}_n(x_n)$ and $(x_n)_{n\in\mathbb{N}} \mapsto \lim_{n\rightarrow\omega} g^{(l,-)}_n(x_n)$ respectively. The key observation is that a sequence $x = (x_n)_{n \in \mathbb{N}}$ representing a positive contraction $b \in B_\omega$ satisfies $g^{(l,+)}_\omega(x) = g^{(l,-)}_\omega(x)=0$ if and only if $$\begin{aligned} \tau(\phi(e_l)) &\leq \tau(b) \leq \tau(\phi(e_l)) + \gamma_l, &\tau \in \overline{T_\omega(B)}^{w*}. \end{aligned}$$ Let $m \in \mathbb{N}$ and $\epsilon > 0$. By (\[eqn:CorrectInTheLimit\]), there is $k > m$ such that $$\begin{aligned} \|\phi(e_k)\phi(a_ra_s) - \phi(a_r)\phi(a_s)\| &< \epsilon, &1 \leq r,s \leq m. \end{aligned}$$ Let $x = (x_n)_{n \in \mathbb{N}}$ be a sequence of positive contractions in $B$ representing $\phi(e_k)$. Then $f^{(r,s)}_\omega(x) < \epsilon$ whenever $1 \leq r,s \leq m$. Furthermore, as $k > m$, we have by (\[lem:unitisation.gamma.eq\]) that for any $l \leq m$ $$\begin{aligned} \tau(\phi(e_l)) \leq \tau(\phi(e_k)) \leq \theta(\tau) \leq \tau(\phi(e_l)) + \gamma_l, \ \tau \in \overline{T_\omega(B)}^{w*}. \end{aligned}$$ Therefore, $g^{(l,+)}_\omega(x) = g^{(l,-)}_\omega(x) = 0$ for $l \leq m$. By Kirchberg’s Epsilon Test, there exists a sequence of positive contractions $y = (y_n)_{n\in\mathbb{N}}$ in $B$ such that $f^{(r,s)}_\omega(y) = g^{(l,+)}_\omega(y) =g^{(l,-)}_\omega(y) = 0$ for all $r,s,l \in \mathbb{N}$. Let $h$ be the positive contraction in $B_\omega$ represented by $(y_n)_{n\in\mathbb{N}}$.
Then $h$ satisfies (\[eqn:OrderZeroIndentity1\]) for all $a,b \in A_+$ as in (a) and $$\begin{aligned} \tau(\phi(e_l)) &\leq \tau(h) \leq \tau(\phi(e_l)) + \gamma_l, &\tau \in \overline{T_\omega(B)}^{w*}, l \in \mathbb{N}. \end{aligned}$$ Letting $l \rightarrow \infty$, we obtain (\[eqn:hTrace\]) because $\lim_{l \rightarrow \infty} \gamma_l = 0$. The result now follows by Lemma \[lem:unitising-orderzero\]. The Uniqueness Theorem ====================== In this section, we establish the uniqueness theorem for maps from a $\mathrm{C}^*$-algebra into a $\mathrm{C}^*$-ultrapower, which will be used to bound the nuclear dimension of ${\mathcal Z}$-stable $\mathrm{C}^*$-algebras. This theorem is a non-unital version of [@CETWW Lemma 4.8] which in turn builds on [@BBSTWW Theorem 5.5]. For notational convenience, we work with ultrapowers throughout rather than general ultraproducts. \[thm:Uniqueness\] Let $B$ be a simple, separable, $\mathcal Z$-stable $\mathrm{C}^*$-algebra with CPoU, stable rank one in $B^\sim$, $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$, and $T(B)$ compact. Let $A$ be a unital, separable, nuclear $\mathrm{C}^*$-algebra, let $\phi_1:A\rightarrow B_\omega$ be a c.p.c. order zero map such that $\phi_1(a)$ is full for all non-zero $a \in A$ and the induced map $\bar{\phi}_1: A \to B^\omega$ is a $^*$-homomorphism, and let $\phi_2:A\rightarrow B_\omega$ be a c.p.c. order zero map such that $$\label{eq:uniqueness.thm.st} \tau\circ\phi_1=\tau\circ\phi_2^m,\quad \tau\in T(B_\omega),\ m\in\mathbb N,$$ where order zero functional calculus is used to interpret $\phi_2^m$.[^9] Let $k\in \mathcal Z_+$ be a positive contraction with spectrum $[0,1]$, and define c.p.c. order zero maps $\psi_i:A\rightarrow (B\otimes\mathcal Z)_\omega$ by $\psi_i(a):=\phi_i(a)\otimes k$. Then $\psi_1$ and $\psi_2$ are unitarily equivalent in $(B\otimes\mathcal Z)_\omega^\sim$. 
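Let us briefly recall the order zero functional calculus used in (\[eq:uniqueness.thm.st\]). By [@WZ09 Corollary 4.1], the c.p.c. order zero map $\phi_2:A\rightarrow B_\omega$ induces a $^*$-homomorphism $\pi:C_0(0,1] \otimes A \rightarrow B_\omega$ with $\phi_2(a) = \pi(t \otimes a)$, where $t$ is the canonical generator of the cone, and for a positive contraction $f \in C_0(0,1]_+$ one sets $$f(\phi_2)(a) := \pi(f \otimes a), \qquad a \in A;$$ in particular, $\phi_2^m(a) = \pi(t^m \otimes a)$ for $m \in \mathbb{N}$, and each $\phi_2^m$ is again a c.p.c. order zero map.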
The proof of Theorem \[thm:Uniqueness\] follows by a careful adaptation of the arguments from [@BBSTWW; @CETWW] to handle the potential non-unitality of $B$. In the subsections that follow, we shall first review the key ingredients of the proof of [@CETWW Lemma 4.8] and [@BBSTWW Theorem 5.5] and explain clearly the modifications needed in the non-unital setting. We shall then return to the proof of Theorem \[thm:Uniqueness\]. The $2 \times 2$ Matrix Trick ----------------------------- We begin by reviewing the $2 \times 2$ matrix trick, which converts the problem of unitary equivalence of maps into the problem of unitary equivalence of positive elements. The version stated below is very similar to [@BBSTWW Lemma 2.3]; however, for our applications, we must weaken the stable rank one assumption, and we have no need for the Kirchberg algebra case. \[prop:2x2matrixtrick\] Let $A$ be a separable, unital $\mathrm C^*$-algebra and $B$ be a separable $\mathrm{C}^*$-algebra. Let $\phi_1,\phi_2:A\rightarrow B_\omega$ be c.p.c. order zero maps and $\hat{\phi}_1,\hat{\phi}_2:A\rightarrow B_\omega$ be supporting order zero maps (as in Lemma \[lem:SupportingMapNew\]). Suppose that $B_\omega$ has stable rank one in $B_\omega^\sim$. Let $\pi:A\rightarrow M_2(B_\omega)$ be given by $$\pi(a) := \begin{pmatrix} \hat\phi_1(a) & 0 \\ 0 & \hat\phi_2(a) \end{pmatrix}, \quad a\in A,$$ and set $C:= M_2(B_\omega)\cap \pi(A)'\cap \{1_{M_2(B_\omega^\sim)}-\pi(1_A)\}^\perp$. If $$\begin{pmatrix}\phi_1(1_A)&0\\0&0\end{pmatrix}\text{ and }\begin{pmatrix}0&0\\0&\phi_2(1_A)\end{pmatrix}$$ are unitarily equivalent in $C^\sim$, then $\phi_1$ and $\phi_2$ are unitarily equivalent in $B_\omega^\sim$. Let $$u=\begin{pmatrix} u_{11}&u_{12} \\ u_{21}&u_{22} \end{pmatrix} \in C^\sim$$ be a unitary implementing the unitary equivalence of the positive elements. Since $B_\omega$ has stable rank one in $B_\omega^\sim$, we have that $u_{21}^*\phi_2(1_A)$ is a limit of invertibles in $B_\omega^\sim$.
Hence, by [@BBSTWW Lemma 1.20] and Kirchberg’s Epsilon Test, there is a unitary $w \in B_\omega^\sim$ with $u_{21}^*\phi_2(1_A)= w\,\left|u_{21}^*\phi_2(1_A)\right|$. Arguing exactly as in the proof of [@BBSTWW Lemma 2.3], we obtain that $\phi_1(a) = w\phi_2(a)w^*$ for all $a \in A$. Property (SI) ------------- Our goal in this section is to show that c.p.c. order zero maps from separable, unital $\mathrm{C}^*$-algebras into ultrapowers of $\mathrm{C}^*$-algebras with compact trace space satisfy property (SI). The following definition is a variant of [@BBSTWW Definition 4.2], which in turn goes back to [@MS-CP12], that allows us to handle cases when the codomain is not unital. Let $B$ be a simple, separable, $\mathrm{C}^*$-algebra with $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$. Write $J_{B_\omega}$ for the trace kernel ideal. Let $A$ be a separable, unital $\mathrm C^*$-algebra, let $\pi:A\rightarrow B_\omega$ be a c.p.c. order zero map, and define $$\label{eq.defC} C:=B_\omega \cap \pi(A)' \cap \lbrace 1_{B_\omega^\sim}-\pi(1_A) \rbrace^\perp.$$ The map $\pi$ has *property (SI)* if the following holds. For all $e,f\in C_+$ such that $e\in J_{B_\omega}$, $\|f\|=1$ and $f$ has the property that, for every non-zero $a \in A_+$, there exists $\gamma_a > 0$ such that $$\tau(\pi(a)f^n) > \gamma_a,\quad \tau\in T_\omega(B),\ n\in\mathbb N,$$ there exists $s \in C$ such that $$s^*s = e \quad\text{and}\quad fs=s.$$ The main result of this subsection is that, under certain hypotheses, c.p.c. order zero maps $A \to B_\omega$ have property (SI). This result is a non-unital version of [@BBSTWW Lemma 4.4] and its proof is almost identical to the original. Since this result is one of the most delicate parts of this work, we include its proof. \[prop:SI\] Let $B$ be a simple, separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$. Let $A$ be a separable, unital, nuclear $\mathrm C^*$-algebra. Then every c.p.c.
order zero map $\pi:A\rightarrow B_\omega$ has property (SI). Let $\pi:A\rightarrow B_\omega$ be a c.p.c. order zero map with $A$ and $B$ as in the statement. Let $C$ be as in (\[eq.defC\]) and set $\overline{C} := C / (C \cap J_{B_\omega})$. Let $e,f \in C_+$ and $\gamma_a$ be as in the definition of property (SI). As in the proof of [@BBSTWW Lemma 4.4], it is enough to exhibit an element $s \in B_\omega$ approximately satisfying $$\label{eq:propSI1} s^*\pi(a)s = \pi(a)e, \quad \text{for all }a \in A\text{ and}\quad fs=s.$$ Let $\mathcal{F}\subseteq A$ be a finite subset of contractions and $\epsilon > 0$. Since $B$ is $\mathcal Z$-stable, using Lemma \[lem:Zfacts\].(ii) we can find a c.p.c. order zero map $\alpha:{\mathcal Z}\to B_\omega\cap \pi(A)'\cap\{e,f\}'$ such that $\alpha(1_{\mathcal Z})$ acts as a unit on $\pi(A)$. Therefore, we may define a new c.p.c. map $\tilde{\pi}:A\otimes{\mathcal Z}\to B_\omega$ by setting $\tilde{\pi}(a\otimes z):=\pi(a)\alpha(z)$. It follows by construction that $\pi(a)=\tilde\pi(a \otimes 1_{\mathcal Z})$ for $a\in A$. By [@WZ09 Corollary 4.3], $\tilde{\pi}$ is a c.p.c. order zero map, and $e$ and $f$ are elements of the relative commutant $B_\omega\cap \tilde{\pi}(A\otimes{\mathcal Z})'\cap\{1_{B_\omega^\sim}-\tilde{\pi}(1_{A\otimes{\mathcal Z}})\}^\perp$. Arguing as in the proof of [@BBSTWW Lemma 4.4], for any $b \in (A \otimes \mathcal{Z})_+$, there exists a positive constant $\tilde \gamma_b$ such that $$\label{eq.relSI1} \tau (\tilde \pi(b)f^n) > \tilde{\gamma}_b, \qquad \tau \in T_\omega (B), \ n \in \mathbb{N}.$$ Next, we will apply [@BBSTWW Lemmas 4.7 and 4.8] to the unital, separable, nuclear $\mathrm{C}^*$-algebra $A \otimes \mathcal{Z}$. Set $\mathcal G := \{x \otimes 1_{\mathcal Z}: x \in \mathcal{F} \} \subseteq A\otimes{\mathcal Z}$.
Since no irreducible representation of $A \otimes {\mathcal Z}$ contains any compact operator, by [@BBSTWW Lemma 4.8] there exist $L,N \in \mathbb{N}$, pairwise inequivalent pure states $\lambda_1, \ldots, \lambda_L$ on $A \otimes {\mathcal Z}$ and elements $c_i, d_{i,l} \in A \otimes {\mathcal Z}$ for $i=1, \ldots, N, \ l = 1,\ldots, L$ such that $$\label{eq.relSI2} x \approx_\epsilon \sum_{l=1}^L \sum_{i,j=1}^N \lambda_l (d_{i,l}^* x d_{j,l}) c_i^* c_j, \qquad x\in\mathcal G.$$ By [@BBSTWW Lemma 4.7], applied to the set $\{d_{i,l}^*xd_{j,l'}:x\in\mathcal G,\ i,j=1,\dots,N,\ l,l'=1,\dots,L\}$, there exist positive contractions $a_1,\dots,a_L\in (A\otimes \mathcal Z)_+$ such that for $l=1,\dots,L$, $\lambda_l(a_l)= 1$ and $$\label{eq:relSIExcision} a_ld_{i,l}^*xd_{j,l}a_l \approx_{\delta} \lambda_l(d_{i,l}^*xd_{j,l})a_l^2,\quad x\in\mathcal G,\ i,j=1,\dots,N,$$ and for $l\neq l'$, $$\label{eq:relSIOrthog} a_ld_{i,l}^*xd_{j,l'}a_{l'}\approx_{\delta} 0,\quad x\in\mathcal G,\ i,j=1,\dots,N,$$ with $\delta := \epsilon/(N^2L\max_k\|c_k\|^2)$. Note that the condition $\lambda_l(a_l)= 1$ ensures that the $a_l$ have norm 1. By hypothesis, $B$ is simple, separable, ${\mathcal Z}$-stable and $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$. Hence, by Proposition \[prop:ZstableToSC\], $B$ has strict comparison of positive elements by bounded traces. Thus, for $l=1,\ldots,L$, we may apply Lemma \[lem:relSItrick\] with $a_l$ in place of $a$. Let $S_l \subseteq (A\otimes \mathcal Z)_+ \setminus \{0\}$ denote the countable subset such that the conclusion of Lemma \[lem:relSItrick\] is satisfied with $a_l$ in place of $a$. Let $\hat{\pi}:A\otimes \mathcal Z\rightarrow B_\omega\cap \{f\}'$ be a supporting c.p.c. order zero map for $\tilde\pi$.
As in [@BBSTWW Lemma 4.4], using (\[eq.relSI1\]) and Lemma \[lem:LargeTraceSubordinate\] twice (taking $x:=0$ and with $S_0:=\tilde\pi(S_1 \cup \cdots \cup S_L)$), we find $t,h \in B_\omega \cap \hat\pi(A\otimes \mathcal Z)'\cap \tilde\pi(A\otimes \mathcal Z)'$ satisfying $h \vartriangleleft t \vartriangleleft f$ and, for every $b\in S_1\cup \cdots \cup S_L$, $$\tau(\tilde\pi(b)h^n) \geq \tilde{\gamma}_b, \quad \tau \in T_\omega(B),\ n\in\mathbb N.$$ By Lemma \[lem:relSItrick\] (with $\tilde\pi$ in place of $\pi$), there is a contraction $r_l \in B_\omega$ such that $\tilde\pi(a_l)r_l=tr_l=r_l$ and $r_l^*r_l=e$. Using $t \vartriangleleft f\vartriangleleft \tilde\pi(1_{A\otimes\mathcal Z})$, we obtain $\tilde\pi(1_{A\otimes\mathcal Z})r_l=r_l$ for each $l$, and hence, $$\begin{aligned} \label{eq:relSIrDef} r_l^*\tilde\pi(a_l^2)r_l &= \tilde\pi(1_{A\otimes\mathcal Z})^{1/2}e\tilde\pi(1_{A\otimes\mathcal Z})^{1/2}. \end{aligned}$$ Set $$\label{eq:relSIsDef} s := \sum_{l=1}^L\sum_{i=1}^N \hat{\pi}(d_{i,l}a_l)r_l \hat{\pi}(c_i) \in B_\omega.$$ Using $r_l= tr_l$, $t \vartriangleleft f$ and that $t$ commutes with the image of $\hat\pi$, we can obtain $fs=s$. For $x\in \mathcal F$, the calculations of [@BBSTWW Lemma 4.4, Equation (4.46)] show that $$\begin{aligned} s^*\pi(x)s \approx \pi(x)e, \end{aligned}$$ up to an error controlled by $\epsilon$. Then Kirchberg’s Epsilon Test produces an element $s \in B_\omega$ that exactly satisfies (\[eq:propSI1\]). As in the proof of [@BBSTWW Lemma 4.10], $s\in C$. Structural Results for Relative Commutants ------------------------------------------ Combining property (SI) with complemented partitions of unity (CPoU), one can now prove important structural properties for the relative commutant algebras $C:=B_\omega\cap \pi(A)'\cap \lbrace 1_{B_\omega^\sim}-\pi(1_A)\rbrace^\perp$ arising from the $2\times 2$ matrix trick. \[theo:Omnibus\] Let $B$ be a simple, separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$ and $T(B)$ compact. Suppose additionally that $B$ has CPoU.
Let $A$ be a separable, unital, nuclear $\mathrm{C}^*$-algebra and $\pi:A\rightarrow B_\omega$ a c.p.c. order zero map which induces a $^*$-homomorphism $\overline{\pi}:A\rightarrow B^\omega$. Let $$C:=B_\omega\cap \pi(A)'\cap \lbrace 1_{B_\omega^\sim}-\pi(1_A)\rbrace^\perp,\quad \overline{C}:=C/(C\cap J_{B_\omega}).$$ Then: 1. All traces on $C$ factor through $\overline{C}$. 2. $C$ has strict comparison of positive elements by bounded traces. 3. The traces on $C$ are the closed convex hull of traces of the form $\tau(\pi(a)\cdot)$ for $\tau\in T(B_\omega)$ and $a\in A_+$ with $\tau(\pi(a))=1$. First, we discuss two preliminary lemmas, which originate from [@BBSTWW Lemma 3.20, Lemma 3.22], and were generalised in [@CETWW Lemma 4.3, Lemma 4.6], where the newly discovered CPoU was used in place of the earlier methods that required further assumptions on $T(B)$. Both results are proven by checking that these lemmas approximately hold in $\pi_\tau(B^\omega)''$ for any trace $\tau \in \overline{T_\omega(B)}^{w*}$, which in turn follows from the fact that $\pi_\tau(B^\omega)''$ is a finite von Neumann algebra, and then using CPoU to patch local solutions together. In [@CETWW], these results are stated for $B$ unital, but the proofs do not make use of the unit. They only require that $T(B)$ is compact, as this guarantees that $B^\omega$ is unital [@CETWW Proposition 1.11]. \[lem:StrictClosureStrictComp\] Let $B$ be a separable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$ and $T(B)$ compact. Suppose $B$ has CPoU. Let $S\subseteq B^\omega$ be a $\|\cdot\|_{2,T_\omega(B)}$-separable and self-adjoint subset, and let $p$ be a projection in the centre of $B^\omega\cap S'$. Then $p(B^\omega\cap S')$ has strict comparison of positive elements by bounded traces. \[prop:CommTraces\] Let $B$ be a separable $\mathrm{C}^*$-algebra with $Q\widetilde{T}(B) = \widetilde{T}_b(B) \neq 0$ and $T(B)$ compact. Suppose $B$ has CPoU.
Let $A$ be a separable, unital, nuclear $\mathrm{C}^*$-algebra and $\phi:A\rightarrow B^\omega$ a $^*$-homomorphism. Set $C:= B^\omega\cap\phi(A)'\cap \lbrace 1_{B^\omega}-\phi(1_A) \rbrace^\perp$. Define $T_0$ to be the set of all traces on $C$ of the form $\tau(\phi(a)\cdot)$ where $\tau \in T(B^\omega)$ and $a \in A_+$ satisfies $\tau(\phi(a))=1$. Suppose $z \in C$ is a contraction and $\delta>0$ satisfies $|\rho(z)|\leq \delta$ for all $\rho\in T_0$. Write $K := 12 \cdot 12 \cdot (1+\delta)$. Then there exist contractions $w, x_1,\dots,x_{10},y_1,\dots,y_{10} \in C$, such that $$\label{T3.19:3.37} z = \delta w + K\sum_{i=1}^{10} [x_i,y_i].$$ In particular, $T(C)$ is the closed convex hull of $T_0$. With these preparatory lemmas now established, we explain how to adapt the original proof of [@BBSTWW Theorem 4.1] to prove Proposition \[theo:Omnibus\]. For (i), the proof of [@BBSTWW Theorem 4.1(i)] still works in our situation with the following minor modifications. We use Lemma \[lem:LargeTraceSubordinate\] instead of [@BBSTWW Lemma 1.18], Proposition \[prop:SI\] in place of [@BBSTWW Lemma 4.4] and Lemma \[lem:RelSurjectivity\] in place of [@BBSTWW Lemma 1.19]. Similarly, for (ii) we use the proof from [@BBSTWW Lemma 3.20] with the following modifications. Since $B$ is $\mathcal{Z}$-stable, any matrix algebra embeds into $B^\omega \cap \overline{\pi}(A)' \cap \{\overline{c}\}'$ [@CETWW Proposition 2.3]. We use Lemma \[lem:StrictClosureStrictComp\] to see that $\overline{C}$ has strict comparison of positive elements by traces in place of [@BBSTWW Lemma 3.20], and [@CETWW Lemma 1.8] in place of [@BBSTWW Lemma 3.10]. In the same vein, (iii) follows from (i), [@CETWW Lemma 1.5], and Proposition \[prop:CommTraces\]. 
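We also note, as a routine verification, why the functionals appearing in part (iii) of Proposition \[theo:Omnibus\] are indeed traces on $C$: since $C \subseteq B_\omega \cap \pi(A)'$, for $a \in A_+$, $\tau \in T(B_\omega)$ and $c, c_1, c_2 \in C$ we have $$\tau(\pi(a)c_1c_2) = \tau(c_2\,\pi(a)\,c_1) = \tau(\pi(a)c_2c_1) \quad\text{and}\quad \tau(\pi(a)c^*c) = \tau\big(\pi(a)^{1/2}c^*c\,\pi(a)^{1/2}\big) \geq 0,$$ so $\tau(\pi(a)\,\cdot\,)$ is a positive tracial functional on $C$, normalised whenever $\tau(\pi(a)) = 1$.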
Unitary Equivalence of Totally Full Positive Elements ----------------------------------------------------- The main theorem of this section is a non-unital version of the classification of totally full positive elements up to unitary equivalence in relative commutant sequence algebras obtained in [@BBSTWW Lemma 5.1].[^10] Let us begin by stating the following lemma, which can be proved in exactly the same way as [@BBSTWW Lemma 5.3], since the Robert–Santiago argument ([@RS10]) at the core of the proof has no unitality hypothesis. All that is required is to formally replace all occurrences of $1_{B_\omega}$ with $1_{B^\sim_\omega}$, and to replace [@BBSTWW Lemma 1.17] with Lemma \[NewEpsLemmaNew\], [@BBSTWW Lemma 2.2] with Lemma \[lem:GapInvertibles\], and [@BBSTWW Lemma 5.4] with Lemma \[lem:RSLem\]. \[lem:TotallyFullClass\] Let $B$ be a separable, $\mathcal Z$-stable $\mathrm C^*$-algebra and let $A$ be a separable, unital $\mathrm C^*$-algebra. Let $\pi:A\rightarrow B_\omega$ be a c.p.c. order zero map such that $$\label{eq:TotallyFullClassCdef} C:=B_\omega \cap \pi(A)' \cap \{1_{B_\omega^\sim}-\pi(1_A)\}^\perp$$ is full in $B_\omega$. Assume that every full hereditary subalgebra $D$ of $C$ satisfies the following: if $x \in D$ is such that there exist totally full elements $e_l,e_r \in D_+$ such that $e_l x = xe_r = 0$, then there exists a full element $s \in D$ such that $sx = xs = 0$. Let $a,b \in C_+$ be totally full positive contractions. Then $a$ and $b$ are unitarily equivalent in $C^\sim$ if and only if for every $f \in C_0(0,1]_+$, $f(a)$ is Cuntz equivalent to $f(b)$ in $C$. With this lemma in hand, we can now prove the main theorem of this section. \[thm:TotallyFullClassFinite\] Let $B$ be a separable, $\mathcal Z$-stable $\mathrm C^*$-algebra with $Q\widetilde{T}(B)=\widetilde{T}_b(B) \neq 0$. Let $A$ be a separable, unital $\mathrm C^*$-algebra and let $\pi:A\rightarrow B_\omega$ be a c.p.c.
order zero map such that $$\label{e6.1} C:=B_\omega \cap \pi(A)' \cap \{1_{B_\omega^\sim}-\pi(1_A)\}^\perp$$ is full in $B_\omega$ and has strict comparison of positive elements with respect to bounded traces. Let $a,b \in C_+$ be totally full positive elements. Then $a$ and $b$ are unitarily equivalent in $C^\sim$ if and only if $\tau(a^k) = \tau(b^k)$ for every $\tau \in T(C)$ and $k\in {\mathbb{N}}$. Let $a,b \in C_+$ be totally full positive elements satisfying $\tau(a^k)= \tau(b^k)$ for every $\tau \in T(C)$ and $k \in \mathbb{N}$. Without loss of generality, assume that $a$ and $b$ are contractions. After replacing [@BBSTWW Lemma 1.22(iv)] with Lemma \[lem:Zfacts\](iv), part (i) of the proof of [@BBSTWW Theorem 5.1] shows that the technical hypothesis of Lemma \[lem:TotallyFullClass\] is satisfied for every full hereditary subalgebra $D \subseteq C$. The argument of part (ii) of the proof of [@BBSTWW Theorem 5.1] then shows that $f(a)$ is Cuntz equivalent to $f(b)$ for all $f\in C_0(0,1]_+$. (This part of the proof of [@BBSTWW Theorem 5.1] does not make any use of the unit; only strict comparison is needed.) By Lemma \[lem:TotallyFullClass\], $a$ and $b$ are unitarily equivalent in $C^\sim$. The converse is straightforward. Proof of the Uniqueness Theorem ------------------------------- We now have all the ingredients we need for the proof of Theorem \[thm:Uniqueness\]. By hypothesis, $\overline{\phi}_1(1_A) \in B^\omega$ is a projection. Hence $d_\tau (\phi_1 (1_A)) = \tau(\phi_1(1_A))$ and we can immediately conclude that the map $\tau \mapsto d_\tau (\phi_1 (1_A))$ is continuous. Similarly, by equation (\[eq:uniqueness.thm.st\]), the map $\tau \mapsto d_\tau (\phi_2 (1_A))$ is continuous.
Hence, by Lemma \[lem:SupportingMapNew\], there exist supporting order zero maps $\hat{\phi}_1, \hat{\phi}_2: A \to B_\omega$ such that $$\tau( \hat{\phi}_i (a)) = \lim_{m \to \infty} \tau ( \phi_i^{1/m}(a)),\, \qquad a\in A, \, \tau \in T_\omega(B), \ i=1,2,$$ and the maps $\overline{\hat{\phi}}_i: A \to B^\omega$ are $^*$-homomorphisms. In particular, $$\begin{aligned} \tau( \hat{\phi}_2 (a)) \overset{\eqref{eq:uniqueness.thm.st}}{=} \tau(\phi_1(a)), \qquad a \in A, \ \tau \in T_\omega(B). \end{aligned}$$ By Proposition \[prop:SR1inUnitisationUltrapowers\], $B_\omega$ has stable rank one in $B_\omega^\sim$. Thus, we may use the $2 \times 2$ matrix trick (Proposition \[prop:2x2matrixtrick\]). Recall $\psi_i (a) := \phi_i (a)\otimes k$ and define $\hat{\psi}_1, \hat{\psi}_2 : A \to (B \otimes \mathcal{Z})_\omega$ by $\hat{\psi}_i(a) := \hat{\phi}_i (a) \otimes 1_{\mathcal{Z}}$, with $i=1,2$. It is immediate that $\hat{\psi}_i$ is a supporting order zero map for $\psi_i$. Then define $\pi: A \to M_2 (B_\omega) \subseteq M_2 ((B \otimes {\mathcal Z})_\omega)$ by $$\pi(a) := \left( \begin{matrix} \hat{\psi}_1(a) & 0 \\ 0 & \hat{\psi}_2(a) \end{matrix} \right), \qquad a \in A,$$ and set $C:= M_2 ((B \otimes {\mathcal Z})_\omega) \cap \pi(A)' \cap \lbrace 1_{M_2((B \otimes {\mathcal Z})_\omega^\sim)} - \pi(1_A)\rbrace^\perp$. We will show that $$h_1:=\left( \begin{matrix} \psi_1(1_A) & 0 \\ 0 & 0 \end{matrix}\right) \qquad \text{and} \qquad h_2:=\left( \begin{matrix} 0 & 0 \\ 0 & \psi_2(1_A) \end{matrix}\right)$$ are unitarily equivalent in $C^\sim$. For non-zero $a \in A$, observe that $$0 \leq \left( \begin{matrix} \psi_1(a) & 0 \\ 0 & 0 \end{matrix}\right) \leq \left( \begin{matrix} \hat{\psi}_1(a) & 0 \\ 0 & \hat{\psi}_2(a) \end{matrix}\right) = \pi(a),$$ and using that $\psi_1 (a)$ is full in $(B \otimes \mathcal{Z})_\omega$ since $\phi_1(a)$ is full, we conclude that $\pi(a)$ is full in $M_2((B \otimes {\mathcal Z})_\omega)$. 
By construction, the induced map $\overline{\pi}:A \to M_2(B^\omega)$ is a $^*$-homomorphism. Thus, by Theorem \[theo:Omnibus\], $C$ has strict comparison. Notice that $h_1 \in C$ is full in $M_2((B \otimes {\mathcal Z})_\omega)$, and hence $C$ is also full in $M_2((B \otimes {\mathcal Z})_\omega)$. Let $\rho$ be a trace on $C$ of the form $\tau (\pi(x) \cdot)$ where $\tau \in T(M_2((B \otimes {\mathcal Z})_\omega)), x \in A_+$ and $\tau (\pi(x))=1$. Define a trace $\tilde{\tau}$ on $B_\omega$ by $\tilde{\tau}(b):= \tau(1_{M_2} \otimes b \otimes 1_{\mathcal{Z}})$. Thus, as in [@BBSTWW Theorem 5.5, equation (5.41)], $$\begin{aligned} \rho(h_1^m) = \frac{1}{2} \tau_\mathcal{Z}(k^m) = \rho(h_2^m), \qquad m \in \mathbb{N}. \label{eq.uniq.thm.1} \end{aligned}$$ By Theorem \[theo:Omnibus\], equation (\[eq.uniq.thm.1\]) holds for any trace on $C$. A standard strict comparison argument shows that $f(h_1)$ and $f(h_2)$ are full in $C$ for any $f \in C_0(0,1]_+$, so $h_1$ and $h_2$ are totally full. By Theorem \[thm:TotallyFullClassFinite\], $h_1$ is unitarily equivalent to $h_2$ in $C^\sim$. By the $2 \times 2$ matrix trick (Proposition \[prop:2x2matrixtrick\]), $\psi_1$ and $\psi_2$ are unitarily equivalent in $(B\otimes\mathcal Z)_\omega^\sim$. Nuclear Dimension and ${\mathcal Z}$-Stability {#section.nucleardim} ============================================== In this section, we prove Theorems \[thm:NewMain2\] and \[thm:NewMain\], and deduce Corollaries \[cor:NewTrichotomy\] and \[cor:Newclassification\]. \[thm:Main2\] Let $A$ be a simple, separable, nuclear and $\mathcal{Z}$-stable $\mathrm{C}^*$-algebra. Then $\dim_{\mathrm{nuc}} A \leq 1$. By Theorem \[thm:simple.reduction\], either $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra $B$, or $A$ is stably isomorphic to a stably projectionless $\mathrm{C}^*$-algebra $A_0$ with $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ compact. The stably unital case follows immediately from [@CETWW Theorem B] together with Proposition \[thm:StableIsoInvariants\].
Indeed, if $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra $B$, then $B$ is also simple, separable, nuclear and $\mathcal{Z}$-stable by Proposition \[thm:StableIsoInvariants\]. Hence, ${\dim_{\mathrm{nuc}}}B \leq 1$ by [@CETWW Theorem B]. Therefore, ${\dim_{\mathrm{nuc}}}A \leq 1$ by a second application of Proposition \[thm:StableIsoInvariants\]. We now consider the case when $A$ is stably isomorphic to a stably projectionless $\mathrm{C}^*$-algebra $A_0$ with $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ compact. By Proposition \[thm:StableIsoInvariants\], $A_0$ is simple, separable, nuclear and $\mathcal{Z}$-stable. Since $A_0$ is stably projectionless and $\mathcal{Z}$-stable, $A_0$ has stable rank one in $A_0^\sim$ by Theorem \[thm:ZstableProjectionless\]. Furthermore, $A_0$ has CPoU by Theorem \[thm:CPoU\]. In light of Proposition \[thm:StableIsoInvariants\], it suffices to prove that $\dim_{\mathrm{nuc}} A_0 \leq 1$. We now show this using the same fundamental strategy of [@BBSTWW] (taking into account the modification introduced in [@CETWW]). We shall estimate the nuclear dimension of the first factor embedding $j:A_0 \rightarrow A_0 \otimes \mathcal{Z}$, $j(x)= x\otimes 1_{\mathcal{Z}}$ in the sense of [@TW14 Definition 2.2]. Since $A_0$ is $\mathcal{Z}$-stable and $\mathcal{Z}$ is strongly self-absorbing, we have $\dim_{\mathrm{nuc}}(A_0)=\dim_{\mathrm{nuc}}(j)$; see [@TW14 Proposition 2.6]. Let $\iota:A_0 \rightarrow (A_0)_\omega$ be the canonical embedding. Let $h$ be a strictly positive contraction in $A_0$, and let $(e_n)_{n \in {\mathbb{N}}}$ be the approximate identity given by $e_n := h^{1/n}$. Then $\lim_{n\rightarrow\infty}\tau(e_n) = 1$ for all $\tau \in T(A_0)$. Since $T(A_0)$ is compact, $\tau \circ \iota \in T(A_0)$ for all $\tau \in T_\omega(A_0)$ and so for all $\tau \in \overline{T_\omega(A_0)}^{w*}$. 
It follows that $$\lim_{n\rightarrow\infty} \tau(\iota(e_n)) = 1, \qquad \tau \in \overline{T_\omega(A_0)}^{w*}.$$ Therefore, applying Lemma \[lem:unitising-orderzero\], we obtain a c.p.c. order zero extension $\iota^\sim:A_0^\sim \rightarrow (A_0)_\omega$ with $\tau(\iota^\sim(1_{A_0^\sim})) = 1$ for all $\tau \in \overline{T_\omega(A_0)}^{w*}$. Writing $\overline{\iota^\sim}:A_0^\sim \rightarrow A_0^\omega$ for the induced map into the uniform tracial ultrapower, we observe that $1_{A_0^\omega} - \overline{\iota^\sim}(1_{A_0^\sim})$ is a positive element in $A_0^\omega$ that vanishes on all limit traces, so must be zero. Hence, $\overline{\iota^\sim}$ is a unital c.p.c. order zero map, so must be a unital $^*$-homomorphism. Let $(\phi_n:A_0 \rightarrow A_0)_{n=1}^\infty$ be the sequence of c.p.c. maps constructed in Lemma \[lem:NiceFactoring\], which factorize as $\eta_n\circ\theta_n$ through finite dimensional algebras $F_n$. By construction, the induced map $\Phi:A_0 \rightarrow (A_0)_\omega$ is c.p.c. order zero and the induced map $\overline{\Phi}: A_0 \to A_0^\omega$ agrees with the diagonal inclusion $\overline{\iota}:A_0 \rightarrow A_0^\omega$. It follows that $\tau \circ \Phi = \tau \circ \iota$ for all $\tau \in \overline{T_\omega(A_0)}^{w*}$. Hence, $$\lim_{n\rightarrow\infty} \tau(\Phi(e_n)) = 1, \qquad \tau \in \overline{T_\omega(A_0)}^{w*}.$$ Therefore, applying Lemma \[lem:unitising-orderzero\] again, we obtain a c.p.c. order zero extension $\Phi^\sim:A_0^\sim \rightarrow (A_0)_\omega$ with $\tau(\Phi^\sim(1_{A_0^\sim})) = 1$ for all $\tau \in \overline{T_\omega(A_0)}^{w*}$. Arguing as before, $\overline{\Phi^\sim}:A_0^\sim \rightarrow A_0^\omega$ is a unital $^*$-homomorphism. In fact, we have $\overline{\Phi^\sim} = \overline{\iota^\sim}$ since both maps agree on $A_0$ by construction and are unital. We are almost ready to apply Theorem \[thm:Uniqueness\] to the c.p.c. order zero maps $\iota^\sim$ and $\Phi^\sim$.
We observe that $A_0$ is simple, separable and ${\mathcal Z}$-stable, has CPoU and stable rank one in $A_0^\sim$, and satisfies $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ with $T(A_0)$ compact; that $A_0^\sim$ is unital, separable and nuclear; and that both maps induce a unital $^*$-homomorphism $\overline{\iota^\sim} = \overline{\Phi^\sim}:A_0^\sim \rightarrow A_0^\omega$. Since $\overline{\iota^\sim} = \overline{\Phi^\sim}$ and both maps are $^*$-homomorphisms, we have $$\tau \circ \iota = \tau \circ \Phi^m, \qquad \tau \in \overline{T_\omega(A_0)}^{w*}, \ m \in {\mathbb{N}}.$$ The tracial condition follows because $T_\omega(A_0)$ is dense in $T((A_0)_\omega)$ by Theorem \[thm:NoSillyTraces\]. Before we may apply Theorem \[thm:Uniqueness\], we must show that $\iota^\sim(x)$ is full for all non-zero $x \in A_0^\sim$. By Proposition \[prop:ZstableToSC\], $A_0$ has strict comparison by bounded traces because $A_0$ is simple, separable, ${\mathcal Z}$-stable and $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$. Hence, $(A_0)_\omega$ has strict comparison in the sense of Lemma \[lem:StrictCompLimTraces\]. Using that $A_0$ is simple and $T(A_0)$ is compact, the minimum $\gamma_a := \min_{\tau \in T(A_0)}\tau(a)$ exists and is strictly positive for any non-zero $a \in (A_0)_{+,1}$. Since $\tau \circ \iota \in T(A_0)$ for any $\tau \in \overline{T_\omega(A_0)}^{w*}$, we have $d_\tau(\iota(a)) \geq \gamma_a$ for any $\tau \in \overline{T_\omega(A_0)}^{w*}$. Hence, $\iota(a)$ is full in $(A_0)_\omega$ using Lemma \[lem:StrictCompLimTraces\]. For any non-zero $x \in A_0^\sim$, the ideal $I_x$ of $A_0^\sim$ generated by $x$ contains a non-zero positive contraction $a \in (A_0)_{+,1}$. A simple computation using supporting order zero maps shows that the ideal of $(A_0)_\omega$ generated by $\iota^\sim(x)$ contains $\iota^\sim(I_x)$, which is full since it contains the full element $\iota^\sim(a)$. Hence, $\iota^\sim(x)$ is full in $(A_0)_\omega$.
Fix a positive contraction $k \in {\mathcal Z}_+$ of full spectrum. Applying Theorem \[thm:Uniqueness\] to the maps $\iota^\sim$ and $\Phi^\sim$, we obtain unitaries $w^{(0)},w^{(1)}\in (A_0 \otimes{\mathcal Z})_\omega^\sim$ such that $$\begin{aligned} x \otimes k &=w^{(0)}(\Phi(x) \otimes k)w^{(0)}{}^*,\\ x \otimes (1_{\mathcal Z}-k)&=w^{(1)}(\Phi(x)\otimes (1_{\mathcal Z}-k))w^{(1)}{}^*,\quad x\in A_0. \end{aligned}$$ Choose representing sequences $(w^{(0)}_n)_{n=1}^\infty$ and $(w^{(1)}_n)_{n=1}^\infty$ of unitaries in $(A_0 \otimes {\mathcal Z})^\sim$ for $w^{(0)}$ and $w^{(1)}$, respectively. We have c.p.c. maps $\theta_n\oplus\theta_n:A_0 \rightarrow F_n\oplus F_n$, and $\tilde{\eta}_n:F_n\oplus F_n\rightarrow A_0 \otimes {\mathcal Z}$, where $$\tilde{\eta}_n(y_0,y_1):=w^{(0)}_n(\eta_n(y_0)\otimes k)w_n^{(0)}{}^*+w_n^{(1)}(\eta_n(y_1)\otimes (1_{\mathcal Z}-k))w_n^{(1)}{}^*.$$ Hence, $j(x)$ is the limit, as $n \to \omega$, of $(\tilde{\eta}_n\circ(\theta_n\oplus\theta_n)(x))_{n=1}^\infty$ and, since $\tilde{\eta}_n$ is the sum of two c.p.c. order zero maps, ${\dim_{\mathrm{nuc}}}(j)\leq 1$. \[cor:TW2\] Let $A$ be a non-elementary, simple, separable, nuclear $\mathrm{C}^*$-algebra. Then $A$ has finite nuclear dimension if and only if it is $\mathcal{Z}$-stable. Let $A$ be a non-elementary, simple, separable, nuclear $\mathrm{C}^*$-algebra. If $A$ is ${\mathcal Z}$-stable, then ${\dim_{\mathrm{nuc}}}(A) \leq 1 < \infty$ by Theorem \[thm:Main2\]. Conversely, if ${\dim_{\mathrm{nuc}}}(A) < \infty$, then $A$ is ${\mathcal Z}$-stable by [@Ti14 Theorem 8.5]. \[cor:Trichotomy2\] The nuclear dimension of a simple $\mathrm{C}^*$-algebra is $0, \, 1$ or $\infty$. Let $A$ be a simple, separable $\mathrm{C}^*$-algebra with finite nuclear dimension. Then, in particular, $A$ is nuclear. If $A$ is elementary, then ${\dim_{\mathrm{nuc}}}(A) = 0$; otherwise, $A$ is ${\mathcal Z}$-stable by Corollary \[cor:TW2\]. Hence, ${\dim_{\mathrm{nuc}}}(A) \leq 1$ by Theorem \[thm:Main2\].
The non-separable case follows from the separable one as in the proof of [@CETWW Corollary C]. In [@Ell96 Theorem 5.2.2], a stably projectionless, simple, separable, nuclear C$^*$-algebra with a unique trace, $K_0 = \mathbb{Z}$ and $K_1 = 0$ is constructed as a limit of $1$-dimensional non-commutative CW complexes. By [@Go17 Theorem 1.4], there is a unique C$^*$-algebra with these properties that has finite nuclear dimension and satisfies the UCT. This C$^*$-algebra is denoted $\mathcal{Z}_0$ [@Go17 Definition 8.1], reflecting its role as a stably projectionless analogue of the Jiang–Su algebra $\mathcal{Z}$. An important further property of $\mathcal{Z}_0$, which follows from its construction, is that $\mathcal{Z}_0$ is $\mathcal{Z}$-stable [@Go17 Remark 7.3, Definition 8.1]. It has recently been shown that simple, separable $\mathrm{C}^*$-algebras which satisfy the UCT and have finite nuclear dimension are classified up to stabilisation with $\mathcal{Z}_0$ by the Elliott invariant [@Go17 Theorem 1.2]. The appropriate form of the Elliott invariant in this setting is detailed in [@Go17 Definition 2.9]. In light of the main result of this paper, we can weaken the hypothesis of finite nuclear dimension in [@Go17 Theorem 1.2] to that of nuclearity. Let $A$ and $B$ be simple, separable, nuclear $\mathrm{C}^*$-algebras which satisfy the UCT. Then $$A \otimes \mathcal{Z}_0 \cong B \otimes \mathcal{Z}_0 \text{ if and only if } \mathrm{Ell}(A \otimes \mathcal{Z}_0) \cong \mathrm{Ell}(B \otimes \mathcal{Z}_0). \notag$$ Since $A$ and $\mathcal{Z}_0$ are simple, separable and nuclear, so is $A\otimes \mathcal{Z}_0$. Using that ${\mathcal Z}_0$ is ${\mathcal Z}$-stable, it follows that $A\otimes \mathcal{Z}_0$ is $\mathcal{Z}$-stable. Therefore, $\dim_{\mathrm{nuc}} A \otimes \mathcal{Z}_0 \leq 1$ by Theorem \[thm:NewMain2\]. Similarly, $\dim_{\mathrm{nuc}} B \otimes \mathcal{Z}_0 \leq 1$. The result now follows from [@Go17 Theorem 1.2].
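For the reader's convenience, the ${\mathcal Z}$-stability of $A \otimes \mathcal{Z}_0$ used in the proof above can be spelled out in one line: since $\mathcal{Z}_0$ is $\mathcal{Z}$-stable and all algebras involved are nuclear (so the tensor products are unambiguous), $$A \otimes \mathcal{Z}_0 \cong A \otimes (\mathcal{Z}_0 \otimes \mathcal{Z}) \cong (A \otimes \mathcal{Z}_0) \otimes \mathcal{Z}.$$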
Decomposition Rank and ${\mathcal Z}$-Stability =============================================== Using the machinery developed to prove Theorem \[thm:NewMain2\], we can also prove similar results for the decomposition rank of simple ${\mathcal Z}$-stable $\mathrm{C}^*$-algebras under suitable finiteness and quasidiagonality assumptions. To this end, we recall the definition of quasidiagonality for tracial states. \[dfn:QDtraces\] Let $A$ be a $\mathrm{C}^*$-algebra. A tracial state $\tau \in T(A)$ is *quasidiagonal* if there exists a net[^11] of c.p.c. maps $\phi_n:A \rightarrow M_{k_n}(\mathbb{C})$ with $\|\phi_n(ab) - \phi_n(a)\phi_n(b)\| \rightarrow 0$ and $\mathrm{tr}_{k_n}(\phi_n(a)) \rightarrow \tau(a)$ for all $a, b \in A$. In the unital case, the c.p.c. maps in Definition \[dfn:QDtraces\] can be taken to be unital (see the proof of [@Br08 Lemma 7.1.4]). Moreover, a trace $\tau \in T(A)$ is quasidiagonal if and only if its extension to $A^\sim$ is quasidiagonal [@Br06 Proposition 3.5.10]. We write $T_{QD}(A)$ for the set of all quasidiagonal tracial states on $A$. We can now state a decomposition rank version of Theorem \[thm:NewMain2\]. \[thm:dr\] Let $A$ be a simple, separable, nuclear and $\mathcal{Z}$-stable $\mathrm{C}^*$-algebra. Suppose further that $A$ is stably finite and that $T(B) = T_{QD}(B)$ for all non-zero hereditary subalgebras $B \subseteq A \otimes {\mathbb{K}}$. Then ${\mathrm{dr}}(A) \leq 1$. By Theorem \[thm:simple.reduction\], either $A$ is stably isomorphic to a unital $\mathrm{C}^*$-algebra $B$, or $A$ is stably isomorphic to a stably projectionless $\mathrm{C}^*$-algebra $A_0$ with $Q\widetilde{T}(A_0) = \widetilde{T}_b(A_0) \neq 0$ and $T(A_0)$ compact. In the first case, $B$ is simple, separable, nuclear and ${\mathcal Z}$-stable by Proposition \[thm:StableIsoInvariants\]. Moreover, $B$ is finite and $T(B) = T_{QD}(B)$ by our additional hypotheses on $A$. Hence ${\mathrm{dr}}(B) \leq 1$ by [@CETWW Theorem B].
By Proposition \[thm:StableIsoInvariants\] once more, ${\mathrm{dr}}(A) \leq 1$. In the second case, we have that $T(A_0) = T_{QD}(A_0)$ by our additional hypotheses on $A$, so in the proof of Theorem \[thm:Main2\] the maps $\theta_n$ from Lemma \[lem:NiceFactoring\] can be taken to be approximately multiplicative. Therefore, ${\mathrm{dr}}(A_0)\leq 1$ by [@BGSW Lemma 1.9]. Hence, ${\mathrm{dr}}(A) \leq 1$ by Proposition \[thm:StableIsoInvariants\]. If $A$ is a simple, separable, nuclear $\mathrm{C}^*$-algebra in the UCT class, then $T(B) = T_{QD}(B)$ for all hereditary subalgebras $B \subseteq A \otimes {\mathbb{K}}$ by [@TWW17 Theorem A] since the UCT class is closed under stable isomorphism. As with nuclear dimension, we obtain a trichotomy result for decomposition rank as a corollary of Theorem \[thm:dr\]. The decomposition rank of a simple $\mathrm{C}^*$-algebra is $0, \, 1$ or $\infty$. Elementary $\mathrm{C}^*$-algebras have decomposition rank zero, so are covered by this result. Let $A$ be a non-elementary, simple, separable $\mathrm{C}^*$-algebra with finite decomposition rank. Then $A$ has finite nuclear dimension, and so is ${\mathcal Z}$-stable by [@Ti14 Corollary 8.6]. Since ${\mathrm{dr}}(A) < \infty$, $A$ is stably finite and $T(A) = T_{QD}(A)$.[^12] Moreover, by Corollary \[cor:Brown\] and Proposition \[thm:StableIsoInvariants\], ${\mathrm{dr}}(B) = {\mathrm{dr}}(A) < \infty$ for any non-zero hereditary subalgebra $B \subseteq A \otimes {\mathbb{K}}$. Therefore, we have $T(B) = T_{QD}(B)$. Now, ${\mathrm{dr}}(A) \leq 1$ by Theorem \[thm:dr\]. The non-separable case follows from the separable case as in the proof of [@CETWW Corollary C] since the proof of [@WZ10 Proposition 2.6] works equally well for decomposition rank. Non-unital Lemmas ================= The purpose of this appendix is to state appropriate non-unital versions of the technical lemmas from [@BBSTWW].
In cases where substantial modifications to the proof are required, we give full details. In cases where the modifications are trivial, we refer the reader to the proof of the corresponding result from [@BBSTWW] and explain the modifications in a remark. We begin with the existence of supporting order zero maps. \[lem:SupportingMapNew\] Let $A,B_n$ be $\mathrm C^*$-algebras with $A$ separable and unital, set $B_\omega := \prod_\omega B_n$, and suppose that $S\subseteq B_\omega$ is separable and self-adjoint. Let $\phi:A \to B_\omega\cap S'$ be a c.p.c. order zero map. Then there exists a c.p.c. order zero map $\hat\phi:A \to B_\omega\cap S'$ such that $$\begin{aligned} \label{eq:Supporting} \phi(ab)&=\hat{\phi}(a)\phi(b)=\phi(a)\hat{\phi}(b),\quad a,b\in A. \end{aligned}$$ Suppose now that $T(B_n)$ is non-empty for all $n \in {\mathbb{N}}$. If the map $\tau \mapsto d_\tau(\phi(1_A))$ from $\overline{T_\omega(B_\omega)}^{w*}$ to $[0,1]\subseteq\mathbb R$ is continuous (with respect to the weak$^*$-topology) then we can, in addition, arrange that $$\begin{aligned} \label{eq:SupportingTrace} \tau(\hat{\phi}(a)) &= \lim_{m\to\infty} \tau(\phi^{1/m}(a)), \quad a\in A_+,\ \tau \in T_\omega(B_\omega), \end{aligned}$$ where order zero map functional calculus is used to interpret $\phi^{1/m}$. In this case, the induced map $\overline{\hat{\phi}}:A \to B^\omega$ is a $^*$-homomorphism. The proof of [@BBSTWW Lemma 1.14] only actually requires continuity of $\tau \mapsto d_\tau(\phi(1_A))$ on $\overline{T_\omega(B_\omega)}^{w*}$ (as opposed to $T(B_\omega)$), and $\overline{T_\omega(B_\omega)}^{w*}$ is compact in the non-unital case too. There is no further use of the unitality of the $B_n$ in the proof of [@BBSTWW Lemma 1.14]. We now record some more straightforward applications of Kirchberg’s Epsilon Test. These results are almost identical to those proven in [@BBSTWW Section 1].
However, we shall need slightly more general statements because we wish to apply them to the algebras of the form $B_\omega \cap S' \cap \lbrace 1_{B_\omega^\sim}-d \rbrace^\perp$. \[lem:ActAsUnitNew\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathrm C^*$-algebras and set $B_\omega := \prod_\omega B_n$. Let $S_1,S_2$ be separable self-adjoint subsets of $B_\omega^\sim$, and let $T$ be a separable subset of $B_\omega \cap S_1' \cap S_2^\perp$. Then there exists a contraction $e \in (B_\omega \cap S_1' \cap S_2^\perp)_{+}$ that acts as a unit on $T$, i.e., such that $et=te=t$ for every $t\in T$. The only change to the statement is that $S_1,S_2$ are subsets of $B_\omega^\sim$ (as opposed to $B_\omega$). The proof is not affected. \[NewEpsLemmaNew\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathrm C^*$-algebras and set $B_\omega := \prod_\omega B_n$. Let $S_1,S_2$ be separable self-adjoint subsets of $B_\omega^\sim$, and set $C:=B_\omega\cap S_1'\cap S_2^\perp$. 1. Let $h_1,h_2\in C_+$. Then $h_1$ and $h_2$ are unitarily equivalent via a unitary from $C^\sim$ if and only if they are approximately unitarily equivalent, i.e., for any $\epsilon>0$ there exists a unitary $u\in C^\sim$ with $uh_1u^*\approx_\epsilon h_2$. 2. Let $a\in C$. Then there exists a unitary $u\in C^\sim$ with $a=u|a|$ if and only if for each $\epsilon>0$ there exists a unitary $u\in C^\sim$ with $a\approx_\epsilon u|a|$. 3. Let $h_1,h_2 \in C_+$. Then $h_1$ and $h_2$ are Murray-von Neumann equivalent if and only if they are approximately Murray-von Neumann equivalent, i.e., for any $\epsilon>0$ there exists $x\in C$ with $xx^*\approx_\epsilon h_1$ and $x^*x\approx_\epsilon h_2$. The statement of [@BBSTWW Lemma 1.17] uses the convention that $C^\sim := C$ when $C$ is already unital. In this paper, we use the convention that a new unit is still adjoined, so $C^\sim \cong C \oplus \mathbb{C}$ when $C$ is unital. 
The choice of convention does not affect the validity of the lemma.[^13] Apart from this, the only change to the statement is that $S_1,S_2$ are subsets of $B_\omega^\sim$ (as opposed to $B_\omega$), which does not affect the proof. \[lem:LargeTraceSubordinate\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathrm C^*$-algebras with $T(B_n)$ non-empty for each $n\in{\mathbb{N}}$. Write $B_\omega := \prod_\omega B_n$. Let $S_0$ be a countable self-adjoint subset of $(B_\omega)_+$ and let $T$ be a separable self-adjoint subset of $B_\omega$. If $x,f \in (B_\omega \cap S_0'\cap T')_+$ are contractions with $x\vartriangleleft f$ and with the property that for all $a \in S_0$ there exists $\gamma_a \geq 0$ such that $\tau(af^m) \geq \gamma_a$ for all $m\in {\mathbb{N}},\ \tau \in T_\omega(B_\omega)$, then there exists a contraction $f' \in (B_\omega \cap S_0'\cap T')_+$ such that $x\vartriangleleft f' \vartriangleleft f$ and $\tau(a(f')^m) \geq \gamma_a$ for all $m\in{\mathbb{N}},\ \tau \in T_\omega(B_\omega)$, and $a\in S_0$. If each $B_n$ is simple, separable, $\mathcal Z$-stable and $Q\widetilde{T}(B_n) = \widetilde{T}_b(B_n) \neq 0$ for all $n \in {\mathbb{N}}$, then the above statement holds with $T(B_\omega)$ in place of $T_\omega(B_\omega)$. The only change to the proof of [@BBSTWW Lemma 1.18] is to replace $\min_{\tau \in T(B_n)}$ with $\inf_{\tau \in T(B_n)}$ in [@BBSTWW Equation (1.34)], as the minimum need not exist in the non-unital case. The final sentence follows since $T_\omega(B_\omega)$ is weak$^*$-dense in $T(B_\omega)$ under the additional hypotheses by Theorem \[thm:NoSillyTraces\]. \[lem:RelSurjectivity\] Let $(B_n)_{n=1}^\infty$ be a sequence of separable $\mathrm C^*$-algebras with $T(B_n)\neq\emptyset$ for each $n\in{\mathbb{N}}$ and set $B_\omega:=\prod_\omega B_n$. Let $A$ be a separable, unital $\mathrm C^*$-algebra and let $\pi:A\rightarrow B_\omega$ be a c.p.c.
order zero map such that $\pi(1_A)$ is full and the induced map $\bar{\pi}:A \to B^\omega$ is a $^*$-homomorphism. Define $C:=B_\omega \cap \pi(A)' \cap \{1_{B_\omega^\sim}-\pi(1_A)\}^\perp$. Let $S \subseteq C$ be a countable self-adjoint subset and let $\bar{S}$ denote the image of $S$ in $B^\omega$. 1. Then the image of $C\cap S'$ in $B^\omega$ is precisely $$\begin{aligned} \lefteqn{ \bar{\pi}(1_A)\left(B^\omega \cap \bar{\pi}(A)' \cap \bar{S}'\right) } \nonumber \\ & = &B^\omega \cap \bar{\pi}(A)'\cap \bar{S}'\cap \{1_{(B^\omega)^\sim}-\bar{\pi}(1_A)\}^\perp, \end{aligned}$$ a $\mathrm C^*$-subalgebra of $B^\omega$ with unit $\bar{\pi}(1_A)$. 2. Let $\tau\in T_\omega(B_\omega)$ be a limit trace, let $a\in A_+$, and form the tracial functional $\rho:=\tau(\pi(a)\cdot)$ on $C$. Then $\|\rho\|=\tau(\pi(a))$. If each $B_n$ is additionally simple, $\mathcal Z$-stable and $Q\widetilde{T}(B_n) = \widetilde{T}_b(B_n) \neq 0$ for all $n \in {\mathbb{N}}$, then this holds for all traces $\tau\in T(B_\omega)$. The proof of [@BBSTWW Lemma 1.19] does not need the $B_n$ to be unital. Note that the notation $\{1_{B_\omega^\sim}-\pi(1_A)\}^\perp$ is just an alternative notation for the subalgebra on which $\pi(1_A)$ acts as a unit, and similarly for $\{1_{(B^\omega)^\sim}-\bar{\pi}(1_A)\}^\perp$. The final sentence of (ii) follows since $T_\omega(B_\omega)$ is weak$^*$-dense in $T(B_\omega)$ under the additional hypotheses by Theorem \[thm:NoSillyTraces\]. Next, we consider some properties of ultraproducts of separable, ${\mathcal Z}$-stable $\mathrm{C}^*$-algebras. \[lem:Zfacts\] Let $(B_n)_{n\in{\mathbb{N}}}$ be a sequence of separable, ${\mathcal Z}$-stable $\mathrm C^*$-algebras and set $B_\omega:=\prod_\omega B_n$.
Then: (i) \[lem:Zfacts2\] If $S\subseteq B_\omega$ is separable, then there exist isomorphisms $\phi_n:B_n \to B_n \otimes {\mathcal Z}$ such that the induced isomorphism $\Phi:B_\omega \to \prod_\omega (B_n \otimes {\mathcal Z})$ maps $x\in S$ to $x \otimes 1_{\mathcal Z} \in (\prod_\omega B_n) \otimes {\mathcal Z}\subseteq \prod_\omega (B_n\otimes{\mathcal Z})$. (ii) \[lem:Zfacts1\] Let $S_1,S_2 \subseteq B_\omega^\sim$ be separable and self-adjoint. For any separable subset $ T \subseteq B_\omega \cap S_1' \cap S_2^\perp$, there exists a c.p.c. order zero map $\psi:{\mathcal Z}\rightarrow B_\omega \cap S_1' \cap S_2^\perp \cap T'$ such that $\psi(1_{\mathcal Z})$ acts as a unit on $T$. (ii’) \[lem:Zfacts1’\] Let $S_1,S_2 \subseteq B_\omega^\sim$ be separable and self-adjoint. For any separable subalgebra $C \subseteq B_\omega \cap S_1' \cap S_2^\perp$, there exists a $^*$-homomorphism $\Psi:C \otimes {\mathcal Z}\rightarrow B_\omega \cap S_1' \cap S_2^\perp$ such that $\Psi(x \otimes 1_{\mathcal Z}) = x$ for all $x \in C$. (iii) \[lem:Zfacts4\] If each $B_n$ is projectionless, then $B_\omega$ has stable rank one in $B_\omega^\sim$. (iv) \[lem:Zfacts6\] If $S \subseteq B_\omega$ is separable and self-adjoint, and $b \in (B_\omega \cap S')_+$, then for any $n\in \mathbb{N}$ there exists $c\in (B_\omega\cap S')_+$ with $c \leq b$ such that $n[c] \leq [b] \leq (n+1)[c]$ in $W(B_\omega\cap S')$. Observe that (i) is the same as in [@BBSTWW Lemma 1.22.(i)] and follows as ${\mathcal Z}$ is strongly self-absorbing. For (ii), by Lemma \[lem:ActAsUnitNew\], there exists a positive contraction $h \in B_\omega \cap S_1' \cap S_2^\perp$ that acts as a unit on $T$. Set $S := S_1 \cup S_2 \cup T \cup \lbrace h \rbrace$. Let $\Phi:B_\omega \rightarrow\prod_{\omega} (B_n \otimes {\mathcal Z})$ be the isomorphism from (i) with $\Phi(x) = x \otimes 1_{\mathcal Z}$ for all $x \in S$. Define a c.p.c.
order zero map $\psi':{\mathcal Z}\rightarrow \prod_{\omega} (B_n \otimes {\mathcal Z}) $ by $\psi'(z) := h \otimes z$. By the choice of $h$, $\psi'(1_{\mathcal Z})$ acts as a unit on $T \otimes 1_{\mathcal Z}$ and the image of $\psi'$ lies in $(B_\omega \otimes 1_{\mathcal Z}) \cap (S_1 \otimes 1_{\mathcal Z})' \cap (S_2 \otimes 1_{\mathcal Z})^\perp \cap (T \otimes 1_{\mathcal Z})'$. Now set $\psi := \Phi^{-1} \circ \psi'$. For (ii’), applying part (ii) with $T := C$, we obtain a c.p.c. order zero map $\psi:{\mathcal Z}\rightarrow B_\omega \cap S_1' \cap S_2^\perp \cap C'$ such that $\psi(1_{\mathcal Z})$ acts as a unit on $C$. Since ${\mathcal Z}$ is nuclear, we can define a c.p.c. order zero map $\Psi:C \otimes {\mathcal Z}\rightarrow B_\omega \cap S_1' \cap S_2^\perp$ by $x \otimes z \mapsto x\psi(z)$. Let $x_1,x_2 \in C$ and $z_1,z_2 \in {\mathcal Z}$. Then $$\begin{aligned} \Psi(x_1 \otimes z_1)\Psi(x_2 \otimes z_2) &= x_1\psi(z_1)x_2\psi(z_2) \notag \\ &= x_1x_2\psi(1_{\mathcal Z})\psi(z_1z_2) \notag \\ &= x_1x_2\psi(z_1z_2) \notag \\ &= \Psi(x_1x_2 \otimes z_1z_2), \end{aligned}$$ where the second line uses that $\psi(z_1)$ commutes with $x_2$ together with the order zero identity $\psi(z_1)\psi(z_2) = \psi(1_{\mathcal Z})\psi(z_1z_2)$, and the third line uses that $\psi(1_{\mathcal Z})$ acts as a unit on $C$. Hence, $\Psi$ is in fact a $^*$-homomorphism. Moreover, we have $\Psi(x \otimes 1_{\mathcal Z}) = x\psi(1_{\mathcal Z}) = x$ for all $x \in C$. Part (iii) follows by combining Theorem \[thm:ZstableProjectionless\] with Proposition \[prop:SR1inUnitisationUltrapowers\]. For (iv), let $C$ be the $\mathrm{C}^*$-algebra generated by $b$. By (ii’), there is a $^*$-homomorphism $\Psi:C \otimes {\mathcal Z}\rightarrow B_\omega \cap S'$. By [@Ro04 Lemma 4.2], there exists $e_n \in {\mathcal Z}_{+,1}$ with $n[e_n] \leq [1_{\mathcal Z}] \leq (n+1)[e_n]$. Set $c := \Psi(b \otimes e_n)$. The following lemmas are crucial to the results of [@BBSTWW Section 5]. The proof of the first needs to be adapted slightly to the non-unital setting.
\[lem:Robertsr1New\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathcal Z$-stable $\mathrm C^*$-algebras and set $B_\omega :=\prod_\omega B_n$. Let $S \subseteq B_\omega$ be separable and self-adjoint, and let $d \in (B_\omega \cap S')_+$ be a contraction. Suppose that $x,f \in C:=B_\omega \cap S' \cap \{1_{B_\omega^\sim}-d\}^\perp$ are such that $xf=fx=0$, $f \geq 0$ and $f$ is full in $C$. Then $x$ is approximated by invertibles in $C^\sim$. By Lemma \[lem:ActAsUnitNew\] (with $T:=\{x^*x,xx^*\},S_1:=S,S_2:=\{1_{B_\omega^\sim}-d,f\}$), we obtain a contraction $e \in C_+$ such that $xx^*,x^*x\vartriangleleft e$ and $ef=0$. Polar decomposition yields $ex=xe=x$. As in the proof of [@BBSTWW Lemma 2.1], we may find a separable subalgebra $C_0$ of $C$ containing $x,e$, and $f$, such that $f$ is full in $C_0$. By Lemma \[lem:Zfacts\](ii’), there is a $^*$-homomorphism $\Psi: C_0 \otimes {\mathcal Z}\rightarrow C$ such that $\Psi(y \otimes 1_{\mathcal Z}) = y$ for all $y \in C_0$. By [@Rob16 Lemma 2.1], $x\otimes 1_{\mathcal Z}$ is a product of two nilpotent elements $n_1,n_2 \in C_0\otimes\mathcal Z$. It follows that $x = \Psi(x \otimes 1_{\mathcal Z}) = \Psi(n_1)\Psi(n_2)$ is the product of two nilpotent elements in $C$. If $y\in C$ is nilpotent and $\epsilon>0$, the operator $y+\epsilon 1_{C^\sim}$ is invertible in $C^\sim$ (with inverse $-\sum_{k=1}^N(-\epsilon)^{-k}y^{k-1}$, where $N\in\mathbb{N}$ satisfies $y^N=0$). Therefore, $x$ can be approximated by invertible elements in $C^\sim$. \[lem:GapInvertibles\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathcal Z$-stable $\mathrm C^*$-algebras and set $B_\omega :=\prod_\omega B_n$. Let $S \subseteq B_\omega$ be separable and self-adjoint, and let $d \in (B_\omega \cap S')_+$ be a contraction. Suppose that $x,s \in C:=B_\omega \cap S' \cap \{1_{B_\omega^\sim}-d\}^\perp$ are such that $xs=sx=0$ and $s$ is full in $C$. Then $x$ is approximated by invertibles in $C^\sim$.
Let $C_0$ be the $\mathrm{C}^*$-subalgebra of $C$ generated by $x$ and $s$. By Lemma \[lem:Zfacts\](ii’), there exists a $^*$-homomorphism $\Psi:C_0 \otimes {\mathcal Z}\rightarrow C$ with $\Psi(y \otimes 1_{\mathcal Z}) = y$ for all $y \in C_0$. Let $z_1,z_2 \in \mathcal Z_+$ be non-zero orthogonal elements. Set $s' := \Psi(s \otimes z_1)$ and $f:=\Psi(|s| \otimes z_2)$. As in the proof of [@BBSTWW Lemma 2.2], it follows by Lemma \[lem:Robertsr1New\] (with $s'$ in place of $x$) that $s'$ is approximated by invertibles in $C^\sim$. We finish the proof exactly as in the proof of [@BBSTWW Lemma 2.2] where we replace [@BBSTWW Lemma 1.17] with Lemma \[NewEpsLemmaNew\], and [@BBSTWW Lemma 2.1] with Lemma \[lem:Robertsr1New\]. \[lem:RSLem\] Let $B$ be a separable, $\mathcal Z$-stable $\mathrm C^*$-algebra and let $A$ be a separable, unital $\mathrm C^*$-algebra. Let $\pi:A\rightarrow B_\omega$ be a c.p.c. order zero map such that $$C:=B_\omega \cap \pi(A)' \cap \{1_{B^\sim_\omega}-\pi(1_A)\}^\perp$$ is full in $B_\omega$. Assume that every full hereditary subalgebra $D$ of $C$ satisfies the following: if $x \in D$ is such that there exist totally full elements $e_l,e_r \in D_+$ such that $e_l x = xe_r = 0$, then there exists a full element $s \in D$ such that $sx = xs = 0$. Let $e,f,f',\alpha, \beta \in C_+$ be such that $$\label{eq:RSLem2Hyp1} \alpha \vartriangleleft e,\quad \alpha \sim \beta \vartriangleleft f, \quad\text{and} \quad f \sim f' \vartriangleleft e.$$ Suppose also that there exist $d_e,d_f \in C_+$ that are totally full, such that $$\begin{aligned} \notag d_e &\vartriangleleft e, \quad d_e\alpha = 0, \quad \text{and} \\ d_f &\vartriangleleft f, \quad d_f\beta = 0.\label{eq:RSLem2Hyp2} \end{aligned}$$ Then there exists $e' \in C_+$ such that $$\begin{aligned} \alpha \vartriangleleft e' \vartriangleleft e, \quad \text{and} \quad \alpha+e' \sim \beta+f. \end{aligned}$$ The proof of [@RS10 Lemma 2] does not assume unitality. 
The proof from [@BBSTWW] is still valid after replacing [@BBSTWW Lemma 1.17] with Lemma \[NewEpsLemmaNew\], [@BBSTWW Lemma 2.2] with Lemma \[lem:GapInvertibles\], and using $1_{B^\sim_\omega}$ in place of $1_{B_\omega}$. The following lemma concerns the interplay between strict comparison and ultraproducts in the non-unital setting. \[lem:StrictCompLimTraces\] Let $(B_n)_{n=1}^\infty$ be a sequence of $\mathrm C^*$-algebras with $T(B_n)$ non-empty and set $B_\omega:=\prod_\omega B_n$. Suppose each $B_n$ has strict comparison of positive elements with respect to bounded traces. Then $B_\omega$ has strict comparison of positive elements with respect to limit traces, in the following sense: If $a,b \in M_k(B_\omega)_+$ for some $k\in\mathbb N$ satisfy $d_\tau(a) < d_\tau(b)$ for all $\tau$ in the weak$^{*}$-closure of $T_\omega(B_\omega)$, then $a \preceq b$. The proof is identical to that of [@BBSTWW Lemma 1.23]. However, it is important to note that, in the non-unital case, we do not necessarily have that $\overline{T_\omega(B_\omega)}^{w*} \subseteq T(B_\omega)$, as the latter need not be closed. Indeed, we may have $0 \in \overline{T_\omega(B_\omega)}^{w*}$, in which case $d_\tau(a) < d_\tau(b)$ cannot hold for all $\tau \in \overline{T_\omega(B_\omega)}^{w*}$. Finally, we record a technical lemma needed for the proof of the main theorem of the property (SI) section. \[lem:relSItrick\] Let $B$ be a simple, separable $\mathrm{C}^*$-algebra with $Q\widetilde T(B) = \widetilde{T}_b(B)\neq 0$. Suppose $B$ has strict comparison of positive elements by bounded traces. Let $A$ be a separable, unital $\mathrm{C}^*$-algebra and let $\pi:A\rightarrow B_\omega$ be a c.p.c. order zero map. Let $a \in A_+$ be a positive contraction of norm $1$.
Then there exists a countable set $S \subseteq A_+\setminus \{0\}$ such that the following holds: If $e,t,h \in (B_\omega \cap \pi(A)' \cap \lbrace 1_{B_\omega^\sim}-\pi(1_A)\rbrace^\perp)_+$ are contractions such that $$e \in J_{B_\omega}\quad \text{and} \quad h \vartriangleleft t,$$ and if for all $b \in S$, there exists $\gamma_b > 0$ such that $$\label{eq:relSItrickTr} \tau(\pi(b)h) > \gamma_b,\quad \tau\in T_\omega(B_\omega),$$ then there exists a contraction $r \in B_\omega$ such that $$\label{eq:relSItrickrDef} \pi(a)r=tr=r \quad \text{and} \quad r^*r=e.$$ The proof of [@BBSTWW Lemma 4.9] works in our situation using Lemma \[lem:StrictCompLimTraces\] in place of [@BBSTWW Lemma 1.23]. E. M. Alfsen. [*Compact convex sets and boundary integrals*]{}. Springer-Verlag, New York-Heidelberg, 1971. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 57. P. Ara, F. Perera, and A. S. Toms. $K$-theory for operator algebras. Classification of $\mathrm{C}^*$-algebras. In [*Aspects of operator algebras and applications*]{}, volume 534 of [*Contemp. Math.*]{}, pages 1–71. Amer. Math. Soc., Providence, RI, 2011. B. Blackadar. A simple $\mathrm{C}^*$-algebra with no nontrivial projections. , 78(4):504–508, 1980. B. Blackadar. [*Operator algebras: Theory of $\mathrm{C}^*$-algebras and von Neumann algebras*]{}, volume 122. Springer, 2006. B. Blackadar and J. Cuntz. The structure of stable algebraically simple $\mathrm{C}^*$-algebras. , 104(4):813–822, 1982. B. Blackadar and D. Handelman. Dimension functions and traces on $\mathrm{C}^*$-algebras. , 45(3):297–340, 1982. E. Blanchard and E. Kirchberg. Non-simple purely infinite $\mathrm{C}^*$-algebras: the Hausdorff case. , 207(2):461–513, 2004. J. Bosa, N. Brown, Y. Sato, A. Tikuisis, S. White, and W. Winter. Covering dimension of $\mathrm{C}^*$-algebras and $2$-coloured classification. , 257(1233):viii+97, 2019. J. Bosa, J. Gabe, A. Sims, and S. White. The nuclear dimension of $\mathcal{O}_\infty$-stable $\mathrm{C}^*$-algebras. arXiv:1906.02066. L. G. Brown.
Stable isomorphism of hereditary subalgebras of [$\mathrm{C}^*$]{}-algebras. , 71(2):335–348, 1977. L. G. Brown and G. K. Pedersen. [$\mathrm{C}^*$]{}-algebras of real rank zero. , 99(1):131–149, 1991. N. P. Brown. Invariant means and finite representation theory of [$\mathrm{C}^*$]{}-algebras. , 184(865):viii+105, 2006. N. P. Brown, J. Carri[ó]{}n, and S. White. Decomposable approximations revisited. In [*Operator algebras and applications: The Abel Symposium 2015. Abel Symposia 12.*]{}, pages 45–59. Springer, 2016. N. P. Brown and N. Ozawa. , volume 88. American Mathematical Soc., 2008. J. Castillejos. Decomposable approximations and approximately finite dimensional [$\mathrm{C^*}$]{}-algebras. , 162(1):1–12, 2017. J. Castillejos, S. Evington, A. Tikuisis, S. White, and W. Winter. Nuclear dimension of simple [$\mathrm{C}^*$]{}-algebras. arXiv:1901.05853. C. T. Conley, S. Jackson, D. Kerr, A. Marks, B. Seward, and R. Tucker-Drob. F[ø]{}lner tilings for actions of amenable groups. , 371(1–2):663–683, 2018. K. T. Coward, G. A. Elliott, and C. Ivanescu. The [C]{}untz semigroup as an invariant for [$\mathrm{C}^*$]{}-algebras. , 623:161–193, 2008. J. Cuntz and G. K. Pedersen. Equivalence and traces on [$\mathrm{C}^*$]{}-algebras. , 33(2):135–164, 1979. G. A. Elliott. An invariant for simple [$\mathrm{C}^*$]{}-algebras. In [*Canadian [M]{}athematical [S]{}ociety. 1945–1995, [V]{}ol. 3*]{}, pages 61–90. Canadian Math. Soc., Ottawa, ON, 1996. G. A. Elliott, G. Gong, and L. Li. On the classification of simple inductive limit [$\mathrm{C}^*$]{}-algebras. [II]{}. [T]{}he isomorphism theorem. , 168(2):249–320, 2007. G. A. Elliott, G. Gong, H. Lin, and Z. Niu. On the classification of simple amenable [$\mathrm{C}^*$]{}-algebras with finite decomposition rank, [II]{}. arXiv:1507.03437. G. A. Elliott, G. Gong, H. Lin, and Z. Niu. Simple stably projectionless [$\mathrm{C}^*$]{}-algebras with generalized tracial rank one. arXiv:1711.01240. G. A. Elliott, G. Gong, H. Lin, and Z. Niu.
The classification of simple separable [KK]{}-contractible [$\mathrm{C}^*$]{}-algebras with finite nuclear dimension. arXiv:1712.09463. G. A. Elliott, L. Robert, and L. Santiago. The cone of lower semicontinuous traces on a [$\mathrm{C}^*$]{}-algebra. , 133(4):969–1005, 2011. I. Farah and T. Katsura. Nonseparable [UHF]{} algebras [I]{}: [D]{}ixmier’s problem. , 225(3):1399–1430, 2010. J. Giol and D. Kerr. Subshifts and perforation. , 639:107–119, 2010. G. Gong and H. Lin. On classification of non-unital simple amenable [$\mathrm{C}^*$]{}-algebras, [I]{}. arXiv:1611.04440. G. Gong and H. Lin. On classification of non-unital simple amenable [$\mathrm{C}^*$]{}-algebras, [II]{}. arXiv:1702.01073. G. Gong, H. Lin, and Z. Niu. Classification of finite simple amenable [$\mathcal{Z}$]{}-stable [$\mathrm{C}^*$]{}-algebras. arXiv:1501.00135. U. Haagerup. Quasitraces on exact [$\mathrm{C}^*$]{}-algebras are traces. , 36(2-3):67–92, 2014. B. Jacelon. A simple, monotracial, stably projectionless [$\mathrm{C}^\ast$]{}-algebra. , 87(2):365–383, 2013. X. Jiang and H. Su. On a simple unital projectionless [$\mathrm{C}^*$]{}-algebra. , 121(2):359–413, 1999. D. Kerr and G. Szab[ó]{}. Almost finiteness and the small boundary property. arXiv:1807.04326. E. Kirchberg. Exact [C$^*$]{}-algebras, tensor products, and the classification of purely infinite algebras. In [*Proceedings of the [I]{}nternational [C]{}ongress of [M]{}athematicians, [V]{}ol. 1, 2 ([Z]{}ürich, 1994)*]{}, pages 943–954. Birkhäuser, Basel, 1995. E. Kirchberg. Central sequences in [$\mathrm{C}^*$]{}-algebras and strongly purely infinite algebras. In [*Operator [A]{}lgebras: [T]{}he [A]{}bel [S]{}ymposium 2004*]{}, volume 1 of [*Abel Symp.*]{}, pages 175–231. Springer, Berlin, 2006. E. Kirchberg and M. R[ø]{}rdam. Non-simple purely infinite [$\mathrm{C}^*$]{}-algebras.
, 122(3):637–666, 2000. E. Kirchberg and W. Winter. Covering dimension and quasidiagonality. , 15(1):63–85, 2004. A. Kishimoto and A. Kumjian. Simple stably projectionless [$\mathrm{C}^*$]{}-algebras arising as crossed products. , 48(5):980–996, 1996. D. McDuff. Central sequences and the hyperfinite factor. , 21:443–461, 1970. F. Murray and J. von Neumann. On rings of operators. [IV]{}. , 44:716–808, 1943. H. Matui and Y. Sato. [$\mathcal{Z}$]{}-stability of crossed products by strongly outer actions. , 314(1):193–228, 2012. P. W. Ng and L. Robert. Sums of commutators in pure [$\mathrm{C}^*$]{}-algebras. , 9(1):121–154, 2016. N. Ozawa. Dixmier approximation and symmetric amenability for [$\rm C^*$]{}-algebras. , 20(3):349–374, 2013. G. K. Pedersen. Measure theory for [$\rm C^{\ast} $]{}-algebras. [III]{}, [IV]{}. , 25:121–127, 1969. G. K. Pedersen. , volume 14 of [*London Mathematical Society Monographs*]{}. Academic Press, Inc. \[Harcourt Brace Jovanovich, Publishers\], London-New York, 1979. N. C. Phillips. A classification theorem for nuclear purely infinite simple [$\mathrm{C}^*$]{}-algebras. , 5:49–114, 2000. S. Razak. On the classification of simple stably projectionless [$\mathrm{C}^*$]{}-algebras. , 54(1):138–224, 2002. M. A. Rieffel. Dimension and stable rank in the [$K$]{}-theory of [$\mathrm{C}^*$]{}-algebras. , 46(2):301–333, 1983. L. Robert. Remarks on [$\mathcal{Z}$]{}-stable projectionless [$\mathrm{C}^*$]{}-algebras. , 58(2):273–277, 2016. L. Robert and L. Santiago. Classification of [$\mathrm{C}^\ast$]{}-homomorphisms from [$C_0(0,1]$]{} to a [$\mathrm{C}^\ast$]{}-algebra. , 258(3):869–892, 2010. M. R[ø]{}rdam. A simple [$\mathrm{C}^*$]{}-algebra with a finite and an infinite projection. , 191(1):109–142, 2003. M. R[ø]{}rdam. The stable and the real rank of [$\mathcal{Z}$]{}-absorbing [$\mathrm{C}^*$]{}-algebras. , 15(10):1065–1084, 2004. J. Rosenberg and C. Schochet.
The Künneth theorem and universal coefficient theorem for Kasparov’s generalized $\mathrm{K}$-functor. , 55(2), 431–474, 1987. A. Tikuisis. Nuclear dimension, [$\mathcal {Z}$]{}-stability, and algebraic simplicity for stably projectionless [$C^\ast$]{}-algebras. , 358(3-4):729–778, 2014. A. Tikuisis, S. White, and W. Winter. Quasidiagonality of nuclear [$\mathrm{C}^*$]{}-algebras. , 185(1):229–284, 2017. A. Tikuisis and W. Winter. Decomposition rank of [${\mathcal Z}$]{}-stable [$\mathrm{C}^*$]{}-algebras. , 7(3):673–700, 2014. A. Tikuisis and A. S. Toms. On the structure of [C]{}untz semigroups in (possibly) nonunital [$\mathrm{C}^*$]{}-algebras. , 58(2):402–414, 2015. A. S. Toms. Flat dimension growth for [$\mathrm{C}^*$]{}-algebras. , 238(2):678–708, 2006. A. S. Toms. On the classification problem for nuclear [$\mathrm{C}^*$]{}-algebras. , pages 1029–1044, 2008. A. S. Toms and W. Winter. Strongly self-absorbing [$\mathrm{C}^*$]{}-algebras. , 359(8):3999–4029, 2007. A. S. Toms and W. Winter. The [E]{}lliott conjecture for [V]{}illadsen algebras of the first type. , 256(5):1311–1340, 2009. K. W. Tsang. On the positive tracial cones of simple stably projectionless [$C^*$]{}-algebras. , 227(1):188–199, 2005. J. Villadsen. Simple [$\mathrm{C}^*$]{}-algebras with perforation. , 154(1):110–116, 1998. J. Villadsen. On the stable rank of simple [$\mathrm{C}^*$]{}-algebras. , 12(4):1091–1102, 1999. W. Winter. Covering dimension for nuclear [$\mathrm{C}^*$]{}-algebras. [II]{}. , 361(8):4143–4167, 2009. W. Winter. Nuclear dimension and [$\mathcal{Z}$]{}-stability of pure [$\rm C^*$]{}-algebras. , 187(2):259–342, 2012. W. Winter. Localizing the [E]{}lliott conjecture at strongly self-absorbing [$C^*$]{}-algebras. , 692:193–231, 2014. W. Winter and J. Zacharias. Completely positive maps of order zero. , 2:311–324, 2009. W. Winter and J. Zacharias. The nuclear dimension of [$\mathrm{C}^*$]{}-algebras. , 224(2):461–498, 2010. [^1]: i.e. 
(a) $A$ is a simple, separable, nuclear and ${\mathcal Z}$-stable $\mathrm{C}^*$-algebra if and only if $A \otimes \mathbb{K}$ is likewise, and (b) $\dim_{\mathrm{nuc}} A \leq 1$ if and only if $\dim_{\mathrm{nuc}} (A \otimes \mathbb{K}) \leq 1$; see Proposition \[thm:StableIsoInvariants\] for details and references. [^2]: In the non-separable case, there are different and non-equivalent definitions of approximately finite dimensional ([@FK10]). The one required here is that any finite set is approximately contained in a finite dimensional subalgebra; see for example [@Cas17 Definition 2.2]. [^3]: Strictly speaking, a *2-quasitrace*; however, we shall not need this terminology. [^4]: We use the terminology *additive quasitrace* because we are reserving the word *trace* for tracial states. For additive quasitraces, condition (iii) is automatic with $\tau_2$ given by the usual formula. [^5]: In [@ERS11], the notation $T(A)$ is used instead of $\widehat{T}(A)$, but this clashes with the notation for the tracial states used in this paper. [^6]: See Definition \[dfn:QDtraces\]. [^7]: Indeed, let $J_\tau := \lbrace (a_n)_{n \in{\mathbb{N}}} \in \ell^\infty(A): \lim_{n\rightarrow\infty}\tau(a_n^*a_n) = 0\rbrace$ and consider the diagonal map $(\hat\eta_n):F \rightarrow \ell^\infty(A)/J_\tau$. This map is c.p.c. order zero and so has a c.p.c. order zero lift $(\eta_n): F \rightarrow \ell^\infty(A)$ by [@Wi09 Proposition 1.2.4]. [^8]: Our convention is that approximate units for C$^*$-algebras are by default assumed to be increasing. [^9]: Suppose $\phi_2(x) = \pi_2(t \otimes x)$ where $\pi_2:C_0(0,1] \otimes A \rightarrow B_\omega$ is a $^*$-homomorphism and $t$ is the canonical generator of $C_0(0,1]$. Then $\phi_2^m(x) = \pi_2(t^m \otimes x)$; see [@WZ09 Corollary 4.2]. [^10]: Recall that a non-zero $h \in C_+$ is *totally full* if $f(h)$ is full in $C$ for every non-zero $f \in C_0((0,\|h\|])_+$ [@BBSTWW Definition 1.1]. 
[^11]: When $A$ is separable, one can work with sequences instead of general nets. [^12]: One can reduce to the unital case because ${\mathrm{dr}}(A) = {\mathrm{dr}}(A^\sim)$ [@KW04 Proposition 3.4]. Then $T(A) = T_{QD}(A)$ by [@BBSTWW Proposition 8.5]. Stable finiteness of $A$ follows from [@KW04 Proposition 5.1] and [@Br08 Theorem 7.1.15] for example. [^13]: For example, if $C$ is unital, then $h_1,h_2$ are unitarily equivalent in $C$ if and only if they are unitarily equivalent in $C \oplus \mathbb{C}$.
--- abstract: 'Several integrable semi-discretizations are known in the literature for the massive Thirring system in characteristic coordinates. We present for the first time an integrable semi-discretization of the massive Thirring system in laboratory coordinates. Our approach relies on the relation between the continuous massive Thirring system and the Ablowitz–Ladik lattice. In particular, we derive the Lax pair for the integrable semi-discretization of the massive Thirring system by combining together the time evolution of the massive Thirring system in laboratory coordinates and the Bäcklund transformation for solutions to the Ablowitz–Ladik lattice.' author: - | Nalini Joshi$^{1}$ and Dmitry E. Pelinovsky$^{2}$\ [$^{1}$ School of Mathematics and Statistics F07, University of Sydney, NSW 2006, Australia]{}\ [$^{2}$ Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada, L8S 4K1]{} title: '**Integrable semi-discretization of the massive Thirring system in laboratory coordinates**' --- Introduction ============ The purpose of this work is to find an integrable semi-discretization of the massive Thirring model (MTM) in laboratory coordinates, which is a relativistically invariant nonlinear Dirac equation derived in quantum field theory [@Thirring]. In the space of (1+1) dimensions, the MTM can be written as a system of two semi-linear equations for $(u,v) \in \mathbb{C}^2$ in the normalized form: $$\label{MTM} \left\{ \begin{array}{l} \displaystyle i \left( \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} \right) + v = |v|^2 u, \\ \displaystyle i \left( \frac{\partial v}{\partial t} - \frac{\partial v}{\partial x} \right) + u = |u|^2 v. \end{array} \right.$$ The standard Cauchy problem is posed in the time coordinate $t \in \mathbb{R}$ for the initial data $(u_0,v_0)$ extended in the spatial coordinate $x \in \mathbb{R}$.
Solutions of the Cauchy problem were studied recently in [@Candy1; @Candy2; @Huh11; @Huh13; @Huh15; @SelTes; @Zhang; @ZhangZhao]. Integrability of the MTM follows from the existence of the following Lax operators: $$\label{lablax1} L(\lambda;u,v) = \frac{1}{2} (|u|^2-|v|^2) \sigma_3 + \left[ \begin{matrix} 0 & \lambda v + \lambda^{-1} u \\ \lambda \bar{v} + \lambda^{-1} \bar{u} & 0 \end{matrix}\right] + \frac{1}{2}\left(\lambda^2 - \lambda^{-2} \right) \sigma_3,$$ and $$\label{lablax2} A(\lambda;u,v) = -\frac{1}{2}(|u|^2+|v|^2)\sigma_3 + \left[\begin{matrix} 0 & \lambda v - \lambda^{-1} u \\ \lambda \bar{v} - \lambda^{-1} \bar{u} & 0 \end{matrix}\right] +\frac{1}{2}\left(\lambda^2 + \lambda^{-2}\right)\sigma_3,$$ where $\lambda$ is a spectral parameter, $\sigma_3 = {\rm diag}(1,-1)$ is the third Pauli spin matrix, and $(\bar{u},\bar{v})$ stands for the complex conjugate of $(u,v)$. The MTM system (\[MTM\]) is the compatibility condition $$\label{compatibility} \frac{\partial^2 \vec{{\varphi}}}{\partial x \partial t} = \frac{\partial^2 \vec{{\varphi}}}{\partial t \partial x},$$ for $\vec{{\varphi}} \in \mathbb{C}^2$ satisfying the following Lax equations: $$\label{laxeq} -2i \frac{\partial \vec{{\varphi}}}{\partial x} = L(\lambda;u,v) \vec{{\varphi}}\quad \mbox{and}\quad -2i \frac{\partial \vec{{\varphi}}}{\partial t} = A(\lambda;u,v) \vec{{\varphi}}.$$ Lax operators in the form (\[lablax1\]) and (\[lablax2\]) were introduced by Mikhailov [@Mikhailov] and then studied in [@KN; @KM]. More recently, the same Lax operators have been used for many purposes, e.g. for the inverse scattering transform [@KMI; @KW; @PelSaal; @Villarroel; @W], for spectral stability of solitary waves [@KL], for orbital stability of Dirac solitons [@Yusuke1; @Yusuke2], and for construction of rogue waves [@Deg]. Numerical simulations of nonlinear PDEs rely on spatial semi-discretizations obtained either with finite-difference or spectral methods. 
Because the energy functional of the MTM system (\[MTM\]) near the zero equilibrium $(u,v) = (0,0)$ is not bounded either from above or from below, the spatial discretizations of the MTM system (like in many other massive Dirac models) suffer from spurious eigenvalues and other numerical instabilities [@BZ; @Kevrekidis; @Cuevas; @Lakoba; @Shao]. An integrable semi-discretization would preserve the integrability scheme and model the dynamics of nonlinear waves without such spurious instabilities and other artifacts. This is why it is important to derive an integrable semi-discretization of the MTM system (\[MTM\]). Since the pioneering works of Ablowitz and Ladik on equations related to the Ablowitz–Kaup–Newell–Segur (AKNS) spectral problem [@AL; @AL2], it is well known that continuous nonlinear evolution equations integrable with the inverse scattering transform can be semi-discretized in spatial coordinates or fully discretized in both spatial and temporal coordinates in such a way as to preserve integrability [@Joshi-book]. Since the literature on this subject is vast, we shall restrict our attention only to the relevant results on the massive Thirring model. With the rotation of coordinates $$\label{transformation} \tau = \frac{1}{2} (t-x), \quad \xi = -\frac{1}{2} (x+t),$$ the MTM system (\[MTM\]) in laboratory coordinates $(x,t)$ can be rewritten in characteristic coordinates $(\xi,\tau)$: $$\label{MTM-char} \left\{ \begin{array}{l} \displaystyle -i \frac{\partial u}{\partial \xi} + v = |v|^2 u, \\ \displaystyle i \frac{\partial v}{\partial \tau} + u = |u|^2 v. \end{array} \right.$$ The Cauchy problem in the time coordinate $\tau$ and the spatial coordinate $\xi$ for the system (\[MTM-char\]) corresponds to the Goursat problem for the system (\[MTM\]) and vice versa. Therefore, the spatial discretization of the system (\[MTM-char\]) in the spatial coordinate $\xi$ is not relevant for the Cauchy problem for the system (\[MTM\]).
Unfortunately, it is the only integrable discretization available for the MTM up to now, thanks to the relatively simple connection of the Lax operators for the system (\[MTM-char\]) to the first negative flow of the Kaup–Newell (KN) spectral problem [@KN]. The first result on the integrable discretizations of the MTM system in characteristic coordinates goes back to the works of Nijhoff [*et al.*]{} in [@Nijhoff1; @Nijhoff2]. By using the Bäcklund transformation for the continuous equations related to the KN spectral problem [@Nijhoff1], integrable semi-discretizations in $\xi$ or full discretizations in $\xi$ and $\tau$ were obtained for the MTM system (\[MTM-char\]) and its equivalent formulations [@Nijhoff2]. Since the relevant Bäcklund transformation contains a square root singularity [@Nijhoff1], the resulting discretizations inherit a square root singularity [@Nijhoff2], which may cause problems because of ambiguity in the choice of square root branches and sign-indefinite expressions under the square root signs. Unlike the continuous system (\[MTM-char\]), the spatial discretizations constructed in [@Nijhoff2] were not written in terms of the cubic nonlinear terms. In a different direction, Tsuchida in [@Tsuchida] explored a gauge transformation of the KN spectral problem to the AKNS spectral problem and constructed integrable semi-discretizations of nonlinear equations related to the KN spectral problem. The semi-discretization constructed in [@Tsuchida] had cubic nonlinearity but had a limitation of a different kind. The complex conjugate symmetry of the semi-discrete MTM system was related to the lattice shift by half of the lattice spacing, where the variables were not defined. 
In the latest work [@Tsuchida-preprint], Tsuchida obtained another semi-discretization of the MTM system (\[MTM-char\]) by generalizing the Ablowitz–Ladik (AL) spectral problem [@AL3] and Bäcklund–Darboux transformations for nonlinear equations related to the AL spectral problem [@Tsuchida2010; @Vek; @Zullo]. The new semi-discretization of the MTM system (\[MTM-char\]) in [@Tsuchida-preprint] contains the cubic nonlinearity and the complex conjugate reduction, which resemble those in the continuous system (\[MTM-char\]). How to transfer integrable semi-discretizations of the second-order equations in characteristic coordinates (such as the sine–Gordon equation or the MTM system) to the integrable semi-discretizations of these equations in laboratory coordinates has been considered an open problem for many years. The main obstacle here is that the rotation of coordinates mixes positive and negative powers of the spectral parameter $\lambda$ in the Lax operators related to the continuous case, hence the semi-discretization scheme needs to be revised. At the same time, the temporal and spatial coordinates are already different in the semi-discrete case (one is continuous and the other one is discrete) so that the rotation of coordinates produces a complicated difference-differential equation. Since it is the Cauchy problem for the MTM system in laboratory coordinates that is used in most physical applications of the MTM system (\[MTM\]), constructing a proper integrable semi-discretization of it becomes relevant and important. The present work solves the problem stated above. We derive the semi-discrete MTM system in laboratory coordinates together with its Lax pair by implementing the method of Tsuchida from his recent work [@Tsuchida-preprint]. In particular, we combine together the time evolution of the massive Thirring system in laboratory coordinates and the Bäcklund transformation for solutions to the Ablowitz–Ladik lattice.
The main result is stated in Section \[sec-2\]. The proof of the main result appears in Section \[sec-3\]. The conclusion is given in Section \[sec-4\]. Main result {#sec-2} =========== We start with the gauge-modified Lax operators for the MTM system (\[MTM\]) derived by Barashenkov and Getmanov in [@BG87]: $$\label{lablax1-new} \mathcal{L}(\lambda;u,v) = \left[\begin{matrix} \lambda^2 - |v|^2 & \lambda v + \lambda^{-1} u \\ \lambda \bar{v} + \lambda^{-1} \bar{u} & \lambda^{-2} - |u|^2 \end{matrix}\right],$$ and $$\label{lablax2-new} \mathcal{A}(\lambda;u,v) = \left[\begin{matrix} \lambda^2 - |v|^2 & \lambda v - \lambda^{-1} u \\ \lambda \bar{v} - \lambda^{-1} \bar{u} & -\lambda^{-2} + |u|^2 \end{matrix}\right].$$ The gauge-modified Lax formulation (\[lablax1-new\])–(\[lablax2-new\]) differs from the classical (zero-trace) Lax formulation (\[lablax1\])–(\[lablax2\]) only in the diagonal terms. The MTM system (\[MTM\]) still arises as the compatibility condition (\[compatibility\]), where $\vec{{\varphi}} \in \mathbb{C}^2$ satisfies the Lax equations $$\label{laxeq-new} -2i \frac{\partial \vec{{\varphi}}}{\partial x} = \mathcal{L}(\lambda;u,v) \vec{{\varphi}}\quad \mbox{and}\quad -2i \frac{\partial \vec{{\varphi}}}{\partial t} = \mathcal{A}(\lambda;u,v) \vec{{\varphi}},$$ which are related to the new operators $\mathcal{L}$ and $\mathcal{A}$ in (\[lablax1-new\])–(\[lablax2-new\]). 
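Cross-differentiation of the Lax equations (\[laxeq-new\]) shows that compatibility is equivalent to the zero-curvature condition $\mathcal{L}_t - \mathcal{A}_x + \frac{i}{2}\left(\mathcal{L}\mathcal{A} - \mathcal{A}\mathcal{L}\right) = 0$, which holds if and only if $(u,v)$ satisfies the MTM system (\[MTM\]). This can be checked symbolically; the following sketch (ours, not from the paper) treats $u$, $v$ and their conjugates as independent symbols and eliminates the time derivatives using the MTM equations:

```python
import sympy as sp

lam = sp.symbols('lam')
u, v, ub, vb = sp.symbols('u v ub vb')          # u, v and their complex conjugates
ux, vx, ubx, vbx = sp.symbols('ux vx ubx vbx')  # spatial derivatives

I = sp.I
# time derivatives dictated by the MTM system:
#   i(u_t + u_x) + v = |v|^2 u,   i(v_t - v_x) + u = |u|^2 v
ut = -ux - I*(v*vb*u - v)
vt = vx - I*(u*ub*v - u)
ubt = -ubx + I*(v*vb*ub - vb)                   # conjugate equations
vbt = vbx + I*(u*ub*vb - ub)

L = sp.Matrix([[lam**2 - v*vb, lam*v + u/lam],
               [lam*vb + ub/lam, lam**(-2) - u*ub]])
A = sp.Matrix([[lam**2 - v*vb, lam*v - u/lam],
               [lam*vb - ub/lam, -lam**(-2) + u*ub]])

# entrywise derivatives of L in t and of A in x
Lt = sp.Matrix([[-(vt*vb + v*vbt), lam*vt + ut/lam],
                [lam*vbt + ubt/lam, -(ut*ub + u*ubt)]])
Ax = sp.Matrix([[-(vx*vb + v*vbx), lam*vx - ux/lam],
                [lam*vbx - ubx/lam, ux*ub + u*ubx]])

# zero-curvature condition coming from -2i phi_x = L phi, -2i phi_t = A phi
Z = (Lt - Ax + (I/2)*(L*A - A*L)).applyfunc(sp.expand)
assert Z == sp.zeros(2, 2)
```

All four matrix entries cancel identically for every value of $\lambda$, confirming that the gauge-modified operators encode the MTM system.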
The main result of this work is a derivation of the following spatial discretization of the MTM system (\[MTM\]) suitable for the Cauchy problem in the laboratory coordinates: $$\label{MTM-discrete} \left\{ \begin{array}{l} \displaystyle 4 i \frac{d U_n}{dt} + Q_{n+1} + Q_n + \frac{2i}{h} (R_{n+1}-R_n) + U_n^2 (\bar{R}_n + \bar{R}_{n+1}) \\ \qquad - U_n (|Q_{n+1}|^2 + |Q_n|^2 + |R_{n+1}|^2 + |R_n|^2) - \frac{ih}{2} U_n^2 (\bar{Q}_{n+1}-\bar{Q}_n) = 0, \\ \displaystyle -\frac{2i}{h} (Q_{n+1}-Q_n) + 2 U_n - |U_n|^2 (Q_{n+1} + Q_n) = 0, \\ \displaystyle R_{n+1} + R_n - 2 U_n + \frac{ih}{2} |U_n|^2 (R_{n+1} - R_n) = 0, \end{array} \right.$$ where $h$ is the lattice spacing parameter and $n$ is the discrete lattice variable. In the limit $h \to 0$ the slowly varying solutions between the lattice nodes can be represented by $$U_n(t) = U(x=hn,t), \quad R_n(t) = R(x = hn,t), \quad Q_n(t) = Q(x=nh,t),$$ where the field variables satisfy the continuum limit of the system (\[MTM-discrete\]) given by $$\label{MTM-continuous} \left\{ \begin{array}{l} \displaystyle 2 i \frac{\partial U}{\partial t} + i \frac{\partial R}{\partial x} + Q + U^2 \bar{R} - U (|Q|^2 + |R|^2) = 0, \\ \displaystyle -i \frac{\partial Q}{\partial x} + U - |U|^2 Q = 0, \\ \displaystyle R - U = 0. \end{array} \right.$$ The system (\[MTM-continuous\]) in variables $U(x,t) = u(x,t-x)$ and $Q(x,t) = v(x,t-x)$ yields the MTM system (\[MTM\]). Therefore, the system (\[MTM-discrete\]) is a proper integrable spatial semi-discretization of the continuous MTM system in laboratory coordinates. Note that the last two difference equations of the system (\[MTM-discrete\]) are uncoupled between the components $\{ R_n \}_{n \in \mathbb{Z}}$ and $\{ Q_n \}_{n \in \mathbb{Z}}$ and, moreover, they are linear with respect to $\{ R_n \}_{n \in \mathbb{Z}}$ and $\{ Q_n \}_{n \in \mathbb{Z}}$. 
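Because the last two equations of (\[MTM-discrete\]) are linear one-step recursions, they can be solved by forward substitution once $\{U_n\}_{n \in \mathbb{Z}}$ is prescribed. A minimal numerical sketch (with illustrative data, not taken from the paper) builds $Q_n$ and $R_n$ this way and verifies both difference equations:

```python
import numpy as np

h = 0.1
rng = np.random.default_rng(0)
U = np.zeros(40, dtype=complex)               # compactly supported sequence U_n
U[10:30] = rng.normal(size=20) + 1j*rng.normal(size=20)

Q = np.zeros_like(U)
R = np.zeros_like(U)
for n in range(len(U) - 1):
    w = 1j*h*abs(U[n])**2/2
    # from -2i/h (Q_{n+1}-Q_n) + 2U_n - |U_n|^2 (Q_{n+1}+Q_n) = 0
    Q[n+1] = ((1 + w)*Q[n] - 1j*h*U[n])/(1 - w)
    # from R_{n+1} + R_n - 2U_n + (ih/2)|U_n|^2 (R_{n+1}-R_n) = 0
    R[n+1] = (2*U[n] - (1 - w)*R[n])/(1 + w)

# residuals of the two difference equations
U2 = np.abs(U[:-1])**2
resQ = -2j/h*(Q[1:] - Q[:-1]) + 2*U[:-1] - U2*(Q[1:] + Q[:-1])
resR = R[1:] + R[:-1] - 2*U[:-1] + 1j*h/2*U2*(R[1:] - R[:-1])
assert np.max(np.abs(resQ)) < 1e-10 and np.max(np.abs(resR)) < 1e-10
```

The one-step factors $1 \pm ih|U_n|^2/2$ appearing in the forward substitution are exactly the factors that accumulate into the products in the explicit summation formulas.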
If the sequence $\{ U_n \}_{n \in \mathbb{Z}}$ decays to zero as $|n| \to \infty$ sufficiently fast, then one can express $R_n$ and $Q_n$ in an explicit form by $$\label{expression-R-n} R_n = 2 \sum_{k=-\infty}^{n-1} (-1)^{n-k-1} U_k \frac{\prod_{m=k+1}^{n-1} (1 - i h |U_m|^2/2)}{\prod_{m=k}^{n-1} (1 + i h |U_m|^2/2)},$$ and $$\label{expression-Q-n} Q_n = -ih \sum_{k=-\infty}^{n-1} U_k \frac{\prod_{m=k+1}^{n-1} (1 + i h |U_m|^2/2)}{\prod_{m=k}^{n-1} (1 - i h |U_m|^2/2)}.$$ The time evolution of $U_n$ is obtained by solving the first differential equation of the system (\[MTM-discrete\]). Although the representations (\[expression-R-n\]) and (\[expression-Q-n\]) express $\{ R_n \}_{n \in \mathbb{Z}}$ and $\{ Q_n \}_{n \in \mathbb{Z}}$ in terms of $\{ U_n \}_{n \in \mathbb{Z}}$, it may be easier computationally to solve the last two difference equations of the system (\[MTM-discrete\]) instantaneously at every time $t \in \mathbb{R}$. As follows from our construction, the semi-discrete MTM system (\[MTM-discrete\]) is the compatibility condition $$\label{Lax-discrete} -2i\frac{d}{dt} N_n = P_{n+1} N_n - N_n P_n,$$ for $\vec{{\varphi}} \in \mathbb{C}^2$ on $n \in \mathbb{Z}$ and $t \in \mathbb{R}$ satisfying the following Lax equations: $$\label{Lax-N-P} \vec{{\varphi}}_{n+1} = N_n \vec{{\varphi}}_n, \quad -2i \frac{d\vec{{\varphi}}_n}{dt} = P_n \vec{{\varphi}}_n,$$ associated with the Lax operators $$\label{Lax-N} N_n := \left[ \begin{array}{cc} \lambda + 2i h^{-1} \lambda^{-1} \frac{1 + i h |U_n|^2/2}{1 - i h |U_n|^2/2} & \frac{2 U_n}{1 - i h |U_n|^2/2} \\ \frac{2 \bar{U}_n}{1 - i h |U_n|^2/2} & -\lambda \frac{1 + i h |U_n|^2/2}{1 - i h |U_n|^2/2} + 2i h^{-1} \lambda^{-1} \end{array} \right]$$ and $$\label{Lax-P} P_n := \left[\begin{matrix} \lambda^2 - |R_n|^2 & \lambda R_n - \lambda^{-1} Q_n \\ \lambda \bar{R}_n - \lambda^{-1} \bar{Q}_n & -\lambda^{-2} + |Q_n|^2 \end{matrix}\right].$$ The trivial zero solution satisfies the semi-discrete MTM system 
(\[MTM-discrete\]) and reduces the Lax equations (\[Lax-N-P\]) to uncoupled equations for components of $\vec{{\varphi}}$, which are readily solvable. Non-trivial solutions of the semi-discrete MTM system (\[MTM-discrete\]) will be constructed in future work. Proof of the main result {#sec-3} ======================== Here we derive the semi-discrete MTM system (\[MTM-discrete\]) by using the method of Tsuchida [@Tsuchida-preprint] with suitable modifications. Since the work [@Tsuchida-preprint] relies on the MTM system in characteristic coordinates (\[MTM-char\]) and the Ablowitz–Ladik lattice, we will incorporate these details in our subsequent analysis. The proof is broken into several subsections. MTM system in characteristic coordinates ---------------------------------------- The MTM system (\[MTM-char\]) is the compatibility condition $$\frac{\partial^2 \vec{{\varphi}}}{\partial \xi \partial \tau} = \frac{\partial^2 \vec{{\varphi}}}{\partial \tau \partial \xi}$$ for $\vec{{\varphi}} \in \mathbb{C}^2$ satisfying the following Lax equations: $$\label{Lax-equation} i \frac{\partial \vec{{\varphi}}}{\partial \xi} = \left[ \begin{array}{cc} \lambda^2 - |v|^2 & \lambda v \\ \lambda \bar{v} & 0 \end{array} \right] \vec{{\varphi}} \quad \mbox{\rm and} \quad i \frac{\partial \vec{{\varphi}}}{\partial \tau} = \left[ \begin{array}{cc} 0 & \lambda^{-1} u \\ \lambda^{-1} \bar{u} & \lambda^{-2} - |u|^2 \end{array} \right] \vec{{\varphi}}.$$ Consequently, by using the transformation (\[transformation\]) in the Lax equations (\[Lax-equation\]), we obtain the Lax equations (\[laxeq\]) with $L$ and $A$ given by (\[lablax1-new\]) and (\[lablax2-new\]).
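To make the last statement explicit, write $M_1$ and $M_2$ for the first and second matrix in (\[Lax-equation\]), so that $i \partial_\xi \vec{{\varphi}} = M_1 \vec{{\varphi}}$ and $i \partial_\tau \vec{{\varphi}} = M_2 \vec{{\varphi}}$. The chain rule for the transformation (\[transformation\]) gives $$\frac{\partial}{\partial x} = -\frac{1}{2} \left( \frac{\partial}{\partial \tau} + \frac{\partial}{\partial \xi} \right), \qquad \frac{\partial}{\partial t} = \frac{1}{2} \left( \frac{\partial}{\partial \tau} - \frac{\partial}{\partial \xi} \right),$$ so that $$-2i \frac{\partial \vec{{\varphi}}}{\partial x} = (M_1 + M_2) \vec{{\varphi}} = \mathcal{L}(\lambda;u,v) \vec{{\varphi}} \quad \mbox{\rm and} \quad -2i \frac{\partial \vec{{\varphi}}}{\partial t} = (M_1 - M_2) \vec{{\varphi}} = \mathcal{A}(\lambda;u,v) \vec{{\varphi}},$$ since $M_1 + M_2$ and $M_1 - M_2$ reproduce the matrices in (\[lablax1-new\]) and (\[lablax2-new\]) entry by entry.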
The Lax equations (\[Lax-equation\]) with triangular matrices are different from the classical Lax equations with zero-trace matrices [@KM]: $$\label{Lax-equation-classical} i \frac{\partial \vec{{\varphi}}}{\partial \xi} = \left[ \begin{array}{cc} \frac{1}{2} (\lambda^2 - |v|^2) & \lambda v \\ \lambda \bar{v} & -\frac{1}{2} (\lambda^2 - |v|^2) \end{array} \right] \vec{{\varphi}}, \quad i \frac{\partial \vec{{\varphi}}}{\partial \tau} = \left[ \begin{array}{cc} -\frac{1}{2} (\lambda^{-2} - |u|^2) & \lambda^{-1} u \\ \lambda^{-1} \bar{u} & \frac{1}{2} (\lambda^{-2} - |u|^2) \end{array} \right] \vec{{\varphi}},$$ where the $\xi$-dependent problem is referred to as the Kaup–Newell spectral problem [@KN] and the $\tau$-dependent problem is the first negative flow of the Kaup–Newell hierarchy. As shown in [@BG87], the two formulations are gauge-equivalent by using the conservation law $$\label{MTM-conservation} \frac{\partial |v|^2}{\partial \tau} = \frac{\partial |u|^2}{\partial \xi},$$ which holds for the MTM system in characteristic coordinates (\[MTM-char\]). Adding $\frac{1}{2} (\lambda^2 - |v|^2) I_{2 \times 2}$ to the $\xi$-dependent problem and $\frac{1}{2} (\lambda^{-2} - |u|^2) I_{2 \times 2}$ to the $\tau$-dependent problem in the classical Lax equations (\[Lax-equation-classical\]) with zero-trace matrices, where $I_{2 \times 2}$ is the $2 \times 2$ identity matrix, yields the Lax equations (\[Lax-equation\]) with triangular matrices. The representations (\[Lax-equation\]) and (\[Lax-equation-classical\]) are equivalent because the MTM system (\[MTM-char\]) is invariant under the gauge transformation of the Lax operators [@BG87].
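The role of the conservation law (\[MTM-conservation\]) in this gauge equivalence can be seen in one line (our remark, spelling out the statement above). A scalar gauge factor $\vec{{\varphi}} \mapsto g \vec{{\varphi}}$ shifts the Lax matrices by $i (\partial_\xi \log g) I_{2 \times 2}$ and $i (\partial_\tau \log g) I_{2 \times 2}$, so a function $g$ with $$i \frac{\partial \log g}{\partial \xi} = \frac{1}{2} \left( \lambda^2 - |v|^2 \right), \qquad i \frac{\partial \log g}{\partial \tau} = \frac{1}{2} \left( \lambda^{-2} - |u|^2 \right)$$ exists if and only if the cross-derivatives agree: $$\frac{\partial}{\partial \tau} \left( \lambda^2 - |v|^2 \right) = \frac{\partial}{\partial \xi} \left( \lambda^{-2} - |u|^2 \right) \quad \Longleftrightarrow \quad \frac{\partial |v|^2}{\partial \tau} = \frac{\partial |u|^2}{\partial \xi},$$ which is precisely the conservation law (\[MTM-conservation\]).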
Relation to the Ablowitz–Ladik lattice -------------------------------------- The gauge-modified Lax formulation (\[Lax-equation\]) of the MTM system (\[MTM-char\]) is related to the following Ablowitz–Ladik (AL) lattice: $$\label{AL-lattice} \left\{ \begin{array}{l} \displaystyle \frac{dQ_m}{dt} = a (1 - Q_m R_m) (Q_{m+1} - Q_{m-1}) + ib (1-Q_m R_m) (Q_{m+1} + Q_{m-1}), \\ \displaystyle \frac{dR_m}{dt} = a (1 - Q_m R_m) (R_{m+1} - R_{m-1}) - i b (1-Q_m R_m) (R_{m+1} + R_{m-1}),\end{array} \right. \quad m \in \mathbb{Z},$$ where $a,b \in \mathbb{R}$ are parameters of the model. In order to show how the AL lattice (\[AL-lattice\]) is related to the MTM system (\[MTM-char\]), we write the Lax equations for the AL lattice: $$\label{Lax-equation-AL} \vec{{\varphi}}_{m+1} = \left[ \begin{array}{cc} \lambda & Q_m \\ R_m & \lambda^{-1} \end{array} \right] \vec{{\varphi}}_m,$$ and $$\label{Lax-equation-AL-time} \frac{d \vec{{\varphi}}_m}{dt} = (a+ib) \left[ \begin{array}{cc} \lambda^2 - Q_m R_{m-1} & \lambda Q_m \\ \lambda R_{m-1} & 0 \end{array} \right] \vec{{\varphi}}_m + (a-ib) \left[ \begin{array}{cc} 0 & \lambda^{-1} Q_{m-1} \\ \lambda^{-1} R_m & \lambda^{-2} - Q_{m-1} R_m \end{array} \right] \vec{{\varphi}}_m.$$ The compatibility condition for the system (\[Lax-equation-AL\])–(\[Lax-equation-AL-time\]) yields the AL lattice (\[AL-lattice\]). With the correspondence $$Q_{m-1} = u, \quad Q_{m} = v, \quad R_{m-1} = \bar{v}, \quad R_{m} = \bar{u},$$ the two matrix operators in the linear combination of the time evolution equation (\[Lax-equation-AL-time\]) can be used in the Lax equations (\[Lax-equation\]). In this context, the index $m$ can be dropped and the variables $(u,v)$ satisfy the MTM system (\[MTM-char\]) from commutativity of the Lax equations (\[Lax-equation\]).
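Along solutions of the AL lattice (\[AL-lattice\]) with decaying or periodic boundary conditions, the quantity $\sum_m \log(1 - Q_m R_m)$ is conserved: after substituting the time derivatives, the factors $1 - Q_m R_m$ cancel and the resulting sums telescope. This gives a convenient sanity check for numerics. A short sketch (ours; parameters and data are illustrative) integrates the lattice with a classical Runge–Kutta step under the reduction $R_m = \bar{Q}_m$ on a periodic lattice:

```python
import numpy as np

a, b = 0.3, 0.7                  # real parameters of the AL lattice (illustrative values)
M = 64
rng = np.random.default_rng(1)
Q = 0.1*(rng.normal(size=M) + 1j*rng.normal(size=M))
R = np.conj(Q)                   # complex conjugate reduction R_m = conj(Q_m)

def rhs(Q, R):
    Qp, Qm = np.roll(Q, -1), np.roll(Q, 1)   # Q_{m+1}, Q_{m-1} on a periodic lattice
    Rp, Rm = np.roll(R, -1), np.roll(R, 1)
    g = 1 - Q*R
    return (a*g*(Qp - Qm) + 1j*b*g*(Qp + Qm),
            a*g*(Rp - Rm) - 1j*b*g*(Rp + Rm))

def rk4(Q, R, dt):               # classical fourth-order Runge-Kutta step
    k1 = rhs(Q, R)
    k2 = rhs(Q + dt/2*k1[0], R + dt/2*k1[1])
    k3 = rhs(Q + dt/2*k2[0], R + dt/2*k2[1])
    k4 = rhs(Q + dt*k3[0], R + dt*k3[1])
    return (Q + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            R + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

C0 = np.sum(np.log(1 - Q*R))     # conserved quantity at t = 0
for _ in range(200):
    Q, R = rk4(Q, R, 0.01)
drift = abs(np.sum(np.log(1 - Q*R)) - C0)
assert drift < 1e-6
```

The conserved sum drifts only at the size of the Runge–Kutta truncation error, while a non-integrable perturbation of the right-hand side would destroy the conservation at leading order.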
Bäcklund–Darboux transformation for the AL lattice -------------------------------------------------- It is known [@Vek; @Zullo] that a new solution $\{\tilde{Q}_m,\tilde{R}_m\}_{m \in \mathbb{Z}}$ of the AL lattice (\[AL-lattice\]) can be obtained from another solution $\{Q_m,R_m\}_{m \in \mathbb{Z}}$ of the same lattice by the Bäcklund–Darboux transformation. The Bäcklund–Darboux transformation also relates the eigenvectors $\vec{{\varphi}}_m$ and $\tilde{\vec{{\varphi}}}_m$ satisfying the Lax equations (\[Lax-equation-AL\]) and (\[Lax-equation-AL-time\]) for the same spectral parameter $\lambda$. The relation between $\vec{{\varphi}}_m$ and $\tilde{\vec{{\varphi}}}_m$ can be written in the form used in [@Tsuchida-preprint]: $$\label{BT} \tilde{\vec{{\varphi}}}_m = \left[ \begin{array}{cc} \alpha \lambda + \delta \lambda^{-1} & 0 \\ 0 & \gamma \lambda + \beta \lambda^{-1} \end{array} \right] \vec{{\varphi}}_m + (\alpha \beta - \gamma \delta) \left[ \begin{array}{cc} \gamma \lambda & U_m \\ V_m & \delta \lambda^{-1} \end{array} \right]^{-1} \vec{{\varphi}}_m,$$ where $(\alpha,\beta,\gamma,\delta)$ are arbitrary parameters such that $\alpha \beta - \gamma \delta \neq 0$ and $\{ U_m,V_m\}_{m \in \mathbb{Z}}$ are some potentials. For the purpose of the Bäcklund–Darboux transformation for the AL lattice, the potentials $\{ U_m,V_m\}_{m \in \mathbb{Z}}$ are expressed in terms of eigenfunctions satisfying the spectral problem (\[Lax-equation-AL\]) at a fixed value of the spectral parameter $\lambda$. However, for the purpose of the integrable semi-discretization of the MTM system, we specify constraints on $\{ U_m,V_m\}_{m \in \mathbb{Z}}$ from the commutativity condition below. Derivation of the Lax equations (\[Lax-N-P\]) with (\[Lax-N\]) and (\[Lax-P\]) ------------------------------------------------------------------------------ Let us drop the index $m$ and forget about the AL lattice (\[AL-lattice\]) and the spectral problem (\[Lax-equation-AL\]). 
If the Bäcklund–Darboux transformation (\[BT\]) is iterated in a new index $n$, we can introduce the new spectral problem $$\label{BT-Lax} \vec{{\varphi}}_{n+1} = N_n \vec{{\varphi}}_n$$ with $$N_n := \left[ \begin{array}{cc} \alpha \lambda + \delta \lambda^{-1} & 0 \\ 0 & \gamma \lambda + \beta \lambda^{-1} \end{array} \right] + (\alpha \beta - \gamma \delta) \left[ \begin{array}{cc} \gamma \lambda & U_n \\ V_n & \delta \lambda^{-1} \end{array} \right]^{-1}.$$ The spectral problem (\[BT-Lax\]) is coupled with the time evolution problem $$\label{Time-Lax} -2i \frac{d \vec{{\varphi}}_n}{dt} = P_n \vec{{\varphi}}_n,$$ with $$P_n := \left[\begin{matrix} \lambda^2 - |R_n|^2 & \lambda R_n - \lambda^{-1} Q_n \\ \lambda \bar{R}_n - \lambda^{-1} \bar{Q}_n & -\lambda^{-2} + |Q_n|^2 \end{matrix}\right].$$ The time evolution problem (\[Time-Lax\]) is obtained from the time evolution problem in (\[lablax2-new\]) and (\[laxeq-new\]) for the MTM system (\[MTM\]). In the Lax equations (\[BT-Lax\]) and (\[Time-Lax\]), we have the unknown potentials $\{ U_n, V_n, R_n, Q_n\}_{n \in \mathbb{Z}}$ and arbitrary parameters $(\alpha,\beta,\gamma,\delta)$ such that $\alpha \beta - \gamma \delta \neq 0$. In what follows, we consider the compatibility condition (\[Lax-discrete\]) and obtain the constraints on these unknown potentials.
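The two equivalent factorizations of $N_n$ used below can be verified directly: writing $N_n = D_n + (\alpha\beta - \gamma\delta) M_n^{-1}$ with diagonal $D_n$ and with $M_n$ the matrix being inverted, one has $N_n = (D_n M_n + (\alpha\beta - \gamma\delta)I) M_n^{-1} = M_n^{-1}(M_n D_n + (\alpha\beta - \gamma\delta)I)$. A short numerical check with arbitrary complex values (chosen here purely for illustration) confirms both representations:

```python
import numpy as np

# arbitrary complex test values (illustrative only, not taken from the text)
lam = 0.8 + 0.6j
alpha, beta, gamma, delta = 1.0 + 0.5j, 2.0 - 1.0j, 0.3 + 2.0j, -1.0 + 1.0j
U, V = 0.7 - 0.2j, -0.4 + 0.9j

D = np.array([[alpha * lam + delta / lam, 0], [0, gamma * lam + beta / lam]])
M = np.array([[gamma * lam, U], [V, delta / lam]])
N = D + (alpha * beta - gamma * delta) * np.linalg.inv(M)

# representation used for P_{n+1} N_n:  N = A M^{-1}
A = np.array([[alpha * lam * (gamma * lam + beta / lam), U * (alpha * lam + delta / lam)],
              [V * (gamma * lam + beta / lam), (beta / lam) * (alpha * lam + delta / lam)]])
# representation used for N_n P_n:  N = M^{-1} B
B = np.array([[alpha * lam * (gamma * lam + beta / lam), U * (gamma * lam + beta / lam)],
              [V * (alpha * lam + delta / lam), (beta / lam) * (alpha * lam + delta / lam)]])

assert np.allclose(N, A @ np.linalg.inv(M))
assert np.allclose(N, np.linalg.inv(M) @ B)
```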
Since the spectral parameter $\lambda$ is independent of $t$, we obtain $$\frac{d}{dt} N_n = - (\alpha \beta - \gamma \delta) \left[ \begin{array}{cc} \gamma \lambda & U_n \\ V_n & \delta \lambda^{-1} \end{array} \right]^{-1} \left[ \begin{array}{cc} 0 & \frac{d U_n}{dt} \\ \frac{d V_n}{dt} & 0 \end{array} \right] \left[ \begin{array}{cc} \gamma \lambda & U_n \\ V_n & \delta \lambda^{-1} \end{array} \right]^{-1}.$$ For $N_n$ in $P_{n+1} N_n$, we can use the equivalent representation $$N_n = \left[ \begin{array}{cc} \alpha \lambda (\gamma \lambda + \beta \lambda^{-1}) & U_n (\alpha \lambda + \delta \lambda^{-1}) \\ V_n (\gamma \lambda + \beta \lambda^{-1}) & \beta \lambda^{-1} (\alpha \lambda + \delta \lambda^{-1}) \end{array} \right] \left[ \begin{array}{cc} \gamma \lambda & U_n \\ V_n & \delta \lambda^{-1} \end{array} \right]^{-1}.$$ For $N_n$ in $N_n P_n$, we can use the equivalent representation $$N_n = \left[ \begin{array}{cc} \gamma \lambda & U_n \\ V_n & \delta \lambda^{-1} \end{array} \right]^{-1} \left[ \begin{array}{cc} \alpha \lambda (\gamma \lambda + \beta \lambda^{-1}) & U_n (\gamma \lambda + \beta \lambda^{-1}) \\ V_n (\alpha \lambda + \delta \lambda^{-1}) & \beta \lambda^{-1} (\alpha \lambda + \delta \lambda^{-1}) \end{array} \right].$$ ### (1,1) and (2,2) entries Substituting the equivalent expressions for $N_n$ into the Lax equation (\[Lax-discrete\]) yields the following two constraints arising at different powers of $\lambda$ in the $(1,1)$ entries: $$\begin{aligned} \label{constraint-11a} \alpha \gamma (|R_{n+1}|^2 - |R_n|^2) + \gamma (U_n \bar{R}_n - V_n R_{n+1}) + \alpha (V_n R_n - U_n \bar{R}_{n+1}) & = & 0, \\ \label{constraint-11b} U_n V_n (|Q_{n}|^2 - |Q_{n+1}|^2) + \alpha (U_n \bar{Q}_{n+1} - V_n Q_n) + \gamma (V_n Q_{n+1} - U_n \bar{Q}_{n}) & = & 0\end{aligned}$$ and the following two constraints arising at different powers of $\lambda$ in the $(2,2)$ entries: $$\begin{aligned} \label{constraint-22a} U_n V_n (|R_{n+1}|^2 - |R_n|^2) + \beta
(U_n \bar{R}_n - V_n R_{n+1}) + \delta (V_n R_n - U_n \bar{R}_{n+1}) & = & 0, \\ \label{constraint-22b} \beta \delta (|Q_{n}|^2 - |Q_{n+1}|^2) + \delta (U_n \bar{Q}_{n+1} - V_n Q_n) + \beta (V_n Q_{n+1} - U_n \bar{Q}_{n}) & = & 0.\end{aligned}$$ We claim that the constraints (\[constraint-11a\])–(\[constraint-22b\]) are equivalent to the following constraints: $$\begin{aligned} \label{constraint1} \gamma (\alpha \beta - U_n V_n) R_{n+1} - \alpha (\delta \gamma - U_n V_n) R_n & = & (\alpha \beta - \delta \gamma) U_n, \\ \label{constraint2} \alpha (\gamma \delta - U_n V_n) \bar{R}_{n+1} - \gamma (\alpha \beta - U_n V_n) \bar{R}_n & = & -(\alpha \beta - \delta \gamma) V_n,\end{aligned}$$ and $$\begin{aligned} \label{constraint3} \beta (\gamma \delta - U_n V_n) Q_{n+1} - \delta (\alpha \beta - U_n V_n) Q_n & = & -(\alpha \beta - \delta \gamma) U_n, \\ \label{constraint4} \delta (\alpha \beta - U_n V_n) \bar{Q}_{n+1} - \beta (\gamma \delta - U_n V_n) \bar{Q}_n & = & (\alpha \beta - \delta \gamma) V_n.\end{aligned}$$ To show the equivalence, we use linear combinations of (\[constraint-11a\]) and (\[constraint-22a\]) and obtain $$\begin{aligned} \alpha (\gamma \delta - U_n V_n) (|R_{n+1}|^2 - |R_n|^2) - (\alpha \beta - \gamma \delta) (U_n \bar{R}_n - V_n R_{n+1}) & = & 0, \\ \gamma (\alpha \beta - U_n V_n) (|R_{n+1}|^2 - |R_n|^2) + (\alpha \beta - \gamma \delta) (V_n R_n - U_n \bar{R}_{n+1}) & = & 0.\end{aligned}$$ These relations are regrouped as follows: $$\begin{aligned} R_{n+1} \left[ \alpha (\gamma \delta - U_n V_n) \bar{R}_{n+1} + (\alpha \beta - \gamma \delta) V_n \right] - \bar{R}_n \left[ \alpha (\gamma \delta - U_n V_n) R_n + (\alpha \beta - \gamma \delta) U_n \right] & = & 0, \\ \bar{R}_{n+1} \left[ \gamma (\alpha \beta - U_n V_n) R_{n+1} - (\alpha \beta - \gamma \delta) U_n \right] - R_n \left[ \gamma (\alpha \beta - U_n V_n) \bar{R}_n - (\alpha \beta - \gamma \delta) V_n \right] & = & 0,\end{aligned}$$ where each constraint is satisfied if and only if 
constraints (\[constraint1\]) and (\[constraint2\]) hold. The equivalence of constraints (\[constraint-11b\]) and (\[constraint-22b\]) to constraints (\[constraint3\]) and (\[constraint4\]) is established by similar operations. ### (1,2) entries At different powers of $\lambda$ in the $(1,2)$ entries of the Lax equation (\[Lax-discrete\]), we obtain two constraints $$\begin{aligned} \label{constraint-12a} \alpha \gamma U_n (|R_{n+1}|^2 - |R_n|^2) + (\alpha \beta - \gamma \delta) U_n + \alpha \gamma (\delta R_n - \beta R_{n+1}) + U_n^2 (\gamma \bar{R}_n - \alpha \bar{R}_{n+1}) & = & 0, \\ \label{constraint-12b} \beta \delta U_n (|Q_{n}|^2 - |Q_{n+1}|^2) + (\alpha \beta - \gamma \delta) U_n + \beta \delta (\gamma Q_{n+1} - \alpha Q_{n}) + U_n^2 (\delta \bar{Q}_{n+1} - \beta \bar{Q}_{n}) & = & 0,\end{aligned}$$ and the evolution equation $$\begin{aligned} \nonumber 2i (\alpha \beta - \gamma \delta) \frac{d U_n}{dt} + \alpha \gamma (\beta Q_{n+1} - \delta Q_n) + \beta \delta (\alpha R_n - \gamma R_{n+1}) + U_n^2 (\alpha \bar{Q}_{n+1} - \gamma \bar{Q}_n) \\ \label{evolution-1} + U_n^2 (\beta \bar{R}_n - \delta \bar{R}_{n+1}) + U_n (\gamma \delta |Q_n|^2 - \alpha \beta |Q_{n+1}|^2) + U_n (\gamma \delta |R_{n+1}|^2 - \alpha \beta |R_n|^2) = 0.\end{aligned}$$ We claim that the two constraints (\[constraint-12a\]) and (\[constraint-12b\]) are redundant in view of the constraints (\[constraint1\])–(\[constraint4\]). Indeed, substituting $$\alpha \gamma (\beta R_{n+1} -\delta R_n) - (\alpha \beta - \delta \gamma) U_n = U_n V_n (\gamma R_{n+1} - \alpha R_n)$$ from (\[constraint1\]) into (\[constraint-12a\]) and dividing the result by $U_n$, we obtain $$\alpha \gamma (|R_{n+1}|^2 - |R_n|^2) - V_n(\gamma R_{n+1} - \alpha R_n) + U_n (\gamma \bar{R}_n - \alpha \bar{R}_{n+1}) = 0,$$ which is nothing but (\[constraint-11a\]). Similar transformations apply to (\[constraint-12b\]) with (\[constraint3\]) to end up at (\[constraint-22b\]). 
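The chain of substitutions above can also be verified numerically. In the sketch below (illustrative complex values only; the barred quantities are treated as independent variables, so $|X|^2$ in the constraints stands for $X \bar{X}$), the $(n+1)$-level variables are generated from the constraints (\[constraint1\])–(\[constraint4\]) and the residuals of the entry constraints, including the redundant $(1,2)$ and $(2,1)$ ones, are then checked to vanish:

```python
# illustrative complex values (not from the text); "bar" variables are
# independent unknowns here, so |X|^2 is interpreted as X * Xbar
al, be, ga, de = 1.0 + 0.5j, 2.0 - 1.0j, 0.3 + 2.0j, -1.0 + 1.0j
U, V = 0.7 - 0.2j, -0.4 + 0.9j
R, Rb = 1.1 + 0.3j, -0.6 + 0.8j      # R_n, bar R_n
Q, Qb = 0.2 - 1.3j, 0.9 + 0.4j       # Q_n, bar Q_n
D = al * be - ga * de

# (n+1)-level variables solved from constraints (constraint1)-(constraint4)
R1 = (al * (ga * de - U * V) * R + D * U) / (ga * (al * be - U * V))
Rb1 = (ga * (al * be - U * V) * Rb - D * V) / (al * (ga * de - U * V))
Q1 = (de * (al * be - U * V) * Q - D * U) / (be * (ga * de - U * V))
Qb1 = (be * (ga * de - U * V) * Qb + D * V) / (de * (al * be - U * V))

residuals = [
    # (1,1) and (2,2) entry constraints
    al * ga * (R1 * Rb1 - R * Rb) + ga * (U * Rb - V * R1) + al * (V * R - U * Rb1),
    U * V * (Q * Qb - Q1 * Qb1) + al * (U * Qb1 - V * Q) + ga * (V * Q1 - U * Qb),
    U * V * (R1 * Rb1 - R * Rb) + be * (U * Rb - V * R1) + de * (V * R - U * Rb1),
    be * de * (Q * Qb - Q1 * Qb1) + de * (U * Qb1 - V * Q) + be * (V * Q1 - U * Qb),
    # redundant (1,2) and (2,1) entry constraints
    al * ga * U * (R1 * Rb1 - R * Rb) + D * U + al * ga * (de * R - be * R1) + U**2 * (ga * Rb - al * Rb1),
    al * ga * V * (R1 * Rb1 - R * Rb) - D * V + al * ga * (be * Rb - de * Rb1) + V**2 * (al * R - ga * R1),
]
assert all(abs(r) < 1e-10 for r in residuals)
```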
Hence the constraints (\[constraint-12a\]) and (\[constraint-12b\]) are satisfied if the constraints (\[constraint1\])–(\[constraint4\]) hold. ### (2,1) entries Similarly, at different powers of $\lambda$ in the $(2,1)$ entries of the Lax equation (\[Lax-discrete\]), we obtain two constraints $$\begin{aligned} \label{constraint-21a} \alpha \gamma V_n (|R_{n+1}|^2 - |R_n|^2) - (\alpha \beta - \gamma \delta) V_n + \alpha \gamma (\beta \bar{R}_n - \delta \bar{R}_{n+1}) + V_n^2 (\alpha R_n - \gamma R_{n+1}) & = & 0, \\ \label{constraint-21b} \beta \delta V_n (|Q_{n}|^2 - |Q_{n+1}|^2) - (\alpha \beta - \gamma \delta) V_n + \beta \delta (\alpha \bar{Q}_{n+1} - \gamma \bar{Q}_{n}) + V_n^2 (\beta Q_{n+1} - \delta Q_{n}) & = & 0,\end{aligned}$$ and the evolution equation $$\begin{aligned} \nonumber 2i (\alpha \beta - \gamma \delta) \frac{d V_n}{dt} + \alpha \gamma (\delta \bar{Q}_{n+1} - \beta \bar{Q}_n) + \beta \delta (\gamma \bar{R}_n - \alpha \bar{R}_{n+1}) + V_n^2 (\gamma Q_{n+1} - \alpha Q_n) \\ \label{evolution-2} + V_n^2 (\delta R_n - \beta R_{n+1}) + V_n (\alpha \beta |Q_n|^2 - \gamma \delta |Q_{n+1}|^2) + V_n (\alpha \beta |R_{n+1}|^2 - \gamma \delta |R_n|^2) = 0.\end{aligned}$$ The two constraints (\[constraint-21a\]) and (\[constraint-21b\]) are once again redundant in view of the constraints (\[constraint1\])–(\[constraint4\]). The proof is achieved by transformations similar to those used for the $(1,2)$ entries. To summarize, we have shown that the Lax equation (\[Lax-discrete\]) is satisfied under the four constraints (\[constraint1\])–(\[constraint4\]) and the two evolution equations (\[evolution-1\]) and (\[evolution-2\]).
Choice of parameters -------------------- The constraints (\[constraint3\])–(\[constraint4\]) and the time evolution equations (\[evolution-1\]) and (\[evolution-2\]) with $R_n = 0$ are obtained in [@Tsuchida-preprint], where the Bäcklund–Darboux transformation (\[BT-Lax\]) is coupled with the time flow given by the negative powers of $\lambda$. Similarly, the constraints (\[constraint1\])–(\[constraint2\]) and the time evolution equations (\[evolution-1\]) and (\[evolution-2\]) with $Q_n = 0$ are obtained when the Bäcklund–Darboux transformation (\[BT-Lax\]) is coupled with the time flow given by the positive powers of $\lambda$. All four constraints (\[constraint1\])–(\[constraint4\]) and the full system of time evolution equations (\[evolution-1\]) and (\[evolution-2\]) arise in the full time flow (\[Time-Lax\]). The complex-conjugate symmetry $V_n = \bar{U}_n$ between the constraints (\[constraint1\]) and (\[constraint3\]) on one side and the constraints (\[constraint2\]) and (\[constraint4\]) on the other side as well as between the evolution equations (\[evolution-1\]) and (\[evolution-2\]) is preserved if $\alpha = \bar{\gamma}$ and $\beta = \bar{\delta}$. Without loss of generality, we normalize $\alpha = \gamma = 1$ and introduce the parameter $h \in \mathbb{R}$ by $\beta = -\delta = 2i/h$. Then, equations (\[constraint1\]), (\[constraint3\]), and (\[evolution-1\]) divided by $2i/h$ give respectively the third, second, and first equations of the system (\[MTM-discrete\]). Thus, integrability of the system (\[MTM-discrete\]) is verified from the Lax equations (\[Lax-N-P\]) with the Lax operators given by (\[Lax-N\]) and (\[Lax-P\]). Conclusion {#sec-4} ========== We have derived an integrable semi-discretization of the MTM system in laboratory coordinates (\[MTM\]). 
Integrability of the semi-discrete MTM system (\[MTM-discrete\]) with the Lax operators (\[Lax-N\]) and (\[Lax-P\]) is a starting point for the derivation of conserved quantities, multi-soliton solutions, and other useful properties of the semi-discrete MTM in laboratory coordinates. It is also a starting point for numerical simulations of the integrable semi-discretizations of the continuous MTM system (\[MTM\]). We conclude by mentioning some relevant works on the semi-discretizations of other integrable nonlinear evolution equations. By the semi-discretization of the Hirota bilinear method, one can obtain an integrable semi-discretization of many continuous nonlinear equations, as done in [@ChenChen] for the coupled Yajima–Oikawa system. One can verify [@Pel-personal] that this technique applied to the Chen–Lee–Liu system yields the same semi-discretization as the one obtained by Tsuchida [@Tsuchida-preprint] from coupling the Bäcklund transformation for the AL lattice and the time evolution of the Chen–Lee–Liu system. From a different point of view, Lax operators for the AL-type lattices with quadratic and higher-order polynomial dependence were considered by Vakhnenko (see the recent review in [@Vakhnenko] and earlier references therein). This approach yields many interesting semi-discretizations of coupled nonlinear Schrödinger equations, but these equations are not related to the semi-discretizations of the massive Thirring model [@Bronsard]. We conclude that the present contribution contains the only integrable semi-discretization of the MTM system in laboratory coordinates available at the present time. [**Acknowledgement.**]{} D.E.P. thanks P.G. Kevrekidis for suggesting a search for an integrable semi-discretization of the MTM system back in 2015, as well as Th. Ioannidou and S.A. Bronsard for collaboration on the early (unsuccessful) stages of the project in 2016 and 2017 respectively. Critical advice from T. Tsuchida helped the authors to achieve this goal.
This work was completed while D.E.P. visited Department of Mathematics at University of Sydney during January-June 2018. The research of N.J. was supported by an Australian Laureate Fellowship \# FL 120100094 and grant \# DP160101728. [99]{} M.J. Ablowitz and J.F. Ladik, “Nonlinear differential-difference equations", J. Math. Phys. [**16**]{} (1975), 598–603. M.J. Ablowitz and J.F. Ladik, “Nonlinear differential-difference equations and Fourier analysis,” J. Math. Phys. [**17**]{} (1976), 1011–1018. M.J. Ablowitz and J.F. Ladik, “On the solution of a class of nonlinear partial difference equations,” Stud. Appl. Math. [**57**]{} (1977), 1–12. I.V. Barashenkov and B.S. Getmanov, “Multisoliton solutions in the scheme for unified description of integrable relativistic massive fields. Non-degenerate $sl(2,C)$ case", Commun. Math. Phys. [**112**]{} (1987) 423–446. I.V. Barashenkov and E.V. Zemlyanaya, “Oscillatory instability of gap solitons: a numerical study", Comput. Phys. Commun. [**126**]{} (2000) 22–27. S.A. Bronsard and D.E. Pelinovsky, “New integrable semi-discretizations of the coupled nonlinear Schrödinger equations", arXiv: 1705.05974 (2017). T. Candy, “Global existence for an $L^2$-critical nonlinear Dirac equation in one dimension", Adv. Diff. Eqs. [**7-8**]{} (2011), 643–666. T. Candy and H. Lindblad, “Long range scattering for the cubic Dirac equation on $\mathbb{R}^{1+1}$", Diff. Integral Equat. [**31**]{} (2018), 507–518. J. Chen, Y. Chen, B.F. Feng, K. Maruno, and Y. Ohta, “An integrable semi-discretization of the coupled Yajima–Oikawa system", J. Phys. A: Math. Theor. [**49**]{} (2016), 165201 (19 pp). A. Contreras, D.E. Pelinovsky, and Y. Shimabukuro, “$L^2$ orbital stability of Dirac solitons in the massive Thirring model", Communications in PDEs [**41**]{} (2016), 227–255. J. Cuevas-Maraver, P.G. Kevrekidis, and A. Saxena, “Solitary waves in a discrete nonlinear Dirac equation", J. Phys. A: Math. Theor. [**48**]{} (2015), 055204 (22 pp). J. 
Cuevas-Maraver, P.G. Kevrekidis, A. Saxena, F. Cooper, and F. Mertens, “Solitary waves in the nonlinear Dirac equation at the continuum limit: Stability and dynamics", in [*Ordinary and Partial Differential Equations*]{} (Nova Science Publishers, New York, 2015). A. Degasperis, S. Wabnitz, and A.B. Aceves, “Bragg grating rogue wave", Phys. Lett. A [**379**]{} (2015), 1067–1070. H. Huh, “Global strong solutions to the Thirring model in critical space", J. Math. Anal. Appl. [**381**]{} (2011), 513–520. H. Huh, “Global solutions to Gross–Neveu equation", Lett. Math. Phys. [**103**]{} (2013), 927–931. H. Huh and B. Moon, “Low regularity well-posedness for Gross–Neveu equations", Comm. Pure Appl. Anal. [**14**]{} (2015), 1903–1913. J. Hietarinta, N. Joshi, and F. Nijhoff, [*Discrete Systems and Integrability*]{} (Cambridge University Press, Cambridge, 2016). D.J. Kaup and T.I. Lakoba, “The squared eigenfunctions of the massive Thirring model in laboratory coordinates”, J. Math. Phys. [**37**]{} (1996), 308–323. D.J. Kaup and A.C. Newell, “On the Coleman correspondence and the solution of the Massive Thirring model”, Lett. Nuovo Cimento [**20**]{} (1977), 325–331. K. Kawata, T. Morishima, and H. Inoue, “Inverse scattering method for the two-dimensional massive Thirring model", J. Phys. Soc. Japan [**47**]{} (1979), 1327–1334. Y. Komori and M. Wadati, “Massless Thirring model and Bethe ansatz wavefunction", J. Phys. Soc. Japan [**65**]{} (1996), 722–724. E.A. Kuznetzov and A.V. Mikhailov, “On the complete integrability of the two-dimensional classical Thirring model", Theor. Math. Phys. [**30**]{} (1977), 193–200. T.I. Lakoba, “Numerical study of solitary wave stability in cubic nonlinear Dirac equations in 1D", Phys. Lett. A [**382**]{} (2018), 300–308. A.V. Mikhailov, “Integrability of the two-dimensional Thirring model", JETP Lett. [**23**]{} (1976), 320–323. F.W. Nijhoff, H.W. Capel, G.R.W. Quispel, and J. 
van der Linden, “The derivative nonlinear Schrödinger equation and the massive Thirring model", Phys. Lett. A [**93**]{} (1983), 455–458. F.W. Nijhoff, H.W. Capel, and G.R.W. Quispel, “Integrable lattice version of the massive Thirring model and its linearization", Phys. Lett. A [**98**]{} (1983), 83–86. D.E. Pelinovsky, unpublished notes (2016). D.E. Pelinovsky and A. Saalmann, “Inverse scattering for the massive Thirring model", Fields Institute Communications (2018), arXiv:1801.00039. D. E. Pelinovsky and Y. Shimabukuro, “Orbital stability of Dirac solitons”, Lett. Math. Phys. [**104**]{} (2014), 21–41. S. Shao, N.R. Quintero, F.G. Mertens, F. Cooper, A. Khare, and A. Saxena, “Stability of solitary waves in the nonlinear Dirac equations with arbitrary nonlinearity", Phys. Rev. E [**90**]{} (2014), 032915 (12pp). S. Selberg and A. Tesfahun, “Low regularity well-posedness for some nonlinear Dirac equations in one space dimension”, Diff. Integral Eqs. [**23**]{} (2010), 265–278. W. Thirring, “A soluble relativistic field theory", Annals of Physics [**3**]{} (1958), 91-–112. T. Tsuchida, “Integrable discretizations of derivative nonlinear Schrödinger equations", J. Phys. A: Math. Gen. [**35**]{} (2002), 7827–7847. T. Tsuchida, “A systematic method for constructing time discretizations of integrable lattice systems: local equations of motion", J. Phys. A: Math. Theor. [**43**]{} (2010), 415202 (22 pages). T. Tsuchida, “On a new integrable discretization of the derivative nonlinear Schrödinger (Chen–Lee–Liu) equation", arXiv:1501.01956 (2015) (22 pages). O.O. Vakhnenko, “Semi-discrete integrable nonlinear Schrödinger system with background-controlled inter-site resonant coupling", J. Nonlin. Math. Phys. [**24**]{} (2017), 250–302. V.E. Vekslerchik, “Functional representation of the Ablowitz–Ladik hierarchy II", J. Nonlin. Math. Phys. [**9**]{} (2002), 157–180. J. Villarroel, “The DBAR problem and the Thirring model", Stud. Appl. Math. [**84**]{} (1991), 207–220. M. 
Wadati, “General solution and Lax pair for 1-D classical massless Thirring model", J. Phys. Soc. Japan [**52**]{} (1983), 1084–1085. Y. Zhang, “Global strong solution to a nonlinear Dirac type equation in one dimension", Nonlin. Anal.: Th. Meth. Appl. [**80**]{} (2013), 150–155. Y. Zhang and Q. Zhao, “Global solution to nonlinear Dirac equation for Gross–Neveu model in $1+1$ dimensions", Nonlin. Anal.: Th. Meth. Appl. [**118**]{} (2015), 82–96. F. Zullo, “On an integrable discretisation of the Ablowitz–Ladik hierarchy", J. Math. Phys. [**54**]{} (2013), 053515 (14 pages).
--- author: - 'Jérôme Benoit[^1]' - Ana Nunes - Margarida Telo da Gama bibliography: - 'pamds.bib' date: '[[`q-bio/0510005`](http://arxiv.org/abs/q-bio/0510005)]{}' title: Pair Approximation Models for Disease Spread --- \[1997/12/01\] Introduction {#sec/introduction} ============ Stochastic Susceptible-Infective-Recovered (<span style="font-variant:small-caps;">SIR</span>) epidemic models on lattices and networks can be mapped on to percolation problems and are well understood [@Grassberger1982; @MooreNewman2000May; @MooreNewman2000Nov]. To describe disease spread and persistence in a community, the model must be extended to include a mechanism for renewal of susceptibles, either births or immunity waning. Models with immunity waning, Susceptible-Infective-Recovered-Susceptible (<span style="font-variant:small-caps;">SIRS</span>), are based on the following transitions: $$\label{ESWN/SIRS/scheme} {S}{\xrightarrow[I_{\eswnNeighbourIndex}]{\mspace{15.0mu}\eswnInfectionRate\mspace{15.0mu}}} {I}{\xrightarrow{\mspace{15.0mu}\eswnRecoveryRate\mspace{15.0mu}}} {R}{\xrightarrow{\mspace{15.0mu}\eswnDemographicRate\mspace{15.0mu}}} {S},$$ meaning that any susceptible individual $S$ can be infected by an infected neighbour $I_{\eswnNeighbourIndex}$ at the infection rate $\eswnInfectionRate$, any infected individual $I$ becomes recovered $R$ at the recovery rate $\eswnRecoveryRate$, and any recovered individual $R$ becomes susceptible $S$ at the immunity loss rate $\eswnDemographicRate$. Following customary habits, we shall choose time units for which . 
The <span style="font-variant:small-caps;">SIRS</span> model interpolates between two well known models, the contact process (also known as Susceptible-Infective-Susceptible or <span style="font-variant:small-caps;">SIS</span>) and the <span style="font-variant:small-caps;">SIR</span> model, in the limits $\eswnDemographicRate\rightarrow\infty$ and $\eswnDemographicRate\rightarrow{0}$, respectively, and much is known about its behaviour on regular lattices, both from the point of view of rigorous results [@DurrettNeuhauser1991; @AndjelSchinazi1996; @vdBerg1998] and of assessing the performance of mean field and pair approximations against stochastic simulations [@JooLebowitz2004]. In particular, it is known [@vdBerg1998] that on hypercubic lattices of arbitrary dimension the phase diagram of has two critical values, , which are the critical rates of the two limit problems that is the contact process and <span style="font-variant:small-caps;">SIR</span>, respectively. For there is disease extinction for every $\eswnDemographicRate$, while for $\eswnCriticalInfectionRate(0)<\eswnInfectionRate$ there is disease persistence for every $\eswnDemographicRate$. For $\eswnCriticalInfectionRate(\infty)<\eswnInfectionRate<\eswnCriticalInfectionRate(0)$ disease persistence occurs only for $\eswnDemographicRate$ above a certain threshold. The region of disease persistence for every $\eswnDemographicRate$ is ‘missing’ in dimension , because in this case $\eswnCriticalInfectionRate(0)$ is infinite. In [@JooLebowitz2004] the uncorrelated pair approximation (<span style="font-variant:small-caps;">UPA</span>, see Section \[sec/MFAUPA\]) was applied to the <span style="font-variant:small-caps;">SIRS</span> model on linear and square lattices, and the phase diagrams computed from the corresponding equations of evolution were compared with the mean field phase diagram and with the results of simulations. 
It was shown that, by contrast with the mean field approximation, the <span style="font-variant:small-caps;">UPA</span> phase diagram agrees qualitatively with the simulations and the exact results both in and in . Since the <span style="font-variant:small-caps;">UPA</span> does not take into account the lattice dimensionality explicitly, it predicts identical phase diagrams on lattices with the same coordination number $\eswnDegree$, namely on linear () and square () lattices, when next nearest neighbours () are considered. However, in one dimension the critical infection rate , and the critical line has an asymptote at $\eswnDemographicRate=0$, while in two dimensions the critical line crosses the $\eswnDemographicRate=0$ axis at a finite value of $\eswnCriticalInfectionRate$, which is the result of the <span style="font-variant:small-caps;">UPA</span> for . The question is then whether generalized pair approximations can account for the dependence on dimensionality and, in particular, whether they can describe phase diagrams with different qualitative behaviours at fixed coordination numbers. We have addressed this question, and more generally the problem of constructing suitable pair approximations (Section \[sec/MFAUPA\]), in the context of a modification of model , where the mechanism of renewal of susceptibles is demography, rather than immunity waning. This is the natural scenario in the epidemiology of diseases that confer permanent immunity, such as childhood infectious diseases [@AndersonMay; @MurrayII]. For this model infection obeys the same rules as in , immunity is permanent and all individuals, whatever their state, are submitted to birth and death events at a rate $\eswnDemographicRate$. 
The stochastic process, which describes the dynamics of this system, is governed by the transitions \[ESWN/scheme\] $$\begin{gathered} \label{ESWN/scheme/disease} {S}{\xrightarrow[I_{\eswnNeighbourIndex}]{\mspace{15.0mu} \eswnInfectionRate\mspace{15.0mu}}} {I}{\xrightarrow{\mspace{15.0mu}\eswnRecoveryRate\mspace{15.0mu}}}{R} , \\ \label{ESWN/scheme/demographic} {\{S,I,R\}}{\xrightarrow{\mspace{15.0mu}\eswnDemographicRate\mspace{15.0mu}}}{S} .\end{gathered}$$ In the limit $\eswnDemographicRate=0$, both models and coincide with the <span style="font-variant:small-caps;">SIR</span> model. In the opposite limit the dynamics of the two models are drastically different. While, in the limit $\eswnDemographicRate=\infty$, <span style="font-variant:small-caps;">SIRS</span> coincides with the contact model [@JooLebowitz2004], in the same limit the dynamics of is trivial: it is driven by demography, which keeps the entire population susceptible for any $\eswnInfectionRate$, and thus $\eswnCriticalInfectionRate(\infty)=\infty$. We are interested in the regime where $\eswnDemographicRate$ is smaller than the recovery rate, which is the meaningful regime for the study of acute disease spread. Although in this regime the dynamics is dominated by the infection and recovery processes, which are identical in both models, the behaviour of appears to be different, in a subtle way, from that of (Section \[sec/MFAUPA\]). We have considered the demographic <span style="font-variant:small-caps;">SIR</span> model on a linear lattice with periodic boundary conditions (ring) and next nearest neighbour interactions, . We constructed a family of correlated pair approximations (<span style="font-variant:small-caps;">CPA</span>), parametrized by $\eswnClosedFormParameter$, a measure of the relative contributions of loops and open triplets of connected sites involved in the disease spread (Section \[sec/CPA\]).
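A minimal stochastic simulation of the transitions above can be sketched as follows. This is a synchronous small-time-step scheme on a ring with nearest-neighbour contacts for simplicity (the simulations discussed in the text also use next nearest neighbours); all function names and parameter values are illustrative, and the recovery rate is normalized to one as in the text:

```python
import numpy as np

def simulate_demographic_sir(n=400, lam=2.0, mu=0.05, t_max=50.0, dt=0.02, seed=0):
    """Synchronous Monte Carlo sketch of the demographic SIR process on a
    ring: S (0) is infected by each infective neighbour I (1) at rate lam,
    I recovers to R (2) at unit rate, and every site is reset to S at the
    demographic rate mu (birth/death events)."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=np.int8)
    state[rng.choice(n, size=n // 10, replace=False)] = 1   # seed infectives
    for _ in range(int(t_max / dt)):
        inf = (state == 1)
        pressure = np.roll(inf, 1) + np.roll(inf, -1)       # infective neighbours
        infect = (state == 0) & (rng.random(n) < 1 - np.exp(-lam * pressure * dt))
        recover = inf & (rng.random(n) < 1 - np.exp(-dt))
        renew = rng.random(n) < 1 - np.exp(-mu * dt)        # renewal of susceptibles
        state[infect] = 1
        state[recover] = 2
        state[renew] = 0
    return (state == 1).mean()

endemic_fraction = simulate_demographic_sir()
assert 0.0 <= endemic_fraction <= 1.0
```

Averaging the infective fraction over many realizations and long times gives the open-circle data of the kind shown in the figures.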
For $\eswnClosedFormParameter=0$ the approximation reduces to the standard <span style="font-variant:small-caps;">UPA</span> (Section \[sec/MFAUPA\]). The phase diagrams of the <span style="font-variant:small-caps;">CPA</span> show that as $\eswnClosedFormParameter$ increases from $0$ to $\eswnRemarkableClosedFormParameter$ (see Section \[sec/CPA\]) the <span style="font-variant:small-caps;">CPA</span> interpolates between the <span style="font-variant:small-caps;">UPA</span> critical behaviour and the typical one-dimensional phase behaviour, with $\eswnCriticalInfectionRate(0)=\infty$. Finally, we have simulated the demographic <span style="font-variant:small-caps;">SIR</span> model on a ring, with . The results of the simulations indicate that while the <span style="font-variant:small-caps;">CPA</span> with a constant value of $\eswnClosedFormParameter$ cannot describe the global phase diagram of , a reasonable description of the endemic equilibria as well as of the phase diagram is obtained when $\eswnClosedFormParameter$ is allowed to depend on the demographic rate $\eswnDemographicRate$ (Section \[sec/CPA\]). This illustrates that, in addition to describing the dimensional crossover for lattices with coordination number , the <span style="font-variant:small-caps;">CPA</span> can be made semi-quantitative, providing an alternative to the stochastic simulations of individual-based models. We conclude in Section \[sec/discussion\] with a brief discussion of the results. ![ Endemic infective probability versus infection rate at (a) high demographic rate, and at (b) huge demographic rate: the endemic infective probability is plotted from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed lines), the <span style="font-variant:small-caps;">UPA</span> (dashed dotted lines) and the correlated model with best-fit closed form parameters (solid lines).
The fitting procedure is based on perpendicular offsets and on the assumption that the closed form parameter $\eswnClosedFormParameter$ depends only on the demographic rate $\eswnDemographicRate$. Closed form parameters $\eswnClosedFormParameter$ for (a) and (b) respectively: $0.50$, $0.70$. []{data-label="fig/X/endemic"}](pamds-EndemicInfectivePlots){width="1.0\linewidth"} Mean Field and Uncorrelated Pair Approximations {#sec/MFAUPA} =============================================== In this section we consider the time evolution of the demographic <span style="font-variant:small-caps;">SIR</span> model on regular lattices and review the mean-field and (standard) uncorrelated pair approximations, setting the notation and the stage for the development of the more sophisticated correlated pair approximations. In the demographic <span style="font-variant:small-caps;">SIR</span> model on networks, sites represent individuals and bonds social links. The dynamics is governed by the stochastic process . Denoting by $\prob{A}$ the probability for an individual to be in state $A$ (at time $t$), $\prob{AB}$ the probability for a lattice bond to connect an individual in state $A$ to an individual in state $B$, the time evolution of the singleton probabilities $\prob{A}$ can be described by the set of first order differential equations [@AndersonMay; @MurrayII]: \[ESWN/SDE/1\] $$\begin{aligned} \label{ESWN/SDE/1/S} \difft{\prob{S}}&= +\eswnDemographicRate\: \bigl[ \prob{I} + \prob{R} \bigr] -\eswnInfectionRate\sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}} , \\ \label{ESWN/SDE/1/I} \difft{\prob{I}}&= +\eswnInfectionRate\sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}} -(\eswnDemographicRate+\eswnRecoveryRate)\: \prob{I} , \\ \label{ESWN/SDE/1/R} \difft{\prob{R}}&= +\eswnRecoveryRate\: \prob{I} -\eswnDemographicRate\: \prob{R} ,\end{aligned}$$ where the summations run over the connected neighbours. 
Clearly the set of equations is not closed since it involves pair probabilities without describing their time evolution. This follows from the stochastic process where infection proceeds *via* $SI$ contact pairs. As a matter of fact, the time evolution of the $q$-tuple probabilities is described by a set of first order differential equations expressing their time derivatives as linear combinations of $q$-tuple and $(q\!+\!1)$-tuple probabilities, subject to a normalization condition. In order to proceed, the set of equations must be closed, that is the $(q\!+\!1)$-tuple probabilities must be written in terms of $q$-tuple probabilities. The ‘art’ is to use closures that capture key physical features of the system and are still manageable by symbolic or numerical-symbolic computation. The results of a particular closure, or approximation, may then be checked against rigorous results and/or stochastic simulations. For most closures the $(q\!+\!1)$-tuple probabilities are rational functions of the $q$-tuple probabilities, appropriately normalized, and thus the constrained set of first order differential equations may be replaced by an unconstrained set where the time derivatives of independent $q$-tuple probabilities are expressed as rational functions of these $q$-tuple probabilities. Although the resulting sets of equations are easily integrable by classical numerical methods and admit polynomial systems as steady state equations, their analysis remains cumbersome even at low order $q$. The simplest closure is the mean field approximation (<span style="font-variant:small-caps;">MFA</span>), where the pairs ($2$-tuples) are assumed to be formed by uncorrelated singletons ($1$-tuples): $$\label{ESWN/UCS/approximation} \sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}}\approx \eswnDegree\: \prob{S}\prob{I} .$$ For the demographic <span style="font-variant:small-caps;">SIR</span> model the endemic equilibrium (steady state) is computed easily. 
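For concreteness, the closed mean-field system can be integrated numerically. The sketch below (illustrative parameter values, coordination number $k=4$, recovery rate normalized to one) relaxes onto the endemic steady state obtained by setting the right-hand sides to zero, namely $S^* = (\mu+1)/(k\lambda)$ and $I^* = \mu(1-S^*)/(\mu+1)$ in the notation $\lambda$, $\mu$ for the infection and demographic rates:

```python
import numpy as np

def mfa_rhs(y, lam, mu, k):
    """Mean-field closure of the demographic SIR equations:
    sum_j P(S I_j) ~ k P(S) P(I), with recovery rate set to 1."""
    S, I, R = y
    dS = mu * (I + R) - lam * k * S * I
    dI = lam * k * S * I - (mu + 1.0) * I
    dR = I - mu * R
    return np.array([dS, dI, dR])

def integrate(lam, mu, k=4, t_max=500.0, dt=0.01):
    """Classical Runge-Kutta integration from a mostly susceptible state."""
    y = np.array([0.99, 0.01, 0.0])
    for _ in range(int(t_max / dt)):
        k1 = mfa_rhs(y, lam, mu, k)
        k2 = mfa_rhs(y + 0.5 * dt * k1, lam, mu, k)
        k3 = mfa_rhs(y + 0.5 * dt * k2, lam, mu, k)
        k4 = mfa_rhs(y + dt * k3, lam, mu, k)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

lam, mu, k = 1.0, 0.1, 4
S, I, R = integrate(lam, mu, k)
S_star = (mu + 1.0) / (lam * k)              # endemic steady state
I_star = mu * (1.0 - S_star) / (mu + 1.0)
assert abs(S - S_star) < 1e-6 and abs(I - I_star) < 1e-6
```

The trajectory spirals into the endemic fixed point, consistent with the damped oscillations expected for these equations above threshold.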
The mean-field endemic infective probabilities are plotted in Figure \[fig/X/endemic\] as a function of the infection rate, at two values of $\eswnDemographicRate$. For any value of $\eswnDemographicRate$, the <span style="font-variant:small-caps;">MFA</span> predicts two different steady states: at infection rates $\eswnInfectionRate$ smaller than the critical infection rate $\eswnCriticalInfectionRate$ there is disease extinction, while at infection rates $\eswnInfectionRate$ greater than the critical infection rate $\eswnCriticalInfectionRate$ there is disease persistence, *i.e.* infected (and recovered) individuals coexist with susceptibles. The two regimes are separated by the mean-field endemic threshold, which is plotted in Figure \[fig/UPA/phasediagram\] (dashed line). ![ Phase diagram for the <span style="font-variant:small-caps;">UPA</span>: the no-coexistence phase and the coexistence phase are separated by the critical curve from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed line), and the <span style="font-variant:small-caps;">UPA</span> (thick solid line). Within the coexistence phase, at very low demographic rates $\eswnDemographicRate$, the <span style="font-variant:small-caps;">UPA</span> predicts an oscillatory phase as shown in the inset. []{data-label="fig/UPA/phasediagram"}](pamds-PhaseDiagram-UPA){width="1.0\linewidth"} We anticipate that the results of the <span style="font-variant:small-caps;">MFA</span> will be accurate when the demographic process dominates over the infectious one, since in this regime pairs are continually broken and the behaviour of each individual is essentially independent of that of the others. The infection process, governed by *Susceptible*-*Infective* contact pairs, dominates in the opposite regime, which is the relevant one in the epidemiological context.
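Setting the right-hand sides of the mean-field equations to zero gives the endemic susceptible probability $\prob{S}^{*}=(\eswnDemographicRate+\eswnRecoveryRate)/(\eswnInfectionRate\,\eswnDegree)$, so the endemic state is admissible precisely when the infection rate exceeds $(\eswnDemographicRate+\eswnRecoveryRate)/\eswnDegree$. A one-line sketch (ours; `e`, `g` and `k` denote the demographic rate, recovery rate and coordination number) makes the finite $\eswnDemographicRate\to 0$ limit of the mean-field critical curve explicit:

```python
def mfa_critical_rate(e, g, k):
    # Mean-field threshold: the endemic fixed point has S* = (e + g)/(l*k),
    # which is admissible (S* < 1) exactly when l > (e + g)/k.
    return (e + g) / k

# The MFA critical curve stays finite as the demographic rate e -> 0:
curve = [mfa_critical_rate(e, 1.0, 4) for e in (0.4, 0.2, 0.1, 0.0)]
```

This is the qualitative feature that one-dimensional-like behaviour must destroy: in the simulations the critical infection rate diverges as the demographic rate vanishes.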
The appropriate mean field theory is then the uncorrelated pair approximation (<span style="font-variant:small-caps;">UPA</span>). The <span style="font-variant:small-caps;">UPA</span> is for pairs what the <span style="font-variant:small-caps;">MFA</span> is for singletons. In the <span style="font-variant:small-caps;">UPA</span> triplets ($3$-tuples) are assumed to be formed by uncorrelated pairs: $$\label{ESWN/UCP/approximation} \sum_{\eswnNeighbourIndex}{\prob{{AS}{I_{\eswnNeighbourIndex}}}}\approx (\eswnDegree-1)\: \frac{\prob{SA}\prob{SI}}{\prob{S}} .$$ The <span style="font-variant:small-caps;">UPA</span> is expected to outperform the <span style="font-variant:small-caps;">MFA</span> but, in general, its solution is not known in closed form. For the demographic <span style="font-variant:small-caps;">SIR</span> model the calculation of the phase diagram and the stability analysis are still tractable by symbolic computation. For lattices with coordination number , the phase diagram is plotted in Figure \[fig/UPA/phasediagram\]. It is clear that the <span style="font-variant:small-caps;">UPA</span> is quantitatively superior to the <span style="font-variant:small-caps;">MFA</span> when compared with the results of simulations (open circles). Both the <span style="font-variant:small-caps;">MF</span> and the <span style="font-variant:small-caps;">UP</span> approximations of the demographic <span style="font-variant:small-caps;">SIR</span> model predict a finite critical infection rate at $\eswnDemographicRate=0$, while the simulations indicate that $\eswnCriticalInfectionRate$ diverges as $\eswnDemographicRate$ tends to $0$. However, the <span style="font-variant:small-caps;">SIRS</span> and the demographic <span style="font-variant:small-caps;">SIR</span> models are different at low (but finite) demographic rates $\eswnDemographicRate$.
In the demographic <span style="font-variant:small-caps;">SIR</span> model the mechanism for the renewal of susceptibles is completely random, in contrast to the mechanism of the <span style="font-variant:small-caps;">SIRS</span> model. In our model susceptibles are born anywhere on the lattice, while in the <span style="font-variant:small-caps;">SIRS</span> model only previously infected sites lose immunity. We note that the randomizing effect of the demographic <span style="font-variant:small-caps;">SIR</span> mechanism for the renewal of susceptibles is reminiscent of the randomizing effect of shortcuts in small-world networks of the Watts and Strogatz type [@WattsStrogatz1998; @Hastings2003], where correlations are destroyed and an effective mixing of the population is achieved, with (drastic) consequences on the phase diagram. Finally, it is worth noticing that the <span style="font-variant:small-caps;">UPA</span> also predicts the existence of an oscillatory phase within the survival or coexistence phase (*i.e.*, to the right of $\eswnCriticalInfectionRate(0)$), for small values of $\eswnDemographicRate$ (Figure \[fig/UPA/phasediagram\]). The same is true for the <span style="font-variant:small-caps;">UPA</span> of the process on the square lattice. This behaviour will be difficult to identify in stochastic simulations, since it may be blurred by large fluctuations and stochastic extinctions. ![ Isoparametric phase diagrams for the correlated model: the no-coexistence and coexistence phases are separated by the critical curve from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed line), the <span style="font-variant:small-caps;">UPA</span> (dashed dotted line) and the correlated model for different $\eswnClosedFormParameter$ (solid lines).
For $\eswnRemarkableClosedFormParameter\!\approx\!{0.3807}$ (bold solid line) the critical infection rate $\eswnCriticalInfectionRate$ tends asymptotically to infinity when the demographic rate $\eswnDemographicRate$ vanishes. Closed form parameters $\eswnClosedFormParameter$ from left to right: $\tfrac{1}{4}$, $\eswnClosedFormParameter^{*}$, $\tfrac{1}{2}$, $\tfrac{5}{8}$, $\tfrac{3}{4}$. []{data-label="fig/CLM/phasediagram/isoparametric"}](pamds-PhaseDiagram-CPA){width="1.0\linewidth"} Correlated pair approximations {#sec/CPA} ============================== In order to construct more realistic pair approximations, we have investigated closure procedures inspired by the geometrical structure of the lattice. Within this perspective, and as far as social triplets are concerned, the ring of degree and the triangular lattice are propitious networks since their nearest-neighbour triplets split into two distinct classes: ‘chain-like’ (open) and ‘loop-like’ (closed) triplets. A naive but natural idea is to take into account the two classes of triplets and to use the probabilities $\eswnClosedFormParameter$ and $1-\eswnClosedFormParameter$ of finding, respectively, a ‘loop-like’ triplet and a ‘chain-like’ triplet, treating $\eswnClosedFormParameter$ as a parameter to be fitted to simulation results. Thus triplets are assumed to be formed either of uncorrelated (chained) pairs or of correlated (looped) pairs [@VanBaalen2000]: $$\label{ESWN/CPL/approximation} \begin{split} &\qquad \sum_{\eswnNeighbourIndex}{\prob{{AS}{I_{\eswnNeighbourIndex}}}}\approx \\ & \begin{cases} (\eswnDegree-1)\, \Bigl[ \bigl(1-\eswnClosedFormParameter\bigr)\, \tfrac{\prob{SA}\prob{SI}}{\prob{S}} + \, \eswnClosedFormParameter\: \tfrac{\prob{AI}\prob{SA}\prob{SI}}{\prob{A}\prob{S}\prob{I}} \Bigr] &\\ \qquad\text{if ${A}\in\{S,R\}$} , &\\ (\eswnDegree-1)\, \prob{SI} - \sum_{\eswnNeighbourIndex}{ \bigl[ \prob{{SS}{I_{\eswnNeighbourIndex}}} \!+\! \prob{{RS}{I_{\eswnNeighbourIndex}}} \bigr] } &\\ \qquad\text{if ${A}={I}$} .
& \end{cases} \end{split}$$ The demographic <span style="font-variant:small-caps;">SIR</span> version of the <span style="font-variant:small-caps;">CPA</span> is amenable to (admittedly cumbersome) numerical-symbolic computation, although some interesting results may be obtained by purely symbolic computation. The phase diagrams are shown in Figure \[fig/CLM/phasediagram/isoparametric\]. We find that, as $\eswnClosedFormParameter$ increases from $0$ to $\eswnRemarkableClosedFormParameter\!\approx\!{0.3807}$[^2], keeping fixed, the <span style="font-variant:small-caps;">CPA</span> phase diagrams interpolate between the <span style="font-variant:small-caps;">UPA</span> behaviour and typical one-dimensional phase diagrams with $\eswnCriticalInfectionRate(0)=\infty$. At $\eswnRemarkableClosedFormParameter$ the critical infection rate $\eswnCriticalInfectionRate$ tends asymptotically to infinity as the demographic rate $\eswnDemographicRate$ vanishes. Inspection of Figure \[fig/CLM/phasediagram/isoparametric\] also shows that the closed form parameter $\eswnClosedFormParameter$ cannot be constant if a quantitative description of the global phase diagram is required. If we allow $\eswnClosedFormParameter$ to depend on $\eswnDemographicRate$, reasonable descriptions of the endemic equilibria (Figure \[fig/X/endemic\]) and of the global phase diagram (Figure \[fig/CLM/phasediagram/isoparametric\]) are obtained. For the <span style="font-variant:small-caps;">SIRS</span> model on the square lattice, a <span style="font-variant:small-caps;">CPA</span> obtained by fitting $\eswnClosedFormParameter$ to $\eswnCriticalInfectionRate(0)$ will improve the results of the <span style="font-variant:small-caps;">UPA</span> used in [@JooLebowitz2004] to describe the behaviour of the system at low values of $\eswnDemographicRate$.
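For concreteness, the two closures can be placed side by side in code (a sketch of ours; `p_SA`, `p_SI`, `p_AI`, `p_S`, `p_A`, `p_I` denote the pair and singleton probabilities of the text, `k` the coordination number and `phi` the closed form parameter $\eswnClosedFormParameter$). The <span style="font-variant:small-caps;">CPA</span> expression for $A\in\{S,R\}$ reduces to the <span style="font-variant:small-caps;">UPA</span> one when $\eswnClosedFormParameter=0$:

```python
def upa_triplet(p_SA, p_SI, p_S, k):
    # UPA closure: [ASI] ~ (k - 1) [SA][SI]/[S]
    return (k - 1) * p_SA * p_SI / p_S

def cpa_triplet(p_SA, p_SI, p_AI, p_S, p_A, p_I, k, phi):
    # CPA closure for A in {S, R}: a (1 - phi)/phi mixture of the
    # 'chain-like' (UPA) term and the 'loop-like' (correlated) term.
    chain = p_SA * p_SI / p_S
    loop = p_AI * p_SA * p_SI / (p_A * p_S * p_I)
    return (k - 1) * ((1.0 - phi) * chain + phi * loop)
```

At `phi = 0` every triplet is treated as a chain (the <span style="font-variant:small-caps;">UPA</span>), at `phi = 1` every triplet as a loop; the fitted values of the text lie in between.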
Discussion {#sec/discussion} ========== We have proposed a simple <span style="font-variant:small-caps;">CPA</span> that was shown to provide a reasonable approximation to the behaviour of stochastic models that are relevant in epidemiology; with a suitable choice of the parameters, the agreement with simulation data is far better than that of the <span style="font-variant:small-caps;">MFA</span> and the <span style="font-variant:small-caps;">UPA</span>. The resulting equations of evolution may be used to approximate phase diagrams, as well as steady state and dynamical behaviours of the associated stochastic models. The <span style="font-variant:small-caps;">CPA</span> takes into account some of the effects of the local lattice structure and yields a clear alternative to heavy stochastic simulations. One direction of future work is the development of <span style="font-variant:small-caps;">CPA</span>s, along the lines of the present work, that account for the local (lattice-like) structure of a class of complex networks, such as the Watts and Strogatz small-world networks, which have been shown to be relevant in epidemiological contexts [@Verdasca2005]. Financial support from the Foundation of the University of Lisbon, under contract BPD-CI/01/04, and from the Portuguese Foundation for Science and Technology (FCT), under contracts POCTI/ESP/44511/2002 and POCTI/ISFL/2/618, is gratefully acknowledged. [^1]: [^2]: the real solution of the cubic equation $27\eswnClosedFormParameter^{3}-18\eswnClosedFormParameter^{2}+87\eswnClosedFormParameter-32=0$.
--- abstract: 'Given a ${\mathrm{C}^*}$-correspondence $X$, we give necessary and sufficient conditions for the tensor algebra ${{\mathcal{T}}}_X^+$ to be hyperrigid. In the case where $X$ is coming from a topological graph we obtain a complete characterization.' address: - | Department of Mathematics\ East Carolina University\ Greenville, NC 27858\ USA - | Department of Mathematics and Statistics\ MacEwan University\ Edmonton, AB\ Canada author: - 'Elias G. Katsoulis' - Christopher Ramsey title: 'The hyperrigidity of tensor algebras of ${\mathrm{C}^*}$-correspondences' --- [^1] [^2] Introduction ============ A not necessarily unital operator algebra ${{\mathcal{A}}}$ is said to be *hyperrigid* if given any non-degenerate $*$-homomorphism $$\tau \colon {\mathrm{C}^*_{\text{env}}}({{\mathcal{A}}}) \longrightarrow B({{\mathcal{H}}})$$ then $\tau$ is the only completely positive, completely contractive extension of the restricted map $\tau_{\mid {{\mathcal{A}}}}$. Arveson coined the term hyperrigid in [@Arv] but he was not the only one considering properties similar to this at the time, e.g. [@Duncan]. There are many examples of hyperrigid operator algebras, such as those which are Dirichlet, but the situation was not very clear in the case of tensor algebras of C$^*$-correspondences. It was known that the tensor algebra of a row-finite graph is hyperrigid [@Duncan], [@Kak] and Dor-On and Salomon [@DorSal] showed that row-finiteness completely characterizes hyperrigidity for such graph correspondences. These approaches, while successful, did not lend themselves to a more general characterization. The authors, in a previous work [@KatRamHN], developed a sufficient condition for hyperrigidity in tensor algebras. In particular, if Katsura’s ideal acts non-degenerately on the left then the tensor algebra is hyperrigid.
The motivation was to provide a large class of hyperrigid C$^*$-correspondence examples as crossed products of operator algebras behave in a very nice manner when the operator algebra is hyperrigid. This theory was in turn leveraged to provide a positive confirmation to the Hao-Ng isomorphism problem in the case of graph correspondences and arbitrary groups. For further reading on the subject please see [@KatRamMem; @KatRamCP2; @KatRamHN]. In this paper, we provide a necessary condition for the hyperrigidity of a tensor algebra, that a C$^*$-correspondence cannot be $\sigma$-degenerate, and show that this completely characterizes the situation where the C$^*$-correspondence is coming from a topological graph, which generalizes both the graph correspondence case and the semicrossed product arising from a multivariable dynamical system. Regarding hyperrigidity ----------------------- The reader familiar with the literature recognizes that in our definition of hyperrigidity, we are essentially asking that the restriction on ${{\mathcal{A}}}$ of any non-degenerate representation of ${\mathrm{C}^*_{\text{env}}}({{\mathcal{A}}})$ possesses the *unique extension property* (abbr. UEP). According to [@DorSal Proposition 2.4] a representation $\rho : {{\mathcal{A}}}\rightarrow B({{\mathcal{H}}})$, degenerate or not, has the UEP if and only if $\rho$ is a maximal representation of ${{\mathcal{A}}}$, i.e., whenever $\pi$ is a representation of ${{\mathcal{A}}}$ dilating $\rho$, then $\pi = \rho \oplus \pi'$ for some representation $\pi'$. Our definition of hyperrigidity is in accordance with Arveson’s nomenclature [@Arv], our earlier work [@Kat; @KatRamHN] and the works of Dor-On and Salomon [@DorSal] and Salomon [@Sal], who systematized quite nicely the non-unital theory. 
An alternative definition of hyperrigidity for ${{\mathcal{A}}}$ may ask that *any* representation of ${\mathrm{C}^*_{\text{env}}}({{\mathcal{A}}})$, not just the non-degenerate ones, possesses the UEP when restricted on ${{\mathcal{A}}}$. It turns out that for operator algebras with a positive contractive approximate unit[^3], such a definition would be equivalent to ours [@Sal Proposition 3.6 and Theorem 3.9] . However when one moves beyond operator algebras with an approximate unit, there are examples to show that the two definitions differ. One such example is the non-unital operator algebra ${{\mathcal{A}}}_V$ generated by the unilateral forward shift $V$. It is easy to see that ${{\mathcal{A}}}_V$ is hyperrigid according to our definition and yet the zero map, as a representation on ${{\mathcal{H}}}={{\mathbb{C}}}$, does not have the UEP. (See for instance [@Sal Example 3.4].) Main results ============ A ${\mathrm{C}^*}$-correspondence $(X,{{\mathcal{C}}},{\varphi}_X)$ (often just $(X,{{\mathcal{C}}})$) consists of a ${\mathrm{C}^*}$-algebra ${{\mathcal{C}}}$, a Hilbert ${{\mathcal{C}}}$-module $(X, {\left\langle\phantom{,},\phantom{,}\right\rangle})$ and a (non-degenerate) $*$-homomorphism ${\varphi}_X\colon {{\mathcal{C}}}\rightarrow {{\mathcal{L}}}(X)$ into the C$^*$-algebra of adjointable operators on $X$. An isometric (Toeplitz) representation $(\rho,t, {{\mathcal{H}}})$ of a ${\mathrm{C}^*}$-correspondence $(X,{{\mathcal{C}}})$ consists of a non-degenerate $*$-homomorphism $\rho\colon {{\mathcal{C}}}\rightarrow B({{\mathcal{H}}})$ and a linear map $t\colon X \rightarrow B({{\mathcal{H}}})$, such that $$\rho(c)t(x)=t({\varphi}_X(c)(x)), \ \ \textrm{and}$$ $$t(x)^*t(x')=\rho({\left\langlex,x'\right\rangle}),$$ for all $c\in {{\mathcal{C}}}$ and $x, x'\in X$. 
These relations imply that the ${\mathrm{C}^*}$-algebra generated by this isometric representation equals the closed linear span of $$t(x_1)\cdots t(x_n)t(y_1)^*\cdots t(y_m)^*, \quad x_i,y_j\in X.$$ Moreover, there exists a $*$-homomorphism $\psi_t:{{\mathcal{K}}}(X)\rightarrow B({{\mathcal{H}}})$, such that $$\psi_t(\theta_{x,y})= t(x)t(y)^*,$$ where ${{\mathcal{K}}}(X) \subset {{\mathcal{L}}}(X)$ is the subalgebra generated by the operators $\theta_{x,y}(z) = x \langle y,z\rangle, \ x,y,z\in X$, which are called by analogy the compact operators. The Cuntz-Pimsner-Toeplitz ${\mathrm{C}^*}$-algebra ${{\mathcal{T}}}_X$ is defined as the ${\mathrm{C}^*}$-algebra generated by the image of $(\rho_{\infty} , t_{\infty})$, the universal isometric representation. This is universal in the sense that for any other isometric representation there is a $*$-homomorphism of ${{\mathcal{T}}}_X$ onto the ${\mathrm{C}^*}$-algebra generated by this representation in the most natural way. The *tensor algebra* ${{\mathcal{T}}}_{X}^+$ of a ${\mathrm{C}^*}$-correspondence $(X,{{\mathcal{C}}})$ is the norm-closed subalgebra of ${{\mathcal{T}}}_X$ generated by $\rho_{\infty}({{\mathcal{C}}})$ and $t_{\infty}(X)$. See [@MS] for more on these constructions. Consider Katsura’s ideal $${{\mathcal{J}}}_X\equiv \ker{\varphi}_X^{\perp}\cap {\varphi}_X^{-1}({{\mathcal{K}}}(X)).$$ An isometric representation $(\rho, t)$ of $(X, {{\mathcal{C}}},{\varphi}_X)$ is said to be covariant (or Cuntz-Pimsner) if and only if $$\psi_t ( {\varphi}_X(c)) = \rho (c),$$ for all $c \in {{\mathcal{J}}}_X$. The Cuntz-Pimsner algebra ${{\mathcal{O}}}_X$ is the universal ${\mathrm{C}^*}$-algebra for all isometric covariant representations of $(X, {{\mathcal{C}}})$, see [@KatsuraJFA] for further details.
Furthermore, the first author and Kribs [@KatsoulisKribsJFA Lemma 3.5] showed that ${{\mathcal{O}}}_X$ contains a completely isometric copy of ${{\mathcal{T}}}_X^+$ and ${\mathrm{C}^*_{\text{env}}}({{\mathcal{T}}}_X^+)\simeq {{\mathcal{O}}}_X$. We turn now to the hyperrigidity of tensor algebras. In [@KatRamHN] a sufficient condition for hyperrigidity was developed, Katsura’s ideal acting non-degenerately on the left of $X$. To be clear, non-degeneracy here means that $\overline{{\varphi}_X({{\mathcal{J}}}_X)X} = X$ which by Cohen’s factorization theorem implies that we actually have ${\varphi}_X({{\mathcal{J}}}_X)X = X$. \[thm:hyperrigid\] Let $(X, {{\mathcal{C}}})$ be a ${\mathrm{C}^*}$-correspondence with $X$ countably generated as a right Hilbert ${{\mathcal{C}}}$-module. If ${\varphi}_X({{\mathcal{J}}}_X)$ acts non-degenerately on $X$, then ${{\mathcal{T}}}^+_X$ is a hyperrigid operator algebra. The proof shows that if $\tau'\colon {{\mathcal{O}}}_X \longrightarrow B({{\mathcal{H}}})$ is a completely contractive and completely positive map that agrees with a $*$-homomorphism of ${{\mathcal{O}}}_X$ on ${{\mathcal{T}}}^+_{X}$ then the multiplicative domain of $\tau'$ must be everything. This is accomplished through the multiplicative domain arguments of [@BRO Proposition 1.5.7] and the fact that by $X$ being countably generated, Kasparov’s Stabilization Theorem implies the existence of a sequence $\{x_n \}_{n=1}^{\infty}$ in $X$ so that $ \sum_{n=1}^{k} \theta_{x_n, x_n}$, $k=1, 2, \dots$, is an approximate unit for ${{\mathcal{K}}}(X)$. After quite a lot of inequality calculations one arrives at the fact that all of ${{\mathcal{T}}}_X^+$ is in the multiplicative domain and thus so is ${{\mathcal{O}}}_X$. A ${\mathrm{C}^*}$-correspondence $(X, {{\mathcal{C}}})$ is called *regular* if and only if ${{\mathcal{C}}}$ acts faithfully on $X$ by compact operators, i.e., ${{\mathcal{J}}}_X= {{\mathcal{C}}}$. 
We thus obtain the following, which also appeared in [@KatRamHN]. \[regular hyper\] The tensor algebra of a regular, countably generated ${\mathrm{C}^*}$-correspondence is necessarily hyperrigid. We seek a converse to Theorem \[thm:hyperrigid\]. Let $(X, {{\mathcal{C}}})$ be a ${\mathrm{C}^*}$-correspondence and let ${{\mathcal{J}}}_X$ be Katsura’s ideal. We say that ${\varphi}_X({{\mathcal{J}}}_X)$ acts *$\sigma$-degenerately* on $X$ if there exists a representation $\sigma \colon {{\mathcal{C}}}\rightarrow B({{\mathcal{H}}})$ so that $${\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}}\neq X\otimes_{\sigma} {{\mathcal{H}}}.$$ In particular, if there exists $n \in {{\mathbb{N}}}$ so that $$({\varphi}_X({{\mathcal{J}}}_X)\otimes {{\operatorname{id}}})X^{\otimes n }\otimes_{\sigma}{{\mathcal{H}}}\neq X^{\otimes n}\otimes_{\sigma} {{\mathcal{H}}},$$ then by considering the Hilbert space ${{\mathcal{K}}}:=X^{\otimes n-1 }\otimes_{\sigma}{{\mathcal{H}}}$, we see that $${\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{K}}}\neq X\otimes_{\sigma} {{\mathcal{K}}},$$ and so ${\varphi}_X({{\mathcal{J}}}_X)$ acts $\sigma$-degenerately on $X$. The following gives a quick example of a $\sigma$-degenerate action. Note that $\sigma$-degeneracy is possibly stronger than the mere failure of non-degeneracy. Let $(X, {{\mathcal{C}}})$ be a ${\mathrm{C}^*}$-correspondence. If $({\varphi}_X({{\mathcal{J}}}_X)X)^{\perp}\neq \{0 \}$, then ${\varphi}_X({{\mathcal{J}}}_X)$ acts $\sigma$-degenerately on $X$. Let $0 \neq f \in ({\varphi}_X({{\mathcal{J}}}_X)X)^{\perp}$. Let $\sigma : {{\mathcal{C}}}\rightarrow B({{\mathcal{H}}})$ be a $*$-representation and $h \in {{\mathcal{H}}}$ so that $\sigma \big( \langle f , f \rangle^{1/2}\big)h \neq 0$.
Then, $$\langle f\otimes_{\sigma}h , f\otimes_{\sigma}h \rangle= \langle h , \sigma \big( \langle f , f \rangle\big)h \rangle = \| \sigma \big( \langle f , f \rangle^{1/2}\big)h\|^2\neq0.$$ A similar calculation shows that $$0 \neq f\otimes_{\sigma}h \in ({\varphi}_X({{\mathcal{J}}}_X)X \otimes_{\sigma}{{\mathcal{H}}})^{\perp}$$ and we are done. We need the following lemma. \[l;use1\] Let $(X, {{\mathcal{C}}})$ be a ${\mathrm{C}^*}$-correspondence and $(\rho, t)$ an isometric representation of $(X, {{\mathcal{C}}})$ on ${{\mathcal{H}}}$. 1. If ${{\mathcal{M}}}\subseteq {{\mathcal{H}}}$ is an invariant subspace for $(\rho\rtimes t)({{\mathcal{T}}}_X^+)$, then the restriction $(\rho_{\mid_{{{\mathcal{M}}}}}, t_{\mid_{{{\mathcal{M}}}}})$ of $(\rho, t)$ on ${{\mathcal{M}}}$ is an isometric representation. 2. If $\rho(c)h = \psi_t({\varphi}_X(c))h,$ for all $c \in {{\mathcal{J}}}_X$ and $h \in [t(X){{\mathcal{H}}}]^{\perp}$, then $(\rho, t)$ is a Cuntz-Pimsner representation. \(i) If $p$ is the orthogonal projection on ${{\mathcal{M}}}$, then $p$ commutes with $\rho({{\mathcal{C}}})$ and so $\rho_{\mid_{{{\mathcal{M}}}}}(\cdot)= p\rho (\cdot) p$ is a $*$-representation of ${{\mathcal{C}}}$. Furthermore, for $x, y \in X$, we have $$\begin{aligned} t_{\mid_{{{\mathcal{M}}}}}(x)^*t_{\mid_{{{\mathcal{M}}}}}(y) &= pt(x)^*pt(y)p \\ &=pt(x)^*t(y)p \\ &= p\rho(\langle x, y\rangle ) p= \rho_{\mid_{{{\mathcal{M}}}}}(\langle x, y\rangle) \end{aligned}$$ and the conclusion follows. \(ii) It is easy to verify on rank-one operators, and hence by linearity and continuity on all compact operators $K \in {{\mathcal{K}}}(X)$, that $$t(Kx)=\psi_t(K)t(x), \quad x \in X.$$ Now if $c \in {{\mathcal{J}}}_X$, then for any $x \in X$ and $ h \in {{\mathcal{H}}}$ we have $$\rho(c)t(x)h= t({\varphi}_X(c)x)h =\psi_t({\varphi}_X(c))t(x)h.$$ By assumption $ \rho(c)h = \psi_t({\varphi}_X(c))h$, for any $h \in [t(X){{\mathcal{H}}}]^{\perp}$ and the conclusion follows.
\[thm;converse\] Let $(X, {{\mathcal{C}}})$ be a ${\mathrm{C}^*}$-correspondence. If Katsura’s ideal ${{\mathcal{J}}}_X$ acts $\sigma$-degenerately on $X$ then the tensor algebra ${{\mathcal{T}}}^+_X$ is not hyperrigid. Let $\sigma \colon {{\mathcal{C}}}\rightarrow B({{\mathcal{H}}})$ so that $${\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}}\neq X\otimes_{\sigma} {{\mathcal{H}}}$$ and let $ {{\mathcal{M}}}_0:= ({\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}})^{\perp}$. We claim that $$\label{eq;use1} ({\varphi}_X({{\mathcal{J}}}_X)\otimes I){{\mathcal{M}}}_0=\{ 0\}.$$ Indeed for any $f \in {{\mathcal{M}}}_0$ and $j \in {{\mathcal{J}}}_X$ we have $$\begin{aligned} \big \langle ({\varphi}_X(j)\otimes I)f \, , \, ({\varphi}_X(j)\otimes I)f \big \rangle =\langle f, ({\varphi}_X(j^*j)\otimes I) f\rangle = 0\end{aligned}$$ since $f \in ({\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}})^{\perp}$. This proves the claim. We also claim that $$\label{eq;use2} ({\varphi}_X({{\mathcal{C}}})\otimes I){{\mathcal{M}}}_0 = {{\mathcal{M}}}_0.$$ Indeed this follows from the fact that $$({\varphi}_X({{\mathcal{C}}})\otimes I)({\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}}) = {\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}},$$ which is easily verified. Using the subspace ${{\mathcal{M}}}_0$ we produce a Cuntz-Pimsner representation $(\rho, t)$ of $(X, {{\mathcal{C}}})$ as follows. Let $(\rho_{\infty}, t_{\infty})$ be the universal representation of $(X, {{\mathcal{C}}})$ on the Fock space ${{\mathcal{F}}}(X)=\oplus_{n =0}^{\infty} X^{\otimes n}$, $X^{\otimes 0}:={{\mathcal{C}}}$. 
Let $$\begin{aligned} \rho_0 &\colon {{\mathcal{C}}}\longrightarrow B({{\mathcal{F}}}(X)\otimes_{\sigma} {{\mathcal{H}}}); c \longmapsto \rho_{\infty}(c)\otimes I \\ t_0 &\colon X \longrightarrow B({{\mathcal{F}}}(X)\otimes_{\sigma} {{\mathcal{H}}}) ; x \longmapsto t_{\infty}(x)\otimes I.\end{aligned}$$ Define $$\begin{aligned} {{\mathcal{M}}}:&= 0\oplus {{\mathcal{M}}}_0 \oplus (X\otimes {{\mathcal{M}}}_0) \oplus (X^{\otimes 2}\otimes {{\mathcal{M}}}_0) \oplus\dots \\ &=(\rho_0\rtimes t_0)({{\mathcal{T}}}_X^+)(0 \oplus{{\mathcal{M}}}_0 \oplus 0 \oplus 0\oplus \dots )\subseteq {{\mathcal{F}}}(X)\otimes_{\sigma}{{\mathcal{H}}},\end{aligned}$$ with the second equality following from (\[eq;use2\]). Clearly, ${{\mathcal{M}}}$ is an invariant subspace for $(\rho_0\rtimes t_0)({{\mathcal{T}}}_X^+)$. Let $\rho:={\rho_0}_{\mid_{{{\mathcal{M}}}}}$ and $t:={t_0}_{\mid_{{{\mathcal{M}}}}}$. By Lemma \[l;use1\](i), $(\rho, t)$ is a representation of $(X, {{\mathcal{C}}})$. We claim that $(\rho, t)$ is actually Cuntz-Pimsner. Indeed by Lemma \[l;use1\](ii) it suffices to examine whether $\psi_t({\varphi}_X(j))h = \rho(j)h$, for any $h \in {{\mathcal{M}}}\ominus t(X){{\mathcal{M}}}$. Note that since $$t(X){{\mathcal{M}}}= 0 \oplus 0 \oplus (X\otimes {{\mathcal{M}}}_0 ) \oplus (X^{\otimes 2}\otimes {{\mathcal{M}}}_0) \oplus ... ,$$ we have that $${{\mathcal{M}}}\ominus t(X){{\mathcal{M}}}= 0 \oplus {{\mathcal{M}}}_0 \oplus 0 \oplus 0\oplus \dots.$$ From this it follows that for any $h \in {{\mathcal{M}}}\ominus t(X){{\mathcal{M}}}$ we have $$t_0(x)^*h \in ({{\mathcal{C}}}\otimes_{\sigma}{{\mathcal{H}}}) \oplus 0 \oplus 0\oplus ... 
,\quad x \in X$$ and so in particular for any $j \in {{\mathcal{J}}}_X$ we obtain $$\psi_t({\varphi}_X(j))h \in {t_0}_{\mid_{{{\mathcal{M}}}}}(X)({t_0}_{\mid_{{{\mathcal{M}}}}})(X)^*h = \{0\}.$$ On the other hand, $$\rho(j)h \in 0\oplus ({\varphi}_X({{\mathcal{J}}}_X)\otimes I){{\mathcal{M}}}_0\oplus 0\oplus 0\oplus\dots =\{ 0\},$$ because of (\[eq;use1\]). Hence $(\rho, t)$ is Cuntz-Pimsner. At this point by restricting on ${{\mathcal{T}}}_X^+$, we produce the representation $\rho \rtimes t \mid_{{{\mathcal{T}}}_X^+}$ of ${{\mathcal{T}}}_X^+$ coming from a $*$-representation of its ${\mathrm{C}^*}$-envelope ${{\mathcal{O}}}_X$, which admits a dilation, namely $\rho_0 \rtimes t_0 \mid_{{{\mathcal{T}}}_X^+}$. If we show now that $\rho_0 \rtimes t_0 \mid_{{{\mathcal{T}}}_X^+}$ is a non-trivial dilation of $\rho \rtimes t \mid_{{{\mathcal{T}}}_X^+}$, i.e. ${{\mathcal{M}}}$ is not reducing for $(\rho_0\rtimes t_0)({{\mathcal{T}}}_X^+)$, then $\rho \rtimes t \mid_{{{\mathcal{T}}}_X^+}$ is not a maximal representation of ${{\mathcal{T}}}^+_X$. Then [@DorSal Proposition 2.4] shows that $\rho \rtimes t \mid_{{{\mathcal{T}}}_X^+}$ does not have the UEP and so ${{\mathcal{T}}}_X^+$ is not hyperrigid, as desired. Towards this end, note that $${{\mathcal{M}}}^{\perp} = ({{\mathcal{C}}}\otimes_{\sigma}{{\mathcal{H}}})\oplus ({\varphi}_X({{\mathcal{J}}}_X)X\otimes_{\sigma}{{\mathcal{H}}} )\oplus (X\otimes {{\mathcal{M}}}_0 )^{\perp}\oplus \dots$$ and so $$\begin{aligned} t_0(X){{\mathcal{M}}}^{\perp}\supseteq 0\oplus (X {{\mathcal{C}}}\otimes_{\sigma}{{\mathcal{H}}})\oplus 0 \oplus0 \oplus \dots \nsubseteq {{\mathcal{M}}}^{\perp}.\end{aligned}$$ Therefore ${{\mathcal{M}}}^{\perp}$ is not an invariant subspace for $(\rho_0\rtimes t_0)({{\mathcal{T}}}_X^+)$ and so ${{\mathcal{M}}}$ is not a reducing subspace for $(\rho_0\rtimes t_0)({{\mathcal{T}}}_X^+)$. This completes the proof. Topological graphs ================== A broad class of ${\mathrm{C}^*}$-correspondences arises naturally from the concept of a topological graph.
For us, a topological graph $G= (G^0, G^1, r , s)$ consists of two $\sigma$-locally compact spaces $G^0$, $G^1$, a continuous map $r: G^1 \rightarrow G^0$ and a local homeomorphism $s: G^1 \rightarrow G^0$. The set $G^0$ is called the base (vertex) space and $G^1$ the edge space. When $G^0$ and $G^1$ are both equipped with the discrete topology, we have a discrete countable graph. With a given topological graph $G= (G^0, G^1, r , s)$ we associate a ${\mathrm{C}^*}$-correspondence $X_{G}$ over $C_0(G^0)$. The right and left actions of $C_0(G^0 )$ on $C_c ( G^1)$ are given by $$(fFg)(e)= f(r(e))F(e)g(s(e))$$ for $F\in C_c (G^1)$, $f, g \in C_0(G^0)$ and $e \in G^1$. The inner product is defined for $F, H \in C_c ( G^1)$ by $$\left< F \, | \, H\right>(v)= \sum_{e \in s^{-1} (v)} \overline{F(e)}H(e)$$ for $v \in G^0$. Finally, $X_{G}$ denotes the completion of $C_c ( G^1)$ with respect to the norm $$\label{norm} \|F\| = \sup_{v \in G^0} \left< F \, | \, F\right>(v) ^{1/2}.$$ When $G^0$ and $G^1$ are both equipped with the discrete topology, then the tensor algebra ${{\mathcal{T}}}_{G}^+ \equiv {{\mathcal{T}}}^+_{X_{G}}$ associated with $G$ coincides with the quiver algebra of Muhly and Solel [@MS]. See [@Raeburn] for further reading. Given a topological graph $G= (G^0, G^1, r, s)$, we can describe the ideal ${{\mathcal{J}}}_{X_G}$ as follows.
Let $$\begin{aligned} G^0_{{\operatorname{sce}}}&= \{v \in G^0\mid v \mbox{ has a neighborhood } V \mbox{ such that } r^{-1}(V) = \emptyset \} \\ G^0_{{\operatorname{fin}}}&= \{v \in G^0\mid v \mbox{ has a neighborhood } V \mbox{ such that } r^{-1}(V) \mbox{ is compact} \} \end{aligned}$$ Both sets are easily seen to be open and, in [@Katsura Proposition 1.24], Katsura shows that $$\ker{\varphi}_{X_G}= C_0 (G^0_{{\operatorname{sce}}})\,\, \mbox{ and }\,\, {\varphi}_{X_G}^{-1}({{\mathcal{K}}}(X_G)) =C_0(G^0_{{\operatorname{fin}}}).$$ From the above it is easy to see that ${{\mathcal{J}}}_{X_G}=C_0(G^0_{{\operatorname{reg}}})$, where $$G^0_{{\operatorname{reg}}}:= G^0_{{\operatorname{fin}}} \backslash \overline{G^0_{{\operatorname{sce}}}}.$$ We need the following lemma. \[l;toppled\] Let $G= (G^0, G^1, r, s)$ be a topological graph. Then $r^{-1}\big( G^0_{{\operatorname{reg}}} \big)=G^1$ if and only if $r : G^1 \rightarrow G^0$ is a proper map satisfying $r(G^1) \subseteq \big(\overline{r(G^1)}\big)^{\circ}$. Notice that $$r^{-1}(G^0_{{\operatorname{reg}}})=r^{-1}(G^0_{{\operatorname{fin}}}) \cap r^{-1}(\overline{G^0_{{\operatorname{sce}}}})^c$$ and so $r^{-1}\big( G^0_{{\operatorname{reg}}} \big)=G^1$ is equivalent to $r^{-1}(G^0_{{\operatorname{fin}}}) = G^1$ and $r^{-1}(\overline{G^0_{{\operatorname{sce}}}})=\emptyset$. First we claim that $r^{-1}(G^0_{{\operatorname{fin}}}) = G^1$ if and only if $r$ is a proper map. Indeed, assume that $r^{-1}(G^0_{{\operatorname{fin}}}) = G^1$ and let $K\subseteq r(G^1)$ be compact in the relative topology. For every $x \in K$, let $V_x$ be a compact neighborhood of $x$ such that $r^{-1}(V_x)$ is compact, so that $r^{-1}(V_x \cap K)$ is also compact. By compactness, there exist $x_1, x_2, \dots , x_n \in K$ so that $K = \cup_{i=1}^{n}(V_{x_i} \cap K)$ and so $$r^{-1} (K) = \cup_{i=1}^{n}r^{-1}(V_{x_i} \cap K)$$ is compact.
Conversely, if $r$ is proper, then every point of $G^0$ has a compact neighborhood $V$ whose preimage $r^{-1}(V)$ is compact, and so $r^{-1}(G^0_{{\operatorname{fin}}}) = G^1$. We now claim that $r^{-1}(\overline{G^0_{{\operatorname{sce}}}})=\emptyset$ if and only if $r(G^1) \subseteq \big(\overline{r(G^1)}\big)^{\circ}$. Indeed, $e \in r^{-1}(\overline{G^0_{{\operatorname{sce}}}})$ is equivalent to $r(e) \in \overline{(r(G^1)^c)^{\circ}}$, and so $r^{-1}(\overline{G^0_{{\operatorname{sce}}}})=\emptyset$ is equivalent to $$r(G^1)\subseteq \left( \overline{(r(G^1)^c)^{\circ}} \right)^c =\big(\overline{r(G^1)}\big)^{\circ},$$ as desired. If $G= (G^0, G^1, r, s)$ is a topological graph and $S \subseteq G^1 $, then $N(S)$ denotes the collection of continuous functions $F \in X_{G}$ with $F_{|S}=0$, i.e., vanishing on $S$. The following appears as Lemma 4.3(ii) in [@KatsLoc]. \[topology\] Let $G= (G^0, G^1, r, s)$ be a topological graph. If $S_1 \subseteq G^0$ and $S_2 \subseteq G^1$ are closed, then $$N(r^{-1}(S_1) \cup S_2)=\overline{{\operatorname{span}}} \{(f\circ r) F\mid f_{|S_1} = 0, F_{|S_2} = 0\}.$$ Let $G =(G^0, G^1, r, s)$ be a topological graph and let $X_G$ be the ${\mathrm{C}^*}$-correspondence associated with $G$. Then the following are equivalent:

(i) the tensor algebra ${{\mathcal{T}}}_{X_G}^+$ is hyperrigid;

(ii) ${\varphi}({{\mathcal{J}}}_{X_G})$ acts non-degenerately on $X_G$;

(iii) $r: G^1 \rightarrow G^0$ is a proper map satisfying $ r(G^1) \subseteq \big(\overline{r(G^1)}\big)^{\circ}$.

If ${\varphi}({{\mathcal{J}}}_{X_G})$ acts non-degenerately on $X_G$, then Theorem \[thm:hyperrigid\] shows that ${{\mathcal{T}}}_{X_G}^+$ is hyperrigid. Thus (ii) implies (i). For the converse, assume that ${\varphi}({{\mathcal{J}}}_{X_G})$ acts degenerately on $X_G$. If we verify that ${\varphi}({{\mathcal{J}}}_{X_G})$ acts $\sigma$-degenerately on $X_G$, then Theorem \[thm;converse\] shows that ${{\mathcal{T}}}_{X_G}^+$ is not hyperrigid, and so (i) implies (ii).
Towards this end, note that ${{\mathcal{J}}}_{X_G} = C_0({{\mathcal{U}}})$ for some proper open set ${{\mathcal{U}}}\subseteq G^0$. (Actually we know that ${{\mathcal{U}}}= G^0_{{\operatorname{reg}}}$ but this is not really needed for this part of the proof!) Hence $$\begin{aligned} \label{eq;iii} \begin{split} {\varphi}({{\mathcal{J}}}_{X_G}) X_G &=\overline{{\operatorname{span}}} \{(f\circ r)F\mid f_{\mid {{\mathcal{U}}}^c}=0 \} \\ &=N(r^{-1}({{\mathcal{U}}})^c), \end{split}\end{aligned}$$ according to Lemma \[topology\]. Since ${\varphi}({{\mathcal{J}}}_{X_G})$ acts degenerately on $X_G$, (\[eq;iii\]) shows that $r^{-1}({{\mathcal{U}}})^c \neq \emptyset$. Let $e \in r^{-1}({{\mathcal{U}}})^c$ and let $F\in C_c(G^1)\subseteq X_{G}$ with $F(e)=1$ and $F(e')=0$ for any other $e'\in G^1$ with $s(e')=s(e)$. Consider the one dimensional representation $\sigma: C_0(G^0) \rightarrow {{\mathbb{C}}}$ coming from evaluation at $s(e)$. We claim that $${\varphi}_{X_G}({{\mathcal{J}}}_{X_G})X_G \otimes_{\sigma}{{\mathbb{C}}}\neq X_G\otimes_{\sigma}{{\mathbb{C}}}.$$ Indeed, for any $H \in {\varphi}({{\mathcal{J}}}_{X_G}) X_G = N(r^{-1}({{\mathcal{U}}})^c)$ we have $$\begin{aligned} \langle F\otimes_{\sigma}1, H\otimes_{\sigma}1 \rangle &=\langle 1, \sigma(\langle F,H\rangle) 1\rangle= \langle F,H\rangle (s(e))\\ &=\sum_{s(e')=s(e)} \overline{F(e')}H(e')\\ &= \overline{F(e)}H(e) = 0.\end{aligned}$$ Furthermore, $$\langle F\otimes_{\sigma}1, F\otimes_{\sigma}1 \rangle = \langle F, F\rangle (s(e)) = |F(e)|^2 =1$$ and so $0 \neq F\otimes_{\sigma}1 \in ({\varphi}_{X_G}({{\mathcal{J}}}_{X_G})X_G \otimes_{\sigma}{{\mathbb{C}}})^{\perp}$. This establishes the claim and finishes the proof of (i) implies (ii). Finally, we need to show that (ii) is equivalent to (iii).
Notice that (\[eq;iii\]) implies that ${\varphi}({{\mathcal{J}}}_{X_G})$ acts non-degenerately on $X_G$ if and only if $$r^{-1}({{\mathcal{U}}})^c=r^{-1}(G^0_{{\operatorname{reg}}})^c=\emptyset.$$ The conclusion now follows from Lemma \[l;toppled\]. The statement of the previous Theorem takes its most pleasing form when $G^0$ is a compact space. In that case ${{\mathcal{T}}}^+_X$ is hyperrigid if and only if $G^1$ is compact and $r(G^1)\subseteq G^0$ is clopen. [*Acknowledgement.*]{} Both authors would like to thank MacEwan University for providing project funding to bring the first author out to Edmonton for a research visit. The first author received support for this project in the form of a Summer Research Award from the Thomas Harriott College of Arts and Sciences at ECU. He also expresses his gratitude to Adam Dor-On and Guy Salomon for several discussions regarding their work on hyperrigidity. The second author was partially supported by an NSERC grant. [99]{} W. Arveson, *The noncommutative Choquet boundary II: hyperrigidity*, Israel J. Math. [**184**]{} (2011), 349–385. N. Brown and N. Ozawa, *${\mathrm{C}^*}$–algebras and finite-dimensional approximations*, Graduate Studies in Mathematics **88**, American Mathematical Society, Providence, RI, 2008. xvi+509 pp. A. Dor-On and G. Salomon, *Full Cuntz-Krieger dilations via non-commutative boundaries*, J. London Math. Soc. **98** (2018), 416–438. B. Duncan, *Certain free products of graph operator algebras*, J. Math. Anal. Appl. **364** (2010), 534–543. E.T.A. Kakariadis, *The Dirichlet property for tensor algebras*, Bull. Lond. Math. Soc. **45** (2013), 1119–1130. E. Katsoulis, *Local maps and the representation theory of operator algebras*, Trans. Amer. Math. Soc. **368** (2016), 5377–5397. E. Katsoulis, *C$^*$-envelopes and the Hao-Ng isomorphism for discrete groups*, Inter. Math. Res. Not. (2017), 5751–5768. E. Katsoulis and D.
Kribs, *Tensor algebras of $C^*$-correspondences and their ${\mathrm{C}^*}$-envelopes*, J. Funct. Anal. **234** (2006), 226–233. E. Katsoulis and C. Ramsey, *Crossed products of operator algebras*, Mem. Amer. Math. Soc [**258**]{} (2019), no. 1240, vii+85 pp. E. Katsoulis and C. Ramsey, *Crossed products of operator algebras: applications of Takai duality*, J. Funct. Anal. [**275**]{} (2018), 1173–1207. E. Katsoulis and C. Ramsey, *The non-selfadjoint approach to the Hao-Ng isomorphism problem*, preprint arXiv:1807.11425. T. Katsura, *A class of ${\mathrm{C}^*}$-algebras generalizing both graph algebras and homeomorphism ${\mathrm{C}^*}$-algebras. I. Fundamental results*, Trans. Amer. Math. Soc. **356** (2004), 4287–4322. T. Katsura, *On ${\mathrm{C}^*}$-algebras associated with ${\mathrm{C}^*}$-correspondences*, J. Funct. Anal. **217** (2004), 366–401. P. Muhly and B. Solel, *Tensor algebras over $C^*$-correspondences: representations, dilations, and $C^*$-envelopes*, J. Funct. Anal. **158** (1998), 389–457. I. Raeburn, *Graph algebras*, CBMS Regional Conference Series in Mathematics **103**, American Mathematical Society, Providence, RI, 2005. G. Salomon, *Hyperrigid subsets of Cuntz-Krieger algebras and the property of rigidity at zero*, J. Operator Theory [**81**]{} (2019), 61–79. [^1]: 2010 [*Mathematics Subject Classification.*]{} 46L07, 46L08, 46L55, 47B49, 47L40 [^2]: [*Key words and phrases:*]{} ${\mathrm{C}^*}$-correspondence, tensor algebra, hyperrigid, topological graph, operator algebra [^3]: which includes all operator algebras appearing in this paper
--- abstract: 'The problem of matching primary and secondary light signals, belonging to the same event, is presented in the context of dual-phase time projection chambers. In large scale detectors the secondary light emission could be delayed by up to the order of milliseconds, which, combined with high signal rates, could make the matching of the signals challenging. A possible approach is offered in the framework of the Stable Marriage and the College Admission problem, for both of which solutions are given by the Gale-Shapley algorithm.' address: - 'ETH Zürich, Institute for Particle Physics and Astrophysics, CH-8093 Zürich, Switzerland' - 'ETH Zürich, Department of Computer Science, CH-8092 Zürich, Switzerland' author: - 'B. Radics' - 'E. Burjons' - 'A. Rubbia' bibliography: - 'Bibliography.bib' title: 'Matching problem for primary and secondary signals in dual-phase TPC detectors' --- Introduction ============ Time projection chambers (TPC) using liquid scintillator are used for three-dimensional event reconstruction [@CRubbia]. Liquid TPC detectors operate with an electric field across the liquid, and as a result of ionisation by a primary particle, liberated electrons can be drifted over distances of up to the order of meters. A variant of the liquid TPC detectors is the dual-phase (DP) TPC, which allows for additional charge amplification of the drifted electrons by employing a strong electric field across a small gas gap above the liquid phase [@ARubbia]. This realisation produces primary and secondary signals, the latter being delayed due to the finite drift time of electrons in the liquid phase. The measured delay time of the secondary signal encodes information on the drift distance, and therefore provides the position of the event.
The electron drift velocity, however, may be of the order of mm/$\mu$s, leading to a delay of the secondary light signal of up to the order of milliseconds for large scale liquid scintillator experiments, which complicates the reconstruction of the events. Because the sensitivity of the measurements scales with the active target size, the upcoming DP TPC detectors will have sizes of the order of several meters, which automatically introduces the problem of how the matching of the primary and secondary light signals can be performed for single or multiple scatter events. Experiments that may face such a scenario are typically based on large scale liquid scintillator installations, such as the DUNE far detector [@DUNE1; @DUNE2], DarkSide-20k [@DS20k], and ArDM [@ArDM]. In this work we provide a possible approach to the DP TPC event matching problem using an algorithm from Gale and Shapley [@GaleShapley], which gives a solution to the stable marriage problem. A matching, in the mathematical sense, is a set of independent edges in a graph, i.e., edges without common vertices [@GraphTheoryWest]. When applying the concept to DP TPC data, the vertices are assumed to be the events and the edges are the possible combinations of the events. The problem is to find a matching between two classes of events (primary and secondary signal events) in such a way that there are no two events from one class which could both belong to the same event from the other class. For single scatter events, in which a particle deposited energy only once in the detector, this matching scheme can be applied. For multiple scatter events, other algorithms may also be considered, as there could be additional secondary events belonging to the same primary event. Gale and Shapley also provided a solution to a similar problem, the so-called college admission problem [@GaleShapley].
In this paper, we study the application of the stable marriage and college admission algorithms both to single and multiple scatter events, using data generated by a toy Monte Carlo. \[sec:SMP\_and\_CAP\_Problems\]The stable marriage and the college admission problems ===================================================================================== \[sec:StableMarriageProblem\]The stable marriage problem -------------------------------------------------------- The problem of matching prompt and delayed secondary signals is similar to that of assigning the members of one group to another group based on the individual rankings of the group members. The context of the paper of Gale and Shapley is how to reach stable marriages (in the mathematical sense only) between a set of men and women, each having their own ranking of the opposite gender candidates. The algorithm consists of an iterative procedure which stops when a stable set of marriages is found. The men first propose to the first-ranked women on their lists, and each woman rejects all but her favourite from those who proposed to her. Each woman then keeps her selected candidate on a string, to allow the possibility of a proposal from a higher-ranking man on her list in the next iteration. The rejected men propose in the following iteration to the next candidates on their lists. The same proposal/rejection routine is repeated iteratively until all women have been proposed to. The resulting matching is stable, that is, by the end there is no pair of an unmatched man and woman who would prefer each other over their previously established candidates. This algorithm and its adaptations have been successfully applied to the assignment of medical students to universities, job assignments, and roommate selections, among others [@GraphTheoryWest].
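The proposal/rejection loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the implementation used in this work; the input format (preference lists ordered best-first, complete and of equal length on both sides) is an assumption.

```python
def stable_match(proposer_prefs, acceptor_prefs):
    """Gale-Shapley: both arguments map each member to an ordered list of
    candidates on the other side (best first); returns proposer -> acceptor."""
    # Position of each proposer in every acceptor's list (lower is better).
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next list index to try
    engaged = {}                                  # acceptor -> proposer "on a string"
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]     # best candidate not yet tried
        next_choice[p] += 1
        current = engaged.get(a)
        if current is None:
            engaged[a] = p                        # a keeps p on a string
        elif rank[a][p] < rank[a][current]:
            engaged[a] = p                        # a trades up, rejects current
            free.append(current)
        else:
            free.append(p)                        # p is rejected, tries next
    return {p: a for a, p in engaged.items()}
```

With complete preference lists the loop always terminates, and in the result no unmatched pair prefers each other over their assigned partners.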
Variations include: unweighted graphs, where there is no preference list for the candidates; assigning multiple pairings to one vertex (college admission, see below); and matching in non-bipartite graphs, where there is only one type of data (roommate selections) [@GI89; @Knuth76]. The criteria for a good solution might also vary. For example, the Hungarian algorithm [@Kuhn55; @Mun57] finds the matching with the highest likelihood even if it is not stable. These algorithms and their variations have also been successfully implemented and their performance improved. All of these, together with other approaches, appear in the books by Gusfield-Irving [@GI89] and Knuth [@Knuth76]. \[sec:CollegeAdmissionProblem\]The college admission problem ------------------------------------------------------------ The use case of matching multiple members of one group to single members of another group is covered by the solution to the college admission problem. Briefly, students apply to a certain number of colleges, each with certain quotas and rankings for the students, while each student has their own ranking of the colleges. In the first iteration of the solution, students apply to the first colleges on their ranking lists, and the colleges take a number of students according to the available quotas and put these students on a waiting list, while the rest of the students are rejected. In the next iteration, the previously rejected students apply to the second colleges on their ranking lists. The colleges consider the new applicants and compare them with those on their waiting lists, keeping only the top students from the two sets according to their ranking lists, while rejecting the rest. The iteration terminates when every student is on a waiting list or has been rejected from all colleges. At this point all students on the waiting lists are admitted to the colleges.
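The quota-based variant can be sketched analogously. Again a hypothetical sketch: `quota` caps the number of students a college keeps on its waiting list, and `college_rank` is assumed to rank every possible applicant.

```python
def college_admission(student_prefs, college_rank, quota):
    """Gale-Shapley with quotas: student_prefs maps each student to an
    ordered college list; college_rank[c][s] is student s's rank at
    college c (lower is better); returns college -> admitted students."""
    next_choice = {s: 0 for s in student_prefs}
    waiting = {c: [] for c in college_rank}        # provisional admissions
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                               # s rejected everywhere
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        waiting[c].append(s)
        waiting[c].sort(key=lambda x: college_rank[c][x])
        if len(waiting[c]) > quota[c]:
            free.append(waiting[c].pop())          # reject the worst applicant
    return waiting
```

When the loop ends, every student is either on some waiting list (and is admitted) or has exhausted their own list.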
\[sec:DelayedSignalProblem\]The delayed secondary signal matching problem with single scatter events ==================================================================================================== The context presented can be translated into the problem of matching primary and delayed secondary signals from the same event. The situation is illustrated in Fig. \[fig:graph\]. ![Illustration of matching possibilities of primary (S1) and secondary (S2) events to each other. The dashed and dotted lines indicate candidate edges from fixed S2 events to multiple S1 candidate events. The numbers on the edges indicate possible ranking orders, which are to be estimated from detection specific information.[]{data-label="fig:graph"}](graph1.pdf){height="50.00000%"} Events are recorded sequentially in time order and are assumed to be classified as primary (S1) or secondary (S2) signal type. Upon detecting a secondary signal event, a ranking list can be constructed, containing a value for each of a set of previously detected primary signal events, such that the first in the ranking list is the most compatible one to match the secondary signal to. The ordering rule in the ranking must be based on some information commonly shared by the event characteristics (such as event topology). The Gale-Shapley algorithm gives a stable solution for any plausible ranking table, but it is not necessarily the true solution (experimental conditions, such as attenuation of the drifting charge or inefficient extraction of electrons from the liquid phase, might be present in the data). The ordering rule plays a key role in finding the best matching. In addition to an experimentally validated ordering rule, the primary and delayed secondary signal candidates must also be present in the data set analysed, which might put some constraints on the size of the dataset used to search for the matching signal.
In any case, assuming the above conditions are met, an ordering rule can always be constructed (e.g. as a Likelihood function), which gives a measure of compatibility between signal events. In this way a ranking list can be made for each primary (secondary) signal event with respect to all later (earlier) opposite type events. In the following we use a Likelihood ordering and illustrate a simplified situation with toy Monte Carlo simulated primary and delayed secondary signals. \[sec:Problem\]Application to single scatter toy Monte Carlo simulation ----------------------------------------------------------------------- Here we assume a simplified scenario of a DP TPC with a 1.5 m drift length and an electron drift velocity of $v_{\mathrm{drift}} = 1.3$ mm/$\mu$s. The primary signals (S1) are generated randomly and uniformly across the 1.5 m long detector, and for each primary signal the secondary signal (S2) is generated after a delay time given by the electron drift velocity and the distance of the primary event from the liquid surface. The position of the primary event may be smeared by a Gaussian in order to mimic the detector response. In the following, a Likelihood function is constructed in order to perform the ranking of events for matching. Most experiments can recover some rough event topology information from the relative amount of light detected in various subdetectors. We assume this is also the case in this simplified scenario, which means that a rough guess can be made of the position, $x_{\mathrm{s1}}$, of the primary signal event in the liquid.
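The toy setup just described can be sketched as follows. All names, the exponential arrival-time model, and the timing resolution `SIGMA_T_MS` are illustrative assumptions (positions in cm, times in ms); the Gaussian compatibility score stands in for the ordering Likelihood.

```python
import random

DRIFT_LENGTH_CM = 150.0   # 1.5 m drift length
V_DRIFT = 130.0           # 1.3 mm/us expressed in cm/ms
SIGMA_T_MS = 0.01         # assumed timing resolution of the Gaussian

def generate_events(n_s1, rate_khz=1.0, sigma_cm=0.0, seed=1):
    """Time-ordered (t_ms, x_cm, kind, truth_id) records for n_s1 scatters."""
    random.seed(seed)
    events, t = [], 0.0
    for i in range(n_s1):
        t += random.expovariate(rate_khz)             # S1 arrival gap in ms
        depth = random.uniform(0.0, DRIFT_LENGTH_CM)  # distance to surface
        x_est = depth + random.gauss(0.0, sigma_cm)   # smeared S1 position
        events.append((t, x_est, 'S1', i))
        events.append((t + depth / V_DRIFT, depth, 'S2', i))  # delayed S2
    return sorted(events)

def log_likelihood(s1, s2):
    """Gaussian compatibility of an (S1, S2) pair: observed delay versus
    the delay predicted from the estimated S1 depth."""
    dt_obs = s2[0] - s1[0]
    dt_pred = s1[1] / V_DRIFT
    return -0.5 * ((dt_obs - dt_pred) / SIGMA_T_MS) ** 2

def ranking(s2, events):
    """Earlier S1 candidates for a given S2 event, best match first."""
    cands = [e for e in events if e[2] == 'S1' and e[0] < s2[0]]
    return sorted(cands, key=lambda s1: log_likelihood(s1, s2), reverse=True)
```

With no smearing the true S1 tops each ranking list; the ordered lists are exactly the input the Gale-Shapley procedure needs.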
Therefore, given a particular detected secondary signal event, S2, the Likelihood for each previously detected primary signal event, S1, of being the correct match can be calculated formally as $$L(S1 \mid S2) \propto g(x_{\mathrm{s1}}, t_{\mathrm{s1}} \mid x_{\mathrm{s2}}, t_{\mathrm{s2}}, v_{\mathrm{drift}}), \quad t_{\mathrm{s1}} < t_{\mathrm{s2}}. \label{eq:Likelihood}$$ In the above formulation, $g$ denotes an arbitrary measure of probability, but for the simplified case a Gaussian is assumed, whereby signal pairs with a calculated delay time $dt' = (x_{\mathrm{s2}} - x_{\mathrm{s1}})/v_{\mathrm{drift}}$ closer to the observed value of $dt = t_{\mathrm{s2}} - t_{\mathrm{s1}}$ get a higher probability. As a toy example, 5 primary and 5 secondary signal events are presented, generated with a fictitious 1 kHz rate for the primary signal. The events are shown in order of the time of their detection in Table \[ex\_tabEvents\]. In the table, $x$ indicates the position calculated for the event from the subdetector information, and for each S2 delayed secondary signal the corresponding true S1 primary event is also indicated. The generated events demonstrate that sometimes a primary signal may be detected before the secondary signal has arrived for the previous primary event.

  Event   time \[ms\]   $x$ \[cm\]   Type
  ------- ------------- ------------ --------
  1       1             137.13974    S1
  2       1.10486       143.64141    S2 (1)
  3       2             27.871302    S1
  4       3             58.513219    S1
  5       3.1447        139.29184    S2 (3)
  6       3.69512       181.66746    S2 (4)
  7       4             94.253070    S1
  8       4.63494       150.30378    S2 (7)
  9       5             6.1128181    S1
  10      5.87869       175.79793    S2 (9)

  : Example toy Monte Carlo signal events, sorted in time order.
Primary signal events (Events 1, 3, 4, 7 and 9) have been generated at each millisecond; the delayed secondary signal events (Events 2, 5, 6, 8 and 10) have been detected with the delay time given by the electron drift velocity. The type of event (S1: primary, S2: secondary) is indicated in the last column, and the true primary event number is indicated for each secondary event.[]{data-label="ex_tabEvents"} The ranking order for these events has been calculated using the Likelihood function given in Eq. \[eq:Likelihood\]. The corresponding ranking tables are presented in Table \[ex\_tabS1\] and Table \[ex\_tabS2\], respectively. The condition that the delayed secondary signal must, by construction, happen later than the primary signal is explicitly visible in the empty cells of the ranking tables. We give full ranking tables in this example; in practice, however, knowledge of the maximum delay time may put further constraints on the matching candidates.

  S1 time \[ms\]   Rank1   Rank2   Rank3   Rank4   Rank5
  ---------------- ------- ------- ------- ------- -------
  1                2       5       6       8       10
  2                5       6       8       10      -
  3                6       5       8       10      -
  4                8       10      -       -       -
  5                10      -       -       -       -

  : Ranking table for primary signal candidates, S1. For each row the columns are in order of ranking for the probable matching secondary signal events, S2, indexed by their event number. []{data-label="ex_tabS1"}

  S2 time \[ms\]   Rank1   Rank2   Rank3   Rank4   Rank5
  ---------------- ------- ------- ------- ------- -------
  1.10486          1       -       -       -       -
  3.1447           3       4       1       -       -
  3.69512          4       3       1       -       -
  4.63494          7       4       3       1       -
  5.87869          9       7       4       3       1

  : Ranking table for secondary signal candidates, S2.
For each row the columns are in order of ranking for the probable matching primary signal events, S1, indexed by event number. []{data-label="ex_tabS2"} The application of the Gale-Shapley iterative matching algorithm thus assigns the events from the two groups to each other, given the information from the ranking tables, and ultimately from the Likelihood function. The performance of the algorithm is studied in the next section. It is noted that the size of the ranking table may depend on the particular problem and on the constraints in the data. The example was given only to illustrate the framework of the algorithm. \[sec:PerformanceFullInfo\]Performance of the Stable Marriage algorithm ----------------------------------------------------------------------- We distinguish two cases in order to study the performance of the match making: events with full information and events with various amounts of position smearing. In case the Likelihood function contains full information, it should place the correctly matched events first in the ranking table. However, for certain combinations of event rates and delay times it may occur that, by chance, a pair of uncorrelated primary and secondary events gets a better position in the ranking table than the true pairs. In order to search for matching candidates for a given event, we used a time window whose size is twice the expected maximum delay time for the given detector volume. The reasoning is that, when constructing the ranking table for a fixed S2 event out of earlier S1 events, the latest S1 event in the search window (although not known a priori) has most likely produced a different S2 event at a later time than the original one, at most after another full drift time. We quantify the performance of the algorithm by reporting the fraction of mis-matched events.
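The search window described above can be expressed as a simple predicate. A hypothetical sketch: `T_MAX_MS` (the full drift time) and the `(t, x, kind, id)` tuple layout are assumptions of this illustration.

```python
T_MAX_MS = 150.0 / 130.0   # full drift time: 1.5 m at 1.3 mm/us, in ms

def s1_candidates(s2_time, events):
    """Earlier S1 events inside a window of twice the maximum delay time."""
    return [e for e in events
            if e[2] == 'S1' and s2_time - 2.0 * T_MAX_MS < e[0] < s2_time]
```

Only the S1 events returned by such a predicate need to be ranked for a given S2 event, which bounds the size of the ranking tables.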
We define the dimensionless quantity $\cal{M} = \cal{R} \times \cal{T}$, where $\cal{R}$ is the event rate and $\cal{T}$ is the delay time. Lower values of $\cal{M}$ indicate a lower chance of random mismatching. For each value of $\cal{M}$ we generated $10k$ events and ran the reconstruction algorithm with full information in the Likelihood used for ranking. Figure \[fig:Bad\_Full\_M\] shows the fraction of mis-matched events as a function of $\cal{M}$. The results show that even above an extreme value of $\cal{M} \simeq$ $1000$ (equivalent to a rate of $\cal{R} \simeq $ $10^{6}$ Hz and a delay time of $\cal{T} \simeq$ $1$ ms) the fraction of mis-matched events is still at the level of $1 \%$; it does, however, increase exponentially with the dimensionless quantity $\cal{M}$. ![Fraction of mis-matched events in the Stable Marriage algorithm as a function of the dimensionless quantity $\cal{M}$. For each point $10k$ events were generated. []{data-label="fig:Bad_Full_M"}](Bad_Full_M.pdf){height="70.00000%"} The performance evaluation is repeated with a certain amount of position smearing applied to the primary signal position, but fixing $\cal{M}$ to $\cal{M} \simeq$ $ 1$ in order to minimize the cases of random mismatching. The match making reconstruction algorithm is applied and again the fraction of mis-matched events is reported. Figure \[fig:Bad\_Part\_S\] shows the performance of the algorithm for this scenario. For a $\sim 50 \%$ smearing on the primary event position the fraction of mis-matches is found to be at the level of $10\%$; this, however, represents a very conservative scenario for a realistic detector. ![Fraction of mis-matched events in the Stable Marriage algorithm as a function of various relative amounts of Gaussian smearing, $\sigma$, on the primary signal emission position, in units of percent, for events with partial information in the Likelihood.
For each point $10k$ events were generated.[]{data-label="fig:Bad_Part_S"}](Bad_Part_S.pdf){height="70.00000%"} \[sec:PerformanceMultiS2\]The delayed secondary signal matching problem with multiple scatter events ==================================================================================================== A more important use case for matching may be when there are multiple secondary S2 signals as a result of more than one scattering of a single projectile particle in the detector. Complications may arise since there could be multiple S2 candidates corresponding to a single detected S1 signal, while the ranking table in the Stable Marriage problem only allows the first one to be matched to it. Therefore, the solution to the stable marriage problem fails on these types of events by construction. We carry on and study the College Admission algorithm, which allows matching multiple S2 events to a single S1 event. For this case the S2 events can be considered as the students, the S1 events are the colleges, and the scattering multiplicities associated with an S1 event are the colleges’ quotas. We again generate toy Monte Carlo simulations but extend the event generation to allow for multiple scattering of S1 events and subsequent emission of multiple S2 events. The generated mean S1 event position is taken as the average of all the generated, random S1 positions from multiple scatterings, reflecting the fact that in LAr TPC detectors the primary scintillation events cannot be resolved for multiple scattering. The corresponding S2 events are generated on the hypothetical liquid scintillator top surface as before, assuming a fixed electron drift velocity. The scattering multiplicity of the generated S1 events is randomly sampled from a truncated Poisson distribution, where the zero event multiplicity is excluded, and the maximum allowed multiplicity considered was five scatterings.
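The zero-excluded, capped Poisson sampling of the multiplicity can be sketched as below. Since the common factor $e^{-\lambda}$ cancels when the support is restricted, unnormalised weights suffice; the function name and the cumulative-weight inversion are illustrative choices.

```python
import math
import random

def truncated_poisson(mean, lo=1, hi=5, rng=random):
    """Sample a multiplicity from Poisson(mean) restricted to [lo, hi]
    (zero multiplicity excluded, at most five scatters by default)."""
    # exp(-mean) cancels in the truncated normalisation, so the
    # unnormalised weights mean**k / k! are enough.
    ks = range(lo, hi + 1)
    weights = [mean ** k / math.factorial(k) for k in ks]
    u = rng.random() * sum(weights)
    for k, w in zip(ks, weights):
        u -= w
        if u <= 0.0:
            return k
    return hi   # guard against floating-point round-off
```

Each generated S1 event would draw its scattering multiplicity from such a distribution before the corresponding S2 events are produced.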
The algorithm is applied similarly to the Stable Marriage Problem example. The fundamental difference is the quota, which determines the maximum number of S2 events allowed to be matched to an S1 event. However, for any event the quota is a priori unknown, since the scattering events occur randomly, and various quota values may be needed due to various processes with different multiplicities. We assume, however, that from knowledge of the nature of the detector and the environment it is possible to put an upper limit on the maximum number of scatters that can occur in the detector volume. In the current work we put an arbitrarily large value of 10 in the algorithm as a quota, and let the algorithm match the S1 and S2 events following the Likelihood ordering rule, filling up the quota by itself as far as the ranking and time constraints allow. Therefore there is virtually no bias from any limit on the amount of matching that can occur to an S1 event. After the algorithm has finished reconstructing the generated events, the multiplicity is determined by counting how many S2 events were associated with each of the S1 events. The reconstructed and true scattering multiplicity distributions are shown in Figure \[fig:CA\_Mult\_1\] and Figure \[fig:CA\_Mult\_2\] for various values of the dimensionless variable, $\mathcal{M}$. The College Admission algorithm can reconstruct the multiplicity distribution remarkably well for $\mathcal{M} < 1$. Above this value the mis-matching rate increases, but the amount of mis-matching varies depending on the multiplicity of the events. The events mostly affected are low scattering multiplicity events. For example, for $\mathcal{M} = 2.3$ the mis-matching fraction for single scatter events is at the level of $\sim 22 \%$. With increasing scattering multiplicity the fraction of mis-matched events decreases; for a scattering multiplicity of four the mis-match fraction is already at $\sim 2\%$.
We have found that for the mis-matched events in the case of $\mathcal{M} = 2.3$ the ranking algorithm systematically misses the true matching, mostly by one rank in the Likelihood ranking table. This is clearly visible in Figure \[fig:CA\_Mult\_2\] as a systematic underestimation of the amount of single scatter events and overestimation of the amount of double scatter events. This suggests that with additional experimental constraints the performance of the algorithm may be improved. Another result of the mis-matching is a small number of events with reconstructed multiplicity larger than the maximal generated one. However, the fraction of these events is found to be negligible, less than $0.1\%$. ![Generated and reconstructed scattering multiplicity distribution for various values of the dimensionless variable, $\mathcal{M} = 0.57$ (left), $\mathcal{M} = 1.03$ (right), using the College Admission algorithm. []{data-label="fig:CA_Mult_1"}](CA_multi1.pdf){height="65.00000%"} ![Generated and reconstructed scattering multiplicity distribution for various values of the dimensionless variable, $\mathcal{M} = 1.15$ (left), $\mathcal{M} = 2.31$ (right), using the College Admission algorithm. []{data-label="fig:CA_Mult_2"}](CA_multi2.pdf){height="65.00000%"} The average mis-matching fraction for the generated multiple scattering events is shown in Figure \[fig:CA\_Bad\]. Similarly to the use case of the Stable Marriage algorithm, the mis-match fraction stays below $\sim 1\%$ for $\mathcal{M} < 1$, and increases to around $\sim 50\%$ at $\mathcal{M} \sim 3.5$. ![Fraction of mis-matched events in the College Admission algorithm as a function of the dimensionless quantity $\cal{M}$.
For each point $10k$ events were generated.[]{data-label="fig:CA_Bad"}](CA_Bad.pdf){height="65.00000%"} The effect of Gaussian smearing of the primary event position on the fraction of mis-matched events is, however, slightly different from the Stable Marriage case; it is shown in Figure \[fig:CA\_Smear\]. For events of class $\mathcal{M} \simeq 1$ the mis-match fraction stays below $\sim 10 \%$. Considering that the primary S1 positions are already taken as the average of potentially multiple scattering positions, an additional smearing, surprisingly, does not significantly change their ordering in the Likelihood ranking table. ![Fraction of mis-matched events in the College Admission algorithm as a function of various relative amounts of Gaussian smearing, $\sigma$, on the average primary signal emission position, in units of percent, for events of class $\mathcal{M} = 1$. For each point $10k$ events were generated.[]{data-label="fig:CA_Smear"}](CA_Smear.pdf){height="70.00000%"} As an example, the generated and reconstructed multiplicity distribution is shown for the case of $20\%$ Gaussian smearing of the average S1 position, for $\mathcal{M} = 1$ events, in Figure \[fig:CA\_multi\_Smear\]. The multiplicity distribution has been recovered well from the data by the algorithm. ![Generated and reconstructed scattering multiplicity distribution for the case of $20\%$ smearing on the S1 position, and for the class of $\mathcal{M} = 1$ events, using the College Admission algorithm.[]{data-label="fig:CA_multi_Smear"}](CA_multi_smear.pdf){height="60.00000%"} It is noted that in both algorithms discussed, the various stages of the event processing (evaluation of the Likelihood function, filling up the ranking tables, and performing the iterative matching algorithms) do not require significant processing time.
The matching algorithm terminates within a few iterations, and each iteration only requires comparing and sorting a few scalar numbers (their number depending on the magnitude of the $\mathcal{M}$ parameter) and storing pairs of event IDs. Both Gale-Shapley algorithms are therefore computationally inexpensive. \[sec:Conclusion\]Conclusions ============================= A framework is presented for the problem of matching primary and delayed secondary signals, which occurs naturally in DP LAr TPC detectors, for single or multiple scatter events. In the context of the Gale-Shapley solution to the stable marriage problem, a Likelihood-based ordering rule is proposed for ranking the probability of agreement between various primary and secondary signal candidate pairs. The parametrisation of the Likelihood function depends on the subdetector-level information available. With a perfectly understood detector, tuning the parameters of the ordering Likelihood function makes it possible to optimize the matching procedure. For single scatter events in the simplified toy Monte Carlo example, the results suggest reasonably good performance: the algorithm performs the correct matching with a mis-match fraction at the $1 \%$ level, even at extreme event rates. When various levels of position smearing are introduced, the mis-matched fraction may reach the $\sim 10 \%$ level. For multiple scatter events the College Admission algorithm is applied, allowing multiple secondary events to be matched to a single primary event. The matching algorithm successfully reconstructs randomly generated scattering multiplicities for the $\mathcal{M} \leq 1$ class of events, with a mis-matching fraction below a few percent. 
In particular, for large scale, low background liquid scintillator Dark Matter search experiments, the rate of primary events and the delay time of secondary events fall into the $\mathcal{M} < 1$ class; the Gale-Shapley algorithms are therefore promising candidates for event reconstruction. The performance of the algorithms on data will be presented in a separate publication. Acknowledgments =============== This work was supported by the Swiss National Science Foundation (SNF) Grant 200020\_162794.\ References {#references .unnumbered} ==========
--- abstract: 'In this paper, we propose a new cooperation model for discrete memoryless multiple access channels. Unlike in prior cooperation models (e.g., conferencing encoders), where the transmitters cooperate directly, in this model the transmitters cooperate through a larger network. We show that under this indirect cooperation model, there exist channels for which the increase in sum-capacity resulting from cooperation is significantly larger than the rate shared by the transmitters to establish the cooperation. This result contrasts both with results on the benefit of cooperation under prior models and results in the network coding literature, where attempts to find examples in which similar small network modifications yield large capacity benefits have to date been unsuccessful.' author: - bibliography: - 'IEEEabrv.bib' - 'ref.bib' title: | On the Power of Cooperation:\ Can a Little Help a Lot?\ (Extended Version) --- Introduction ============ Cooperation is a potentially powerful strategy in distributed communication systems. It can both increase the possible transmission rates of source messages and improve the reliability of network communications [@Kramer]. To date, cooperation is not completely understood. In this paper, we focus on the effect of cooperation on the capacity region and discuss situations where a small amount of rate used to enable cooperation results in a large increase in the total information that can be carried by the network. One model of cooperation, proposed by Willems in [@Willems], is the *conferencing encoders* (CE) model for the discrete memoryless multiple access channel (DM-MAC). In the CE model, there is a noiseless link of capacity $C_{12}$ from the first encoder to the second and a corresponding link of capacity $C_{21}$ back. 
These links allow a finite number of rounds of communication between the two encoders; the total number of bits sent by each encoder to the other is bounded by the product of the DM-MAC coding blocklength and the capacity of the encoder’s outgoing cooperation link. A similar type of cooperation is applied in the broadcast channel with conferencing decoders [@DaboraServetto] and the interference channel with conferencing encoders [@MaricEtAl]. More recently, the authors of [@PermuterEtAl] investigate the case where each encoder has partial state information and conferencing enables information exchange about both the state and the messages. One can imagine scenarios in which the two transmitters are not able to communicate directly or can communicate more effectively through some other part of the network. The latter can occur, for example, if resources are less constrained elsewhere in the network than they are for direct communication. To capture such scenarios, we introduce the *cooperation facilitator* (CF) model for the DM-MAC. The cooperation facilitator is a node that has complete access to both source messages. Based on the messages, it sends limited-rate information to both encoders through a noiseless bottleneck link of finite capacity (Figure \[fig:networkmodel\]). We define the *cooperation rate* as the capacity of the link carrying the information to be shared. One can think of capacity gains obtained from this model as an outer bound on the benefit of indirect cooperation. To study cooperation under this model, we compare the sum-capacity of a DM-MAC with a CF to the sum-capacity of the DM-MAC when there is no cooperation between the transmitters. This difference equals the capacity cost of removing the CF output link from the network. When the link is removed, the two transmitters are not able to cooperate, and their transmitted codewords are independent. We call the resulting network the DM-MAC with *independent encoders* (IE). 
The capacity region of this network is due to Ahlswede [@Ahlswede1; @Ahlswede2] and Liao [@Liao]. Since removing the bottleneck link transforms the CF network into the IE network, the proposed cooperation model is related to the edge removal problem in network coding [@HoEtAl; @JalaliEtAl; @LangbergEffros1; @LangbergEffros2; @LeeEtAl]. For networks of noiseless links, there are no known examples of networks for which removing a single edge of capacity $\delta$ changes the capacity region by more than $\delta$ in each dimension, and in some cases it is known that an impact of more than $\delta$ per dimension is not possible [@HoEtAl; @JalaliEtAl]. Therefore, at least in the situations investigated in [@HoEtAl; @JalaliEtAl], inserting a cooperation facilitator in a network cannot increase the sum-capacity by more than a constant times the cooperation rate. How much can cooperation help in a DM-MAC? In the CE model, the increase in sum-capacity is at most the sum of the capacities of the noiseless links between the two encoders (Section \[sec:ce\]). Given the previous discussion, one may wonder whether a similar result holds for the CF model, that is, whether the increase in sum-capacity is limited to a constant times the cooperation rate. In what follows, we see that the benefit of cooperation can far exceed what might be expected based on the CE and edge removal examples. Specifically, we describe a sequence of DM-MACs with increasing alphabet sizes and set the cooperation rate for each channel as a function of its alphabet size. We then show that the increase in sum-capacity that results from cooperation grows more quickly than any polynomial function of the cooperation rate. In the next section, we review the CE model and its capacity region as presented by Willems [@Willems]. We give a formal introduction to the CF model in Section \[sec:cfmodel\]. 
Prior Work {#sec:ce} ========== Consider the DM-MAC $$\left(\mathcal{X}_{1}\times\mathcal{X}_{2},p_{Y|X_{1},X_{2}}(y|x_{1},x_{2}),\mathcal{Y}\right),$$ where $\mathcal{X}_{1}$, $\mathcal{X}_{2}$, and $\mathcal{Y}$ are finite sets and $p_{Y|X_{1},X_{2}}(y|x_{1},x_{2})$ denotes the conditional distribution of the output, $Y$, given the inputs, $X_{1}$ and $X_{2}$. To simplify notation, we suppress the subscript of the probability distributions when the corresponding random variables are clear from context. For example, we write $p(x)$ instead of $p_{X}(x)$. There are two sources, source 1 and source 2, whose outputs are the messages $W_{1}\in\mathcal{W}_{1}=\left\{ 1,\dots,\left\lceil 2^{nR_{1}}\right\rceil \right\} $ and $W_{2}\in\mathcal{W}_{2}=\left\{ 1,\dots,\left\lceil 2^{nR_{2}}\right\rceil \right\} $, respectively. The random variables $W_{1}$ and $W_{2}$ are independent and uniformly distributed over their corresponding alphabets. The real numbers $R_{1}$ and $R_{2}$ are nonnegative and are called the *message rates*. In the IE model, each encoder only has access to its corresponding message. The encoders are represented by the functions $$\begin{aligned} f_{1n}:\mathcal{W}_{1}& \rightarrow\mathcal{X}_{1}^{n},\\ f_{2n}:\mathcal{W}_{2}& \rightarrow\mathcal{X}_{2}^{n}.\end{aligned}$$ We denote the output of the encoders by $X_{1}^{n}=f_{1n}(W_{1})$ and $X_{2}^{n}=f_{2n}(W_{2})$. Let $Y^{n}$ be the output of the channel when the pair $(X_{1}^{n},X_{2}^{n})$ is transmitted. Using $Y^{n}$, the decoder estimates the original messages via a decoding function $g_{n}:\mathcal{Y}^{n}\rightarrow\mathcal{W}_{1}\times\mathcal{W}_{2}$. A $\left(2^{nR_{1}},2^{nR_{2}},n\right)$ code for the multiple access channel is defined as the triple $(f_{1n},f_{2n},g_{n})$. 
The average probability of error for this code is given by $$P_{e}^{(n)}=\operatorname{Pr}\left(g_{n}\left(Y^{n}\right)\neq\left(W_{1},W_{2}\right)\right).$$ We say the rate pair $(R_{1},R_{2})$ is achievable if there exists a sequence of $\left(2^{nR_{1}},2^{nR_{2}},n\right)$ codes such that $P_{e}^{(n)}$ tends to zero as the blocklength, $n$, approaches infinity. The capacity region, $\mathscr{C}$, is the closure of the set of all achievable rate pairs. For a given capacity region $\mathscr{C} \subseteq \mathbb{R}_{\geq 0}^{2}$, the *sum-capacity* [@ElGamalKim], $C_{\mathrm{S}}$, is defined as $$\label{eq:sumcapacity} C_{\mathrm{S}}=\max\left\{ R_{1}+R_{2}|\left(R_{1},R_{2}\right)\in\mathscr{C}\right\} .$$ In the IE model [@Ahlswede1; @Ahlswede2; @Liao], the sum-capacity is given by $$C_{\mathrm{S-IE}}=\max_{p(x_{1})p(x_{2})}I\left(X_{1},X_{2};Y\right).$$ In the CE model, each encoder shares some information regarding its message with the other encoder prior to transmission over the channel. This sharing of information is achieved through a $K$*-step conference* over noiseless links of capacities $C_{12}$ and $C_{21}$. A $K$-step conference consists of two sets of functions, $\left\{ h_{11},\dots,h_{1K}\right\} $ and $\left\{ h_{21},\dots,h_{2K}\right\} $, which recursively define the random vectors $V_{1}^{K}:=\left( V_{11},\dots,V_{1K}\right)$ and $V_{2}^{K}:=\left( V_{21},\dots,V_{2K}\right)$ as $$\begin{aligned} V_{1k} & =h_{1k}\left(W_{1},V_{2}^{k-1}\right),\\ V_{2k} & =h_{2k}\left(W_{2},V_{1}^{k-1}\right)\end{aligned}$$ for $k=1,\dots,K$. In step $k$, encoder 1 (encoder 2) computes $V_{1k}$ ($V_{2k}$) and sends it to encoder 2 (encoder 1). 
Since the noiseless links between the two encoders are of capacity $C_{12}$ and $C_{21}$, respectively, we require $$\begin{aligned} \sum_{k=1}^{K}\log|\mathcal{V}_{1k}| & \leq nC_{12},\\ \sum_{k=1}^{K}\log|\mathcal{V}_{2k}| & \leq nC_{21}\end{aligned}$$ where $\mathcal{V}_{ik}$ is the alphabet of the random variable $V_{ik}$ for $i=1,2$ and $k=1,\dots,K$. The outputs of the encoders, $X_{1}^{n}$ and $X_{2}^{n}$, are given by $$\begin{aligned} X_{1}^{n} & =f_{1n}\left(W_{1},V_{2}^{K}\right),\\ X_{2}^{n} & =f_{2n}\left(W_{2},V_{1}^{K}\right)\end{aligned}$$ where $f_{1n}$ and $f_{2n}$ are deterministic functions. By studying the capacity region of the CE model [@Willems], we deduce $$C_{\mathrm{S-IE}} \leq C_{\mathrm{S-CE}} \leq C_{\mathrm{S-IE}}+C_{12}+C_{21}.$$ Thus, with conferencing, the sum-capacity increases at most linearly in $\left(C_{12},C_{21}\right)$ over the sum-capacity of the IE model. The Cooperation Facilitator: Model and Result {#sec:cfmodel} ============================================= In the CF model, cooperation is made possible not through finite capacity links between the encoders, but instead through a third party, the cooperation facilitator, which receives information from both encoders and transmits a single description of that information back to both (Figure \[fig:networkmodel\]). The cooperation facilitator is represented by the function $$\phi_{n}:\mathcal{W}_{1}\times\mathcal{W}_{2}\rightarrow\mathcal{Z},$$ where the alphabet $\mathcal{Z}=\left\{ 1,\dots,\left\lceil 2^{n\delta}\right\rceil \right\} $ is determined by the cooperation rate $\delta$. The output of the cooperation facilitator, $Z=\phi_{n}(W_{1},W_{2})$, is available to both encoders. Each encoder chooses a blocklength-$n$ codeword as a function of its own source and $Z$ and sends that codeword to the receiver using $n$ transmissions. 
Hence the two encoders are represented by the functions $$\begin{aligned} f_{1n}:\mathcal{W}_{1}\times\mathcal{Z}& \rightarrow\mathcal{X}_{1}^{n},\\ f_{2n}:\mathcal{W}_{2}\times\mathcal{Z}& \rightarrow\mathcal{X}_{2}^{n}.\end{aligned}$$ The definitions of the decoder, probability of error, and capacity region are similar to the IE model discussed in the previous section and are omitted. ![Network model for the DM-MAC with a CF. The *cooperation rate* is the capacity of the output link of the CF which we denote by $\delta$.[]{data-label="fig:networkmodel"}](networkmodel) Given a pair of functions $f,g:\mathbb{Z}^{+}\rightarrow\mathbb{R}$, we say $f=o(g)$ if $\lim_{m\rightarrow\infty}\frac{f(m)}{g(m)}=0.$ We say $f=\omega(g)$ if $g=o(f)$. For a sequence of DM-MACs $$\left\{ \left(\mathcal{X}_{1}^{(m)}\times\mathcal{X}_{2}^{(m)},p^{(m)}(y|x_{1},x_{2}),\mathcal{Y}^{(m)}\right)\right\} _{m},$$ let $C_{\mathrm{S-IE}}^{(m)}$ denote the IE sum-capacity of the $m^{\text{th}}$ channel and $C_{\mathrm{S-CF}}^{(m)}$ denote the CF sum-capacity of the $m^{\text{th}}$ channel when the cooperation rate is $\delta_{m}$. We are now ready to answer the question posed in the introduction. In the next theorem, which is the main result of this paper, we see that for a sequence of DM-MACs, the increase in sum-capacity is not only greater than the cooperation rate, but also asymptotically larger than any polynomial function of that rate. In what follows, $\log(x)$ is the base 2 logarithm of $x$. 
\[thm:main\] For every sequence of cooperation rates $\{\delta_{m}\}_{m}$ satisfying $\delta_{m}=\log m +\omega(1)$ and $\delta_{m}\leq m$ and every $\epsilon>0$, there exists a sequence of DM-MACs with input alphabets $$\mathcal{X}_{1}^{(m)}=\mathcal{X}_{2}^{(m)}=\left\{ 1,\dots,2^{m}\right\},$$ such that for sufficiently large $m$, $$C_{\mathrm{S-CF}}^{(m)}-C_{\mathrm{S-IE}}^{(m)}\geq (3-\sqrt{5+4\epsilon})m-\delta_{m}.$$ For the same sequence of channels, we also have $$C_{\mathrm{S-CF}}^{(m)}-C_{\mathrm{S-IE}}^{(m)}\leq m+\delta_{m}.$$ In the above theorem, the choice of $\delta_{m}$ is constrained only by $\delta_{m}=\log m +\omega(1)$ and $\delta_{m}\leq m$. For example, a cooperation rate of $\delta_{m}=\log (m\log m)$ can lead to an increase in sum-capacity that is linear in $m$, giving a capacity benefit that is “almost” exponential in the cooperation rate. In the next section, we prove the existence of a sequence of DM-MACs with properties that are essential for the proof of Theorem \[thm:main\]. In Section \[sec:CF\], we show that for the sequence of channels of Section \[sec:chconst\], $$\label{eq:cfbounds} 2m-\delta_{m} \leq C_{\mathrm{S-CF}}^{(m)}\leq 2m.$$ In Section \[sec:IE\] we show $$\label{eq:iebounds} m-\delta_{m}\leq C_{\mathrm{S-IE}}^{(m)}\leq (\sqrt{5+4\epsilon}-1)m.$$ Combining these two results gives Theorem \[thm:main\]. See Figure \[fig:capacitybounds\]. ![ Inner and outer bounds for the capacity regions of the CF (dashes) and IE (dots) models as derived in Sections \[sec:CF\] and \[sec:IE\], respectively. In this figure, $\varphi=\frac{1+\sqrt{5}}{2}$. 
[]{data-label="fig:capacitybounds"}](capacityiobounds) Channel Construction {#sec:chconst} ==================== For a fixed positive integer $m$, the channel $$\left( \mathcal{X}_{1}^{(m)}\times \mathcal{X}_{2}^{(m)}, p^{(m)}(y|x_1,x_2), \mathcal{Y}^{(m)} \right)$$ used in the proof of Theorem \[thm:main\] has input alphabets $\mathcal{X}_{1}^{(m)}=\mathcal{X}_{2}^{(m)}=\left\{ 1,\dots,2^{m}\right\}$ and output alphabet $$\mathcal{Y}^{(m)}=\left( \mathcal{X}_{1}^{(m)}\times\mathcal{X}_{2}^{(m)}\right) \cup\left\{ \left(E,E\right)\right\},$$ where “$E$” denotes an erasure symbol. For each $(x_1,x_2,y)\in \mathcal{X}_{1}^{(m)}\times\mathcal{X}_{2}^{(m)}\times\mathcal{Y}^{(m)}$, $p^{(m)}(y|x_1,x_2)$ is defined as a function of the corresponding entry $b_{x_{1}x_{2}}$ of a binary matrix $B_{m}=\left(b_{ij}\right)_{i,j=1}^{2^{m}}$. Precisely, $$\label{eq:chdef} p^{(m)}(y|x_{1},x_{2})=\begin{cases} 1-b_{x_{1}x_{2}}, & \text{if } y=\left(x_{1},x_{2}\right)\\ b_{x_{1}x_{2}}, & \text{if } y=\left(E,E\right). \end{cases}$$ That is, when $\left(x_{1},x_{2}\right)$ is transmitted, $y=\left(x_{1},x_{2}\right)$ is received if $b_{x_{1}x_{2}}=0$, and $y=\left(E,E\right)$ is received if $b_{x_{1}x_{2}}=1$. Thus, we interpret the 0 and 1 entries of $B_{m}$ as “good” and “bad” entries, respectively. Let $\mathcal{X}^{(m)}=\{1,\dots,2^{m}\}$. We define the sets $$\begin{aligned} 0_{B_{m}} & =\left\{ \left(i,j\right):b_{ij}=0\right\}, \\ 1_{B_{m}} & =\left\{ \left(i,j\right):b_{ij}=1\right\} \end{aligned}$$ to be the set of good and bad entries of $\mathcal{X}^{(m)}\times \mathcal{X}^{(m)}$, respectively. To simplify notation, we drop $m$ as a superscript when it is fixed. For every $S,T\subseteq\mathcal{X}$, let $B(S,T)$ be the submatrix obtained from $B$ by keeping the rows with indices in $S$ and columns with indices in $T$. For every $x\in\mathcal{X}$, let $B(x,T)=B(\{x\},T)$ and $B(S,x)=B(S,\{x\})$. The proof of Theorem \[thm:main\] requires that $B$ satisfies two properties. 
One is that every sufficiently large submatrix of $B$ should have a large fraction of bad entries. This property ensures that the sum-capacity of our channel without cooperation is small (Section \[sec:IE\]). The second property is that every submatrix of a specific type should have at least one good entry. This property enables a significantly higher sum-capacity under low-rate cooperation using the cooperation facilitator model (Section \[sec:CF\]). Lemma \[lem:chconst\] demonstrates that these two properties can be simultaneously achieved. A proof of this and all subsequent lemmas can be found in the appendices. \[lem:chconst\] Let $f,g:\mathbb{Z}^{+}\rightarrow\mathbb{Z}^{+}$ be two functions such that $f(m)=\omega(m)$ and $g(m)=\log m+\omega(1)$. Then for every $\epsilon>0$, there exists a sequence of $(0,1)$-matrices $\{B_{m}=\left(b_{ij}\right)_{i,j=1}^{2^{m}}\}_{m}$ such that \(1) for every $S,T\subseteq{\mathcal{X}^{(m)}}$ that satisfy $|S|,|T|\geq f(m)$, $$\frac{\left|\left(S\times T\right)\cap1_{B_{m}}\right|}{\left|S\right||T|}>1-\epsilon;$$ that is, in every sufficiently large submatrix of $B_{m}$, the fraction of bad entries is larger than $1-\epsilon$, and \(2) for every $x\in\mathcal{X}^{(m)}$ and $k\in\left\{ 0,1,\dots,2^{m-g(m)}-1\right\} $, $B_{m}(x,\mathcal{X}_{m,k})$ and $B_{m}(\mathcal{X}_{m,k},x)$ each contain at least one good entry, where $$\mathcal{X}_{m,k}=\left\{ k2^{g(m)}+\ell|\ell=1,\dots,2^{g(m)}\right\};$$ that is, if we break each row or column into consecutive blocks of size $2^{g(m)}$, each block contains at least one good entry. **Channel Definition:** Choose functions $f$ and $g$ that satisfy the constraints $f(m)=\omega(m)$, $g(m)=\log m +\omega(1)$, and $\log f(m)=o(m)$. Fix a sequence of channels as defined by (\[eq:chdef\]) using matrices $\{B_{m}\}_{m}$ satisfying the properties proved possible in Lemma \[lem:chconst\] for the chosen functions $f$ and $g$. 
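As a numerical illustration (not part of the proof), the construction can be spot-checked at a small, finite size. The parameter values below ($m=8$, $g=5$, $p=0.5$, $\epsilon=0.6$) are assumptions chosen for the demonstration, not the asymptotic regime of the lemma; the last lines also sketch how the cooperation facilitator of Section \[sec:cfmodel\] can exploit property (2).

```python
import random

# Numerical sketch (not part of the proof): sample B_m with iid
# Bernoulli(p) entries, 1 - eps < p < 1, and spot-check the two
# properties of the lemma.  All parameter values are illustrative.
random.seed(0)
m, g = 8, 5                      # alphabet size 2^m, block size 2^g
p, eps = 0.5, 0.6                # satisfies 1 - eps < p < 1
N, blk = 2 ** m, 2 ** g
B = [[1 if random.random() < p else 0 for _ in range(N)] for _ in range(N)]

# Property (2): every length-2^g block of every row and column contains
# a good (zero) entry; a single block is all-bad with probability p^(2^g).
prop2 = all(
    any(B[x][k * blk + l] == 0 for l in range(blk)) and
    any(B[k * blk + l][x] == 0 for l in range(blk))
    for x in range(N) for k in range(N // blk)
)

# Property (1), spot check: the bad-entry fraction of a large random
# submatrix concentrates near p, which exceeds 1 - eps.
S = random.sample(range(N), N // 2)
T = random.sample(range(N), N // 2)
frac_bad = sum(B[i][j] for i in S for j in T) / (len(S) * len(T))

# Facilitator sketch: for messages (w1, w2) in {1,...,2^m} x {1,...,2^(m-g)},
# send the offset z of the first good entry of row w1 inside block w2 - 1,
# so that encoder 2 can transmit x2 = (w2 - 1) * 2^g + z.
def facilitator(w1, w2):
    for z in range(1, blk + 1):
        if B[w1 - 1][(w2 - 1) * blk + z - 1] == 0:
            return z
```

For these parameters roughly half of the entries are bad, and a union bound makes a violation of property (2) extremely unlikely, so the facilitator lookup almost surely succeeds for every message pair.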
Inner and Outer Bounds for the CF Capacity Region {#sec:CF} ================================================= For the $m^{\text{th}}$ channel, we show the achievability of the rate pairs $\left(m,m-g(m)\right)$ and $\left(m-g(m),m\right)$, with cooperation rate $\delta_{m}=g(m)$. For each, we employ a blocklength-1 code ($n=1$). Time sharing between these codes results in an inner bound for the capacity region given by $$\begin{aligned} R_{1},R_{2} & \leq m,\\ R_{1}+R_{2} & \leq2m-g(m).\end{aligned}$$ If $R_{1}=m$, $R_{2}=m-g(m)$, and $n=1$, then the independent, uniformly distributed messages $W_{1}$ and $W_{2}$ have alphabets $\mathcal{W}_{1}=\{ 1,\dots,2^{m}\} $ and $\mathcal{W}_{2}=\{ 1,\dots,2^{m-g(m)}\} $, respectively. By the second property of our channel in Lemma \[lem:chconst\], for every $w_{1}\in\mathcal{W}_{1}$ and $w_{2}\in\mathcal{W}_{2}$, the submatrix $B_{m}({w_{1},\mathcal{X}_{m,w_{2}-1}})$ contains at least one good entry. Let $z=\phi(w_{1},w_{2})$, the output of the cooperation facilitator, be an element of $\mathcal{Z}=\{ 1,\dots,2^{g(m)}\} $ such that $(w_{1},(w_{2}-1)2^{g(m)}+z)$ is a good entry of $B_{m}({w_{1},\mathcal{X}_{m,w_{2}-1}})$. If there is more than one good entry, we pick the one that results in the smallest $z$. For messages $W_{1}=w_{1}$ and $W_{2}=w_{2}$, encoder 1 sends $x_{1}=w_{1}$ and encoder 2 sends $x_{2}=(w_{2}-1)2^{g(m)}+z$. By the definition of our channel (\[eq:chdef\]), the channel output is $y=(x_{1},x_{2})$ with probability one, and hence zero-error decoding is possible. Thus the rate pair $(m,m-g(m))$ is achievable. Note that for this achievability scheme to work, only the second encoder needs to know the value of $z$. A similar argument proves the achievability of $(m-g(m),m)$ and the lower bound of (\[eq:cfbounds\]) follows. To find an outer bound for the capacity region, we use the capacity region of the CE model [@Willems] in a special case. 
Consider the situation in which encoder 1 has access to both messages and can transmit information to encoder 2 on a noiseless link of capacity $\delta_{m}$. Then it is easy to see that the capacity region of this network contains the capacity region of the CF model. This situation, however, is equivalent to the CE model for $C_{12}=\delta_{m}$ and $C_{21}=\infty.$ Hence an outer bound for the capacity region is given by the set of all rate pairs $(R_{1},R_{2})$ such that $$\begin{aligned} R_{1} & \leq I\left(X_{1};Y|X_{2},U\right)+\delta_{m},\\ R_{1}+R_{2} & \leq I\left(X_{1},X_{2};Y\right)\end{aligned}$$ for some distribution $p(u)p\left(x_{1}|u\right)p\left(x_{2}|u\right)$. Note that $$\begin{aligned} I\left(X_{1};Y|X_{2},U\right) \leq H\left(X_{1}\right) & \leq m,\\ I\left(X_{1},X_{2};Y\right) \leq H\left(X_{1},X_{2}\right) & \leq 2m,\end{aligned}$$ and $\delta_{m}=g(m)$, so the region $$\begin{aligned} R_{1} & \leq m+g(m),\\ R_{1}+R_{2} & \leq 2m\end{aligned}$$ is an outer bound for the CF model. Note that if we switch the roles of encoders 1 and 2, we get the outer bound $$\begin{aligned} R_{2} & \leq m+g(m),\\ R_{1}+R_{2} & \leq 2m.\end{aligned}$$ Since the intersection of two outer bounds is also an outer bound, the set of all rate pairs $\left(R_{1},R_{2}\right)$ such that $$\begin{aligned} R_{1},R_{2} & \leq m+g(m),\\ R_{1}+R_{2} & \leq 2m\end{aligned}$$ is an outer bound for the CF model as well and the upper bound of (\[eq:cfbounds\]) follows. Inner and Outer Bounds for the IE Capacity Region {#sec:IE} ================================================= Consider the $m^\mathrm{th}$ channel of the construction in Section IV. In the case where there is no cooperation, we show that the set of all rate pairs $(R_{1},R_{2})$ satisfying $$R_{1}+R_{2}\leq m-g(m)$$ is an inner bound for the capacity region. To this end, we show the achievability of the rate pairs $(m-g(m),0)$ and $(0,m-g(m))$. 
The achievability of all other rate pairs in the inner bound follows by time-sharing between the encoders. Similar to the achievability result of the previous section, let $n=1$. Then $\mathcal{W}_{1} = \{ 1,\dots,2^{m-g(m)}\} $ and $\mathcal{W}_{2} = \{1\}$. By our channel construction, for every $w\in\mathcal{W}_{1}$, $B_{m}({\mathcal{X}_{m,w-1},1})$ contains at least one good entry. This means that the first column of $B_{m}$ contains at least $|\mathcal{W}_{1}|=2^{m-g(m)}$ good entries. Suppose encoder 1 transmits uniformly on these $2^{m-g(m)}$ good entries and encoder 2 transmits $x_{2}=1$. Then the input is always on a good entry and the channel output is the same as the channel input. Thus the pair $(m-g(m),0)$ is achievable. A similar argument shows that the pair $(0,m-g(m))$ is achievable and the inner bound follows. We next find an outer bound for the IE capacity region. Let $Y_{1}$ and $Y_{2}$ be the components of $Y$; that is, if $Y=\left(x_{1},x_{2}\right)$, then $Y_{1}=x_{1}$ and $Y_{2}=x_{2}$, and if $Y=\left(E,E\right)$, then $Y_{1}=Y_{2}=E$. Note that $Y_{1},Y_{2}\in\mathcal{X}\cup\left\{ E\right\}$. In the case of independent encoders, $X_{1}$ and $X_{2}$ are independent, and the distribution of $Y_{1}$ is given by $$\label{eq:disty1} p\left(y_{1}\right)=\begin{cases} \gamma_{y_{1}} & y_{1}\in\mathcal{X},\\ 1-\gamma & y_{1}=E, \end{cases}$$ where $$\gamma_{x_{1}}=p\left(x_{1}\right)\sum_{x_{2}:b_{x_{1}x_{2}}=0}p\left(x_{2}\right),$$ for every $x_{1}\in\mathcal{X}$, and $\gamma =\sum_{x_{1}}\gamma_{x_{1}}$. The capacity region for the IE model (no cooperation) is due to Ahlswede [@Ahlswede1; @Ahlswede2] and Liao [@Liao]. 
If $\mathscr{R}_{m}$ is the set of all pairs $\left(R_{1},R_{2}\right)$ such that $$\begin{aligned} \label{eq:ieregion} R_{1} & \leq I\left(X_{1};Y|X_{2}\right),\notag\\ R_{2} & \leq I\left(X_{2};Y|X_{1}\right),\\ R_{1}+R_{2} & \leq I\left(X_{1},X_{2};Y\right) \notag\end{aligned}$$ for some distribution $p(x_{1})p(x_{2})p(y|x_{1},x_{2})$ and $\operatorname{conv}(A)$ denotes the convex hull of the set $A$, then the capacity region is given by the closure of $\operatorname{conv}(\mathscr{R}_{m})$. If for all pairs $(R_{1},R_{2})\in \operatorname{conv}(\mathscr{R}_{m})$, one of $R_{1}$ or $R_{2}$ is smaller than or equal to $\log 2f(m)$, then the upper bound of (\[eq:iebounds\]) follows, since $$R_{1}+R_{2} \leq m+\log 2f(m),$$ and $\log f(m)=o(m)$. On the other hand, if there exist rate pairs $(R_{1},R_{2})\in \operatorname{conv}(\mathscr{R}_{m})$ such that $$\label{eq:ratelbound} R_{1},R_{2} > \log 2f(m),$$ then by (\[eq:ieregion\]) and (\[eq:ratelbound\]), $$\label{eq:entlbound} H(X_{1}),H(X_{2}) > \log 2f(m),$$ and the following argument shows $$R_{1}+R_{2} \leq (\sqrt{5+4\epsilon}-1)m.$$ For our channel, $Y$, $Y_{1}$, and $Y_{2}$ are deterministic functions of $(X_{1},X_{2})$, $(X_{1},Y_{2})$ and $(Y_{1},X_{2})$, respectively, and the bounds simplify as $$\begin{aligned} \label{eq:entr1} R_{1} & \leq I\left(X_{1};Y|X_{2}\right) =H\left(Y_{1}|X_{2}\right)\leq H(Y_{1}), \\ R_{2} & \leq I\left(X_{2};Y|X_{1}\right) =H\left(Y_{2}|X_{1}\right)\leq H(Y_{2}). \notag\end{aligned}$$ To bound $H(Y_{1})$, we apply the following lemma, proved in the appendix. This lemma bounds the probability that a random variable $X$ falls in a specific set $T$; the bound is given as a function of the entropy of $X$ and the cardinality of $T$. For any set $T$, we denote its indicator function by $\mathbf{1}_{T}$. \[lem:entbound\] Let $X$ be a discrete random variable with distribution $p:\mathcal{X}\rightarrow\mathbb{R}_{\geq 0}$, and let $T$ be a subset of $\mathcal{X}$. 
If $q:T\rightarrow \mathbb{R}_{\geq 0}$ is a function and $\alpha=\sum_{x\in T}q(x)$, then $$\label{eq:entbound1} -\sum_{x\in T}q(x)\log q(x)\leq\alpha\log|T|-\alpha\log\alpha.$$ When $q(x)=p(x)\mathbf{1}_{T}(x)$, the above inequality implies $$\label{eq:entbound2} \alpha=\sum_{x\in T}p(x)\leq K\left(1-\frac{H\left(X\right)-1}{\log\left|\mathcal{X}\right|}\right),$$ where $K=\left(1-\frac{\log |T|}{\log\left|\mathcal{X}\right|}\right)^{-1}$. By (\[eq:disty1\]), $$H\left(Y_{1}\right) =-\sum_{x_{1}}\gamma_{x_{1}}\log\gamma_{x_{1}}-\left(1-\gamma\right)\log\left(1-\gamma\right).$$ Applying (\[eq:entbound1\]) from Lemma \[lem:entbound\], $$\label{eq:enty1} H(Y_{1}) \leq\gamma m+H\left(\gamma\right) \leq\gamma m+1.$$ We next bound $\gamma$. To this end, we write each of the input distributions as a particular convex combination of uniform distributions. This is stated in the next lemma. \[lem:convxdist\] If $X$ is a discrete random variable with a finite alphabet $\mathcal{X}$, then there exists a positive integer $k$, a sequence of positive numbers $\{ \alpha_{j}\}_{j=1}^{k}$, and a collection of non-empty subsets of $\mathcal{X}$, $\{ S_{j}\} _{j=1}^{k}$, such that the following properties are satisfied. \(a) For every $j\in \{1,\dots,k-1\}$, $S_{j+1}$ is a proper subset of $S_j$. \(b) For all $x\in\mathcal{X}$, $$p(x)=\sum_{j=1}^{k}\alpha_{j}\frac{\mathbf{1}_{S_{j}}\left(x\right)}{|S_{j}|}.$$ \(c) For every $C$, $0<C<|\mathcal{X}|$, $$\sum_{j:|S_{j}|\leq C}\alpha_{j}\leq K\left(1-\frac{H(X)-1}{\log|\mathcal{X}|}\right),$$ where $K=\left(1-\frac{\log C}{\log|\mathcal{X}|}\right)^{-1}$. 
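Lemma \[lem:convxdist\] has a simple constructive interpretation as a "layer cake" decomposition: sorting the probabilities in decreasing order and peeling off horizontal layers yields uniform components over nested supports. The sketch below only illustrates properties (a) and (b); the proof in the appendix may proceed differently.

```python
from fractions import Fraction

def layer_decomposition(p):
    """Write a pmf p (dict: symbol -> probability) as a convex combination
    of uniform distributions over nested sets, i.e.
    p(x) = sum_j alpha_j * 1_{S_j}(x) / |S_j| with S_1 ⊋ S_2 ⊋ ...
    Illustrative sketch of properties (a) and (b) of the lemma only."""
    items = sorted(p.items(), key=lambda kv: -kv[1])   # decreasing prob.
    xs = [x for x, _ in items]
    vals = [v for _, v in items] + [0]
    layers = []
    for j in range(len(xs)):
        height = vals[j] - vals[j + 1]     # thickness of layer j
        if height > 0:
            support = xs[: j + 1]          # top-(j+1) symbols
            layers.append((height * len(support), support))
    layers.reverse()                       # largest support first
    return layers                          # [(alpha_j, S_j), ...]
```

Since each symbol $x$ belongs exactly to the layers whose supports contain it, the layer heights above $x$ sum back to $p(x)$, and the weights placed on small supports are what part (c) controls in the bound on $\gamma$ that follows.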
Using the previous lemma we write $p(x_{1})$ and $p(x_{2})$ as $$\begin{aligned} p(x_{1})&=\sum_{i=1}^{k}\alpha_{i}^{(1)}\frac{\mathbf{1}_{S_{i}^{(1)}}\left(x_{1}\right)}{|S_{i}^{(1)}|},\\ p(x_{2})&=\sum_{j=1}^{l}\alpha_{j}^{(2)}\frac{\mathbf{1}_{S_{j}^{(2)}}\left(x_{2}\right)}{|S_{j}^{(2)}|}.\end{aligned}$$ Then $$\gamma = \sum_{x_{1},x_{2}:b_{x_{1}x_{2}}=0}p(x_{1})p(x_{2}) = \sum_{i=1}^{k}\sum_{j=1}^{l}\alpha_{i}^{(1)}\alpha_{j}^{(2)}\beta_{ij},$$ where $$\begin{aligned} \beta_{ij} &= \sum_{x_{1},x_{2}:b_{x_{1}x_{2}}=0}\frac{\mathbf{1}_{S_{i}^{(1)}}\left(x_{1}\right)\mathbf{1}_{S_{j}^{(2)}}\left(x_{2}\right)}{|S_{i}^{(1)}||S_{j}^{(2)}|}\\ &= \frac{\left|\left(S_{i}^{(1)}\times S_{j}^{(2)}\right)\cap 0_{B_{m}}\right|}{\left|S_{i}^{(1)}\right|\left|S_{j}^{(2)}\right|}.\end{aligned}$$ For every $i$ and $j$, $\beta_{ij}\leq 1$. If, however, $\min\{ |S_{i}^{(1)}|,|S_{j}^{(2)}|\} \geq f(m)$, then by the first property of our channel (Lemma \[lem:chconst\]), $\beta_{ij}\leq \epsilon$. Thus by part (c) of Lemma \[lem:convxdist\] and (\[eq:entlbound\]), $$\begin{aligned} \gamma &<\epsilon +\sum_{i,j:\min\{ |S_{i}^{(1)}|,|S_{j}^{(2)}|\} < f(m)}\alpha_{i}^{(1)}\alpha_{j}^{(2)}\\ & = \epsilon +1-\sum_{i,j:\min\{ |S_{i}^{(1)}|,|S_{j}^{(2)}|\} \geq f(m)}\alpha_{i}^{(1)}\alpha_{j}^{(2)}\\ & = \epsilon +1\\ & \phantom{=} -\bigg(1-\sum_{i:|S_{i}^{(1)}|< f(m)}\alpha_{i}^{(1)}\bigg)\bigg(1-\sum_{j:|S_{j}^{(2)}|< f(m)}\alpha_{j}^{(2)}\bigg)\\ & \leq \epsilon +1-\left(1-K_{m}\left(1-\frac{H(X_{1})-1}{m}\right)\right)\\ & \phantom{\leq} \times\left(1-K_{m}\left(1-\frac{H(X_{2})-1}{m}\right)\right),\end{aligned}$$ where $K_{m}=\left(1-\frac{\log f(m)}{m}\right)^{-1}$ and $K_{m}\rightarrow 1$ as $m\rightarrow \infty$ since $\log f(m)=o(m)$ by assumption. 
Since by (\[eq:ieregion\]) and (\[eq:ratelbound\]), $\log 2f(m) \leq R_{i}\leq H(X_{i})$ for $i=1,2$, $$\begin{aligned} \gamma & <\epsilon+1-\left(1-K_{m}\left(1-\frac{R_{1}-1}{m}\right)\right)\\ & \phantom{=}\times \left(1-K_{m}\left(1-\frac{R_{2}-1}{m}\right)\right)\\ & =\epsilon+K_{m}\left(2-\frac{R_{1}+R_{2}-2}{m}\right)\\ & \phantom{=} -K_{m}^{2}\left(1-\frac{R_{1}-1}{m}\right)\left(1-\frac{R_{2}-1}{m}\right).\end{aligned}$$ Combining the previous inequality with (\[eq:entr1\]) and (\[eq:enty1\]) results in $$\begin{aligned} \frac{R_{1}}{m} & \leq\epsilon+\frac{1}{m}+K_{m}\left(2-\frac{R_{1}+R_{2}-2}{m}\right)\\ & \phantom{=} -K_{m}^{2}\left(1-\frac{R_{1}-1}{m}\right)\left(1-\frac{R_{2}-1}{m}\right)\\ & =\epsilon+\frac{1}{m}+K_{m}\left(2-\frac{R_{1}+R_{2}-2}{m}\right)\\ & \phantom{=} -K_{m}^{2}\left(1-\frac{R_{1}+R_{2}-2}{m}+\frac{(R_{1}-1)(R_{2}-1)}{m^{2}}\right).\end{aligned}$$ If we let $x=\frac{R_{1}}{m}$ and $y=\frac{R_{2}}{m}$, then the previous inequality can be written as $$\begin{aligned} x & \leq\epsilon+\frac{1}{m}+K_{m}\left(2+\frac{2}{m}-x-y\right)\\ & \phantom{\leq} -K_{m}^{2}\left(1+\frac{2}{m}-x-y+\left(x-\frac{1}{m}\right)\left(y-\frac{1}{m}\right)\right),\end{aligned}$$ or $$\label{eq:smdef1} (x-a_{m})(y+b_{m})\leq c_{m},$$ where $$\begin{aligned} a_{m} & =1+\frac{1}{m}-\frac{1}{K_{m}},\\ b_{m} & =-1-\frac{1}{m}+\frac{1}{K_{m}}+\frac{1}{K_{m}^{2}},\\ c_{m} & =-1-\frac{2}{m}-\frac{1}{m^{2}}+\left(2+\frac{2}{m}\right)\frac{1}{K_{m}}\\ & \phantom{=}+\left(\epsilon+\frac{1}{m}\right)\frac{1}{K_{m}^{2}}-a_{m}b_{m}.\end{aligned}$$ By symmetry, we can also show $$\label{eq:smdef2} (x+b_{m})(y-a_{m})\leq c_{m}.$$ Note that $$\begin{aligned} a &:=\lim_{m\rightarrow\infty}a_{m}=0, \\ b &:=\lim_{m\rightarrow\infty}b_{m}=1,\\ c &:=\lim_{m\rightarrow\infty}c_{m}=1+\epsilon.\end{aligned}$$ Let $S_{m}$ be the set of all nonnegative $x,y$ that satisfy (\[eq:smdef1\]) and (\[eq:smdef2\]) and $\mathscr{S}_{m}$ be the set of all $(mx,my)$ such that $(x,y)\in
S_{m}$. Then by the arguments of this section, every $(R_{1},R_{2})\in\mathscr{R}_{m}$ that satisfies $R_{1},R_{2}>\log 2f(m)$ is in $\mathscr{S}_{m}$. As the capacity region is given by the closure of $\operatorname{conv}(\mathscr{R}_{m})$, the definition of sum-capacity (\[eq:sumcapacity\]) implies $$\begin{aligned} \frac{1}{m}C_{\mathrm{S-IE}}^{(m)} & \leq \frac{1}{m}\max_{(R_1,R_2)\in\operatorname{conv}\left(\mathscr{S}_{m}\right)}(R_1+R_2)\\ & = \max_{(x,y)\in\operatorname{conv}\left(S_{m}\right)}(x+y).\end{aligned}$$ Thus $$\label{eq:scie} \limsup_{m\rightarrow\infty}\frac{C_{\mathrm{S-IE}}^{(m)}}{m}\leq\lim_{m\rightarrow\infty}\max_{(x,y)\in\operatorname{conv}\left(S_{m}\right)}(x+y).$$ To find the limit on the right hand side, we make use of the following lemma proved in the appendix. \[lem:sumcapacity\] Suppose $\left\{ a_{m}\right\} _{m=1}^{\infty}$, $\left\{ b_{m}\right\} _{m=1}^{\infty}$ and $\left\{ c_{m}\right\} _{m=1}^{\infty}$ are three sequences of real numbers such that $\lim_{m\rightarrow\infty}a_{m}=a$, $\lim_{m\rightarrow\infty}b_{m}=b$, $\lim_{m\rightarrow\infty}c_{m}=c$, where $$b,c,a+b,ab+c>0,$$ and $$\sqrt{(a+b)^{2}+4c}>b+\frac{c}{b}.$$ For every positive integer $m$, let $S_{m}$ be defined as above. Then $$\lim_{m\rightarrow\infty}\max_{(x,y)\in\operatorname{conv}\left(S_{m}\right)}(x+y) = a-b+\sqrt{(a+b)^{2}+4c}.$$ It is easy to see that the sequences above satisfy the assumptions of Lemma \[lem:sumcapacity\]. Thus $$\limsup_{m\rightarrow\infty}\frac{C_{\mathrm{S-IE}}^{(m)}}{m} \leq\sqrt{5+4\epsilon}-1.$$ Therefore, for all but finitely many $m$, $$C_{\mathrm{S-IE}}^{(m)}\leq(\sqrt{5+4\epsilon}-1)m.$$ Conclusion ========== In this paper, we present a new model for cooperation and study its benefits in the case of the encoders of a DM-MAC. Specifically, we present channels for which the gain in sum-capacity is “almost” exponential in the cooperation rate.
The CF model can be generalized to other network settings, and its study is left for future work. Acknowledgment {#acknowledgment .unnumbered} ============== This material is based upon work supported by the National Science Foundation under Grant Numbers CCF-1321129, CCF-1018741, CCF-1038578, and CNS-0905615. Proof of Lemma \[lem:chconst\] ============================== We use the probabilistic method [@Spencer]. We assign a probability to every $2^{m}\times2^{m}$ $(0,1)$-matrix and show that the probability of a matrix having both properties is positive for sufficiently large $m$; hence, there exists at least one such matrix. Fix $\epsilon>0$, and let $B_{m}=\left(b_{ij}\right)_{i,j=1}^{2^{m}}$ be a random matrix with $b_{ij}\overset{\mathrm{iid}}{\sim}\text{Bernoulli}\left(p\right)$, where $1-\epsilon<p<1$. Let $$\Gamma_{m}=\left\{ S:S\subseteq\mathcal{X}^{(m)} , |S|\geq f(m)\right\} .$$ For every $S,T\in\Gamma_{m}$, define the event $$E_{S,T}^{(m)}=\left\{ \frac{\left|(S\times T)\cap1_{B_{m}}\right|}{\left|S\right||T|}\leq1-\epsilon\right\} .$$ It follows that $$\begin{aligned} \MoveEqLeft \operatorname{Pr}\bigg(\bigcup_{S,T\in\Gamma_{m}}E_{S,T}^{(m)}\bigg)\\ & \leq\sum_{S,T\in\Gamma_{m}}\operatorname{Pr}\left(E_{S,T}^{(m)}\right)\\ & =\sum_{S,T\in\Gamma_{m}}\operatorname{Pr}\left(\left|(S\times T)\cap1_{B_{m}}\right|\leq\left(1-\epsilon\right)|S||T|\right)\\ & =\sum_{S,T\in\Gamma_{m}}\sum_{k=0}^{\left\lfloor (1-\epsilon)|S||T|\right\rfloor }{|S||T| \choose k}p^{k}(1-p)^{|S||T|-k}\\ & =\sum_{i,j=f(m)}^{2^{m}}{2^{m} \choose i}{2^{m} \choose j}\sum_{k=0}^{\left\lfloor (1-\epsilon)ij\right\rfloor }{ij \choose k}p^{k}(1-p)^{ij-k}.\end{aligned}$$ Suppose $\left\{ X_{l}\right\} _{l=1}^{L}$ is a set of independent random variables such that for each $l$, $X_{l}\in[a_{l},b_{l}]$ with probability one.
If $S=\sum_{l=1}^{L}X_{l}$, Hoeffding’s inequality [@Hoeffding] states that for every $u$ smaller than or equal to $\mathbb{E}S$, we have $$\operatorname{Pr}\left(S\leq u\right)\leq e^{-\frac{2\left(\mathbb{E}S-u\right)^{2}}{\sum_{l=1}^{L}\left(b_{l}-a_{l}\right)^{2}}}.$$ If $\left\{ X_{l}\right\} _{l=1}^{ij}$ is a set of $ij$ independent Bernoulli($p$) random variables, then for every $l$, $0\leq X_{l}\leq1$, and $$(1-\epsilon)ij \leq pij = \mathbb{E}\left[\sum_{l=1}^{ij}X_{l}\right].$$ Thus Hoeffding’s inequality implies $$\begin{aligned} \sum_{k=0}^{\left\lfloor (1-\epsilon)ij\right\rfloor }{ij \choose k}p^{k}(1-p)^{ij-k} & =\operatorname{Pr}\left(\sum_{l=1}^{ij}X_{l}\leq (1-\epsilon)ij\right) \\ &\leq e^{-2(p-1+\epsilon)^{2}ij}.\end{aligned}$$ Since ${2^{m} \choose i}\leq2^{mi}$, $$\begin{aligned} \MoveEqLeft \sum_{i,j=f(m)}^{2^{m}}{2^{m} \choose i}{2^{m} \choose j}\sum_{k=0}^{\left\lfloor (1-\epsilon)ij\right\rfloor }{ij \choose k}p^{k}(1-p)^{ij-k}\\ & \leq\sum_{i,j=f(m)}^{2^{m}}2^{m(i+j)}e^{-2(p-1+\epsilon)^{2}ij}\\ & =\sum_{i,j=f(m)}^{2^{m}}e^{(i+j)m\ln2-2(p-1+\epsilon)^{2}ij}.\end{aligned}$$ If we define $h:\mathbb{Z}^{2}\rightarrow\mathbb{R}$ as $$h(i,j)=(i+j)m\ln2-2(p-1+\epsilon)^{2}ij,$$ then for $j\geq f(m)$, $$\begin{aligned} \MoveEqLeft h(i+1,j)-h(i,j)\\ & =m\ln2-2(p-1+\epsilon)^{2}j\\ & \leq m\ln2-2(p-1+\epsilon)^{2}f(m)\\ & =f(m)\left(\frac{m}{f(m)}\ln2-2(p-1+\epsilon)^{2}\right).\end{aligned}$$ By assumption, $$\lim_{m\rightarrow\infty}\frac{m}{f(m)}=0,$$ so there exists $M_{1}$ such that for all $m>M_{1}$, $$\frac{m}{f(m)}<\frac{2}{\ln2}(p-1+\epsilon)^{2}.$$ Therefore, for $m>M_{1}$ and $j\geq f(m)$, $h$ is decreasing with respect to $i$. As $h$ is symmetric with respect to $i$ and $j$, for $m>M_{1}$ and $i\geq f(m)$, we also have $h(i,j+1)-h(i,j)<0$. Thus $h$ is a decreasing function in $i$ and $j$ for $m>M_{1}$ and $i,j\geq f(m)$.
Hence for $m>M_{1}$, $$\begin{aligned} \MoveEqLeft \sum_{i,j=f(m)}^{2^{m}}e^{(i+j)m\ln2-2(p-1+\epsilon)^{2}ij}\\ & \leq\left(2^{m}-f(m)+1\right)^{2}e^{2mf(m)\ln2-2(p-1+\epsilon)^{2}\left(f(m)\right)^{2}}\\ & <e^{2m\left(1+f(m)\right)\ln2-2(p-1+\epsilon)^{2}\left(f(m)\right)^{2}}\\ & =e^{2\left(f(m)\right)^{2}\left(\left(1+\frac{1}{f(m)}\right)\frac{m}{f(m)}\ln2-2(p-1+\epsilon)^{2}\right)}.\end{aligned}$$ The exponent of the right hand side of the previous inequality, $$2\left(f(m)\right)^{2}\left(\left(1+\frac{1}{f(m)}\right)\frac{m}{f(m)}\ln2-2(p-1+\epsilon)^{2}\right),$$ goes to $-\infty$ as $m$ approaches infinity, so $$\lim_{m\rightarrow\infty}\operatorname{Pr}\bigg(\bigcup_{S,T\in\Gamma_{m}}E_{S,T}^{(m)}\bigg)=0.$$ This means that the probability that the fraction of 1-entries in some sufficiently large submatrix is at most $1-\epsilon$ goes to zero. Next, we calculate the probability that $B_{m}$ does not satisfy the second property. For every $x\in \mathcal{X}^{(m)}$ and $k\in \{1,\dots,2^{m-g(m)}\}$, define the event $$E_{x,k}^{(m)} = \left\{\left(0_{B_{m}(x,\mathcal{X}_{m,k})}\cup0_{B_{m}(\mathcal{X}_{m,k},x)}\right)\cap 0_{B_{m}} =\emptyset\right\}.$$ The probability that for some $x$ and $k$ the sets $B_{m}(x,\mathcal{X}_{m,k})$ and $B_{m}(\mathcal{X}_{m,k},x)$ do not contain at least one good entry satisfies $$\begin{aligned} \MoveEqLeft \operatorname{Pr}\bigg(\bigcup_{x,k}E_{x,k}^{(m)}\bigg)\\ & \leq\sum_{x\in\mathcal{X}^{(m)}}\sum_{k=1}^{2^{m-g(m)}}\operatorname{Pr}\left(0_{B_{m}(x,\mathcal{X}_{m,k})}\cap0_{B_{m}}=\emptyset\right)\\ & +\sum_{x\in\mathcal{X}^{(m)}}\sum_{k=1}^{2^{m-g(m)}}\operatorname{Pr}\left(0_{B_{m}(\mathcal{X}_{m,k},x)}\cap0_{B_{m}}=\emptyset\right)\\ & =2^{2m-g(m)+1}p^{2^{g(m)}}\\ & =2^{2^{g(m)}\left(\frac{2m-g(m)+1}{2^{g(m)}}+\log p\right)}.\end{aligned}$$ Since $m=o(2^{g(m)})$, the exponent of the right hand side of the previous inequality, $$2^{g(m)}\left(\frac{2m-g(m)+1}{2^{g(m)}}+\log p\right),$$ goes to $-\infty$ as
$m\rightarrow\infty$. This implies that $$\operatorname{Pr}\bigg(\bigcup_{x,k}E_{x,k}^{(m)}\bigg)$$ goes to zero as $m\rightarrow\infty$. Thus, by the union bound, the probability that the matrix fails to satisfy at least one of the two properties goes to zero. Therefore, for large enough $m$, almost every $(0,1)$-matrix satisfies both properties, though we only need one such matrix. Proof of Lemma \[lem:entbound\] =============================== For the first part, if $\alpha=0$, then $q(x)=0$ for every $x\in T$ and both sides equal zero. Otherwise, $$\begin{aligned} -\sum_{x\in T}q(x)\log q(x) & =-\alpha\sum_{x\in T}\frac{q(x)}{\alpha}\log \frac{q(x)}{\alpha}-\alpha\log \alpha\\ & \leq \alpha\log|T|-\alpha\log \alpha,\end{aligned}$$ since $q(x)/\alpha$ is a probability mass function with entropy $\sum_{x\in T}\frac{q(x)}{\alpha}\log \frac{\alpha}{q(x)}$ and alphabet size $|T|$. For the second part, if $$q(x)=p(x)\mathbf{1}_{T}\left(x\right),$$ then by the previous inequality, $$\begin{aligned} -\sum_{x\in T}p(x)\log p(x) &= -\sum_{x\in T}q(x)\log q(x)\\ &\leq\alpha\log |T|-\alpha\log \alpha,\end{aligned}$$ where $$\alpha = \sum_{x\in T}q(x)= \operatorname{Pr}(x\in T).$$ Similarly, replacing $T$ with $\mathcal{X} \setminus T$ results in $$\begin{aligned} \MoveEqLeft -\sum_{x\in\mathcal{X}\setminus T}p\left(x\right)\log p\left(x\right)\\ & \leq\left(1-\alpha\right)\log|\mathcal{X}\setminus T|-\left(1-\alpha\right)\log\left(1-\alpha\right).\end{aligned}$$ Adding the previous two inequalities gives $$\begin{aligned} H\left(X\right) & \leq\alpha\log|T|+\left(1-\alpha\right)\log|\mathcal{X}\setminus T|+H\left(\alpha\right)\\ & \leq\alpha\log|T|+\left(1-\alpha\right)\log\left|\mathcal{X}\right|+1.\end{aligned}$$ Therefore, $$\frac{H\left(X\right)}{\log\left|\mathcal{X}\right|}\leq1+\frac{1}{\log\left|\mathcal{X}\right|}-\left(1-\frac{\log|T|}{\log\left|\mathcal{X}\right|}\right)\alpha,$$ and
$$\alpha\leq\frac{1-\frac{H\left(X\right)-1}{\log\left|\mathcal{X}\right|}}{1-\frac{\log|T|}{\log\left|\mathcal{X}\right|}}.$$ Proof of Lemma \[lem:convxdist\] ================================ Let $k$ be the cardinality of the range of $p:\mathcal{X}\rightarrow\mathbb{R}$. Then there exists a sequence $\left\{ x_{j}\right\} _{j=1}^{k}$ such that $$\left\{ p(x)|x\in\mathcal{X}\right\} =\left\{ p\left(x_{j}\right)|1\leq j\leq k\right\} ,$$ and $$0<p\left(x_{1}\right)<\dots<p\left(x_{k}\right)\leq1.$$ For $j$, $1\leq j\leq k$, define $$S_{j}=\left\{ x\in\mathcal{X}|p(x)\geq p(x_{j})\right\} ,$$ and let $S_{k+1}=\emptyset$. Then for $j$, $1\leq j\leq k$, $S_{j+1}\subseteq S_{j}$ (part a) and $$S_{j}\setminus S_{j+1}=\left\{ x\in\mathcal{X}|p(x)=p(x_{j})\right\} .$$ Thus, the number of $x\in\mathcal{X}$ such that $p(x)=p(x_{j})$ equals $|S_{j}\setminus S_{j+1}|$. For $j\in\left\{ 2,\dots,k\right\} $, define $$\alpha_{j}=|S_{j}|\left(p(x_{j})-p(x_{j-1})\right),$$ and let $\alpha_{1}=|S_{1}|p(x_{1})$; equivalently, we use the convention $p(x_{0}):=0$. In part (b), the left-hand side simplifies as $$\begin{aligned} \sum_{j=1}^{k}\alpha_{j}\frac{\mathbf{1}_{S_{j}}(x)}{|S_{j}|} & =\sum_{j=1}^{k}\left(p(x_{j})-p(x_{j-1})\right)\mathbf{1}_{S_{j}}(x)\\ & =\sum_{j=1}^{k}p(x_{j})\mathbf{1}_{S_{j}\setminus S_{j+1}}(x)\\ & =p(x).\end{aligned}$$ In (c), if the set $\left\{ j|1\leq j\leq k,|S_{j}|\leq C\right\} $ is empty, then there is nothing to prove. Otherwise, it is a nonempty subset of $\left\{ 1,\dots,k\right\} $ and thus has a minimum, which we call $j^{*}$.
Then $$\begin{aligned} \sum_{j:|S_{j}|\leq C}\alpha_{j} & =\sum_{j=j^{*}}^{k}\alpha_{j}\\ & =\sum_{j=j^{*}}^{k}|S_{j}|\left(p(x_{j})-p(x_{j-1})\right)\\ & =\sum_{j=j^{*}}^{k}|S_{j}\setminus S_{j+1}|p(x_{j})-|S_{j^{*}}|p(x_{j^{*}-1})\\ & =\sum_{x\in S_{j^{*}}}p(x)-|S_{j^{*}}|p(x_{j^{*}-1})\\ & \leq\sum_{x\in S_{j^{*}}}p(x).\end{aligned}$$ By (\[eq:entbound2\]) of Lemma \[lem:entbound\], $$\begin{aligned} \sum_{x\in S_{j^{*}}}p(x) & \leq \frac{1}{1-\frac{\log|S_{j^{*}}|}{\log\left|\mathcal{X}\right|}}\left(1-\frac{H\left(X\right)-1}{\log\left|\mathcal{X}\right|}\right)\\ & \leq \frac{1}{1-\frac{\log C}{\log\left|\mathcal{X}\right|}}\left(1-\frac{H\left(X\right)-1}{\log\left|\mathcal{X}\right|}\right),\end{aligned}$$ since $|S_{j^{*}}|\leq C$. Proof of Lemma \[lem:sumcapacity\] ================================== Prior to proving Lemma \[lem:sumcapacity\], we state and prove the following lemma. \[lem:scaux\] Suppose $a$, $b$, and $c$ are three real numbers such that $b$, $c$, $a+b$, and $ab+c$ are positive, and $$\sqrt{(a+b)^{2}+4c}> b+\frac{c}{b}.$$ Let $S$ be the set of all pairs $(x,y)$ such that $x,y\geq0$, and $$\begin{cases} (x-a)(y+b) & \leq c,\\ (x+b)(y-a) & \leq c.
\end{cases}$$ If $x_{0}$ is the unique positive solution to the equation $$(x-a)(x+b)=c,$$ then $$\max_{(x,y)\in\operatorname{conv}(S)}(x+y)=2x_{0}.$$ ![The sets $S_{1}$ and $S_{2}$ (gray area), and their convex hulls.[]{data-label="fig:convexhull"}](convexhull) Since $$\begin{aligned} (x-a)(y+b)-(x+b)(y-a) & =(a+b)(x-y)\end{aligned}$$ and $a+b>0$, the set $S$ can be written as $S=S_{1}\cup S_{2}$ (Figure \[fig:convexhull\]), where $S_{1}$ is the set of all pairs $(x,y)$ such that $0\leq x\leq y$ and $$(x+b)(y-a)\leq c,$$ and $S_{2}$ is the set of all pairs $(x,y)$ such that $0\leq y\leq x$ and $$(x-a)(y+b)\leq c.$$ The intersection of $S_{1}$ and $S_{2}$ consists of all pairs $(x,x)$ such that $0\leq x\leq x_{0}$ where $$x_{0}=\frac{a-b+\sqrt{(a+b)^{2}+4c}}{2}.$$ Note that since $b$, $c$ and $ab+c$ are positive, $$\sqrt{(a+b)^{2}+4c} < a + b + \frac{2c}{b},$$ so $0<x_{0}<a+\frac{c}{b}$. The convex hull of $S_{1}$ consists of all pairs $(x,y)$ such that $0\leq x\leq y$ and $$\left(a+\frac{c}{b}-x_{0}\right)x+x_{0}y\leq\left(a+\frac{c}{b}\right)x_{0},$$ and the convex hull of $S_{2}$ consists of all pairs $(x,y)$ such that $0\leq y\leq x$ and $$x_{0}x+\left(a+\frac{c}{b}-x_{0}\right)y\leq\left(a+\frac{c}{b}\right)x_{0}.$$ Note that $\operatorname{conv}\left(S_{1}\right)\cup\operatorname{conv}\left(S_{2}\right)$ is the region bounded by the axes $y=0$, $x=0$, and the piecewise linear function $$h(x)=\begin{cases} \frac{x_{0}-a-\frac{c}{b}}{x_{0}}x+a+\frac{c}{b} & 0\leq x\leq x_{0},\\ \frac{x_{0}}{x_{0}-a-\frac{c}{b}}x-\frac{\left(a+\frac{c}{b}\right)x_{0}}{x_{0}-a-\frac{c}{b}} & x_{0}<x\leq a+\frac{c}{b}. \end{cases}$$ Since $2x_0\geq a+\frac{c}{b}$ by assumption, $$\frac{x_{0}-a-\frac{c}{b}}{x_{0}} \geq \frac{x_{0}}{x_{0}-a-\frac{c}{b}}.$$ This means the slope of $h$ is decreasing, or equivalently, $h$ is a concave function. Thus $\operatorname{conv}\left(S_{1}\right)\cup\operatorname{conv}\left(S_{2}\right)$ is convex. 
But $$S\subseteq\operatorname{conv}(S_1)\cup\operatorname{conv}(S_2)\subseteq\operatorname{conv}(S),$$ so $$\operatorname{conv}(S)=\operatorname{conv}(S_1)\cup\operatorname{conv}(S_2).$$ This implies $$\max_{(x,y)\in\operatorname{conv}(S)}(x+y)=2x_{0}.$$ Using this lemma, we may prove Lemma \[lem:sumcapacity\]. There exists a positive $M$ such that for every $m\geq M$, $$b_{m},c_{m},a_{m}+b_{m},a_{m}b_{m}+c_{m}>0$$ and $$\sqrt{(a_{m}+b_{m})^{2}+4c_{m}}-b_{m}-\frac{c_{m}}{b_{m}}>0.$$ Let $x_{0}^{(m)}$ and $x_{0}$ be the unique positive solutions to the equations $$(x_{0}^{(m)}-a_{m})(x_{0}^{(m)}+b_{m})=c_{m}$$ and $$(x_{0}-a)(x_{0}+b)=c.$$ Since $x_{0}^{(m)}$ and $x_{0}$ are continuous functions of $\left(a_{m},b_{m},c_{m}\right)$ and $(a,b,c)$, respectively, we have $$\lim_{m\rightarrow\infty}x_{0}^{(m)}=x_{0}.$$ Thus by Lemma \[lem:scaux\], $$\begin{aligned} \lim_{m\rightarrow\infty}\max_{(x,y)\in\operatorname{conv}\left(S_{m}\right)}(x+y) & =\lim_{m\rightarrow\infty}2x_{0}^{(m)}\\ & =2x_{0}\\ & =a-b+\sqrt{(a+b)^{2}+4c}.\end{aligned}$$
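Since $x+y$ is a linear function, its maximum over $\operatorname{conv}(S)$ coincides with its maximum over $S$ itself, so the claim of Lemma \[lem:scaux\] can be spot-checked numerically on a grid. The parameter values below are arbitrary choices satisfying the hypotheses, not values taken from the paper.

```python
import numpy as np

# Spot check of the auxiliary lemma: the maximum of x+y over conv(S)
# equals 2*x0, with x0 the unique positive root of (x - a)(x + b) = c.
# a, b, c are arbitrary admissible sample values.
a, b, c = 0.0, 1.0, 1.1
assert min(b, c, a + b, a * b + c) > 0
assert np.sqrt((a + b) ** 2 + 4 * c) > b + c / b

xs = np.linspace(0.0, a + c / b, 2001)    # S is contained in [0, a+c/b]^2
X, Y = np.meshgrid(xs, xs)
in_S = ((X - a) * (Y + b) <= c) & ((X + b) * (Y - a) <= c)
grid_max = (X + Y)[in_S].max()            # approximates max over S

x0 = (a - b + np.sqrt((a + b) ** 2 + 4 * c)) / 2
```

Up to the grid spacing, `grid_max` agrees with `2 * x0`, the value attained at the symmetric point $(x_0, x_0)$.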
--- abstract: 'I comment on the preprint “A Proposal to Measure Photon-Photon Scattering” that appeared recently as arXiv:1106.0465v1 \[hep-ph\].' author: - Denis Bernard title: 'Comment on: A Proposal to Measure Photon-Photon Scattering' --- The authors of Ref. [@Fujita:2011rd] proposed that the low-energy expression for the cross section of photon-photon elastic scattering (Heisenberg and Euler [@euler]), $${d\sigma\over d\Omega}\simeq {139\alpha^4\over (180\pi)^2 m^2} \left({\omega\over m} \right)^6 (3+\cos^2\theta )^2$$ be replaced by ([@kanda], to be published) $${d\sigma\over d\Omega} \simeq {\alpha^4\over (12\pi)^2 \omega^2} (3+2\cos^2\theta +\cos^4\theta ) \label{eq:kanda}$$ where $\omega$ and $m$ denote the photon energy and the electron mass, respectively. The orders of magnitude that were given for $\omega \simeq 1$ eV are $9.3 \times 10^{-67} \ \ {\rm cm}^2 $ and $2.3 \times 10^{-21} \ \ {\rm cm}^2 $, respectively.   Experimentally, an upper limit was first obtained by colliding two laser beams head-on and searching for one of the scattered photons (Moulin [*et al.*]{} [@Moulin:1996vv]) : $$\sigma < 10^{-39} \ \ {\rm cm}^2. \label{eq:direct}$$ The limit was further improved by stimulating the reaction with a third beam (Bernard [*et al.*]{} [@Bernard:2010dx]), based on the computations of Moulin [*et al.*]{} [@Moulin:2002ya] (that were revisited recently by Lundstrom, Lundin [*et al.*]{} [@Lundstrom:2005za; @Lundin:2006wu]) : $$\sigma < 1.5 \times 10^{-48} \ \ {\rm cm}^2. \label{eq:stimulated}$$ These two experimental publications [@Moulin:1996vv; @Bernard:2010dx] are indeed mentioned in Ref. [@Fujita:2011rd], but it would be interesting to understand how the prediction of eq. (\[eq:kanda\]) can be reconciled with the upper limits of eqs. (\[eq:direct\]) and (\[eq:stimulated\]).   
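A rough numerical estimate makes the comparison concrete. The short script below evaluates both differential cross sections for $\omega = 1$ eV; converting with $\hbar c \simeq 197.3$ eV·nm and evaluating at $\theta = 90^\circ$ are my assumptions for reproducing the quoted magnitudes, so only orders of magnitude should be read off.

```python
import numpy as np

# Compare the two differential cross sections at theta = 90 deg for
# omega = 1 eV.  Units: eV and cm, using hbar*c = 197.327 eV nm.
alpha = 1 / 137.036          # fine-structure constant
m     = 0.511e6              # electron mass [eV]
omega = 1.0                  # photon energy [eV]
hbarc = 197.327e-7           # hbar*c [eV cm]
cos2  = 0.0                  # cos^2(theta) at theta = 90 deg

# Heisenberg-Euler low-energy QED expression
dsdo_he = (139 * alpha**4 / (180 * np.pi)**2
           * (hbarc / m)**2 * (omega / m)**6 * (3 + cos2)**2)

# expression proposed in the criticized preprint
dsdo_kanda = (alpha**4 / (12 * np.pi)**2
              * (hbarc / omega)**2 * (3 + 2 * cos2 + cos2**2))
```

With these assumptions `dsdo_he` comes out of order $10^{-67}\,\mathrm{cm}^2$, far below any experimental limit, while `dsdo_kanda` comes out of order $10^{-21}\,\mathrm{cm}^2$, many orders of magnitude above the upper limits of eqs. (\[eq:direct\]) and (\[eq:stimulated\]).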
Also, it should be noted that the QED computation of the 4-photon loop diagrams (see, e.g., [@Kinoshita:2002ns] and a review at [@Jegerlehner:2009ry]) enters the prediction of the lepton “anomalous” magnetic moment, which is found to be in fair agreement (i.e., within three standard deviations) with the experimental measurement (e.g. [@Davier:2010nc]). [99]{} T. Fujita and N. Kanda, “A Proposal to Measure Photon-Photon Scattering,” arXiv:1106.0465 \[hep-ph\]. N. Kanda, “Light-Light Scattering”, to be published. W. Heisenberg and H. Euler, “Consequences of Dirac’s theory of positrons,” Z. Phys.  [**98**]{}, 714 (1936). F. Moulin, D. Bernard and F. Amiranoff, “Photon-photon elastic scattering in the visible domain,” Z. Phys. C [**72**]{}, 607 (1996). D. Bernard [*et al.*]{}, “Search for Stimulated Photon-Photon Scattering in Vacuum,” Eur. Phys. J. D [**10**]{}, 141 (2000) \[arXiv:1007.0104 \[physics.optics\]\]. F. Moulin and D. Bernard, “Four-wave interaction in gas and vacuum. Definition of a third order nonlinear effective susceptibility in vacuum,” Opt. Commun. [**164**]{} (1999) 137 \[arXiv:physics/0203069\]. E. Lundstrom [*et al.*]{}, “Using high-power lasers for detection of elastic photon-photon scattering,” Phys. Rev. Lett. [**96**]{}, 083602 (2006) \[arXiv:hep-ph/0510076\]. J. Lundin [*et al.*]{}, “Detection of elastic photon-photon scattering through four-wave mixing using high power lasers,” Phys. Rev. A [**74**]{}, 043821 (2006) \[arXiv:hep-ph/0606136\]. T. Kinoshita and M. Nio, “Revised alpha\*\*4 term of lepton g-2 from the Feynman diagrams containing an internal light-by-light scattering subdiagram,” Phys. Rev. Lett. [**90**]{}, 021803 (2003) \[arXiv:hep-ph/0210322\]. F. Jegerlehner and A. Nyffeler, “The Muon g-2,” Phys. Rept. [**477**]{}, 1 (2009) \[arXiv:0902.3360 \[hep-ph\]\]. M. Davier, A. Hoecker, B. Malaescu and Z. Zhang, “Reevaluation of the Hadronic Contributions to the Muon g-2 and to alpha(MZ),” Eur. Phys. J. 
C [**71**]{}, 1515 (2011) \[arXiv:1010.4180 \[hep-ph\]\].
--- abstract: 'We investigate the asymptotic state of time-periodic quantum systems with regular and chaotic Floquet states weakly coupled to a heat bath. The asymptotic occupation probabilities of these two types of states follow fundamentally different distributions. Among regular states the probability decreases from the state in the center of a regular island to the outermost state by orders of magnitude, while chaotic states have almost equal probabilities. We derive an analytical expression for the occupations of regular states of kicked systems, which depends on the winding numbers of the regular tori and the parameters temperature and driving frequency. For a constant winding number within a regular island it simplifies to Boltzmann-like weights $\exp(-{\beta_{\text{eff}}}{E^{\text{reg}}}_m)$, similar to time-independent systems. For this we introduce the regular energies ${E^{\text{reg}}}_m$ of the quantizing tori and an effective winding-number-dependent temperature $1/{\beta_{\text{eff}}}$, different from the actual bath temperature. Furthermore, the occupations of other typical Floquet states in a mixed phase space are studied, i.e. regular states on nonlinear resonances, beach states, and hierarchical states, giving rise to distinct features in the occupation distribution. Avoided crossings involving a regular state lead to drastic consequences for the entire set of occupations. We introduce a simplified rate model whose analytical solutions describe the occupations quite accurately.' author: - Roland Ketzmerick - Waltraut Wustmann title: Statistical mechanics of Floquet systems with regular and chaotic states --- Introduction ============ The response of a dynamical system to a time-periodic driving force is ubiquitous in both classical and quantum mechanics and plays a fundamental role in many physical and technical applications. 
It has opened the field of coherent control of atoms and molecules [@RicZha2000], optimal control of chemical reactions [@BruSha2003], and the manipulation of semiconductor nanodevices and heterostructures in solids [@Phi1994]. Under realistic, nonidealized conditions real physical systems interact with their environment. If the environment contains a vast number of degrees of freedom, the full dynamics of the composite system is not tractable. The system is then interpreted as an open subsystem in mutual contact with a heat bath and the dynamics of the subsystem is characterized by its reduced density operator $\rho$. Evaluating the evolution of $\rho$ for an open quantum system in a strong, time-varying external field is a nontrivial task, as the system is permanently driven out of equilibrium. For only very few systems exact analytical solutions of the damped dynamics are feasible, in particular a driven two-level system [@GriSasStoWei1993_GriSasHaeWei1995] and a driven harmonic oscillator [@GraHue1994; @ZerHae1995]. In generic systems the dynamics is studied numerically, e.g., with a focus on tunneling; see Ref. [@GriHae1998] and references therein. Especially in the regime of weak interaction with the environment, standard methods, originally established for time-independent quantum systems, have been adapted to the demands of time-periodic systems [@BluETAL1991; @KohDitHae1997; @BreHubPet2000; @Koh2001; @HonKetKoh2009]. The final state of the relaxation process has so far not received as much attention as transient phenomena, although it can be ranked as even more fundamental and is in fact a core question of statistical mechanics. The usual thermodynamic concepts for the equilibrium state of time-independent systems, such as the canonical distribution of Boltzmann weights reached in the stationary limit of a time-independent system that is weakly coupled to a heat bath, are not applicable.
The Boltzmann weights $e^{-\beta E_n}$ of the eigenstates are unique functions of the eigenenergy with the temperature $1/\beta = k_B T$ of the heat bath as the only relevant parameter, whereas microscopic details of the weak coupling play no role. Such a stationary limit, in the sense of convergence to time-independent values for all dynamical variables, is not encountered in a periodically driven system, where energy is permanently exchanged between the driven system and the environment. Instead, the relaxation process finally leads to an asymptotic state that adopts the periodicity of the driving, and that in general depends on the microscopic details of the coupling. The density operator of the time-periodic subsystem is best represented in the Floquet state basis. The Floquet states are quasi-periodic solutions of the Schrödinger equation for the time-periodic Hamiltonian without the coupling to the environment. In the Floquet basis the evolution equation for the density matrix can be approximated within the Floquet-Markov approach [@BluETAL1991; @KohDitHae1997; @BreHubPet2000; @Koh2001; @HonKetKoh2009] by a Markovian quantum master equation. In the long-time limit of the evolution the Floquet states are populated with asymptotic occupation probabilities that can be determined from a system of rate equations. Beyond the numerical evaluation of such a master equation, an intuitive understanding of these Floquet occupations is still lacking. A related problem occurs at avoided crossings, which are ubiquitous in the quasienergy spectra of generic Floquet systems, since the quasienergies are bounded within a finite interval, $0 \leq \varepsilon < \hbar\omega$, where $\omega = 2\pi/\tau$ is the driving frequency and $\tau$ is the driving period. As a consequence, the number of avoided crossings grows without limit for increasing Hilbert-space dimension, leading to a breakdown of the adiabatic theorem [@HonKetKoh1997]. 
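The boundedness of the quasienergies can be made concrete in a minimal numerical example: diagonalizing the one-period propagator of a quantum kicked rotor (a standard model chosen here purely for illustration; all parameter values below are arbitrary) yields unimodular eigenvalues whose phases define quasienergies only modulo $\hbar\omega$.

```python
import numpy as np

# Floquet operator of a quantum kicked rotor on N momentum states,
#   U = exp(-i p^2 tau / (2 hbar)) exp(-i K cos(x) / hbar),
# diagonalized to obtain quasienergies folded into [0, hbar*omega).
N, hbar, tau, K = 64, 1.0, 1.0, 2.0
omega = 2 * np.pi / tau

n = np.arange(-N // 2, N // 2)                 # momentum quantum numbers
x = 2 * np.pi * np.arange(N) / N               # angle grid
F = np.exp(-2j * np.pi * np.outer(n, np.arange(N)) / N) / np.sqrt(N)

kick = np.diag(np.exp(-1j * K * np.cos(x) / hbar))
U_kick = F @ kick @ F.conj().T                 # kick in the momentum basis
U_free = np.diag(np.exp(-1j * (hbar * n) ** 2 * tau / (2 * hbar)))
U = U_free @ U_kick                            # one-period propagator

phases = np.angle(np.linalg.eigvals(U))        # eigenphases in (-pi, pi]
quasienergies = (-hbar * phases / tau) % (hbar * omega)
```

All $N$ quasienergies land in $[0,\hbar\omega)$, so their number per unit quasienergy grows with the Hilbert-space dimension, which is the origin of the proliferating avoided crossings.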
In the presence of a heat bath this problem is approached in Ref. [@HonKetKoh2009], where it is shown that the reduced density operator $\rho$ is not affected by a small avoided crossing, provided that it is smaller than a specific effective coupling parameter and so is not ‘resolved’ by the heat bath. These findings justify the inevitable truncation of the, in general, infinite Hilbert-space dimension in numerical implementations. One way to tackle the general challenge of finding the Floquet occupations beyond their numerical evaluation is to study the semiclassical regime of one-dimensional driven systems. In their classical limit regular and chaotic motions generically coexist. This is most clearly reflected in phase space, where regular trajectories evolve on invariant tori and chaotic trajectories fill the remaining phase-space regions. According to the semiclassical eigenfunction hypothesis [@Per1973_Ber1977_Vor1979] almost all Floquet states can be classified as either regular or chaotic, provided that the phase-space regions are larger than the Planck constant. The regular states localize on the regular tori and the chaotic states typically spread out over the chaotic region. For a driven particle in a box coupled to a heat bath the Floquet occupations of regular and chaotic states were found to follow different statistical distributions [@BreHubPet2000]. The regular states, which in this example differ only slightly from the eigenstates of the undriven system, carry almost Boltzmann weights, whereas all chaotic states have nearly the same occupation probability. In this paper we concentrate on situations characteristic of strong driving, where both phase space and Floquet states are strongly perturbed compared to the originally time-independent system. We demonstrate that the Floquet occupations of the states in a regular island under these conditions deviate considerably from the Boltzmann result.
For kicked systems, making use of some reasonable assumptions, we derive an analytical expression for the regular occupations. In many cases it can be well approximated by weights of the Boltzmann type $e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_m}$. This requires the introduction of the regular energies ${E^{\text{reg}}}_m$, which are semiclassical invariants of the quantizing tori of the regular island, and the parameter $1/{\beta_{\text{eff}}}$, which is an effective temperature depending on the winding number of the regular island. Furthermore, we give an overview and interpretation for the occupations of other typical Floquet states in a mixed phase space, such as states on a resonance island chain, beach states, and hierarchical states. Avoided crossings in the Floquet spectrum can lead to severe changes in the occupations if they are larger than an effective coupling parameter. The effect can be intuitively explained by a set of effective rate equations with an additional rate ${ R^{\text{ac}} }$ between the states of the avoided crossing [@HonKetKoh2009]. It can be exploited for a switching mechanism in driven quantum systems, e.g. a weakly driven bistable system [@KetWus2009]. For the above occupation distributions for regular and chaotic states we demonstrate drastic consequences if a regular state has an avoided crossing with either a regular or a chaotic state. We introduce a simplified rate model whose analytical solution describes the Floquet occupations accurately. The paper is organized as follows: in Sec. \[sec:FloquetMarkov\] the microscopic model of driven dissipative systems and the Floquet-Markov description of its asymptotic state are sketched and the relevant coupling operators are introduced. Section \[sec:Occupations\] presents general occupation characteristics for the example of a driven quartic oscillator (Sec. \[sec:Driven\]) and a kicked rotor (Sec. \[sec:Kicked\]). They are related to the corresponding rate matrices (Sec. 
\[sec:Rates\]). In Sec. \[sec:RegularStates\] we derive an analytical expression for the regular occupations of kicked systems, depending on the winding numbers of the regular tori. For a constant winding number a simplification to the Boltzmann-like weights $e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_m}$ is shown (Sec. \[sec:approx2\_betaeff\]). An example where this is not possible is also discussed (Sec. \[sec:examples\]). Section \[sec:AddStructs\] gives an overview of the occupation characteristics of other types of Floquet states. In Sec. \[sec:AC\] the influence of avoided crossings on the Floquet occupations is demonstrated, which are compared to the analytical solutions of a simplified rate model (Sec. \[sec:AC\_model\]). Section \[sec:Summary\] summarizes the paper. Master equation in time-periodic systems {#sec:FloquetMarkov} ======================================== The coupling of a quantum system with the Hamiltonian $H_s(t)$ to a heat bath is modelled in a standard way by the composite Hamiltonian [@Wei1999] $$\label{eq:Hamiltonian_tot} H(t) = H_s(t) + H_b + H_{sb} .$$ Herein, the bath Hamiltonian $H_b = \sum_n \left( \frac{p_n^2}{2m_n} + \frac{m_n \omega_n^2}{2} x_n^2 \right)$ describes an ensemble of noninteracting harmonic oscillators coupled via the interaction Hamiltonian $H_{sb}$ to the system. In spatially extended systems this interaction is commonly assumed to be bilinear, $$\label{eq:H_sb} H_{sb} = A \sum_n c_n x_n ,$$ with some coupling operator $A$ of the system. The properties of the system-bath coupling are specified by the spectral density of the bath $J(\omega) := \frac{\pi}{2} \sum_n \frac{c_n^2}{m_n \omega_n} \left[ \delta(\omega-\omega_n) - \delta(\omega+\omega_n) \right]$. In the continuum limit the spectral density is assumed to be a smooth function which is linear for an Ohmic bath. 
An exponential cutoff beyond the spectral mode $\omega_c$ leads to $J(\omega) = \eta \omega \, e^{-\left|\omega\right|/\omega_c}$, where $\eta$ is proportional to the classical damping coefficient. In the absence of the heat bath the solutions of the time-dependent Schrödinger equation for the isolated system with the $\tau$-periodic Hamiltonian $$H_s(t+\tau) = H_s(t)$$ are the Floquet states $|\psi_i(t)\rangle$. These can be factorized into a product $$|\psi_i(t)\rangle = e^{-{\text{i}}\varepsilon_i t/\hbar} | u_i(t) \rangle$$ of a phase factor with the quasienergy $\varepsilon_i$ and a periodic state vector $|u_i(t)\rangle$, $$|u_i(t+\tau)\rangle = |u_i(t)\rangle ,$$ with the period $\tau$ of the Hamiltonian. In the presence of the heat bath the state of the system is described by the reduced density operator $\rho(t)$. Its equation of motion for time-periodic quantum systems has been derived within the Floquet-Markov approach [@BluETAL1991; @KohDitHae1997; @BreHubPet2000; @Koh2001; @HonKetKoh2009]: herein the Floquet formalism ensures a non-perturbative treatment of the coherent dynamics of the driven system. The density operator is represented in the set of the time-periodic state vectors $|u_i(t)\rangle$, which form a complete orthonormal basis at all times $t$. The coupling to the heat bath is treated perturbatively in second order of $H_{sb}$, which is valid in the limit of weak coupling between the driven system and the bath. This approximation requires a rapid decay of bath correlations compared to the typical relaxation time of the system. In this paper we use a cutoff frequency $\omega_c = 100 \omega$, which is large compared to the frequency $\omega = 2\pi/\tau$ of the driving. In the following we restrict the discussion to the limit of large times, much larger than the relaxation time. In this limit the density-matrix elements $\rho_{ij} = \langle u_i(t)|\rho(t)|u_j(t)\rangle$ are approximated as time-independent [@KohDitHae1997; @HonKetKoh2009]. 
Note that the corresponding density operator, $\sum_{i,j} | u_i(t) \rangle \rho_{ij} \langle u_j(t) |$, is still time-periodic because of the inherent time-dependence of the $|u_i(t)\rangle$. In this paper we restrict ourselves to the weak-coupling regime, where the system-bath coupling is small compared to all quasienergy spacings of a truncated Hilbert space (see discussion below). The Floquet occupations $p_i \equiv \rho_{ii}$ then obey the set of rate equations $$\label{eq:RGS_rho_diag} 0 = - p_{i} \sum_{j} R_{ij} + \sum_j p_{j} R_{ji} ,$$ which are independent of the damping coefficient $\eta$. Note that the rate equations beyond this weak-coupling regime would also contain the non-diagonal elements $\rho_{ij}$ ($i \neq j$). The rates $$\label{eq:R_ik} R_{ij} := \frac{\pi}{\hbar} \sum_K \left| A_{ij}(K) \right|^2 g(\varepsilon_j-\varepsilon_i - K {\hbar\omega}) ,$$ which describe bath-induced transitions between the Floquet states, use the Fourier coefficients $$\label{eq:x_ijK_def} A_{ij}(K) = \frac{1}{\tau} \int_0^{\tau} {\text{d}}t e^{-{\text{i}}\omega K t} A_{ij}(t)$$ of the time-periodic matrix elements $$\label{eq:x_ijt_def} A_{ij}(t) = \left\langle u_i(t) \right| A \left| u_j(t) \right\rangle .$$ The correlation function $g(E) = \pi^{-1} n_\beta(E) J(E/\hbar)$ of the bath coupling operator contains the spectral density $J(\omega)$ and the thermal occupation number $n_\beta(E)$ of the boson bath with temperature $1/\beta$. The reduction to the set of rate equations  seems possible only for systems with a finite dimension of the Hilbert space, since otherwise the quasienergies densely fill the interval $[0,{\hbar\omega})$. However, as demonstrated in Ref. [@HonKetKoh2009], near-degeneracies much smaller than the coupling strength are not resolved by the interaction with the heat bath and do not influence the asymptotic density operator. The Hilbert-space dimension can therefore be truncated, keeping only those Floquet states of non-negligible occupation. 
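The steps above translate directly into a small numerical pipeline: Fourier coefficients of the coupling matrix elements, the bath correlation function, the rate matrix, and the stationary solution of the rate equations. The following Python sketch is our own illustration (not code from this work), with $\hbar = 1$ and illustrative bath parameters $\eta$ and $\omega_c$:

```python
import numpy as np

# Sketch of the weak-coupling pipeline (illustrative, not the authors' code).
# Units with hbar = 1; eta and w_c are assumed bath parameters.

def fourier_coefficients(A_t):
    """A(K) = (1/tau) int_0^tau dt e^{-i w K t} A(t) from N equidistant
    samples A_t[k] = A(k tau / N); coefficient K sits at FFT index K mod N."""
    A_t = np.asarray(A_t)
    return np.fft.fft(A_t, axis=0) / A_t.shape[0]

def g(E, beta, eta=1.0, w_c=100.0):
    """Bath correlation function g(E) = n_beta(E) J(E) / pi for an Ohmic
    spectral density J(w) = eta w exp(-|w|/w_c)."""
    return eta * E * np.exp(-np.abs(E) / w_c) / (np.pi * np.expm1(beta * E))

def rates(A_K, eps, omega, beta):
    """R_ij = pi sum_K |A_ij(K)|^2 g(eps_j - eps_i - K omega).
    A_K maps the integer K to the matrix of Fourier coefficients A(K)."""
    n = len(eps)
    R = np.zeros((n, n))
    with np.errstate(divide='ignore', invalid='ignore'):
        for K, A in A_K.items():
            dE = eps[None, :] - eps[:, None] - K * omega
            R += np.abs(np.asarray(A))**2 * np.nan_to_num(g(dE, beta))
    R *= np.pi
    np.fill_diagonal(R, 0.0)  # diagonal terms drop out of the rate equation
    return R

def stationary(R):
    """Occupations p solving 0 = -p_i sum_j R_ij + sum_j p_j R_ji."""
    W = R.T - np.diag(R.sum(axis=1))     # generator: dp/dt = W p
    p = np.abs(np.linalg.svd(W)[2][-1])  # null vector of W
    return p / p.sum()
```

As a consistency check, for a static two-level system with a single $K=0$ coupling component the stationary occupations reduce to the Boltzmann ratio $p_1/p_0 = e^{-\beta(\varepsilon_1-\varepsilon_0)}$.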
Equation  is formally identical to the familiar system of rate equations describing the equilibrium state in time-independent systems. In contrast, however, specifics of the time-periodic system are present in the rates, Eq. , whose structure does not, in general, admit a detailed balance relation [@Koh2001]. Coupling operator for extended and for cyclic systems ----------------------------------------------------- For extended systems we assume as usual [@Wei1999] the linear coupling operator $$A = x$$ in Eq.  for the interaction Hamiltonian $H_{sb}$ with the heat bath. For cyclic systems defined on the unit interval $[0,1)$ with periodic boundary conditions in $x$ the coupling operator $x$ would be discontinuous at the borders of the interval and the coupling to the heat bath would therefore not be homogeneous. An adapted coupling scheme with the interaction Hamiltonian $$\label{eq:H_sb_cyclic} H_{sb} = \int_0^{2\pi} {\text{d}}\alpha \frac{\sqrt{2}}{2\pi} \sin(2\pi x + \alpha) \sum_n c_n x_n \delta(\alpha - \alpha_n)$$ has been proposed in Ref. [@Coh1994] for such situations. The angles $\alpha_n$ characterize the individual bath oscillators and are equidistributed in the interval $[0,2\pi)$. The new spectral density $$J(\omega,\alpha) = \frac{\pi}{2} \sum_n \frac{c_n^2}{m_n \omega_n} \delta(\alpha-\alpha_n) \left[ \delta(\omega-\omega_n) - \delta(\omega+\omega_n) \right]$$ hence factorizes into independent factors, $J(\omega, \alpha)= J(\omega)/(2\pi)$, with the spectral density $J(\omega)$ as defined above and the homogeneous angular density $1/(2\pi)$. The spatially periodic interaction Hamiltonian  is continuous and, by virtue of the equidistributed angles $\alpha_n$, models the interaction with a homogeneous environment, where no position is singled out. Making use of the trigonometric addition theorem $\sin(2\pi x + \alpha) = \sin(2\pi x)\cos(\alpha) + \cos(2\pi x)\sin(\alpha)$, the interaction Hamiltonian  leads to the same system of rate equations, Eq. . 
But now the rates $$\label{eq:R_ik_cyclic} R_{ij} = R_{ij}^{(1)} + R_{ij}^{(2)}$$ are composed of two independent contributions from the simpler coupling operators $$\begin{aligned} \label{eq:A1_cyclic} A^{(1)} &=& \sin(2\pi x)/(2\pi), \\ \label{eq:A2_cyclic} A^{(2)} &=& \cos(2\pi x)/(2\pi),\end{aligned}$$ respectively, to be used in the interaction Hamiltonian $H_{sb}$ in Eq. . For this result we use that the mixed second-order term vanishes, $\int_0^{2\pi} {\text{d}}\alpha \int_0^{2\pi} {\text{d}}\alpha' \cos(\alpha) \sin(\alpha') \delta(\alpha-\alpha_n) \delta(\alpha'-\alpha_n) = \int_0^{2\pi} {\text{d}}\alpha \cos(\alpha) \sin(\alpha) = 0$, while the other two terms give rise to the operators in Eqs.  and . Occupations of regular and chaotic Floquet states {#sec:Occupations} ================================================= In this section we study the Floquet occupations for two classes of periodically driven systems, the additively driven quartic oscillator as a representative of a continuously driven system and the quantum kicked rotor from the class of kicked systems. In both cases we demonstrate that the Floquet occupations of regular and chaotic Floquet states follow markedly different distributions. Similar observations have been made for a driven quantum particle in a box [@BreHubPet2000]. While there the regular states were found to carry Boltzmann weights, we typically find significant deviations. Numerical results will be presented in this section and the quantitative analysis of the occupations of the regular states will be deferred to Sec. \[sec:RegularStates\]. 
Driven Quartic Oscillator {#sec:Driven} ------------------------- We consider as an example of a continuously driven system the additively driven quartic oscillator with the Hamiltonian $$H_s(t) = \frac{p^2}{2m} + V_0 \left( \frac{x^4}{x_0^4} + \kappa \frac{x}{x_0} \cos(\omega t) \right) .$$ We introduce the dimensionless quantities $\tilde x = x / x_0$, $\tilde H = H / V_0$, $\tilde p = p / p_0$ with $p_0 = \sqrt{m V_0}$, $\tilde t = t \cdot p_0 / (x_0 m)$, $\tilde \omega = \omega \cdot (x_0 m) / p_0$, and also $\tilde \hbar = \hbar / (x_0 p_0)$, the ratio of the Planck constant to a typical phase-space area. In the following we omit the tilde; the dimensionless Hamiltonian then reads $$\label{eq:H_quart} H_s(t) = \frac{p^2}{2} + x^4 + \kappa x \cos( \omega t) .$$ At $\kappa=0.2$ and $\omega = 5/6$ the stroboscopic Poincaré section of the phase space $(x,p)$ at integer multiples $t=n\tau$ of the driving period, $\tau = 2\pi/\omega$, features a chaotic domain, see Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](a). Furthermore, there are two distinct regular regions: first the highly excited tori, which are only slightly influenced by the driving, and second a regular island embedded in the chaotic sea. The Floquet states are determined using the $(t,t')$-technique [@PfeLev1983_PesMoi1993]. Their energy expectation value, $\langle \psi_i(t) |H_s(t)| \psi_i(t)\rangle = \langle u_i(t) |H_s(t)| u_i(t)\rangle$, oscillates with the period of the driving. It is convenient to introduce the cycle-averaged energy $$\label{eq:Emean} \langle E_i \rangle := \frac{1}{\tau} \int_t^{t+\tau} {\text{d}}t' \langle u_i(t') |H_s(t')| u_i(t')\rangle - E_0 .$$ The energy shift $E_0$ is determined by the classical periodic orbit at the center of the regular island, such that the cycle-averaged energy vanishes there. ![ (Color online) (a) Stroboscopic Poincaré section of the classical phase space of the driven oscillator, Eq. . 
The size of the chosen dimensionless Planck constant $h$ is indicated in the lower right corner. (b) Floquet occupations $p_i$ vs. cycle-averaged energies $\langle E_i \rangle$ compared to the Boltzmann-like prediction $\exp(-\beta \langle E_i \rangle)$ (dashed line). The insets show Husimi representations of a regular Floquet state localized in the central island, a chaotic Floquet state, and a regular state on a surrounding torus. The parameters are $\kappa = 0.2$, $\omega=5/6$, ${\hbar}= 0.002$, and $\beta = 100$. []{data-label="fig:phSp_Emean_rho_Ereg_rho_quart"}](figure1.eps){width="8.5cm"} For a sufficiently small value of $h$ almost all Floquet states can be classified as either regular or chaotic according to the semiclassical eigenfunction hypothesis [@Per1973_Ber1977_Vor1979]. The regular states are localized on the regular island and can be ordered by a quantum number $m$, whereas the chaotic states typically spread over the entire chaotic phase-space area and fluctuate irregularly. The different types of states are visualized in phase space by means of their Husimi representation $H_\psi(x,p) := \left| \left\langle \alpha(x,p) \right. \left| \psi \right\rangle \right|$, i.e. the projection onto the coherent states $| \alpha(x,p) \rangle$ centered at the phase space points $(x,p)$, see insets in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b). In the example of Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\] the small central island is about $26$ times larger than the dimensionless Planck constant $h$, indicated in the lower right corner of Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](a). Thus the central island supports $26$ regular states. In addition, there are $97$ chaotic states spreading over the chaotic region, which is surrounded by regular tori supporting a further group of infinitely many regular states. 
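The stroboscopic section in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](a) is obtained classically by integrating Hamilton's equations for Eq.  and recording $(x,p)$ once per driving period. A minimal sketch (our own illustration; integrator step counts and initial conditions are arbitrary choices):

```python
import numpy as np

# Stroboscopic Poincare section of the dimensionless driven quartic
# oscillator H = p^2/2 + x^4 + kappa*x*cos(w*t): sample (x, p) at integer
# multiples of the driving period tau = 2*pi/w.  Illustrative sketch only.

KAPPA, OMEGA = 0.2, 5.0 / 6.0
TAU = 2 * np.pi / OMEGA

def force(x, t):
    """dp/dt = -dH/dx for the driven quartic oscillator."""
    return -4.0 * x**3 - KAPPA * np.cos(OMEGA * t)

def rk4_step(x, p, t, dt):
    """One classical Runge-Kutta-4 step of Hamilton's equations."""
    def deriv(x, p, t):
        return p, force(x, t)
    k1x, k1p = deriv(x, p, t)
    k2x, k2p = deriv(x + dt/2*k1x, p + dt/2*k1p, t + dt/2)
    k3x, k3p = deriv(x + dt/2*k2x, p + dt/2*k2p, t + dt/2)
    k4x, k4p = deriv(x + dt*k3x, p + dt*k3p, t + dt)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            p + dt/6*(k1p + 2*k2p + 2*k3p + k4p))

def poincare_section(x0, p0, n_periods=200, steps_per_period=256):
    """Stroboscopic points (x, p) at t = n*tau for one trajectory."""
    dt = TAU / steps_per_period
    x, p, t = x0, p0, 0.0
    pts = [(x, p)]
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            x, p = rk4_step(x, p, t, dt)
            t += dt
        pts.append((x, p))
    return np.array(pts)
```

Launching many such trajectories with different initial conditions and overlaying the recorded points yields the mixed-phase-space picture of regular islands embedded in the chaotic sea.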
The cycle-averaged energies $\langle E_i \rangle$ of these three different types of Floquet states form distinct intervals with only small overlaps, as indicated by the arrows in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b). The regular states of the central island are lowest in cycle-averaged energy, followed by the chaotic states and finally the highly excited regular states of the surrounding tori. The oscillator coupled to a heat bath is treated as explained in Sec. \[sec:FloquetMarkov\] with the linear coupling operator $A=x$ for this spatially extended system. We restrict the consideration to the weak-coupling regime, where the occupations are well described by the rate equation . In Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b) the resulting Floquet occupations $p_i$ are shown as functions of the cycle-averaged energies $\langle E_i \rangle$. The monotonically decreasing occupations at low values of $\langle E_i \rangle$ belong to the central regular island, with the state in the center of the island having the highest occupation. At intermediate values of $\langle E_i \rangle$ one finds the occupations of the chaotic states, which fluctuate around a mean value ${\bar{p}_{\text{ch}}}$, with a very small variance compared to the range of occupations of the regular states. This is similar to the observation for chaotic states in Ref. [@BreHubPet2000] for a driven particle in a box. At high values of $\langle E_i \rangle$, there are again monotonically decreasing occupations belonging to the regular states of the surrounding tori. The observed characteristics of the occupations $p_i$ are clearly in contrast to the naive expectation $p_i \sim e^{-\beta \left\langle E_i\right\rangle}$ motivated by the Boltzmann weights of equilibrium thermodynamics. Note that even the occupations of the (low-energy) regular states notably differ from the Boltzmann result, indicated by the dashed line in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b). 
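The Husimi projection used for the insets can be sketched in its simplest one-dimensional form with Gaussian coherent states of width $s$ (a simplified stand-in for the actual phase-space conventions of the figures; grid and widths below are illustrative assumptions):

```python
import numpy as np

def husimi(psi, x_grid, x0, p0, hbar=1.0, s=1.0):
    """|<alpha(x0, p0)|psi>| for a normalized Gaussian coherent state of
    width s centered at (x0, p0), evaluated by quadrature on a position grid."""
    dx = x_grid[1] - x_grid[0]
    alpha = (np.pi * s**2) ** -0.25 * np.exp(
        -(x_grid - x0) ** 2 / (2 * s**2) + 1j * p0 * x_grid / hbar)
    return np.abs(np.sum(np.conj(alpha) * psi) * dx)
```

Evaluating this on a grid of phase-space points $(x_0, p_0)$ produces the density plots shown in the insets; for a coherent state displaced by $\Delta x$ the overlap is $e^{-\Delta x^2/(4s^2)}$, which provides a quick check of the quadrature.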
A quantitative analysis of these observations will be presented in Sec. \[sec:RegularStates\] for the numerically more convenient kicked rotor. Kicked rotor {#sec:Kicked} ------------ ![ (Color online) (a) Stroboscopic Poincaré section of the classical phase space of the kicked rotor, Eq. , for $\kappa = 2.9$. The size of the chosen dimensionless Planck constant $h$ is indicated in the lower right corner. (b) Floquet occupations $p_i$ vs. cycle-averaged energies $\langle E_i \rangle$ compared to the Boltzmann-like prediction $\exp(-\beta \langle E_i \rangle)$ (dashed line). The insets show Husimi representations of two regular and a chaotic Floquet state. (c) Floquet occupations $p_m$ of regular states vs. regular energies ${E^{\text{reg}}}_m$, defined in Eq. , compared to the Boltzmann-like prediction Eq.  with the inverse effective temperature ${\beta_{\text{eff}}}= 0.85 \beta$ of Eq.  (red solid line), compared to the inverse bath temperature $\beta$ (dashed line). The parameters are $h = 1/210$ and $\beta = 100$. []{data-label="fig:phSp_Emean_rho_Ereg_rho"}](figure2.eps){width="8.5cm"} Kicked quantum systems feature all essential phase-space characteristics of periodically driven systems. They allow for a simplified numerical and conceptual treatment. As a paradigmatic model for a driven system with a mixed phase space we consider here the quantum kicked rotor. Its dynamics is generated by the Hamiltonian $$\label{eq:H_kicked} H_s(t) = T(p) + V(x) \cdot \tau \sum_n \delta(t-n \tau) ,$$ with the kinetic energy $T(p) = p^2/(2m)$ and the potential $V(x) = V_0 \cos(2\pi x/x_0)$ acting at the kicks. We study it on a two-torus with dimensions $x_0$ and $p_0=m x_0 /\tau$. We make the Hamiltonian dimensionless by a similar transformation as in Sec. \[sec:Driven\], where now $\tilde H = H \cdot m/p_0^2$ and the dimensionless kick period is $\tau = 1$. 
With the rescaled kick strength $\kappa = V_0 \cdot (2\pi)^2 m/p_0^2$ the Hamiltonian reads $$\label{eq:H_kiRo} H_s(t) = \frac{p^2}{2} + \frac{\kappa}{(2\pi)^2} \cos(2\pi x) \sum_n \delta(t-n) .$$ In an intermediate regime of the kick strength the Poincaré section of the phase space features a regular island embedded in the chaotic sea, see Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](a) for $\kappa=2.9$. The Floquet states are evaluated as eigenstates of the time evolution operator $U$ over one period $\tau$, which factorizes into a potential and a kinetic part $U = e^{-{\text{i}}\tau V(x)/{\hbar}} e^{-{\text{i}}\tau T(p)/{\hbar}}$. The quantization on the two-torus relates the effective Planck constant $h$ to the dimension $N$ of the Hilbert space by the condition $h = 2\pi{\hbar}= 1/N$. For $h=1/210$ the area of the regular island supports $23$ regular states. The asymptotic state of the kicked rotor weakly coupled to a heat bath is again determined from Eq. , with the composite rates $R_{ij} = R_{ij}^{(1)} + R_{ij}^{(2)}$ from Eq.  appropriate for a cyclic system. The resulting Floquet occupations $p_i$ are shown in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b) as functions of the cycle-averaged energies $\langle E_i \rangle$, with $E_0 = -\kappa/(2\pi)^2$. The regular and chaotic states are again ordered with respect to this quantity, as indicated by the arrows in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b). The regular states have small values of $\langle E \rangle$, since both kinetic and potential energies are minimal in the center of the regular island, whereas the chaotic states have a stronger overlap with regions of phase space with higher energies. Similarly to the driven oscillator, the regular occupations depend monotonically on $\langle E \rangle$, while the occupations of the chaotic states seem uncorrelated with the cycle-averaged energy $\langle E \rangle$ and form a plateau with only weak fluctuations around a mean value ${\bar{p}_{\text{ch}}}$. 
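Since the two factors of $U$ are diagonal in the position and momentum representations, respectively, the Floquet states can be obtained by building $U$ with two FFTs and diagonalizing it. A minimal sketch (our own illustration; Bloch phases on the torus are set to zero):

```python
import numpy as np

# Floquet states of the quantum kicked rotor on the two-torus as
# eigenstates of the one-period evolution operator
# U = exp(-i V(x)/hbar) * exp(-i T(p)/hbar) with tau = 1 and h = 1/N.

def kicked_rotor_floquet(N=210, kappa=2.9):
    """Diagonalize U on an N-dimensional torus Hilbert space."""
    hbar = 1.0 / (2 * np.pi * N)
    x = np.arange(N) / N                    # position grid on [0, 1)
    p = np.fft.fftfreq(N)                   # momenta in [-1/2, 1/2)
    V = kappa / (2 * np.pi)**2 * np.cos(2 * np.pi * x)
    kin = np.exp(-1j * p**2 / (2 * hbar))   # kinetic phase, T(p) = p^2/2
    pot = np.exp(-1j * V / hbar)            # kick phase
    # U = D_V F^{-1} D_T F: kinetic part applied in momentum space,
    # the kick in position space.
    U = pot[:, None] * np.fft.ifft(
        kin[:, None] * np.fft.fft(np.eye(N), axis=0), axis=0)
    evals, evecs = np.linalg.eig(U)
    # Quasienergies folded into the first "Brillouin zone" [0, hbar*omega).
    quasienergies = np.mod(-hbar * np.angle(evals), 2 * np.pi * hbar)
    return quasienergies, evecs
```

The eigenvectors returned here are the Floquet states at the stroboscopic times $t = n\tau$; their Husimi representations distinguish regular from chaotic states as in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b).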
Rate matrix {#sec:Rates} ----------- The Floquet rate matrix $R_{ij}$ determines the Floquet occupations via Eq. . In Fig. \[fig:ratematrix\] we show $R_{ij}$ for both the driven oscillator in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\] (Fig. \[fig:ratematrix\](a)) and the kicked rotor in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] (Fig. \[fig:ratematrix\](b)). Employing $\langle E_i \rangle$ as the ordering parameter for the entries $i,j$, the regular and chaotic parts are well-separated, revealing a distinct block structure of the matrix [@BreHubPet2000]. There are only weak rates between the regular and the chaotic subspaces. Similarly, the rates between the two different regular subspaces in the case of the driven oscillator in Fig. \[fig:ratematrix\](a) are practically zero. Also by virtue of the chosen ordering, the regular domains feature a band structure with particular dominance of the first off-diagonals (nearest-neighbor rates). The rates in the subspace of the chaotic states, on the contrary, fluctuate strongly. ![ Density representation of the rate matrix $R_{ij}$, with entries $i \neq j$ sorted by increasing $\langle E_i \rangle$ for (a) the driven oscillator, Eq. , and (b) the kicked rotor, Eq. , with enlarged domain of regular island states, revealing strong nearest-neighbor rates. For parameters see Figs. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\] and \[fig:phSp\_Emean\_rho\_Ereg\_rho\], respectively. []{data-label="fig:ratematrix"}](figure3.eps){width="8.5cm"} One can thus observe a close relation between the structure of the rate matrix $R_{ij}$ and the resulting set of occupations. First, the almost independent behaviour of the occupations of regular states and chaotic states is due to the relatively weak rates $R_{ij}$ connecting the corresponding subspaces. Furthermore, the random character of the chaotic rate submatrix gives rise to the equally random character of the set of chaotic Floquet occupations. 
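This interplay can be mimicked by a toy rate matrix with the same block structure: a chain with dominant nearest-neighbor rates, a random block, and a single weak link between them (all numbers below are illustrative assumptions, not fitted to the systems above):

```python
import numpy as np

rng = np.random.default_rng(1)
n_reg, n_ch = 10, 40                       # sizes of the two blocks
n = n_reg + n_ch
R = np.zeros((n, n))
ratio = np.exp(-0.5)                       # assumed ratio p_{m+1}/p_m
for m in range(n_reg - 1):                 # dominant nearest-neighbor rates
    R[m, m + 1] = ratio                    # rate m -> m+1
    R[m + 1, m] = 1.0                      # rate m+1 -> m
R[n_reg:, n_reg:] = rng.random((n_ch, n_ch))      # random "chaotic" block
R[n_reg - 1, n_reg] = R[n_reg, n_reg - 1] = 0.01  # single weak link
np.fill_diagonal(R, 0.0)

# Stationary occupations: null vector of the generator, dp/dt = W p.
W = R.T - np.diag(R.sum(axis=1))
p = np.abs(np.linalg.svd(W)[2][-1])
p /= p.sum()
p_reg, p_ch = p[:n_reg], p[n_reg:]
```

Because the two blocks are connected by a single pair of rates, the stationary current through this link vanishes, so the chain obeys exact detailed balance with $p_{m+1}/p_m = e^{-0.5}$, while the occupations of the random block form a fluctuating plateau.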
Note that for the kicked rotor in the semiclassical limit $h \to 0$ the mean value ${\bar{p}_{\text{ch}}}$ decreases, since more and more regular states emerge. The relative variance $\overline{ \left( p_i - {\bar{p}_{\text{ch}}}\right)^2 } \big/ {\bar{p}_{\text{ch}}}^2$ of the chaotic occupations $p_i$ also decreases in this limit, and we observe a universal scaling that can be analyzed with the help of a random-rate model [@Wus2010]. These aspects of the chaotic occupations are not explored in this paper; instead, the focus is on the regular occupations. Regular States {#sec:RegularStates} ============== The observations in Figs. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\] and \[fig:phSp\_Emean\_rho\_Ereg\_rho\] indicate that the asymptotic state of a time-periodic system in weak interaction with a heat bath carries signatures of the classical phase-space structure. The Floquet occupations of the regular and the chaotic states behave very differently, e.g. as functions of the cycle-averaged energy $\langle E \rangle$. In this section we focus on the asymptotic occupations of the regular states, which we label by their quantum number $m$ starting with $m=0$ for the state in the center of the island. Figures \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b) and \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b) suggest a roughly exponential dependence for the regular occupations $p_m$ as functions of the cycle-averaged energies $\langle E_m\rangle$. However, the regular occupations are different from the Boltzmann weights $e^{-\beta \langle E_m \rangle}$ with the true inverse bath temperature $\beta$. In fact, there is no physical reason for a coincidence with the Boltzmann distribution when expressed in terms of the qualitatively suitable but arbitrary energy measure $\langle E_m \rangle$  [@Koh2001]. 
In the following sections we therefore make use of an alternative energy measure for the regular states, the regular energy ${E^{\text{reg}}}_m$ (Sec. \[sec:Ereg\]), allowing us to consistently parametrize the regular occupations as functions of ${E^{\text{reg}}}_m$ (Sec. \[sec:approx1\_nN\]). Often, this functional dependence is approximately exponential (Sec. \[sec:approx2\_betaeff\]). Examples are presented in Sec. \[sec:examples\]. Regular energy ${E^{\text{reg}}}_m$ {#sec:Ereg} ----------------------------------- A time-periodic system is equivalent to an autonomous system with the time as an additional coordinate, leading to the Hamiltonian $H_s'(x,p;t,p_t) = H_s(x,p;t) + p_t$ in the extended phase space, which has periodic boundary conditions in $t$. This allows the application of Einstein-Brillouin-Keller (EBK) quantization rules for the regular tori and the determination of semiclassical Floquet states on the quantizing tori and their associated semiclassical quasienergies [@BreHol1991; @BenKorMirBen1992]. We introduce the regular energies $$\label{eq:Ereg_m} {E^{\text{reg}}}_{m} := {\hbar\omega}\nu_m \left(m + \frac{1}{2}\right) - \left\langle L \right\rangle_m + \left\langle L \right\rangle_c .$$ Herein, $\nu_m$ is the winding number, i.e. the ratio of the winding frequency of a trajectory on the $m$th torus around the central orbit to the driving frequency $\omega$. Furthermore, $\left\langle L \right\rangle_m$ is the long-time average of the Lagrangian $L = p\dot x - H_s$ for an arbitrary trajectory on the $m$th torus. For convenience we add the time-averaged Lagrange function $\langle L \rangle_c$ of the central orbit of the island. The regular energies of Eq.  are related to the semiclassical quasienergies [@BreHol1991; @BenKorMirBen1992] by $$\label{eq:Eq_Ereg} \varepsilon_m = {E^{\text{reg}}}_m - \langle L \rangle_c \mod {\hbar\omega},$$ whereas there is no relation to the cycle-averaged energies $\langle E_m \rangle$. 
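For illustration, the winding number $\nu_m$ can be estimated from the dominant Fourier frequency of a stroboscopic orbit around the island center. The following sketch does this for the classical kicked-rotor map (our own simplified stand-in for a full frequency map analysis; the fixed point $(0.5, 0)$ and $\kappa = 2.9$ refer to the island of Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](a)):

```python
import numpy as np

def kicked_rotor_map(x, p, kappa):
    """One period of the classical kicked rotor (standard-map form)."""
    p = p + kappa / (2 * np.pi) * np.sin(2 * np.pi * x)
    x = (x + p) % 1.0
    return x, p

def winding_number(x0, p0, kappa=2.9, n=4096):
    """Winding number of the torus through (x0, p0) from the dominant
    Fourier frequency of the orbit around the island center."""
    xc, pc = 0.5, 0.0                      # stable fixed point of the map
    z = np.empty(n, dtype=complex)
    x, p = x0, p0
    for i in range(n):
        z[i] = (x - xc) + 1j * (p - pc)    # complex coordinate around center
        x, p = kicked_rotor_map(x, p, kappa)
    z -= z.mean()                          # remove any residual offset
    freqs = np.fft.fftfreq(n)
    peak = freqs[np.argmax(np.abs(np.fft.fft(z)))] % 1.0
    return min(peak, 1.0 - peak)           # fold into [0, 1/2]
```

For a small orbit the result approaches the linearized value $\arccos(1-\kappa/2)/(2\pi) \approx 0.32$, consistent with the winding numbers quoted below for $\kappa = 2.9$.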
The time-averaged Lagrangian $\langle L \rangle$ varies only slowly inside the island. The winding number $\nu$ likewise varies slowly across the island. To determine $\nu$ we have applied the frequency map analysis [@LasFroeCel1992], which is based on a Fourier decomposition of the quasiperiodic orbits within a stable regular island. Note that due to nonlinear resonances and small chaotic layers within a regular island the semiclassical quantization might require interpolations of the quantities $\nu_m$ and $\langle L \rangle_m$ or the introduction of a fictitious integrable system [@BenKorMirBen1992; @BohTomUll1993_BaeKetLoeSch2008]. ![ (Color online) Ratio between the occupations $p_m$ and an exponential fit $p_0 e^{-{\beta_{\text{fit}}}(e_m-e_0)}$ for (a) the kicked rotor and (b) the continuously driven oscillator using the regular energies $e_m = {E^{\text{reg}}}_m$ (red circles) and the cycle-averaged energies $e_m = \langle E_m \rangle$ (black diamonds). For parameters see Figs. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] and \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\], respectively. []{data-label="fig:Ereg_Emean_devfit"}](figure4.eps){width="8.5cm"} Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\](c) shows the occupations $p_m$ of the regular states of the kicked rotor as functions of the regular energies ${E^{\text{reg}}}_m$. The functional dependence of the occupations $p_m$ is close to exponential, but also different from the Boltzmann weights $e^{-\beta {E^{\text{reg}}}_m}$ with the true bath temperature $1/\beta$. However, the assumption of an exponential dependence of $p_m$ vs. ${E^{\text{reg}}}_m$ is fulfilled far better than vs. $\langle E_m \rangle$. This is demonstrated in Fig. 
\[fig:Ereg\_Emean\_devfit\], where the ratio between the occupations $p_m$ and the respective exponential fit $p_0 e^{-{\beta_{\text{fit}}}(e_m-e_0)}$ is shown for $e_m$ being the regular energy ${E^{\text{reg}}}_m$ (red circles) and the cycle-averaged energy $\langle E_m \rangle$ (black diamonds). The fit involves the parameter ${\beta_{\text{fit}}}:= (\log p_1 - \log p_0)/(e_0-e_1)$. For the kicked rotor, Fig. \[fig:Ereg\_Emean\_devfit\](a), the considered ratio for $e_m = {E^{\text{reg}}}_m$ is close to $1$ for the majority of regular states, whereas the ratio for $e_m = \langle E_m \rangle$ systematically deviates from $1$ already for smaller values of $m$. This indicates that the exponential scaling is fulfilled far better when using the regular energies ${E^{\text{reg}}}_m$. Likewise, Fig. \[fig:Ereg\_Emean\_devfit\](b) shows the same ratio for the regular states of the central island in the driven oscillator, Eq. . The regular energies are again determined according to the above semiclassical quantization, where the frequency map analysis is applied to the solutions of the classical equations of motion, evaluated in the Poincaré section. Here, the quality of the fit with respect to ${E^{\text{reg}}}_m$ is only marginally better than the fit with respect to $\langle E_m \rangle$, see Fig. \[fig:Ereg\_Emean\_devfit\](b). We have evidence that next-nearest-neighbor rates are responsible for this. Note that for other examples of continuously driven systems we typically find a better quality of the fit with respect to ${E^{\text{reg}}}_m$, similar to the situation in Fig. \[fig:Ereg\_Emean\_devfit\](a). Restriction to nearest-neighbor rates $R_{m,m\pm 1}$ {#sec:approx1_nN} ---------------------------------------------------- In this section, the ratio of the rates $R_{m,m+1}$ and $R_{m+1,m}$ between two neighboring regular states $m$ and $m+1$ is analyzed for kicked systems. 
With the help of a detailed balance condition the occupations $p_m$ can be related to the winding numbers of the regular tori. In the lower part of Fig. \[fig:ratematrix\](b) the rate matrix $R_{ij} = R_{ij}^{(1)} + R_{ij}^{(2)}$, Eq. , for the regular subspace of the kicked rotor is shown. We remind the reader that the indices are ordered by increasing $\langle E \rangle$, coinciding with the natural order of growing quantum number $m$. Figure \[fig:ratematrix\](b) illustrates that the nearest-neighbor rates $R_{m,m \pm 1}$ are dominant among the regular states. These nearest-neighbor rates stem mainly from the rates $R_{ij}^{(1)}$ originating from the coupling operator $A^{(1)} = \sin(2\pi x)/(2\pi)$, whereas the rates $R_{m,m \pm 2}$ between next-nearest-neighbor states are mainly due to the contribution $R_{ij}^{(2)}$ of the coupling operator $A^{(2)} = \cos(2\pi x)/(2\pi)$. In the following analytical considerations we will neglect the contribution of $R_{ij}^{(2)}$ and in addition approximate the coupling operator $A^{(1)} = \sin(2\pi x)/(2\pi)$ inside the regular island at $x=0.5$ by the linear coupling operator $A=-x$. Using the resulting rate matrix in Eq. , we observe almost the same regular occupations as in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b), which in first approximation differ from the latter only by a tiny $m$-independent factor. Only the occupations of the chaotic states are strongly affected by the different coupling scheme, as for them the discontinuity of the coupling operator $x$ at the border of the unit cell is not negligible. In the rate matrix due to the linear coupling operator, $A=-x$, the nearest-neighbor rates dominate strongly, and the next-nearest-neighbor rates $R_{m,m \pm 2}$ of regular states $m$ and $m \pm 2$ are zero for symmetry reasons. We will neglect higher-order rates in the following analysis. 
Thus, the total rate balance among the regular states can be reduced to the detailed balance condition $$\label{eq:detbal_approx} \frac{p_{m+1}}{p_{m}} = \frac{R_{m,m+1}}{R_{m+1,m}}$$ between two neighboring regular states $m$ and $m+1$. Assuming Eq.  and using the definition of the rates in Eq.  the occupation ratio between neighboring regular states becomes $$\begin{aligned} \lefteqn{ \frac{p_{m+1}}{p_{m}} } \nonumber\\ &=& \frac{ \sum_K \left| A_{m,m+1}(K) \right|^2 g\left( \varepsilon_{m+1}-\varepsilon_{m} - K{\hbar\omega}\right) }{ \sum_K \left| A_{m+1,m}(K) \right|^2 g\left( \varepsilon_{m}-\varepsilon_{m+1} - K{\hbar\omega}\right) } \\ &=& \label{eq:R_ij_rel} \frac{ \sum_K \left| x_m(K) \right|^2 g\left( \zeta_m + K{\hbar\omega}\right) }{ \sum_K \left| x_m(K) \right|^2 g\left( \zeta_m + K{\hbar\omega}\right) e^{\beta (\zeta_m + K{\hbar\omega})} } ,\end{aligned}$$ where the properties $A_{m,m+1}(K) = A_{m+1,m}^{*}(-K)$ and $g(E) = g(-E) e^{-\beta E}$ have been used, and where we introduced the short-hand notations $$x_m := A_{m+1,m}$$ for the (regular) nearest-neighbor matrix elements of the operator $A=-x$ and $$\zeta_m := \varepsilon_{m+1} - \varepsilon_m$$ in the arguments of the correlation function $g(E)$. If there were just a single Fourier component $x_m(K^{*})$, which is approximately the case for a weakly driven system, then the occupation ratio would simplify to $p_{m+1}/p_{m} = e^{-\beta (\zeta_m + K^{*}{\hbar\omega})}$, resulting in Boltzmann-like occupations. In general, however, several components $K$ have to be considered. Multiplying numerator and denominator of the fraction in Eq.  
with $\left| x_m(0) \right|^{-2} g\left( \zeta_m \right)^{-1}$, $$\label{eq:R_ijji_kicked} \frac{p_{m+1}}{p_{m}} = \frac{ \sum_K \frac{ \left| x_m(K) \right|^2 }{ \left| x_m(0) \right|^2 } \frac{ g\left( \zeta_m + K{\hbar\omega}\right) }{ g\left( \zeta_m \right) } }{ \sum_K \frac{ \left| x_m(K) \right|^2 }{ \left| x_m(0) \right|^2 } \frac{ g\left( \zeta_m + K{\hbar\omega}\right) }{ g\left( \zeta_m \right) } e^{\beta (\zeta_m + K{\hbar\omega})} } ,$$ we introduce ratios of the matrix elements and of the correlation functions. The ratio of the correlation functions reads, using their definition in Sec. \[sec:FloquetMarkov\], $$\begin{aligned} \label{eq:ratio_g} \frac{g(\zeta_m + K {\hbar\omega})}{g(\zeta_m)} &=& \left( 1 + \frac{K{\hbar\omega}}{\zeta_m}\right) \frac{ e^{\beta \zeta_m} - 1 }{ e^{\beta \left(\zeta_m + K{\hbar\omega}\right)} - 1} \\ && \cdot \exp \left( \frac{\left|\zeta_m\right| -\left|\zeta_m + K{\hbar\omega}\right| }{ \hbar \omega_c } \right) \nonumber .\end{aligned}$$ The last factor in Eq.  is close to $1$ and will be omitted in the following, such that the $\omega_c$-dependence is neglected. This is possible since we are interested here in the case $\omega_c \gg \omega$ and we use the fact that only small integers $K$ contribute significantly to the sums in Eq. . For the required ratio of the matrix elements we approximate the evolution of the coupling matrix elements $x_m(t)$ for $0 \leq t \leq \tau$ by $$\label{eq:x_ijt_2} x_m(t) \approx x_m(t=0) e^{-{\text{i}}\zeta_m t/{\hbar}} \left[ 1 + \frac{t}{\tau} \left( e^{{\text{i}}\zeta_m \tau/{\hbar}} - 1 \right) \right] ,$$ using the factorization of the time evolution operator for kicked systems and the approximate commutation relations with the operator $x$ on the two-torus, $\left[x, e^{-{\text{i}}V(x) \tau /{\hbar}}\right] \approx 0$ and $\left[x, e^{-{\text{i}}T(p) t/{\hbar}} \right] \approx t T'(p) e^{-{\text{i}}T(p) t/{\hbar}}$. 
These commutation relations, which are exact in the infinite Hilbert space, apply here in very good approximation to the regular states, as these are almost independent of the periodic boundary conditions on the two-torus. The coupling matrix elements $x_m(t)$ are time-periodic and have the Fourier components $$\label{eq:x_ijK} x_m(K) = \frac{x_m(t=0)}{2 \pi^2} \left(\frac{\zeta_m}{{\hbar\omega}} + K\right)^{-2} \left(1-\cos\left( 2\pi \frac{\zeta_m}{{\hbar\omega}} \right)\right) ,$$ whose ratio simplifies to $$\label{eq:ratio_x2} \frac{\left| x_m(K) \right|^2}{\left| x_m(0) \right|^2} = \left( 1 + \frac{K{\hbar\omega}}{\zeta_m} \right)^{-4} \;.$$ Finally, inserting Eqs.  and  into the occupation ratio of Eq.  yields $$\label{eq:R_ijji_kicked_2} \frac{p_{m+1}}{p_{m}} = F\left( \frac{\zeta_m}{{\hbar\omega}}, \beta{\hbar\omega}\right)$$ with the function $$\begin{aligned} F\left(z,b\right) := \frac{ \sum_K \left(K + z\right)^{-3} \left( e^{(K+z)b} - 1 \right)^{-1} }{ \sum_K \left(K - z\right)^{-3} \left( e^{(K-z)b} - 1 \right)^{-1} } \;.\end{aligned}$$ It is invariant under an integer shift of the first argument, $F(z + K_0, b) = F(z,b)$, with $K_0 \in \mathbb{Z}$ [@comment1]. We choose the shift $K_0$, such that $$\label{eq:zeta_Ereg} \varepsilon_{m+1}-\varepsilon_m + K_0{\hbar\omega}= {E^{\text{reg}}}_{m+1}-{E^{\text{reg}}}_m$$ is fulfilled, which is possible according to Eq. . This allows us to replace $\zeta_m$ in Eq.  with the regular energy spacing ${E^{\text{reg}}}_{m+1,m} := {E^{\text{reg}}}_{m+1} - {E^{\text{reg}}}_{m}$, leading to $$\frac{p_{m+1}}{p_{m}} = F\left( \frac{{E^{\text{reg}}}_{m+1,m}}{{\hbar\omega}}, \beta {\hbar\omega}\right) .$$ Based on Eq.  we approximate this energy difference by the winding number $${E^{\text{reg}}}_{m+1,m} \simeq {\hbar\omega}\nu_m ,$$ which is exact for a harmonic oscillator-like island with $m$-independent winding number $\nu_m$ and $\langle L \rangle_m$ and is a reasonable approximation even for more generic islands. 
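The interpolation above can be verified numerically: a direct quadrature of the Fourier integral reproduces the closed form for $x_m(K)$ and the ratio of the squared matrix elements. A small sketch with $x_m(0) = 1$ and $\tau = 1$ (the value of $\zeta_m/\hbar\omega$ below is an illustrative choice):

```python
import numpy as np

def x_fourier(K, z, n=20000):
    """Numerical Fourier coefficient (1/tau) int_0^tau dt e^{-i w K t} x(t)
    of the interpolated matrix element
    x(t) = e^{-i zeta t/hbar} [1 + (t/tau)(e^{i zeta tau/hbar} - 1)]
    with x(0) = 1, tau = 1 and z = zeta/(hbar*omega)."""
    t = (np.arange(n) + 0.5) / n               # midpoint rule on [0, 1]
    x_t = np.exp(-2j * np.pi * z * t) * (1 + t * (np.exp(2j * np.pi * z) - 1))
    return np.mean(np.exp(-2j * np.pi * K * t) * x_t)
```

For $z = 0.3$, for instance, the quadrature reproduces $x(K) = (1-\cos 2\pi z)/(2\pi^2 (K+z)^2)$ and hence the ratio $|x(1)|^2/|x(0)|^2 = (1 + 1/z)^{-4}$.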
The occupation ratio then becomes a function of the winding number, $$\label{eq:dist_preg0} \frac{p_{m+1}}{p_{m}} = F\left( \nu_m, \beta {\hbar\omega}\right) .$$ With that an analytical prediction for the occupation of the regular states of a kicked system is found, valid under the assumption of dominant nearest-neighbor rates $R_{m,m \pm 1}$. It is a function of the winding number of the $m$th quantizing torus and of the parameters temperature and driving frequency. Assumption of constant winding number $\nu$ {#sec:approx2_betaeff} ------------------------------------------- ![ Inverse effective temperature ${\beta_{\text{eff}}}/\beta$ according to Eq.  vs. winding number $\nu$ for $\omega_c/\omega \gg 1$ and three different temperatures. []{data-label="fig:nu_betaeff"}](figure5.eps){width="5.2cm"} The function $F$ in Eq.  becomes independent of $m$ if the winding number $\nu$ is constant throughout the regular island. It is then appropriate to introduce an effective temperature $1/{\beta_{\text{eff}}}$ by $$\label{eq:betaeff} {\beta_{\text{eff}}}:= -\frac{\log F\left(\nu, \beta {\hbar\omega}\right)}{{\hbar\omega}\nu} .$$ With this new parameter the occupation ratios are expressed in a form analogous to the Boltzmann weights, $$\label{eq:dist_preg} p_{m+1}/p_m = e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_{m+1,m}} .$$ The ratio ${\beta_{\text{eff}}}/\beta$ is shown in Fig. \[fig:nu\_betaeff\]. Its value is smaller than $1$ and it is symmetric in $\nu$. For $\nu \to 0$, where the kicked system approaches its static limit, the true bath temperature $1/\beta$ is retained. A substantial deviation from the true bath temperature, ${\beta_{\text{eff}}}/\beta \ll 1$, takes place around $\nu \approx 0.5$. For generic islands with non-constant winding number, where the regular energy spacings ${E^{\text{reg}}}_{m+1,m} \simeq {\hbar\omega}\nu_m$ are $m$-dependent, the exponential scaling, Eq. , is still approximately valid if ${\beta_{\text{eff}}}$ varies only moderately. 
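The quoted properties of $F$ and ${\beta_{\text{eff}}}$ (invariance under integer shifts, symmetry in $\nu$, the static limit $\nu \to 0$, and the drop near $\nu \approx 0.5$) can be checked directly from truncated sums. A minimal Python sketch, with ${\hbar\omega}= 1$ and the illustrative choice $b = \beta{\hbar\omega}= 1$:

```python
import math

def S(z, b, kmax=400):
    """Truncated sum over K in Z of (K+z)^-3 / (exp((K+z) b) - 1)."""
    total = 0.0
    for K in range(-kmax, kmax + 1):
        u = (K + z) * b
        if u > 700.0:        # exp would overflow; such terms are negligible
            continue
        total += (K + z) ** -3 / math.expm1(u)
    return total

def F(z, b):
    return S(z, b) / S(-z, b)

def beta_eff(nu, b):
    # effective inverse temperature in units of 1/(hbar*omega); b = beta*hbar*omega
    return -math.log(F(nu, b)) / nu

b = 1.0   # illustrative value of beta*hbar*omega

# invariance under integer shifts of the first argument
assert abs(F(0.3, b) - F(1.3, b)) < 1e-5
# beta_eff is symmetric in nu and smaller than the bath value
assert abs(beta_eff(-0.3, b) - beta_eff(0.3, b)) < 1e-9
assert 0.0 < beta_eff(0.3, b) < b
# the static limit nu -> 0 recovers the true bath temperature ...
assert abs(beta_eff(0.01, b) - b) < 0.01
# ... while beta_eff drops sharply around nu ~ 0.5
assert beta_eff(0.49, b) < 0.2 < beta_eff(0.2, b)
```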
Figure \[fig:nu\_betaeff\] indicates that this is fulfilled especially well for values $\nu \lesssim 0.2$. But even beyond this interval we observe good agreement of Eq.  with the regular occupations. This breaks down for islands with winding numbers varying close to $\nu=0.5$, where ${\beta_{\text{eff}}}$ is particularly sensitive to variations of $\nu$. As ${\beta_{\text{eff}}}$ develops a pronounced $m$-dependence there, the approximation  is no longer adequate and the general equation  has to be used instead, as demonstrated in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_dev\] below. Examples {#sec:examples} -------- Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\](c) shows the regular Floquet occupations of the kicked rotor, Eq. , vs. the regular energies ${E^{\text{reg}}}_m$ and demonstrates excellent agreement with the above predicted exponential weights $e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_m}$ (red solid line) for almost all regular states. The regular energies ${E^{\text{reg}}}_m$ are here slightly $m$-dependent, as the winding number decreases monotonically in the island from $\nu_0 = 0.32$ for the first ($m=0$) regular state to $\nu_{22} = 0.28$ ($m=22$) for the outermost regular state. Deviations from the exponential distribution occur for the outermost regular states, $m \geq 20$, only. These have a stronger weight outside the regular island and are thus more strongly coupled to the chaotic states. The non-negligible rates between these regular states and the chaotic states, see Fig. \[fig:ratematrix\], enforce a gradual adaptation between the outermost regular occupations and the occupation level of the chaotic states. In addition, Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](c) shows the large discrepancy between the actual occupations and the Boltzmann weights $e^{-\beta {E^{\text{reg}}}_m}$ at the true bath temperature $1/\beta$. ![ (Color online) (a)–(c) Analogous to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] for the kicked rotor with $\kappa = 3.9$.
The insets in (b) show Husimi representations of two regular states. In (c) the occupations of the innermost $20$ regular states, which are not yet affected by phase-space structures at the border of the island, are well described by the analytical prediction (red solid line) of Eq. . The parameters are $h = 1/400$ and $\beta = 500$. []{data-label="fig:phSp_Emean_rho_Ereg_rho_dev"}](figure6.eps){width="8.5cm"} The exponential distribution, Eq. , requires a constant or moderately varying winding number inside the island. If, however, $\nu$ varies close to $\nu=0.5$, where ${\beta_{\text{eff}}}$ is particularly sensitive to variations of $\nu$, see Fig. \[fig:nu\_betaeff\], the exponential distribution is no longer adequate. For the kicked rotor this is the case for $\kappa \to 4$, where the central periodic orbit bifurcates and the regular island hence splits into two islands. As an example, Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_dev\] shows the occupations for the kicked rotor for $\kappa = 3.9$, with $\nu_0 = 0.44$ and $\nu_{19} = 0.38$. The regular occupations in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_dev\](c) are well described by the general prediction, Eq.  (red line), but clearly deviate from the exponential approximation, Eq. . The derivation of the analytical occupation ratios  and  is based on assumptions which are justified for kicked systems only. For continuously driven systems an analogous prediction remains a future challenge. Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\](b) demonstrates that a significant and systematic deviation of the occupations from the Boltzmann result is also observed for continuously driven systems. For another continuously driven system, the driven particle in a box, the regular occupations are almost identical to the Boltzmann weights $e^{-\beta \langle E_m \rangle}$ with the true temperature $1/\beta$ [@BreHubPet2000].
However, in this particular example the regular region is almost identical to that of the undriven system, leading to Boltzmann weights for the regular states by the following reasoning: the regular states of the driven box potential, which emerge from the highly excited eigenstates of the undriven box, still strongly resemble the latter and change only slightly during the driving period $\tau = 2\pi/\omega$. Thereby a single dominant Fourier contribution $K^{*}$ in the coupling matrix elements is singled out, $x_m(K) \approx 0$ for $K \neq K^{*}$. In this situation the occupation ratio  simplifies to $p_{m+1}/p_{m} = e^{-\beta (\zeta_m + K^{*}{\hbar\omega})}$ and the detailed balance among the regular states, Eq. , is fulfilled accurately. Since at the same time their cycle-averaged energies are close to the eigenenergies in the undriven potential, the occupations are close to the Boltzmann weights. Tiny deviations for the lowest regular states close to the chaotic region are visible in Fig. 1 of Ref. [@BreHubPet2000], which we attribute to rates between the regular and the chaotic Floquet states. As substantiated in this section, systematic and much stronger deviations from the Boltzmann behavior can occur in generic situations, especially in situations characteristic of strong driving, where the phase-space structure and the Floquet states are strongly perturbed compared to the original time-independent system. The strong driving allows us to study the Floquet occupations far from the thermodynamic equilibrium encountered in the time-independent system and at the same time to have dominant regular structures present in the classical phase space. Provided a sufficiently semiclassical regime, these structures host a series of regular states, which, under the condition of pronounced nearest-neighbor rates and only small rates to the subspace of the chaotic states, are occupied with weights given by Eq. .
For constant or slowly varying winding number $\nu$ the occupations even simplify to the exponential weights, Eq. , with the effective temperature of Eq. . A generic modification of the analytical predictions of this section takes place as a consequence of avoided crossings. We will return to this point in Sec. \[sec:AC\]. Implications of additional classical phase-space structures {#sec:AddStructs} =========================================================== The set of Floquet states in the examples of the last section is dominated by regular states in large regular islands and by chaotic states. Apart from these, other types of Floquet states can exist, depending on the structures in the classical phase space and the size of the effective Planck constant $h$. The following section gives an overview of the fingerprints of such additional types of Floquet states on the distribution of the Floquet occupations $p_i$. Nonlinear resonance chains -------------------------- Apart from the islands centered at stable elliptic fixed points of period 1, there are nonlinear $r$:$s$-resonances consisting of $r$ regular islands around stable periodic orbits of period $r$, see Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](a). A trajectory on such a resonance chain passes from an island to the $s$th next island and returns after $r$ periods to the island where it initially started. Considering the $r$–fold iterated map instead of the map itself, the trajectory always remains on one and the same island. The semiclassical quantization is done with respect to this $r$–fold map of period $r \tau$ [@MirKor1994]. To each principal quantum number $m$ there exist $r$ regular Floquet states $|\psi_{ml}\rangle$ of different quantum numbers $l=0,\ldots,r-1$ with equidistant quasienergy spacing ${\hbar\omega}/r$. We refer to these states as regular resonance states. Each of them has equal weights in each of the dynamically connected resonance islands, but with different phases.
In analogy to Eq. , we derive from the semiclassical quasienergies the corresponding regular energies $$\label{eq:Ereg_ml} {E^{\text{reg}}}_{ml} = {\hbar\omega}\frac{\nu_m^{(r)}}{r} \left(m+\frac{1}{2}\right) - \left\langle L \right\rangle_{m} + \left\langle L \right\rangle_c ,$$ which are independent of the quantum number $l$. The winding number $\nu_m^{(r)}$ refers to the $r$–fold iterated map. Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](b) shows the Floquet occupations $p_i$ vs. the cycle-averaged energy $\langle E_i\rangle$ for the kicked rotor with $\kappa = 2.35$, where the phase space features, in addition to the main regular island, a $4$:$1$-resonance around the periodic orbit of period $4$, see Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](a). The entire resonance chain hosts $4\cdot15$ regular resonance states for $h=1/1000$. The Floquet occupations of both the regular states of the central island and the chaotic states resemble those of Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b). In addition, one finds a branch belonging to the regular resonance states. Interestingly, it has a positive slope stemming from the fact that the cycle-averaged energies $\langle E_{ml} \rangle$ of the regular resonance states $|\psi_{ml}\rangle$ decrease with increasing quantum number $m$, in contrast to the regular states of the central island. This is due to the asymmetry of the resonance tori around their respective island center in phase space. This is further clear evidence that the cycle-averaged energy is not a suitable measure for quantifying the regular occupations by exponential weights in analogy to the Boltzmann distribution. ![ (Color online) (a) and (b) Analogous to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] for the kicked rotor with $\kappa = 2.35$ in the presence of a $4$:$1$-resonance chain. The insets in (b) show Husimi representations of a regular state from the main island, a regular resonance state, and a chaotic state.
(c) Floquet occupations $p_{ml}$ of the regular resonance states vs. regular energies ${E^{\text{reg}}}_{ml}$, the Boltzmann-like prediction Eq.  with ${\beta_{\text{eff}}}\approx 0.98\,\beta$ (red solid line) compared to the inverse bath temperature $\beta$ (dashed line). The parameters are $h = 1/1000$ and $\beta = 500$. []{data-label="fig:phSp_Emean_rho_Ereg_rho_res4"}](figure7.eps){width="8.5cm"} The $r$ regular resonance states $|\psi_{ml}\rangle$ of fixed quantum number $m$ have almost the same cycle-averaged energy $\langle E_{ml} \rangle$. As long as the coupling to the heat bath does not disturb the equivalence of the resonance islands, the occupations $p_{ml}$ of the $r$ regular resonance states of fixed principal quantum number $m$ are independent of the quantum number $l$. In Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](b) the corresponding four branches of the occupations $p_{ml}$ therefore lie almost on top of each other and cannot be distinguished on the scale of the figure. Small deviations from this degeneracy exist only for the outermost regular resonance states. These can be attributed to the occurrence of avoided crossings, which break the degeneracy of $\varepsilon_{ml} \mod ({\hbar\omega}/r)$ for $l=0,\ldots,r-1$ and fixed $m$, as well as the degeneracy of $\langle E_{ml} \rangle$ and $p_{ml}$. We now explain why the occupations $p_{ml}$ of the regular resonance states likewise follow $p_{ml} \sim e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_{ml}}$, i.e. the exponential weights  with the effective temperature $1/{\beta_{\text{eff}}}$ of Eq. . We note that Eq.  for the ratio of the correlation functions and Eq.  for the ratio of the coupling matrix elements apply without restriction also to the regular resonance states. The assumed detailed balance relation, Eq.
, however, is no longer adapted to the structure of the rate matrix since here, in addition to the nearest-neighbor rates $R_{(ml)(m \pm 1,l)}$, also ‘internal’ rates exist, i.e. rates in the subspace of the $r$ equivalent regular resonance states $l=0,\ldots,r-1$ with fixed quantum number $m$. Nonetheless, the total rate balance approximately decouples for each principal quantum number $m$ into the $r$ balance relations for $l=0,\ldots,r-1$ $$\label{eq:detbal_approx_res_2} \frac{ p_{m+1,l} }{p_{ml}} \approx \frac{ R_{(ml)(m+1,l)} }{ R_{(m+1,l)(ml)} } .$$ They have the same structure as Eq.  and turn out to be approximately $l$-independent, leading to approximately $l$-independent occupations $p_{ml}$. In Eq.  the tiny rates $R_{(ml)(m'l')}$ with $m \neq m'$ and $l \neq l'$ are neglected and one can show that the contribution $\sum_{l'} \left( R_{(ml)(ml')} - R_{(ml')(ml)} \right)$ vanishes as a consequence of the equidistant quasienergy spacing for the $r$ regular resonance states of the same $m$ [@Wus2010]. The decoupling into the $r$ equivalent balance relations  finally allows us to approximate the occupations $p_{ml}$ by the exponential weights $e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_{ml}}$ of Eq.  with the effective temperature $1/{\beta_{\text{eff}}}$ of Eq. . Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](c) shows the occupations $p_{ml}$ of the regular resonance states vs. the regular energies ${E^{\text{reg}}}_{ml}$. Even on the magnified scale of this subfigure, compared to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](b), the tiny differences of the occupations $p_{ml}$ with different quantum numbers $l$ are not visible. The effective temperature $1/{\beta_{\text{eff}}}$ is nearly indistinguishable from the actual temperature $1/\beta$, because the winding number $\nu_{m=0}^{(r)} /r = 0.79 /4 \approx 0.2$ of the resonance islands is small and yields a value of ${\beta_{\text{eff}}}/\beta$ very close to $1$, compare Fig. \[fig:nu\_betaeff\]. 
Note that it differs, although weakly, from ${\beta_{\text{eff}}}\approx 0.93\,\beta$ of the main island. The parameter ${\beta_{\text{eff}}}$ is the same for each of the four independent occupation branches $p_{ml}$. The phase space of a generic time-periodic system contains a hierarchy of nonlinear resonance chains and islands of all scales. If $h$ is sufficiently small, one has Floquet states on these islands and we expect that the entire set of Floquet occupations becomes increasingly structured by the branches originating from each nonlinear resonance. Beach states ------------ ![ (Color online) (a) and (b) Analogous to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] for the kicked rotor with $\kappa=4.415$ in the presence of a $2$:$1$-resonance surrounded by a series of strong partial barriers. The insets in (b) show Husimi representations of a regular resonance state ($m=3$) and a beach state. (c) Floquet occupations $p_{ml}$ of the regular resonance states vs. regular energies ${E^{\text{reg}}}_{ml}$, the Boltzmann-like prediction Eq.  with ${\beta_{\text{eff}}}\approx 0.76\,\beta$ (red solid line) compared to the inverse bath temperature $\beta$ (dashed line). The parameters are $h = 1/600$ and $\beta = 500$. []{data-label="fig:phSp_Emean_rho_Ereg_rho_res2"}](figure8.eps){width="8.5cm"} The transition between regular phase-space regions and the chaotic sea is usually not sharp, but shaped by a multitude of small island chains and cantori, the fractal remnants of broken Kolmogorov-Arnol’d-Moser tori. These additional phase-space structures can strongly inhibit the classical flux of trajectories toward and away from the regular island and, depending on the size of $h$, can give rise to the formation of quantum beach states, a term introduced in Ref. [@FriDor1998]. These reside in the transition layer around the regular islands and have little overlap with the remaining chaotic sea.
Typically, beach states have an appearance and properties very similar to those of the regular states of the adjacent island. Due to this proximity, some of them even admit a quantization similar to the EBK quantization rules [@FriDor1998; @BohTomUll1990]. At $\kappa = 4$ the central island of the kicked rotor bifurcates into a resonance around a stable periodic orbit of period $2$. It is accompanied by a series of partial barriers with a reduced classical flux toward and away from the islands. This is indicated for $\kappa = 4.415$ in the stroboscopic Poincaré section of Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res2\](a) by the relatively high density of the chaotic orbit in the vicinity of the island. Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res2\](b) shows the Floquet occupations $p_i$ vs. the cycle-averaged energy $\langle E_i \rangle$. The highest occupations belong to the regular states of the resonance. The occupations of the beach states form a separate, nearly monotonic sequence in the transition region between the occupations of the regular resonance states and the chaotic states. This is a consequence of the structure in the coupling matrix $R_{ij}$, where typically the nearest-neighbor rates dominate, similarly to the regular states. Furthermore, the regular occupations $p_{ml}$ of the regular resonance states are shown vs. ${E^{\text{reg}}}_{ml}$ in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res2\](c). In this example the winding number $\nu_{m=0}^{(r)} /r = 0.71 /2 \approx 0.35$ in the resonance islands yields a stronger deviation between $\beta$ and ${\beta_{\text{eff}}}$, with ${\beta_{\text{eff}}}/\beta \approx 0.76$, than in the example presented in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\]. Hierarchical states ------------------- As mentioned above, in the vicinity of regular islands typically many partial barriers with a limited classical flux toward and away from the island can be found, e.g.
in the form of cantori or based on stable and unstable manifolds [@MaKMeiPer1992_Mei1992]. Depending on the ratio of $h$ to the classical flux, partial barriers can prevent Floquet states from spreading over the entire chaotic domain, apart from tunneling tails. If the phase-space area enclosed by the island and the partial barrier exceeds $h$, these states locally resemble chaotic states. For decreasing values of $h$ they resolve and occupy the hierarchy of the classical phase space better and better; they are therefore called hierarchical states [@KetHufSteWei2000]. The existence of these states does not contradict the semiclassical eigenfunction hypothesis [@Per1973_Ber1977_Vor1979], as their fraction vanishes as $\mathcal{O}(h^{\alpha})$ in the semiclassical limit. We apply an overlap criterion to determine whether a Floquet state is hierarchical: it is identified as a hierarchical state if it is not a regular state but is comparatively strongly localized, such that its Husimi weight $\iint_\Omega {\text{d}}x {\text{d}}p H_\psi(x,p)$ within a large chaotic phase-space area $\Omega$ away from the regular island falls below $70\%$ of that of a state uniformly spread over the entire phase space. ![ (Color online) (a) and (b) Analogous to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\] for the kicked rotor with $\kappa = 2.5$ in the presence of a $4$:$1$-resonance surrounded by a partial barrier. The inset in (b) is the Husimi representation of a hierarchical state. The branch with positive slope belongs to the regular resonance states like in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res4\](b). (c) Magnification of (b) with emphasized data points of the hierarchical states (large green crosses), which are determined by the overlap criterion from the shaded phase-space area $\Omega$ in (a). The parameters are $h = 1/1000$ and $\beta = 500$.
[]{data-label="fig:phSp_Emean_rho_Ereg_rho_hier"}](figure9.eps){width="8.5cm"} Figure \[fig:phSp\_Emean\_rho\_Ereg\_rho\_hier\](a) shows the Poincaré section and Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_hier\](b) shows the Floquet occupations for the kicked rotor with $\kappa=2.5$, where the fraction of hierarchical states is comparatively high [@KetHufSteWei2000]. In Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_hier\](c), the occupations of the hierarchical states are emphasized. The figure indicates that their occupations are distributed analogously to those of the chaotic states, which explore the entire chaotic phase-space region. Again, the fluctuation pattern of the occupations $p_i$ has its origin in the randomly fluctuating rates $R_{ij}$ in the subspace of the hierarchical states, as is the case for the chaotic states. To conclude this section, the occupation characteristics of the beach states and the hierarchical states again confirm the influence of the classical phase-space structure not only on the spectrum and on the Floquet states, but ultimately also on the Floquet occupations and hence on the asymptotic state of the system. Note that in the above examples, Figs. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_res2\] and \[fig:phSp\_Emean\_rho\_Ereg\_rho\_hier\], one of the two types is predominant, but representatives of the other are still present. In general, hierarchical and beach states coexist. For example, a few of the states of intermediate cycle-averaged energy $\langle E \rangle$ that are indicated in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_hier\] as hierarchical by the above overlap criterion should rather be classified as beach states or as states with scarring behavior, i.e. localized on hyperbolic fixed points or on a family of parabolic fixed points.
Avoided crossings {#sec:AC} ================= Since the spectrum of Floquet systems is restricted to a finite interval $0 \leq \varepsilon<{\hbar\omega}$, a multitude of avoided level crossings typically emerges under the variation of a parameter and gives rise to the hybridization of the involved Floquet states. In the case of an infinite dimensional Hilbert space the quasienergy spectrum is dense and there is no longer an adiabatic limit, i.e. any tiny parameter variation will hybridize infinitely many Floquet states in a complex way [@HonKetKoh1997]. However, as shown in Ref. [@HonKetKoh2009], the asymptotic density operator $\rho$ is not affected by a small avoided crossing, provided that it is smaller than a specific effective coupling strength to the heat bath. Thus, the interaction with the heat bath resolves the difficulties of the dense quasienergy spectrum. In this section we focus on the opposite limit, where a single isolated avoided crossing strongly influences the entire set of Floquet occupations. ![ (Color online) Influence of an avoided crossing between the regular states $m_1=5$ and $m_2=15$ on Floquet occupations. (a) Floquet occupations $p_i$ vs. cycle-averaged energies $\langle E_i \rangle$ for $\kappa=\kappa_1=2.85$ (black circles) and $\kappa=\kappa_2=2.857175$ (red dots) close to the center of the avoided crossing. The insets show the Husimi representations of states $m_1$, $m_2$ at $\kappa_1$ and of a corresponding hybridized state at $\kappa_2$ (‘ac’). (b) Occupations $\bar p_m$ of regular states vs. regular energies ${E^{\text{reg}}}_m$ at $\kappa = \kappa_2$ (red dots) and comparison to the analytical solution  of the rate model (solid gray line) and the model with $R_{m,m+1}=R_{0,1}$ from [@HonKetKoh2009] (dashed gray line). The arrow indicates the effective rate ${ R^{\text{ac}} }$ between states $m_1$ and $m_2$ according to Eq. . 
Note that $\bar p_{m_1}$ and $\bar p_{m_2}$ are measured in the diabatic basis, in contrast to $p_i$ in (a). The parameters are $h = 1/210$ and $\beta=100$. []{data-label="fig:Emean_rho_Ereg_rho_AC1"}](figure10.eps){width="8.5cm"} Figure \[fig:Emean\_rho\_Ereg\_rho\_AC1\] presents a typical example of the kicked rotor. In Fig. \[fig:Emean\_rho\_Ereg\_rho\_AC1\](a) the Floquet occupations $p_i$ are shown vs. $\langle E_i \rangle$ for two values of the kick strength near $\kappa = 2.9$, very close to the parameter realization in Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\]. The difference between these $\kappa$-values is sufficiently small that the classical phase space and almost all regular states vary only marginally. For two of the regular Floquet states, which we will denote as states $a$ and $b$, and which are initially identical to the semiclassical modes with the quantum numbers $m_1=5$ and $m_2=15$, this is, however, not the case. Under the variation of $\kappa$ they undergo an avoided crossing at $\kappa \approx 2.857$, where they hybridize. The tiny $\kappa$-variation strongly affects the Floquet occupations, and most prominently all the regular occupations. Away from the avoided crossing ($\kappa_1=2.85635$), the regular occupations decrease monotonically with $\langle E_i \rangle$, similarly to Fig. \[fig:phSp\_Emean\_rho\_Ereg\_rho\](b). When approaching the center of the avoided crossing ($\kappa_2=2.857175$) this monotonic behavior is locally disturbed: the states $a$ and $b$, as a consequence of their hybridization, have shifted mean energies $\langle E_{a} \rangle$ and $\langle E_{b} \rangle$ as well as modified occupations $p_{a} \approx p_{b}$. In Fig. \[fig:Emean\_rho\_Ereg\_rho\_AC1\](a) the data points of the states $a$ and $b$ at $\kappa_2$ (marked as ‘ac’) are therefore found indistinguishable on top of each other. Beyond that, the occupations $p_m$ of all regular states with quantum numbers $m$ from the interval $[m_1,m_2]$ also change severely.
They are close to the occupation $p_a \approx p_b$ of the hybridized states. In contrast, their mean energies $\langle E_m \rangle$ do not change notably under the tiny $\kappa$-variation, like those of the semiclassical modes $m_1$ and $m_2$. The relative occupations $p_m / p_{m+1}$ among the regular states with quantum numbers outside the range $[m_1,m_2]$ are also not affected. Only the absolute values of their occupations $p_m$ are shifted due to the normalization $\sum_i p_i = 1$. The latter is also the origin of a shift of the chaotic occupation plateau ${\bar{p}_{\text{ch}}}$. This example demonstrates that the presence of avoided crossings can change the entire character of the occupation distribution. To explain this impact the authors of Ref. [@HonKetKoh2009] introduced an effective rate equation, $$\label{eq:RGS_rho_AC} 0 = - \bar p_i \sum_j \bar R_{ij} + \sum_j \bar p_j \bar R_{ji} ,$$ which refers to a representation in the local diabatic basis of the avoided crossing, denoted by an overbar. In the diabatic basis the states $a$ and $b$ are replaced with states that remain invariant at the avoided crossing, i.e. the semiclassical modes $m_1$ and $m_2$ in the case of an avoided crossing of two regular states. In Eq.  the typically negligible rates $\bar R_{m_1 m_2}$, $\bar R_{m_2 m_1}$ are replaced with a new effective rate [@HonKetKoh2009] $$\label{eq:Rac} { R^{\text{ac}} }:= \frac{\Gamma}{\left(\hbar \Gamma / \Delta\right)^2 + 4 d^2} ,$$ which acts between the states $m_1$ and $m_2$. The gap size $\Delta$, i.e. the minimal quasienergy splitting $|\varepsilon_a-\varepsilon_b|$ of states $a$ and $b$, and the dimensionless distance from its center $d = (\bar \varepsilon_{m_1} - \bar \varepsilon_{m_2})/\Delta$ are characteristic properties of the avoided crossing. Unlike the rates $\bar R_{ij}$, which are nearly constant in the vicinity of the avoided crossing, the additional rate ${ R^{\text{ac}} }$ changes dramatically.
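The Lorentzian dependence of ${ R^{\text{ac}} }$ on the distance $d$ can be illustrated with a short sketch ($\hbar = 1$; the values of $\Gamma$, defined below, and $\Delta$ are our illustrative choices in the regime $\hbar\Gamma/\Delta \ll 1$):

```python
# Sketch of the Lorentzian profile of the effective rate R_ac(d) (hbar = 1).
Gamma = 1e-4    # effective coupling strength to the heat bath (assumed value)
Delta = 1e-2    # minimal quasienergy splitting at the avoided crossing

def R_ac(d):
    return Gamma / ((Gamma / Delta) ** 2 + 4 * d ** 2)

# at the center d = 0 the rate is enhanced to Delta^2 / (hbar^2 Gamma) ...
assert abs(R_ac(0.0) / (Delta ** 2 / Gamma) - 1.0) < 1e-12
assert R_ac(0.0) > 1e3 * Gamma
# ... while already one gap width away from the center it is below Gamma
assert R_ac(1.0) < Gamma
```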
The composite rate $\Gamma = \sum_{j} \left( \bar R_{m_1 j} + \bar R_{m_2 j} \right) - (2\pi/\hbar) \sum_K \bar A_{m_1 m_1}(K) \bar A_{m_2 m_2}(K) g(-K{\hbar\omega})$ plays the role of an effective coupling strength and the characteristic parameter $\hbar\Gamma/\Delta$ determines whether ${ R^{\text{ac}} }$ can become dominant at the center of the avoided crossing $d=0$. In the examples of Figs. \[fig:Emean\_rho\_Ereg\_rho\_AC1\] and \[fig:Emean\_rho\_Ereg\_rho\_AC2\] the condition $\hbar\Gamma/\Delta \ll 1$ is fulfilled and the rate ${ R^{\text{ac}} }$ consequently dominates around $d \approx 0$ with respect to other rates in Eq. . Note that this requires the system-bath coupling strength to be sufficiently small, a limit which has already been assumed in Eq. . The dominance of ${ R^{\text{ac}} }$ is responsible for the local disruption of the formerly exponential behavior of the regular occupations: as ${ R^{\text{ac}} }$ exceeds all other rates, it equalizes the occupations of the states $m_1$ and $m_2$. The relative occupations of the regular states below $m_1$ and above $m_2$ still follow the approximate detailed balance, Eq. , with rates acting predominantly between nearest neighbors. Finally, the other regular states, $m_1 < m < m_2$, although also still having dominant nearest-neighbor rates, no longer have exponentially scaling occupations, since the additional rate ${ R^{\text{ac}} }$ between $m_1$ and $m_2$ breaks the detailed balance. These conditions explain the observed signature of the avoided crossing in the occupation characteristics and are substantiated by a simplified rate model introduced in the following section. ![ (Color online) Influence of an avoided crossing between the regular state $m_1=10$ and a chaotic state on the Floquet occupations, analogous to Fig. \[fig:Emean\_rho\_Ereg\_rho\_AC1\] for $\kappa=\kappa_1=2.856400$ (black circles) and $\kappa=\kappa_2=2.856897$ (red dots) close to the center of the avoided crossing.
Note in (b) that we have simulated the avoided crossing of $m_1$ with a chaotic state in the rate model, Eq. , by an avoided crossing between $m_1$ and $m=22$, the last regular state. The parameters are $h = 1/210$ and $\beta=100$. []{data-label="fig:Emean_rho_Ereg_rho_AC2"}](figure11.eps){width="8.5cm"} A second example, with an avoided crossing between the regular state $m_1=10$ and a chaotic state, is presented in Fig. \[fig:Emean\_rho\_Ereg\_rho\_AC2\](a). Here, the occupation of the state $m_1$ is forced down to the chaotic occupation level in the vicinity of the avoided crossing (red dots). Moreover, the entire subset of occupations $p_m$ with $m > m_1$ is disturbed from the original exponential scaling (black circles). By reasoning similar to the above, this behavior is explained by the additional rate ${ R^{\text{ac}} }$ and a simplified rate model, see below. These consequences of avoided crossings also explain why the transition between the occupations of regular and chaotic states appears so ‘smooth’, see e.g. Figs. \[fig:phSp\_Emean\_rho\_Ereg\_rho\_quart\] and \[fig:phSp\_Emean\_rho\_Ereg\_rho\]. As the outer regular states or the beach states most likely undergo avoided crossings with chaotic states, the occupation probabilities normally feature a smooth transition to the plateau ${\bar{p}_{\text{ch}}}$ of the chaotic occupations. We emphasize that the observed implication of an avoided level crossing is a remarkable effect bound to the non-equilibrium character of the driven system, with possible applications such as bath-induced switching in a driven double well potential [@KetWus2009]. In a time-independent system, by contrast, an avoided crossing entails only a local shift in the occupations of the two involved states and leaves the Boltzmann distribution of the entire set of occupations unchanged.
Simplified rate model {#sec:AC_model} --------------------- To account for the local modifications of the occupations $p_m$, which are entailed by an avoided crossing, we apply a simplified, analytically solvable model for the set of rate equations . It is based on the nearest-neighbor rate model in Ref. [@HonKetKoh2009]. It is restricted to the chain of regular states and, at first in the absence of an avoided crossing, assumes that each of them is coupled only to its directly neighboring states. Under this circumstance, detailed balance is fulfilled. The primary parameter of the model is the rate ratio of the first two states, $g := R_{0,1}/R_{1,0}$. We approximate the rate ratio $R_{m,m+1}/R_{m+1,m}$ as independent of $m$, i.e. $R_{m,m+1}/R_{m+1,m} \equiv g$ for all $m$. According to Eqs.  and  the approximation is especially suited for regular islands with a constant winding number $\nu$. The rates themselves, however, differ from state to state. In contrast to Ref. [@HonKetKoh2009], where $R_{m,m+1}/R_{0,1}$ is presumed constant, we make use of the approximation $$\label{eq:R_mmp1_R_01} \frac{R_{m,m+1}}{R_{0,1}} = m+1 .$$ This relation holds exactly for the states of a harmonic oscillator with a linear coupling to the heat bath. The coupling matrix elements fulfill $A_{m,m+1} = \sqrt{m+1}\: A_{0,1}$, a property that is easily shown with the help of the associated algebra of ladder operators. Analogously, one can prove relation  for the regular states of a time-periodic system if the regular tori are elliptic, coinciding with those of a harmonic oscillator, and have $m$-independent winding numbers. Beyond the application to elliptic islands of constant winding number, we presume Eq.  for arbitrary islands, which in general seems to be a good approximation. To account for an avoided crossing between the states $m_1$ and $m_2$, the model introduces the additional rates $R_{m_1 m_2} = R_{m_2 m_1} = { R^{\text{ac}} }$ from Eq. . 
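The harmonic-oscillator property $A_{m,m+1} = \sqrt{m+1}\,A_{0,1}$, and with it the rate scaling $R_{m,m+1}/R_{0,1} = m+1$, can be checked numerically in a few lines. This is only an illustrative sketch (the normalization is set to $A_{0,1}=1$, and golden-rule rates are taken to scale as $|A_{m,m+1}|^2$):

```python
import numpy as np

def coupling_matrix(n_max):
    """Position-like coupling operator A = a + a^dagger in the
    harmonic-oscillator number basis |0>, ..., |n_max>."""
    A = np.zeros((n_max + 1, n_max + 1))
    for m in range(n_max):
        # ladder-operator matrix elements: A_{m,m+1} = sqrt(m+1) * A_{0,1}
        A[m, m + 1] = A[m + 1, m] = np.sqrt(m + 1)
    return A

A = coupling_matrix(10)
# Golden-rule rates scale as |A_{m,m+1}|^2, so R_{m,m+1}/R_{0,1} = m + 1:
ratios = A.diagonal(1) ** 2 / A[0, 1] ** 2
```

The array `ratios` comes out as $1, 2, \dots, n_{\max}$, which is exactly the linear scaling assumed for the rates in the model.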
By the action of ${ R^{\text{ac}} }$ the flux $F_{m,m-1} := p_m R_{m,m-1} - p_{m-1} R_{m-1,m}$ becomes non-zero for all $m_1 < m \leq m_2$. The rate equations, Eq. , translate to the flux equations $$\begin{aligned} \label{eq:flux} 0 &=& F_{1,0} \\ 0 &=& F_{m+1,m} - F_{m,m-1} \qquad {\scriptstyle m \neq 0, m_1, m_2}\nonumber\\ \left( p_{m_1} - p_{m_2} \right) { R^{\text{ac}} }&=& F_{m_1+1,m_1} - F_{m_1,m_1-1} \nonumber \\ -\left( p_{m_1} - p_{m_2} \right) { R^{\text{ac}} }&=& F_{m_2+1,m_2} - F_{m_2,m_2-1} \nonumber\end{aligned}$$ with the solution $F_{m,m-1} = F := \left( p_{m_1} - p_{m_2} \right) { R^{\text{ac}} }$ for $m_1 < m \leq m_2$ and $F_{m,m-1} = 0$ otherwise. Eventually, the occupations assume the values $$\label{eq:rho_model2} p_m = \left\{ \begin{array}{l l} p_0 \hspace{2mm} g^m & {\scriptstyle m \leq m_1}\\ p_{m_1} \!\left[ \left(1 - g^{m_2-m_1}\right)\frac{r_m}{1+r_{m_2}} + g^{m-m_1} \right] & {\scriptstyle m_1 < m \leq m_2} \\ p_{m_2} g^{m-m_2} & {\scriptstyle m_2 \leq m} \end{array} \right.$$ with $r_m := { R^{\text{ac}} }\sum_{k=1}^{m-m_1} \left( g^{m-m_1-k} / R_{m_1+k,m_1+k-1} \right)$. For ${ R^{\text{ac}} }\gg R_{1,0}$ the parameter $r_{m_2}$ diverges and $p_{m_2}/p_{m_1} = \left(1 - g^{m_2-m_1}\right) \left[r_{m_2}/(1+r_{m_2})\right] + g^{m_2-m_1}$ approaches $1$. Note that the model solution  relies on the nearest-neighbor coupling and on the $m$-independence of $R_{m,m+1}/R_{m+1,m}$. The ratio $R_{m,m+1}/R_{0,1}$ of Eq.  leads to $r_m = \left({ R^{\text{ac}} }/R_{1,0}\right) \sum_{k=1}^{m-m_1} \left[ g^{m-m_1-k}/(m_1+k)\right]$. In Figs. \[fig:Emean\_rho\_Ereg\_rho\_AC1\](b) and \[fig:Emean\_rho\_Ereg\_rho\_AC2\](b) we apply the simplified rate model to regular states for a kick strength close to the center of the avoided crossing, and compare its solution, Eq. , to the occupations $\bar p_m$ from the solution of the rate equations . 
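The closed-form solution above can be cross-checked against a direct numerical steady state of the rate equations. The following sketch (not the authors' implementation; rates are normalized to $R_{1,0}$, i.e. $R_{m+1,m} = (m+1)R_{1,0}$ and $R_{m,m+1} = g\,R_{m+1,m}$) implements the model solution and compares it with the null vector of the master-equation generator, including the extra symmetric rate ${ R^{\text{ac}} }$ between $m_1$ and $m_2$:

```python
import numpy as np

def model_occupations(M, m1, m2, g, R10, Rac):
    """Closed-form occupations of the simplified rate model with
    nearest-neighbor rates R_{m+1,m} = (m+1) R10 (down) and
    R_{m,m+1} = g (m+1) R10 (up), plus the symmetric rate Rac
    between the avoided-crossing partners m1 and m2."""
    r = lambda m: (Rac / R10) * sum(g ** (m - m1 - k) / (m1 + k)
                                    for k in range(1, m - m1 + 1))
    p = np.empty(M + 1)
    p[:m1 + 1] = g ** np.arange(m1 + 1)          # p_m = p_0 g^m
    rm2 = r(m2)
    for m in range(m1 + 1, m2 + 1):
        p[m] = p[m1] * ((1 - g ** (m2 - m1)) * r(m) / (1 + rm2)
                        + g ** (m - m1))
    for m in range(m2 + 1, M + 1):
        p[m] = p[m2] * g ** (m - m2)
    return p / p.sum()

def steady_state(M, m1, m2, g, R10, Rac):
    """Numerical steady state of the full rate equations, for comparison."""
    K = np.zeros((M + 1, M + 1))                 # K[i, j]: rate j -> i
    for m in range(M):
        K[m + 1, m] = g * (m + 1) * R10          # upward   R_{m,m+1}
        K[m, m + 1] = (m + 1) * R10              # downward R_{m+1,m}
    K[m1, m2] = K[m2, m1] = Rac
    W = K - np.diag(K.sum(axis=0))               # master-equation generator
    p = np.abs(np.linalg.svd(W)[2][-1])          # null vector of W
    return p / p.sum()
```

For ${ R^{\text{ac}} } \gg R_{1,0}$, $r_{m_2}$ diverges and the solution gives $p_{m_2} \to p_{m_1}$, reproducing the occupation equalization discussed above.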
The regular energies ${E^{\text{reg}}}$ are not well defined for the hybridizing states of the avoided crossing, but are well defined for the respective diabatic states $m_1$ and $m_2$. That is why the occupations in these subfigures are represented in the diabatic basis by means of the orthogonal transformation $\bar p_{m_1} = \alpha^2 p_{a} + \beta^2 p_{b}$ and $\bar p_{m_2} = \beta^2 p_{a} + \alpha^2 p_{b}$ with $\alpha^2 = \left(1+d/\sqrt{1+d^2}\right)/2$ and $\alpha^2 + \beta^2 = 1$. The comparison indicates that the rate model based on assumption  reproduces the local disturbance of the exponential scaling for the states $m_1 < m < m_2$ very accurately. In contrast, the simpler assumption [@HonKetKoh2009] of $m$-independent rates, $R_{m,m+1}/R_{0,1} = 1$, does not reproduce the $m$-dependence of the occupations between $m_1$ and $m_2$ (dashed line). For the example in Fig. \[fig:Emean\_rho\_Ereg\_rho\_AC2\](b) the rate model seems at first sight not to be applicable, since it does not account for chaotic states. We therefore apply the model as if the avoided crossing were between the regular state $m_1=10$ and the outermost regular state, $m=22$, and still obtain good agreement between the observed $p_m$ and the model solutions. This is possible because the occupation of the regular state $m=22$ differs only weakly from the plateau of chaotic occupations. Summary {#sec:Summary} ======= A core question of statistical mechanics is the characterization of the asymptotic state approached by a quantum system when it interacts with a thermal reservoir. In the familiar equilibrium thermodynamics of time-independent systems in the weak-coupling limit, it is answered by the canonical distribution, where the eigenstates of the isolated quantum system are occupied with the statistical weights $p_i \sim e^{-\beta E_i}$. 
In a time-periodic quantum system, where an external field continually pumps energy into the system and prevents its relaxation to equilibrium, this is in general an intricate question, which cannot be answered by deduction from the time-independent case. Here, the asymptotic state under a weak coupling to the thermal reservoir becomes time-periodic and is best characterized by the time-independent occupations $p_i$ of the Floquet states. We demonstrate that the Floquet occupations can be classified according to the semiclassical character of the Floquet states. The occupations of the chaotic Floquet states fluctuate weakly around a mean value ${\bar{p}_{\text{ch}}}$ [@BreHubPet2000]. The regular Floquet states, on the contrary, acquire occupations that are roughly exponentially distributed. The validity of this observation is also confirmed by the occupation characteristics of other types of Floquet states, which still reflect their regular or chaotic nature: beach states, which are very similar to the regular states and are situated close to, but outside, the regular island, form a correlated set of occupations that is qualitatively comparable to the regular occupations. In contrast, the occupations of hierarchical states, which have the properties of chaotic states but live in a restricted region of the chaotic phase space, are distributed analogously to the chaotic states. In contrast to previous studies of a driven particle in a box [@BreHubPet2000], where the regular states carry occupations close to the Boltzmann weights, we observe that in general the regular occupations can deviate considerably from the Boltzmann result. This observation is possible as we focus on time-periodic systems where the classical phase space and the Floquet states are strongly perturbed compared to the originally time-independent system. 
In kicked systems the occupations of the regular states can be well approximated by a function $F(\nu_m, \beta{\hbar\omega})$ depending on the classical winding numbers $\nu_m$ of the regular tori, on the temperature $1/\beta$, and on the driving frequency $\omega$. For a constant or sufficiently slowly varying winding number within the regular island, the distribution of the regular occupations can even be described by weights of the Boltzmann type, $p_m \sim e^{-{\beta_{\text{eff}}}{E^{\text{reg}}}_m}$, depending on the regular energies ${E^{\text{reg}}}_m$. The effective temperature $1/{\beta_{\text{eff}}}$ is evaluated as a function of the winding number in the regular island. As the driven system is not in an equilibrium state, the proper definition of a non-equilibrium temperature is a subtle problem [@Rug1997_MorRon1999_CasJou2003]. The critical question for future investigations is whether the quantity ${\beta_{\text{eff}}}$ is accessible by a measurement. A situation where purely classical information is no longer sufficient to account for the observed occupations arises at avoided crossings, which are ubiquitous in the quasienergy spectra of Floquet systems. Avoided crossings involving a regular state give rise to strong changes in the set of Floquet occupations. We give an intuitive explanation of this effect, based on the additional rate ${ R^{\text{ac}} }$ from Ref. [@HonKetKoh2009], and introduce a simplified rate model whose analytical solutions describe the numerical data accurately. In conclusion, with the presented characterizations of the Floquet occupations we demonstrate that ubiquitous signatures of the classical dynamics are reflected in the asymptotic density matrix of the open quantum system. In this way it is feasible to draw an intuitive picture of the asymptotic state, shedding light on the statistical mechanics of time-periodic quantum systems. 
Acknowledgements {#acknowledgements .unnumbered} ================ We acknowledge helpful discussions with S. Fishman, D. Hone, W. Kohn, T. Kottos, and S. Löck. We thank the DFG for support within the Forschergruppe 760 “Scattering Systems with Complex Dynamics” and R.K. thanks the Kavli Institute for Theoretical Physics at UCSB (NSF Grant No. PHY05-51164). [10]{} S. A. Rice and M. Zhao, [*Optical Control of Molecular Dynamics*]{} (Wiley, New York, 2000). P. Brumer and M. Shapiro, [*Principles of the Quantum Control of Molecular Processes*]{} (Wiley-VCH, Berlin, 2003). , NATO Advanced Studies Institute, Series B: Physics, edited by R. T. Philips (Plenum Press, New York, 1994), Vol. 330. M. Grifoni, M. Sassetti, J. Stockburger, and U. Weiss, Phys. Rev. E [**48**]{}, 3497–3509 (1993); M. Grifoni, M. Sassetti, P. Hänggi, and U. Weiss, Phys. Rev. E [**52**]{}, 3596–3607 (1995). R. Graham and R. Hübner, Ann. Phys. (N.Y.) [**234**]{}, 300–315 (1994). C. Zerbe and P. Hänggi, Phys. Rev. E [**52**]{}, 1533–1543 (1995). M. Grifoni and P. Hänggi, Phys. Rep. [**304**]{}, 229–354 (1998). R. Blümel, A. Buchleitner, R. Graham, L. Sirko, U. Smilansky, and H. Walther, Phys. Rev. A [**44**]{}, 4521–4540 (1991). S. Kohler, T. Dittrich, and P. Hänggi, Phys. Rev. E [**55**]{}, 300–313 (1997). H.-P. Breuer, W. Huber, and F. Petruccione, Phys. Rev. E [**61**]{}, 4883–4889 (2000). W. Kohn, J. Stat. Phys. [**103**]{}, 417–423 (2001). D. W. Hone, R. Ketzmerick, and W. Kohn, Phys. Rev. E [**79**]{}, 051129 (2009). D. W. Hone, R. Ketzmerick, and W. Kohn, Phys. Rev. A [**56**]{}, 4045–4054 (1997). I. C. Percival, J. Phys. B [**6**]{}, L229–L232 (1973); M. V. Berry, J. Phys. A [**10**]{}, 2083–2091 (1977); A. Voros, [*Stochastic Behavior in Classical and Quantum Hamiltonian Systems*]{}, Lecture Notes in Physics Vol. 93 (Springer, Berlin, 1979). R. Ketzmerick and W. Wustmann, Phys. Rev. E [**80**]{}, 021117 (2009). U. 
Weiss, [*Quantum Dissipative Systems*]{}, Series in Modern Condensed Matter Physics Vol. 10 (World Scientific, Singapore, 1999). D. Cohen, J. Phys. A: Math. Gen. [**27**]{}, 4805–4829 (1994). P. Pfeifer and R. D. Levine, J. Chem. Phys. [**79**]{}, 5512–5519 (1983); U. Peskin and N. Moiseyev, J. Chem. Phys. [**99**]{}, 4590–4596 (1993). W. Wustmann, PhD thesis, TU Dresden (2010). H. P. Breuer and M. Holthaus, Ann. Phys. [**211**]{}, 249–291 (1991). F. Bensch, H. J. Korsch, B. Mirbach, and N. Ben-Tal, J. Phys. A: Math. Gen. [**25**]{}, 6761–6777 (1992). J. Laskar, C. Froeschlé, and A. Celletti, Physica D [**56**]{}, 253–269 (1992). O. Bohigas, S. Tomsovic, and D. Ullmo, Phys. Rep. [**223**]{}, 43–133 (1993); A. Bäcker, R. Ketzmerick, S. Löck, and L. Schilling, Phys. Rev. Lett. [**100**]{}, 104101 (2008). This also holds true without the omission of the last factor in Eq. . B. Mirbach and H. J. Korsch, J. Phys. A: Math. Gen. [**27**]{}, 6579–6604 (1994). S. D. Frischat and E. Doron, Phys. Rev. E [**57**]{}, 1421–1443 (1998). O. Bohigas, S. Tomsovic, and D. Ullmo, Phys. Rev. Lett. [**64**]{}, 1479–1482 (1990). R. S. MacKay, J. D. Meiss, and I. C. Percival, Physica D [**13**]{}, 55–81 (1984); J. D. Meiss, Rev. Mod. Phys. [**64**]{}, 795–848 (1992). R. Ketzmerick, L. Hufnagel, F. Steinbach, and M. Weiss, Phys. Rev. Lett. [**85**]{}, 1214–1217 (2000). H. H. Rugh, Phys. Rev. Lett. [**78**]{}, 772–774 (1997); G. P. Morriss and L. Rondoni, Phys. Rev. E [**59**]{}, R5–R8 (1999); J. Casas-Vázquez and D. Jou, Rep. Prog. Phys. [**66**]{}, 1937–2023 (2003).
--- author: - 'C. Fremling' - 'J. Sollerman' - 'F. Taddia' - 'M. Ergon' - 'M. Fraser' - 'E. Karamehmetoglu' - 'S. Valenti' - 'A. Jerkstrand' - 'I. Arcavi' - 'F. Bufano' - 'N. Elias Rosa' - 'A. V. Filippenko' - 'D. Fox' - 'A. Gal-Yam' - 'D. A. Howell' - 'R. Kotak' - 'P. Mazzali' - 'D. Milisavljevic' - 'P. E. Nugent' - 'A. Nyholm' - 'E. Pian' - 'S. Smartt' bibliography: - 'ngc5806\_13bvn12os\_accepted.bib' date: 'Received; Accepted' subtitle: 'Two stripped-envelope supernovae from low-mass progenitors in NGC 5806' title: 'PTF12os and iPTF13bvn:' --- Introduction ============ Prior to their final fate as core-collapse (CC) supernovae (SNe) releasing $\sim10^{51}$ erg of kinetic energy, the progenitor stars of Type IIb and Ibc SNe have had their envelopes stripped of hydrogen. These SN types are therefore commonly referred to as stripped-envelope (SE) supernovae. In the case of SNe IIb, the stripping is partial, and early-time spectra show clear signatures of Balmer lines. At later times the spectra of SNe IIb instead closely resemble those of Type Ib SNe, which by definition do not show significant hydrogen signatures at any time . Type Ic SNe likely experience even stronger stripping, resulting in a loss of their entire (or nearly entire) envelope prior to explosion. Two main mechanisms have been suggested to produce SE SNe: either binary mass transfer or strong line-driven winds from massive Wolf-Rayet (WR) stars . Binary mass transfer is an appealing mechanism for producing the partial stripping seen in SNe IIb; when the envelope of the donor star decreases in radius below the Roche limit, the mass transfer naturally stops. Simulations of binary evolution show that the final amount of stripping of the exploding star depends mainly on the initial masses of the two stars in the binary system, the spatial configuration of the system, and the metallicity of the stars. 
Within the parameter space explored by [@Yoon:2010aa] and others, both SNe IIb and Ib are produced in a bimodal fashion, but not at the observed rates (e.g., [@2011MNRAS.412.1441L]; see also ). It is also possible to produce both complete and partial stripping in a single-star scenario . In this case the final amount of hydrogen depends strongly on the initial mass and metallicity of the isolated star, with higher metallicity allowing stronger line-driven winds and more mass loss. In this picture, SNe occurring in galaxies with low metallicity could end up as partially stripped SNe IIb, and SNe occurring in higher-metallicity environments could end up as completely stripped SNe Ib. In the literature there is evidence [e.g., @2012ApJ...759..107K] for SNe IIb tending to reside in lower-metallicity hosts compared to SNe Ib. However, to produce completely stripped SN progenitors from single stars, progenitors with high zero age main sequence (ZAMS) masses of 30 [M$_{\odot}$]{} and beyond are typically needed. Such progenitors give rise to very large ejecta masses of $\sim 10$ [M$_{\odot}$]{} as they explode. This is not consistent with the observed ejecta-mass range of 3.6–5.7 [M$_{\odot}$]{} derived for most SNe Ibc by . In a binary scenario, a low ejecta mass is more easily produced since less-massive stars can end up completely stripped by the mass transfer. Similar low values for ejecta masses of SNe Ib/c were also deduced by [@Cano:2013aa] and [@2014arXiv1406.3667L]. Observational evidence for binarity in the progenitors of SNe IIb exists; for the well-known Type IIb SN 1993J, a binary companion was directly detected by [@Maund:2009]; see also [@2014ApJ...790...17F]. Further, the progenitor of the well-observed Type IIb SN 2011dh has been shown by to most likely be a composite of a nearly completely stripped compact He core with a mass of 3.3–4.0 M$_{\sun}$ surrounded by a thin H-rich envelope extending out to 200–300 [R$_{\odot}$]{}. 
Binary evolution modeling can readily reproduce these properties at the time of explosion, while single-star models encounter difficulties in reproducing the relatively low ejecta mass in combination with the observed shell . Recently, [@2014ApJ...793L..22F] have also claimed that the remaining flux in [*Hubble Space Telescope*]{} (*HST*) imaging obtained $\sim1160$ d past the explosion is consistent with a suitable binary companion to SN 2011dh; however, [@2015MNRAS.454.2580M] disagree with this conclusion. The Type Ib SN iPTF13bvn was initially suggested to have been the result of the explosion of a single massive WR star based on model fits to the early photometry as well as on the colors and absolute magnitude of the progenitor candidate identified in pre-explosion *HST* images [e.g., @2013ApJ...775L...7C; @Groh:2013aa]. However, hydrodynamical modeling and nebular $r$-band photometry indicate that the ZAMS mass was likely not more than $17$ [M$_{\odot}$]{}. This result was confirmed by the nebular-phase observations of . A single-star model with this ZAMS mass should not produce any kind of SE SN. This indicates a binary origin for this system as well. The observed colors in the pre-explosion *HST* images of iPTF13bvn have since also been reassessed and reproduced by evolutionary modeling of a binary system [@2014AJ....148...68B; @2015MNRAS.446.2689E]. Furthermore, the observables of iPTF13bvn are consistent with other SNe Ib in the literature, which indicates that most SNe Ib have progenitors of a similar nature. Independent photometry and spectroscopy of iPTF13bvn were also collected by [@2014MNRAS.445.1932S], showing consistent data and similar conclusions. In this paper we present the first comprehensive dataset on PTF12os (SN 2012P), a SN IIb that occurred in the same nearby galaxy, NGC 5806, as the already well-studied Type Ib SN iPTF13bvn. 
We also supplement previously published data on iPTF13bvn with more photometry and spectra, and perform a comprehensive comparison of these two SNe. In addition, we compare their host environments via spectroscopic metallicity measurements of nearby H$\mathrm{\alpha}$ regions. Our analysis is aided by comparisons with SN 2011dh. This paper is organized as follows. In Sect. \[sec:observations\] we discuss our observations and reduction procedures. A description of the reference-subtraction code that has been used to reduce most of our imaging is presented in detail. Section \[sec:host\] gives long-slit metallicity measurements of H$\mathrm{\alpha}$ regions in NGC 5806 to determine the metallicity at the positions of iPTF13bvn and PTF12os and the metallicity gradient of the host galaxy. Extinction estimates for the two SNe are given in Sect. \[sec:extinction\], and in Sect. \[sec:prog\_id\] we discuss the result of astrometric identification of a progenitor candidate for PTF12os using *HST* pre-explosion images of NGC 5806. In Sect. \[sec:lc\] we present the multiband light curves (LCs) of the SNe and construct pseudobolometric LCs of PTF12os and iPTF13bvn, which are then compared with SN 2011dh to get a first handle on the properties of the SN progenitors and explosions. In Sect. \[sec:spectra\] we report our spectroscopy of the two SNe; we investigate the presence of early-time H in PTF12os, search for similar indications in the early spectra of iPTF13bvn, and perform spectral-line velocity measurements of the He and Fe lines to estimate the photospheric expansion velocities. The latter are used in Sect. \[sec:hydro\] together with the hydrodynamical model [hyde]{} to constrain the explosion parameters, such as the synthesized Ni and ejecta masses as well as the He-core mass of the progenitor of PTF12os. To allow direct comparison, we have recalculated the explosion parameters for iPTF13bvn and SN 2011dh using the same code. In Sect. 
\[sec:spectra\] we also use late-time spectroscopy ($>200$ d past the explosions) of both iPTF13bvn and PTF12os to constrain the amount of oxygen in the ejecta of each SN by comparing our data to the nebular models of [@jerkstrand2014]. Finally, Sect. \[sec:conclusions\] contains a summary of this work, some discussion, and our conclusions. \[sec:intro\] Observations and Data Reduction {#sec:observations} =============================== Discovery and imaging --------------------- The discoveries[^1] of PTF12os (SN 2012P[^2]) [2012 Jan. 10.48; @2012ATel.3881....1A] and iPTF13bvn [2013 June 16.24; @2013ATel.5152....1A; @2013ATel.5140....1A] in NGC 5806 were made with the Palomar Oschin 48-inch (P48) Schmidt telescope [@Law:2009aa]. PTF12os was discovered 4.0 d past the explosion (+4 d[^3]), $t_{\rm exp}=$ 2012 Jan. 6.50$^{+0.5}_{-1.3}$, as estimated from the best fit to our hydrodynamical model grid (Sect. \[sec:hydro\]). The explosion date of iPTF13bvn is very well constrained ($t_{\rm exp}=$ 2013 June 15.67; [@2013ApJ...775L...7C]) based on a power-law fit to the earliest P48 data points, which sets the discovery at +0.6 d. [@2013ApJ...775L...7C] reported early-time photometry on iPTF13bvn obtained with the P48, the robotic Palomar 60-inch telescope [P60; @2006PASP..118.1396C], and the Las Cumbres Observatory Global Telescope network [LCOGT; @2013PASP..125.1031B] up to 20 d past the discovery. Later follow-up data obtained with the same telescopes, along with the addition of nebular-phase photometry up to $\sim 240$ d past discovery from the Nordic Optical Telescope (NOT), were reported by . In this paper we provide final reductions in table format (Tables \[tab:13bvnphot\] and \[tab:13bvnphot2\]) of the previously reported photometry from the P60, NOT, and LCOGT of iPTF13bvn , along with previously unpublished late-time photometry obtained with the P60, NOT, and the Palomar 200-inch telescope (P200) through $\sim 350$ d past discovery. 
We also present P48, P60, NOT, Liverpool Telescope (LT), Gran Telescopio Canarias (GTC), and New Technology Telescope (NTT) multiband observations of PTF12os through $\sim 210$ d past discovery (Table \[tab:12osphot\]) along with ultraviolet (UV) photometry obtained with [*Swift*]{} [@2004SPIE.5165..262R] for both PTF12os (Table \[tab:12osswift\]) and iPTF13bvn (Table \[tab:13bvnswift\]). Imaging reductions and photometry --------------------------------- The reductions of our multicolor LCs of PTF12os and iPTF13bvn from data obtained with the P48, P60, P200, NOT, LT, and GTC are based on our in-house reference image-subtraction pipeline currently performing real-time automatic reductions of iPTF P60 data. This pipeline, while operating in automatic mode, uses mosaicked Sloan Digital Sky Survey [SDSS; @2014ApJS..211...17A] images for the subtraction of the host-galaxy contribution from each science frame taken with the P60. This allows us to quickly obtain host-subtracted magnitudes and colors of newly discovered transients, a valuable aid when deciding on the follow-up strategy for a particular transient. The code can in principle operate on any imaging dataset as long as suitable data for producing a reference are available. In this paper it is used to obtain publication-quality host-subtracted magnitudes for both iPTF13bvn and PTF12os from all of our imaging data. To reduce P48 data, we utilize deeply stacked P48 references generated from images obtained during the years before the explosion of PTF12os. For data from the P60, LT, GTC, NOT, and the P200, deep stacks of P60 images are used as the references. For iPTF13bvn, the P60 dataset obtained on PTF12os is used to generate the reference images, and vice versa. Since the two SNe occurred more than 500 d apart and both peaked at $\sim 16$ mag in the $r$ band, any possible remaining flux from the SNe in the reference frames is negligible. 
The top layer of the pipeline is written in [MATLAB]{}[^4], operating on images that are bias-subtracted and flat-fielded. The basic steps performed by this pipeline are described in sequential order below. When relevant, we also give some details pertaining to the quality of the reductions of our data on PTF12os and iPTF13bvn presented in this paper. ### The Fremling Automated Pipeline, [FPipe]{} - **Quality control of the science data and bad-pixel masking:** Before any further processing and image subtraction, the pipeline checks for the presence of a World Coordinate System (WCS) solution and its reliability. If no WCS is present, an astrometric solution is attempted. The approximate seeing and the number of stars present in each frame are also measured using [SExtractor]{} . Any frames for which it is not possible to determine a robust WCS are discarded. If there is a known map of bad pixels for the CCD in use (e.g., the P60), these are also fixed by linear interpolation in this step. - **Sky subtraction:** Following the quality control, the sky background is removed from the images by masking out all detected sources (point-like or otherwise, using [SExtractor]{} with a low detection threshold), fitting a second-order polynomial surface with a first-order cross term to the remaining data in two iterations with 3$\sigma$ clipping, and then subtracting. - **Reference image generation:** If a reference image is not manually specified, or if SDSS references are not used, a stacked reference is generated from suitable reference data as follows. Initially, images with seeing worse than $1.8''$ and images with fewer detected stars than one third of the dataset average are filtered out. The remaining frames are sky subtracted, registered, matched in intensity, and combined using the average of all images with 3$\sigma$ clipping for each pixel. 
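The sky-subtraction step above (mask detected sources, fit a second-order polynomial surface with a first-order cross term, iterate with $3\sigma$ clipping) can be sketched as follows. This is a minimal numpy illustration, not the pipeline's actual [MATLAB]{} implementation:

```python
import numpy as np

def fit_sky(img, src_mask, n_iter=2, clip=3.0):
    """Fit a second-order polynomial surface with a first-order cross
    term (basis 1, x, y, xy, x^2, y^2) to the sky pixels of img,
    iterating with sigma clipping; src_mask is True on detected sources."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    good = ~src_mask
    for _ in range(n_iter):
        B = np.column_stack([np.ones(good.sum()), x[good], y[good],
                             x[good] * y[good], x[good] ** 2, y[good] ** 2])
        c, *_ = np.linalg.lstsq(B, img[good], rcond=None)
        sky = (c[0] + c[1] * x + c[2] * y + c[3] * x * y
               + c[4] * x ** 2 + c[5] * y ** 2)
        # clip pixels deviating by more than `clip` sigma before refitting
        good = ~src_mask & (np.abs(img - sky) < clip * (img - sky)[good].std())
    return img - sky, sky
```

With two iterations, outlying pixels that survived the source mask (cosmic rays, faint sources) are excluded from the second fit, as in the pipeline description.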
The sky subtractions are performed as described above and the registrations are performed as described below, using the image with the best seeing as the reference onto which the rest of the images are stacked. - **Image registration:** To register the science images to the reference image, the centroids of common point sources in the reference and science frames are first identified using [SExtractor]{}. The geometric transformation is subsequently determined with different complexity depending on the number of common point sources. If there are fewer than 7 common point sources, the transformation allows for shifting, rotation, and scaling of the science image. If there are 7 to 15 common sources, we also allow for shearing. With more than 15 common sources, which is typically the case, a second-order polynomial transformation with 12 parameters is determined, and the polynomial transformation is finally applied using Lanczos resampling. When operating on P60 references and science frames, the standard deviation of the distance between the centroids in the reference and a registered science frame is typically below 0.05 pixels. Frames with fewer than 3 point sources in common with the reference are discarded. - **Point-spread-function (PSF) modeling:** To obtain a model for the PSF in the reference frame and in each science frame, we use [SExtractor]{} in combination with [PSFex]{} [@2013ascl.soft01001B]. The current version of the pipeline uses a nonparametric PSF model, as measured from the raw data by cleaning and stacking the isolated point sources. We assume that the PSF is constant across the CCD[^5], an assumption that we have found to work very well for datasets solely obtained by the P60 or data from the P60 in combination with SDSS references, the main use of this pipeline. We find no evidence for a spatial dependence of the subtraction residuals in our subtracted frames. 
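The three registration regimes described above (similarity for $<7$ sources, affine for 7–15, second-order polynomial for $>15$) can be sketched as a least-squares fit over matched centroid lists. This is only an illustration of the model selection; the pipeline itself applies the resulting transform with Lanczos resampling:

```python
import numpy as np

def fit_transform(src, dst):
    """Least-squares geometric transform mapping src -> dst centroid
    lists (N x 2 arrays), with the model complexity chosen from the
    number of matched point sources."""
    n = len(src)
    if n < 7:
        # similarity: [x', y'] = [[a, -b], [b, a]] [x, y] + [tx, ty]
        x, y = src[:, 0], src[:, 1]
        rows_x = np.column_stack([x, -y, np.ones(n), np.zeros(n)])
        rows_y = np.column_stack([y, x, np.zeros(n), np.ones(n)])
        p, *_ = np.linalg.lstsq(np.vstack([rows_x, rows_y]),
                                np.concatenate([dst[:, 0], dst[:, 1]]),
                                rcond=None)
        return lambda q: np.column_stack(
            [p[0] * q[:, 0] - p[1] * q[:, 1] + p[2],
             p[1] * q[:, 0] + p[0] * q[:, 1] + p[3]])
    if n <= 15:
        # affine: adds shear (6 parameters)
        basis = lambda q: np.column_stack([np.ones(len(q)), q[:, 0], q[:, 1]])
    else:
        # second-order polynomial: 6 terms per axis = 12 parameters
        basis = lambda q: np.column_stack(
            [np.ones(len(q)), q[:, 0], q[:, 1],
             q[:, 0] * q[:, 1], q[:, 0] ** 2, q[:, 1] ** 2])
    B = basis(src)
    cx, *_ = np.linalg.lstsq(B, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(B, dst[:, 1], rcond=None)
    return lambda q: np.column_stack([basis(q) @ cx, basis(q) @ cy])
```

The progression mirrors the usual trade-off: with few matched sources, a low-parameter model avoids overfitting, while a well-populated field supports the 12-parameter polynomial that can absorb optical distortion.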
- **PSF matching:** Once the PSFs of the reference and each science frame have been determined, we convolve the reference image with the PSF of the science frame, as well as the science frame with the PSF of the reference, for each individual subtraction. This method, also known as the common PSF method (CPM), has been previously proposed by [@2008ApJ...680..550G]. We find that this method is well suited for subtractions where the reference image is not necessarily obtained with the same telescope as the science observations, since there are no parameters to be tuned except the size of the box region used to measure the PSF. However, both the reference and the science frames are somewhat degraded owing to convolution being performed on both images. Finally, the PSF models themselves are also convolved with each other to obtain the final PSF in each subtraction for later use when performing PSF photometry. - **PSF photometry:** To determine the counts of a point source, we perform weighted least-squares fitting of the convolved nonparametric PSF model to the data. The fit is weighted by the photon noise in the images, so that areas with increased signal from the source receive higher weights in the fit. Before the fit is performed, the PSF model is centered on the point source using Discrete Fourier Transform cross-correlation. - **Zero-point determinations and intensity-scale matching of the reference and science data:** Whenever there is SDSS coverage, or other local standard stars with known magnitudes are available for the field, we perform PSF photometry as described above on the locations of these stars in both the science and reference frames for each subtraction. Sources for which the quality of the fit to the PSF model is below a certain threshold are excluded, and the remaining sources are matched to their known magnitudes to determine the zero points (ZPs) of the frames[^6]. We do not apply any color terms. 
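The CPM cross-convolution described above can be sketched in a few lines. This is a numpy-only illustration (`conv_same` and `gauss_psf` are helper names assumed here, not pipeline routines):

```python
import numpy as np

def conv_same(img, ker):
    """Linear FFT convolution of img with a centered, odd-sized kernel,
    cropped back to img's shape."""
    s = (img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(ker, s), s)
    r0, c0 = (ker.shape[0] - 1) // 2, (ker.shape[1] - 1) // 2
    return full[r0:r0 + img.shape[0], c0:c0 + img.shape[1]]

def cpm_match(ref, sci, psf_ref, psf_sci):
    """Common PSF method: convolve the reference with the science PSF and
    the science frame with the reference PSF; both then share the common
    PSF psf_ref * psf_sci, which is also returned for later photometry."""
    return (conv_same(sci, psf_ref) - conv_same(ref, psf_sci),
            conv_same(psf_ref, psf_sci))

def gauss_psf(sigma, size=31):
    """Normalized circular Gaussian PSF stamp (demo only)."""
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r[:, None] ** 2 + r[None, :] ** 2) / sigma ** 2)
    return g / g.sum()
```

Because convolution commutes, a static source acquires the identical effective PSF in both frames and cancels in the difference, which is the point of the method: no deconvolution-style kernel fitting is required.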
The ZP information is used to scale the counts of the convolved reference frame to the convolved science frame, resulting in a common ZP as determined from the science frame and data that are ready to be subtracted. For our Sloan-filter data of iPTF13bvn and PTF12os, we use SDSS magnitudes to determine the ZPs. Typically at least 20 usable SDSS stars are present in each science frame, resulting in very precise ZP determinations, with standard deviations of the mean for the ZP typically $< 0.01$ mag[^7]. To set the ZPs in our $B$-band images we use $g$- and $r$-band magnitudes of SDSS stars within the fields with the magnitude conversions described by [@2005AJ....130..873J]. For our P48 $R$-band data, we use SDSS $r$-band magnitudes to determine the ZPs. The P48 uses a Mould $R$-band filter, which is not identical to the SDSS $r$ band. However, we find that this method gives LCs that are consistent to within approximately $\pm0.05$ mag with our P60 $r$-band LCs that have been reduced and calibrated in the same way. Thus, we can make the assumption that the ZPs we find in this way convert our Mould $R$-band data into Sloan $r$-band data, at least to a first approximation. Since this accuracy is sufficiently good for the science performed in this paper, we have not applied any color corrections when determining the ZPs or corrections for the filter differences and spectral shapes (S-corrections) when determining the magnitudes of the transients in any of the imaging filters used, unless otherwise stated[^8]. - **PSF photometry of the transient, error determination, and detection limits:** After subtracting the scaled reference image from the science frame, we finally perform a PSF model fit at the expected location of the transient. If the quality of the fit is above the detection threshold, the result is used to determine the magnitude of the transient based on the flux and the ZP of the science frame. 
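The zero-point bookkeeping described above reduces to $m = \mathrm{ZP} - 2.5\log_{10}(\mathrm{counts})$ applied to stars with catalog magnitudes, plus a flux rescaling of the reference. A simplified sketch (not the pipeline code; `counts` are PSF-fit fluxes of the calibration stars):

```python
import numpy as np

def zero_point(counts, catalog_mag):
    """Frame zero point from PSF-fit counts of stars with known catalog
    (e.g., SDSS) magnitudes, using m = ZP - 2.5 log10(counts).
    Returns the ZP and the standard deviation of the mean."""
    zps = catalog_mag + 2.5 * np.log10(counts)
    return np.median(zps), zps.std(ddof=1) / np.sqrt(len(zps))

def scale_to_science(ref_counts, zp_ref, zp_sci):
    """Scale reference-frame counts to the science frame's zero point,
    so both frames share a common flux scale before subtraction."""
    return ref_counts * 10.0 ** (-0.4 * (zp_ref - zp_sci))
```

For example, a reference frame with $\mathrm{ZP}=26.5$ scaled to a science frame with $\mathrm{ZP}=27.0$ has its counts multiplied by $10^{0.2} \approx 1.58$, after which the same source yields the same counts in both frames.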
To determine the uncertainty of the measured flux we insert artificial transients with the measured flux, scattered in a circular pattern approximately one PSF size away from the real transient, in the unsubtracted science frame. The subtraction is redone for each artificial transient, and we take the uncertainty to be the standard deviation of the fluxes of 35 artificial sources. If the quality of the initial fit is below the detection threshold, a limiting magnitude is determined by inserting artificial sources of increasing magnitude until fewer than $66.7\%$ of the inserted sources are detected. The initial threshold has been tuned so that the results from this procedure represent $3\sigma$ detection limits, by comparison to results from aperture photometry. - **Final uncertainty in the measured magnitude:** The final error in the determined magnitude of the transient is taken as the quadrature sum of the statistical uncertainty determined from the artificial sources and the standard deviation of the mean of the ZP determined from the detected point sources with known magnitudes in the science frame. Typically, for iPTF13bvn and PTF12os, the standard deviation of the mean of the ZP in our data is below 0.01 mag, as mentioned above; consequently, the final error is generally dominated by the statistical uncertainty when measuring the flux of the transients. ### LCOGT photometry For the images of iPTF13bvn obtained by LCOGT, we estimate the galaxy contribution by fitting and subtracting a low-order surface, and then performing PSF-fitting photometry (Valenti et al., in prep.). The Sloan-filter data were calibrated against a minimum of 10 SDSS [@2014ApJS..211...17A] stars in the field. The Johnson-Cousins [*UBVRI*]{} filter data were calibrated against Landolt standard stars [@1992AJ....104..340L] observed during photometric nights. We find that this procedure gives consistent LCs in the bands where we also have reference-subtracted P60 data ($g, r, i$). 
Thus, the host-galaxy contribution at the location of iPTF13bvn appears to be negligible after the galaxy is subtracted in this way. ### Reduction of [*Swift*]{} UVOT photometry Both PTF12os and iPTF13bvn were observed with the UV Optical Telescope onboard [*Swift*]{} [UVOT; @2004ApJ...611.1005G; @2005SSRv..120...95R]. PTF12os was observed at 6 epochs from 2012 Jan. 14 to Jan. 29 (+8.7 d to +23.5 d), and iPTF13bvn at 10 epochs from 2013 June 17 to 2013 July 23 (+2.0 d to +37.8 d). Photometry was obtained as described by [@2009AJ....137.4517B], using an aperture with a radius of 5$''$. To estimate the host-galaxy contribution at the location of the SNe we use the deepest [*Swift*]{} observation obtained when observing PTF12os as the reference frame for iPTF13bvn, and vice versa. The flux at the position of the transients was measured with the same aperture size in the reference frames as in the science frames, and the flux measured in the reference was subtracted from each detection to obtain host-subtracted magnitudes. In this paper we only include data in the $U$, $UVW1$, and $UVM2$ filters, even though more bands were observed. We report our host-subtracted [*Swift*]{}-UVOT photometry as AB magnitudes in Table \[tab:12osswift\] and Table \[tab:13bvnswift\], and it is also shown in Fig. \[fig:lc\_full\]. For one epoch of observations of iPTF13bvn, there were no significant detections in these filters after the host contribution was subtracted. Spectroscopic observations and reductions ----------------------------------------- We obtained 19 epochs of low-resolution optical spectra of PTF12os, starting before peak luminosity at $4$ d past discovery and continuing until $211$ d past discovery, when the SN had entered the nebular phase. These are listed in Table \[tab:spec12os\] along with the telescopes and instruments that were used. The full spectral sequence is shown in Fig. \[fig:spec12os\]. 
Optical and near-infrared (NIR) spectra of iPTF13bvn were obtained starting within 24 hours after discovery. Spectra until $16$ d past discovery were published by [@2013ApJ...775L...7C], whereas 6 additional later-time optical spectra, obtained between $18$ and $86$ d after discovery, were presented in our earlier work. In this paper we provide our complete dataset, consisting of 26 epochs of optical spectra and 4 NIR spectra, including the previously published data. The earliest spectrum of iPTF13bvn [@2013ATel.5142....1M], obtained with the SALT telescope, is also included in our analysis. We have obtained several new spectra in the nebular phase: one using the NOT and the Andalucia Faint Object Spectrograph (ALFOSC) at $250$ d past discovery, one with the Deep Extragalactic Imaging Multi-Object Spectrograph [DEIMOS; @2003SPIE.4841.1657F] on Keck 2 at $344$ d past discovery, one with the ESO Very Large Telescope using the FORS2 spectrograph at $346$ d past discovery, and one with the Intermediate-dispersion Spectrograph and Imaging System (ISIS) at the William Herschel Telescope (WHT) at $376$ d past discovery. We also present an additional NIR spectrum that was not published by [@2013ApJ...775L...7C], obtained with the Folded-port InfraRed Echellette spectrograph [FIRE; @2013PASP..125..270S] on Magellan-Baade at $78$ d past discovery. Our spectral data on iPTF13bvn are listed in Table \[tab:spec13bvn\]. The optical spectra are shown in Fig. \[fig:spec13bvn\] and the NIR spectra are in Fig. \[fig:spec13bvnir\]. All spectra were reduced using standard pipelines and procedures for each telescope and instrument. For our nebular spectrum of PTF12os obtained $211$ d past discovery, we use spectra of the underlying region from several years after the explosion to subtract the background continuum. All spectral data and corresponding information are available via WISeREP[^9] [@Yaron:2012aa]. 
![image](figures/ngc5806_host_longslit_progenitors_new.jpg){width="17cm"} Host-galaxy properties {#sec:host} ====================== NGC 5806 is a nearby spiral galaxy with SAB(s)b morphology. The spectroscopic redshift of the galaxy from the SDSS is $z=0.00449$. We adopt the distance modulus $\mu=32.14\pm0.20$ mag [@2013AJ....146...86T], corresponding to a distance of $26.8_{-2.4}^{+2.6}$ Mpc[^10]^,^[^11]. We note that since PTF12os and iPTF13bvn occurred in the same galaxy, we do not have to worry about systematic uncertainties in the relative distances of the two SNe. The above distance results in a $B$-band absolute magnitude for NGC 5806 of $M_B=-20.12$ mag[^12]. Adopting the values from NED, the major and minor diameters of NGC 5806 are $185\farcs40$ and $94\farcs554$, respectively, and the position angle is 170$^{\circ}$. The morphological T-type is 3.0 according to the Third Reference Catalogue of Bright Galaxies [RC3; @1991rc3..book.....D]. A stack of *HST*/WFC images taken in filters F658N, F435W, F555W, and F814W during 2004 is shown in Fig. \[fig:galaxy\], with the locations of the known transients in the galaxy marked. In addition to iPTF13bvn and PTF12os, which we investigate in detail here, the Type IIP SN 2004dg [@Smartt:2009] and the SN impostor SN Hunt 248 (discovered in 2014) also occurred in this galaxy. SN 2004dg is present in the image stack shown in Fig. \[fig:galaxy\]. ![Metallicity measurements (solid circles), metallicity estimates (open circles), and gradient estimate (solid red line) for NGC 5806 compared to the spiral-galaxy sample (gray dashed lines) by [@Gusev:2012aa]. 
The dashed blue line shows a first-order polynomial fit which includes the central oxygen metallicity measured for NGC 5806 using an SDSS spectrum, and the dashed red line shows the same fit with the metallicity datapoint at the center of the galaxy excluded.[]{data-label="fig:metgrad"}](figures/ngc5806_met_gradient-eps-converted-to.pdf){width="9cm"} Metallicity estimates {#sec:metallicity} --------------------- We have mapped the metallicity of NGC 5806 via a spectroscopic programme conducted at the NOT[^13], within which we performed spectroscopy of H II regions in the galaxy using ALFOSC in long-slit mode. This dataset was also supplemented by two WHT spectra (see Fig. \[fig:galaxy\]). Our metallicity measurements are based on strong-line diagnostics using the N2 method [@2004MNRAS.348L..59P], which utilizes the ratio of the \[N II\] $\lambda$6584 and H$\alpha$ lines to estimate the abundance of oxygen in the line-emitting region. We estimate deprojected galactocentric radii ($r_G/r_{25}$) by following previously described procedures. The location of PTF12os is coincident with a strong H II region, indicated by a blue circle in panel (a) of Fig. \[fig:galaxy\] (see also Sect. \[sec:prog\_id\] for details on the progenitor identification). At this position we obtained the N2 metallicity estimate $12+\mathrm{log_{10}(O/H)}~=~8.61\pm0.18$ dex using a NOT long-slit spectrum, and we measure the deprojected galactocentric radius $r_G/r_{25}=0.47$. The same metallicity value was obtained at the position of the closest strong H II region to SN Hunt 248. This region is marked by a yellow circle in panel (a) of Fig. \[fig:galaxy\], and it is located at $r_G/r_{25}=0.55$. SN Hunt 248 itself is situated to the north of this region at $r_G/r_{25}=0.64$. For the location of iPTF13bvn (see Fig. \[fig:galaxy\] and Sect. \[sec:prog\_id\]), we have estimated the deprojected galactocentric radius $r_G/r_{25}=0.43$, similar to that of PTF12os. 
With WHT/ISIS we obtained spectra of two bright H II regions close to this location, located at $r_G/r_{25}=0.48$ (red circle in Fig. \[fig:galaxy\]) and $r_G/r_{25}=0.33$ (pink circle in Fig. \[fig:galaxy\]). For these regions we estimated an N2 metallicity of $12+\mathrm{log_{10}(O/H)}=8.70\pm0.18$ dex and $12+\mathrm{log_{10}(O/H)}=8.65\pm0.18$ dex, respectively. To estimate the metallicity gradient in NGC 5806 we use the equation $$12+\mathrm{log_{10}(O/H)}= 12+\mathrm{log_{10}(O/H)}_0 + C_{\mathrm{O/H}}\times(r_G/r_{25}),$$ where $12+\mathrm{log_{10}(O/H)}_0$ is the central oxygen abundance and $C_{\mathrm{O/H}}$ is the gradient (in dex $R_{25}^{-1}$). Fitting this equation to the above data, including the central metallicity, which we measure to be $12+\mathrm{log_{10}(O/H)}=9.31\pm0.18$ dex using an SDSS spectrum taken at the center of the galaxy, gives $12+\mathrm{log_{10}(O/H)}= 9.25-1.28\,(r_G/r_{25})$ dex. This is a steeper gradient than the average for spirals (e.g., [@Gusev:2012aa]). However, we note that in the above fit we are using the central oxygen abundance, without accounting for the possibility of AGN activity at the center of the galaxy. We see no clear evidence of an AGN in the spectrum, but a weak effect on the lines used in the abundance calculation cannot be excluded. If only H II regions in the disk are used, and the central abundance is extrapolated, we find the gradient in the disk of NGC 5806 to be $C_{\mathrm{O/H}}=-0.12$ dex $R_{25}^{-1}$ and an extrapolated central oxygen abundance of $12+\mathrm{log_{10}(O/H)_0}=8.70$ dex. The average of the two different gradients is $$\label{eq:grad} 12+\mathrm{log_{10}(O/H)_{NGC~5806}}= 8.97-0.70\,(r_G/r_{25}),$$ and this is what we use as the best estimate for the metallicity gradient of NGC 5806. We show our measurements and gradients, along with the gradients of the sample of spiral galaxies studied by [@Gusev:2012aa], in Fig. \[fig:metgrad\]. 
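As a concrete illustration, the N2 diagnostic and the averaged gradient relation above can be evaluated as follows. This is a minimal sketch: the linear N2 calibration coefficients (8.90, 0.57) are the commonly quoted values for the N2 method cited above (an assumption here), and the function names are ours.

```python
import math

def n2_metallicity(f_nii, f_halpha):
    """Oxygen abundance 12+log10(O/H) from the N2 index,
    N2 = log10(F([N II] 6584) / F(Halpha)), using the linear
    N2 calibration (coefficients assumed in this sketch)."""
    n2 = math.log10(f_nii / f_halpha)
    return 8.90 + 0.57 * n2

def gradient_metallicity(r_norm):
    """Averaged abundance gradient adopted for NGC 5806 (Eq. [eq:grad]):
    12+log10(O/H) = 8.97 - 0.70 (r_G/r_25)."""
    return 8.97 - 0.70 * r_norm

# At the deprojected radius of iPTF13bvn, r_G/r_25 = 0.43:
print(round(gradient_metallicity(0.43), 2))  # 8.67, the value quoted in the text
```

Equal line ratios ([N II]/Hα = 1) give the calibration's intercept, 8.90 dex, which is a quick sanity check of the implementation.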
The average gradient we find for NGC 5806 is in good agreement with this sample. Given the metallicity gradient of NGC 5806 (Eq. \[eq:grad\]), the distance of iPTF13bvn from the center of the galaxy implies a metallicity of $12+\mathrm{log_{10}(O/H)}=8.67\pm0.19$ dex at the position of the SN. At the position of SN Hunt 248 we find $12+\mathrm{log_{10}(O/H)}=8.52\pm0.26$ dex. To estimate the uncertainties we have accounted for the systematic error of 0.18 dex from the N2 method, as well as the difference between the two gradient fits and the average gradient at the position of each object. In conclusion, it appears that both PTF12os and iPTF13bvn occurred in regions of very similar metallicity, where the oxygen abundance is close to the solar value ($12+\mathrm{log_{10}(O/H)}\approx 8.7$ dex). The environment of SN 2011dh was also found to be roughly of solar metallicity by [@2011ApJ...741L..28V]. However, a possible caveat here is that we do not have a direct measurement exactly at the position of iPTF13bvn, since the SN did not occur in an H II region, which would have allowed the metallicity to be estimated using the N2 method. Extinction {#sec:extinction} ========== Throughout this paper all reddening corrections are applied using the [@1989ApJ...345..245C] extinction law with $R_V=3.1$. For the Milky Way (MW) color excess we adopt $E(B-V)_\mathrm{MW}=0.0437$ mag toward NGC 5806[^14] [@2011ApJ...737..103S]. To estimate the host-galaxy color excess of PTF12os and iPTF13bvn we perform optical color comparisons to SN 2011dh[^15], after correcting for the MW contributions. The multiband light curves (Sect. 
\[sec:lc\]) were interpolated to evenly spaced dates in order to give one value for each night of observations, and then we used an iterative procedure that minimizes the following expression: $$\Delta_{\mathrm{C}}=\sum_{n=1}^{4}\,\sum_{t=10~\mathrm{d}}^{30~\mathrm{d}}\left[C(n,E,t)_{\mathrm{SN}} - C(n,t)_{\mathrm{dh}}\right]^2,$$ where the sum over $n$ is performed for the $C(n)=B-g$, $g-r$, $r-i$, and $i-z$ colors. $C(n,E,t)_{\mathrm{SN}}$ is a function of $E(B-V)$ and time; it is the extinction-corrected color in a chosen set of filters as a function of time for the object for which the extinction is being computed. Similarly, $C(n,t)_{\mathrm{dh}}$ is the corresponding extinction-corrected color for SN 2011dh. If we assume that the extinction of SN 2011dh is known, we can compute the host-galaxy color excess that our objects require to best match the colors of SN 2011dh. Here we adopt a total extinction $E(B-V)=0.07_{-0.04}^{+0.07}$ mag for SN 2011dh. Effectively, this procedure minimizes the total difference in the $B-g$, $g-r$, $r-i$, and $i-z$ colors measured between 10 d and 30 d after explosion for a SN compared to SN 2011dh. These specific colors were chosen since we have the best-quality data in terms of cadence and uncertainties in these bands for both SNe simultaneously. For iPTF13bvn we find $E(B-V)_\mathrm{host}=0.08^{+0.07}_{-0.04}$ mag, and for PTF12os we find $E(B-V)_\mathrm{host}=0.29^{+0.08}_{-0.05}$ mag. The main assumption here is that these SNe have the same intrinsic colors (temperature) as SN 2011dh. To estimate the total uncertainty intervals of these extinction values we have added in quadrature the statistical error resulting from the method described above to the error in the extinction of SN 2011dh. The statistical errors were derived via Monte Carlo simulations, by randomly applying $1\sigma$ errors on all of the photometric data points used in the calculation and iterating the calculation a large number of times. 
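A sketch of this color-matching minimization is given below. The $A_\lambda/E(B-V)$ coefficients are approximate single-number stand-ins for the full Cardelli law (an assumption of this sketch), and the synthetic colors merely demonstrate that a known color excess is recovered.

```python
import numpy as np

# Approximate A_lambda / E(B-V) coefficients for R_V = 3.1; single-number
# stand-ins for the full Cardelli et al. (1989) law used in the paper.
R = {"B": 4.32, "g": 3.79, "r": 2.75, "i": 2.09, "z": 1.48}
COLORS = [("B", "g"), ("g", "r"), ("r", "i"), ("i", "z")]

def delta_C(E, colors_sn, colors_dh):
    """Delta_C: summed squared differences between the SN colors,
    corrected for a trial host color excess E, and the SN 2011dh
    template colors, over all epochs in the comparison window."""
    total = 0.0
    for (f1, f2), c_sn, c_dh in zip(COLORS, colors_sn, colors_dh):
        corrected = np.asarray(c_sn) - E * (R[f1] - R[f2])
        total += float(np.sum((corrected - np.asarray(c_dh)) ** 2))
    return total

def best_excess(colors_sn, colors_dh, grid=np.linspace(0.0, 1.0, 1001)):
    """Grid-search minimization of Delta_C over E(B-V)_host."""
    vals = [delta_C(E, colors_sn, colors_dh) for E in grid]
    return float(grid[int(np.argmin(vals))])

# Synthetic check: template colors reddened by E(B-V) = 0.3 are recovered.
colors_dh = [[0.10, 0.15], [0.30, 0.35], [0.20, 0.25], [0.05, 0.10]]
colors_sn = [[c + 0.3 * (R[f1] - R[f2]) for c in row]
             for (f1, f2), row in zip(COLORS, colors_dh)]
print(round(best_excess(colors_sn, colors_dh), 3))  # 0.3
```

In practice the Monte Carlo error estimate described above simply repeats this minimization after perturbing every input color by its $1\sigma$ uncertainty.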
The final errors are dominated by the uncertainty in the extinction estimate for SN 2011dh. Assuming a different value for the extinction of SN 2011dh is directly reflected in the calculated extinction values as a simple 1:1 relation. We also checked the host-galaxy color excess of PTF12os by measuring the Na I D equivalent width (EW) in the spectrum taken on 2012 Jan. 25. Using the relation suggested by [@2003fthp.conf..200T], $$\label{eq:NaID} E(B-V)=-0.01+0.16\times\mathrm{EW(\ion{Na}{I} D)},$$ we find $E(B-V)_\mathrm{host}=0.265$ mag. This method is not very reliable since it is based on low-resolution spectra, in which the Na I D feature is not resolved [see, e.g., @2011MNRAS.415L..81P; @2012MNRAS.426.1465P], but the presence of significant Na I D absorption confirms that the host extinction is not negligible. We note that two different slopes for the Na I D to $E(B-V)$ relation were found by [@2003fthp.conf..200T]. We have adopted the shallower slope in Eq. \[eq:NaID\]. Using the relation derived by [@2011MNRAS.415L..81P] we find $E(B-V)_\mathrm{host}=0.66$ mag, which is consistent with what would be found by adopting the steeper slope found by [@2003fthp.conf..200T]. However, the scatter in these relations is very high (e.g., 0.3 mag for the relation of [@2011MNRAS.415L..81P]); thus, we consider the results from these relations to be roughly consistent with each other, as well as with our color-based method. For iPTF13bvn, a host-galaxy color excess $E(B-V)_\mathrm{host}=0.0278$ mag was previously derived from the Na I D absorption in high-resolution spectroscopy [@Cao:2013aa]. A larger value of $E(B-V)_\mathrm{host}=0.17\pm0.03$ mag was later suggested by [@2014AJ....148...68B] based on a preliminary $B-V$ color comparison to the Carnegie Supernova Project sample. [@2014AJ....148...68B] estimate $E(B-V)_\mathrm{host}=0.07$–0.22 mag from another high-resolution spectrum. 
We note that the result from our color comparison to SN 2011dh is roughly consistent with all of these values when the uncertainties are taken into account. ![Progenitor identification of PTF12os found by registering 8 ground-based NTT/EFOSC $i$-band images of PTF12os to a stacked pre-explosion *HST* ACS F814W image. The field of view shown here is $4\farcs0\times2\farcs8$. North is up and east is to the left. The white circles show the average position of the centroid of PTF12os after image registration with a radius reflecting the statistical error from 7 image registrations. The smaller circle represents a 2$\sigma$ uncertainty and the larger circle represents a 5$\sigma$ uncertainty. The 2$\sigma$ uncertainty is only shown in the zoomed inset. The pink circles indicate the centroid of the possible progenitor measured in the *HST* image, with a radius representing the 2$\sigma$ uncertainty in its centroid.[]{data-label="fig:prog_12os"}](figures/12os_reg_comp.png){width="9cm"} Progenitor identifications and photometry {#sec:prog_id} ========================================= NGC 5806 has been comprehensively imaged with *HST*, both before and after the explosions of PTF12os and iPTF13bvn. We give a summary of the currently publicly available archival *HST* images covering the region of PTF12os in Table \[tab:hst\]. Moreover, images of iPTF13bvn were obtained in 2013 when this SN was present[^16], and they were used by multiple authors to confirm the progenitor candidate identification that was previously proposed from ground-based imaging (see Sect. \[sec:prog\_id\]). NGC 5806 was also reobserved at two epochs[^17] in 2015 in order to search for the binary companion of iPTF13bvn. In this paper we use these observations to perform accurate astrometric registrations to locate a progenitor candidate of PTF12os and to constrain the mass of this candidate via its photometry (Sect. \[sec:prog\_12os\]). 
For iPTF13bvn we do not perform any new analysis, but provide a summary of previous work (Sect. \[sec:prog\_13bvn\]). ![image](figures/F555W_diff.pdf){width="18cm"} PTF12os {#sec:prog_12os} ------- A progenitor candidate for PTF12os was suggested by [@2012ATel.3884....1V]. To independently determine the location of the progenitor in NGC 5806, we obtained 11 $i$-band images of PTF12os on 2012 Feb. 16 (+42 d) with the NTT at the La Silla Observatory in Chile. Using the 8 frames with the best seeing ($0\farcs8$–$1\farcs0$), we perform astrometric registrations to a stacked (filters $F814W$, $F555W$, and $F435W$) archival *HST* (ACS/WFC) image from 2005 [@Smartt:2009] using 11 common point sources. For the registrations we use a procedure based on standard [MATLAB]{} functions; the centroids of common point sources are derived from fits of two-dimensional (2D) Gaussians that allow for rotation, and the geometric transformation is determined as a second-order polynomial transformation with 6 free parameters and applied using bicubic interpolation. After we performed the 8 registrations we applied 3$\sigma$ clipping to the values found for the position of the progenitor. This resulted in one calculation being excluded, leaving us with 7 values for the progenitor position. As the final position we then used the mean of these values, and as the uncertainty the standard deviation of the mean, giving a final uncertainty of 0.179 ACS/WFC pixels or 8.9 mas in right ascension ($\alpha$) and 0.176 pixels or 8.8 mas in declination ($\delta$). We find that it is possible to constrain the progenitor location to the central source of a bright H II region (see Fig. \[fig:prog\_12os\] and Fig. \[fig:galaxy\]) located at $\alpha=14^{\mathrm{h}}59^{\mathrm{m}}59\fs082$, $\delta=+01\degr53\arcmin23\farcs67$ (J2000.0) in the *HST* image. 
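The averaging and clipping of the positions from the individual registrations can be sketched as follows. This is a simplified stand-in for the [MATLAB]{} procedure; in particular, the robust median/MAD scale estimate used for the $3\sigma$ clip is our choice, since the text does not specify the clipping statistic, and the input positions below are hypothetical.

```python
import numpy as np

def combine_positions(positions, nsigma=3.0):
    """Combine progenitor positions (x, y in pixels) from independent
    registrations: reject points deviating by more than nsigma in either
    coordinate (robust median/MAD scale), then return the mean position,
    the standard deviation of the mean, and the mask of kept points."""
    pos = np.asarray(positions, dtype=float)
    keep = np.ones(len(pos), dtype=bool)
    for axis in (0, 1):
        x = pos[:, axis]
        med = np.median(x)
        sigma = 1.4826 * np.median(np.abs(x - med))  # MAD-based sigma
        keep &= np.abs(x - med) <= nsigma * sigma
    kept = pos[keep]
    mean = kept.mean(axis=0)
    sem = kept.std(axis=0, ddof=1) / np.sqrt(len(kept))
    return mean, sem, keep

# Eight hypothetical registrations (pixels); the last one is discrepant.
regs = [(100.02, 200.01), (99.98, 199.97), (100.05, 200.03),
        (99.95, 200.00), (100.01, 199.99), (100.03, 200.02),
        (99.99, 199.98), (101.50, 200.60)]
mean, sem, keep = combine_positions(regs)  # outlier rejected, 7 values kept
```

The returned standard deviation of the mean plays the role of the final per-coordinate uncertainty quoted in the text (there, 0.179 and 0.176 pixels).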
The centroid of this source, again determined by fitting a 2D Gaussian with rotation in the *HST* image, is offset by only 0.15 pixels or 7.7 mas from the progenitor location that we obtained from our registrations. Furthermore, the 2$\sigma$ uncertainty in the progenitor position that we find almost completely overlaps with the 2$\sigma$ uncertainty in the centroid of the *HST* source (Fig. \[fig:prog\_12os\]). Thus, we conclude that they are coincident. This is the same source that was first reported as a potential progenitor candidate by [@2012ATel.3884....1V]. *HST* photometry of this progenitor candidate is shown in Table \[tab:hst\]. The ACS/WFC photometry based on the images obtained in 2005, transformed to $BVI$ filters via a blackbody (BB) fit to the *HST* magnitudes followed by synthetic photometry using standard $BVI$ filter profiles, is $B=23.30\pm0.01$ mag, $V=22.94\pm0.01$ mag, and $I=22.47\pm0.01$ mag. After corrections for the distance modulus of NGC 5806 and the total extinction we obtain absolute magnitudes $M_B=-10.18\pm0.20$ mag, $M_V=-10.24\pm0.20$ mag, and $M_I=-10.29\pm0.20$ mag, as well as colors $B-V=0.06^{+0.06}_{-0.09}$ mag and $V-I=0.05^{+0.07}_{-0.11}$ mag. In the error bars for the colors we include the uncertainty in the extinction estimate. PSF-fitting photometry was performed on the WFPC2 data using the [hstphot]{} package [@2000PASP..112.1383D], and on the ACS and WFC3 images using [dolphot]{}, a modified version of [hstphot]{}. In all cases, the images were masked using the data-quality files, the sky background was measured, and library PSFs were fitted to sources detected in each frame. The magnitudes were calibrated to the [*HST*]{} Vegamag system using standard ZPs, and in the case of WFPC2 and WFC3 the data were corrected for charge-transfer losses (the ACS images have already been corrected for this effect at the pixel level). We find that the source coincident with PTF12os is consistently brighter in the WFPC2 images. 
This is likely caused by the lower resolution of these data leading to blending with nearby sources. We therefore do not include the WFPC2 data in our further analysis. The two epochs of ACS imaging both include F814W data, and we find a comparable magnitude for the progenitor at these two epochs. The magnitudes we measure are consistent with those reported by [@2012ATel.3884....1V]. We also find consistent magnitudes between the ACS F435W image taken on 2005 March 10 and the WFC3 F438W image taken after the SN had faded, on 2015 June 26. The fact that the source is still present at late times, coupled with the relatively bright absolute magnitudes, clearly suggests that this is a cluster rather than a single stellar progenitor of PTF12os. The F555W late-time magnitude measured in the post-explosion WFC3 image is $\sim0.4$ mag fainter than what was measured in 2005. While it is appealing to interpret this as a deficit of flux owing to the disappearance of a progenitor, it is difficult to reconcile this with the lack of change in F435W/F438W. To further investigate this, we performed difference imaging using the 2005 and 2015 F555W images. The latter was registered to the former, before the two images were convolved to match their PSFs and scaled using the [hotpants]{} package[^18]. The result of this subtraction is shown in Fig. \[fig:F555\_sub\]. No source can be seen at the position of PTF12os, suggesting no major change in flux between the two epochs. Thus, the flux at the position of PTF12os in this pre-explosion image cannot have been dominated by a single massive SN progenitor. In the precursor study by [@2015ApJ...811..117S], it was also found that the source located at the position of PTF12os was still at approximately the same brightness 3 years after the SN occurred. Under the assumption that the source coincident with PTF12os is a cluster, we fitted the pre-explosion photometry using the [chorizos]{} package [@2004PASP..116..859M]. 
Starburst99 models [@1999ApJS..123....3L] were fitted to the measured *HST* ACS F435W, F555W, and F814W magnitudes from 2005, along with the narrow-band F658N magnitude from 2004. The metallicity was set to be solar, while the extinction law was fixed to $R_V=3.1$. The extinction was allowed to vary within the range estimated in Sect. \[sec:extinction\]. The best fit is achieved for a cluster age of $\sim 5.5$ Myr. This would imply that the progenitor of PTF12os was relatively massive. Using the STARS models, this corresponds to a ZAMS mass of $\gtrsim25$ [M$_{\odot}$]{} [@2009MNRAS.400.1019E], since single stars less massive than this should not yet have exploded. This value is significantly higher than the $<15$ [M$_{\odot}$]{} estimate of the progenitor mass derived from our nebular spectroscopy in Sect. \[sec:nebular\]. However, we are cautious about attributing too much weight to the cluster age derived here given the limited (4-band) photometric data. For comparison, the host cluster age and progenitor mass derived for SN 2004dj by [@2009ApJ...695..619V] relied on measurements in $\sim 20$ different bandpasses. It is also quite possible that the cluster contains multiple stellar populations of differing ages, with the SN progenitor being part of an older population and with the flux in the optical being dominated by a younger population. 
| Date | Instrument | Filter | Exposure (s) | Magnitude |
|------------|-----------|--------|-----------------|----------------|
| 2001-07-05 | WFPC2/WFC | F450W | 2 $\times$ 230 | 22.204 (0.003) |
| - | - | F814W | 2 $\times$ 230 | 21.385 (0.003) |
| 2004-04-03 | ACS/WFC | F658N | 2 $\times$ 350 | 21.728 (0.053) |
| - | - | F814W | 1 $\times$ 120 | 22.317 (0.033) |
| 2005-03-10 | ACS/WFC | F435W | 2 $\times$ 800 | 23.319 (0.012) |
| - | - | F555W | 2 $\times$ 700 | 23.061 (0.011) |
| - | - | F814W | 2 $\times$ 850 | 22.436 (0.009) |
| 2015-06-26 | WFC3/UVIS | F438W | 2 $\times$ 1430 | 23.441 (0.014) |
| - | - | F555W | 2 $\times$ 1430 | 23.481 (0.010) |

![image](figures/12os_13bvn_lcs_newdmod3-eps-converted-to.pdf){width="18cm"} iPTF13bvn {#sec:prog_13bvn} --------- A probable progenitor candidate for iPTF13bvn, located at $\alpha=15^{\mathrm{h}}00^{\mathrm{m}}00\fs147$, $\delta=+01\degr52\arcmin53\farcs19$ (J2000.0), was first identified in pre-explosion *HST* images by [@Cao:2013aa]. In our earlier work we used post-explosion *HST* (WFC3) images[^19] of iPTF13bvn to confirm this result, also illustrated in Fig. \[fig:galaxy\] here. With these data it was possible to rule out other sources to a $5\sigma$ level. This result was again confirmed by [@2015MNRAS.446.2689E], who reanalyzed the same pre- and post-explosion *HST* images. In [@Cao:2013aa] the apparent magnitudes for this source were first reported. However, a reanalysis by [@2015MNRAS.446.2689E] resulted in somewhat different values: $F435W=25.80\pm0.12$ mag, $F555W=25.80\pm0.11$ mag, and $F814W=25.88\pm0.24$ mag. After correcting for the reddening and distance modulus, we find the absolute magnitudes[^20] $M_{F435W}=-6.86\pm0.23$ mag, $M_{F555W}=-6.74\pm0.23$ mag, and $M_{F814W}=-6.47\pm0.31$ mag. By fitting a BB to these magnitudes and performing synthetic photometry using standard $BVI$ filter profiles, we find $M_B=-6.83\pm0.23$ mag, $M_V=-6.78\pm0.23$ mag, and $M_I=-6.73\pm0.31$ mag, as well as color estimates $B-V=-0.05^{+0.17}_{-0.16}$ mag and $V-I = -0.05^{+0.27}_{-0.26}$ mag, where the error bars on the colors include the uncertainty in the extinction estimate. 
These colors and absolute magnitudes could be consistent with a single WR star [@Eldridge:2013aa; @Groh:2013aa2]. However, previous studies conclude that a binary system is more likely for the progenitor of iPTF13bvn, based on the low ejecta and oxygen masses. In the binary model for the iPTF13bvn system (20 [M$_{\odot}$]{} primary and 19 [M$_{\odot}$]{} secondary initial masses in a very close binary) by [@2014AJ....148...68B], the source in the pre-explosion *HST* images was predicted to be dominated by light from the primary, which implies that the system should be significantly fainter after the SN has faded. Further, based on the binary model grid used by [@2015MNRAS.446.2689E], the *HST* photometry was shown to possibly be consistent with a wide range of initial separations and masses (10–20 [M$_{\odot}$]{} for the SN progenitor). However, in the recent study by [@2016arXiv160405050E], based on the *HST* observations obtained on 2015 June 26 (+740 d), it is shown that the object at the position of iPTF13bvn has dimmed, and is now fainter than in the pre-explosion images, confirming that the progenitor identification was correct. A similar conclusion was also made by [@2016arXiv160406821F]. While [@2016arXiv160405050E] concluded that the observations were likely still dominated by light from the SN at the time of these observations, the mass range of the progenitor could be narrowed down significantly, to 10–12 [M$_{\odot}$]{}, a range that is consistent with our nebular models of iPTF13bvn, discussed in Sect. \[sec:nebular\]. Light curves {#sec:lc} ============ The observed light curves (LCs) in Johnson/Cousins/Bessell $UBVRI$, SDSS $griz$, and [*Swift*]{} $UVW1$ and $UVM2$ filters for PTF12os and iPTF13bvn along with SN 2011dh are shown in Fig. \[fig:lc\_full\]. All of the data in Fig. \[fig:lc\_full\] have been corrected for distance and reddening according to Sect. \[sec:host\] and Sect. \[sec:extinction\]. 
At first glance, the LCs of these three SNe appear surprisingly similar across all photometric bands, but a more detailed look does bring out some minor differences between them. PTF12os peaks in the $g$ band at approximately +16 d, iPTF13bvn at +16.5 d, and SN 2011dh at +20 d (see Fig. \[fig:lc\_early\] for a close-up view of the $g$- and $r$-band LC peaks). The widths of the LC peaks are also slightly different. In the $r$ and $i$ bands especially, the LCs of iPTF13bvn appear to be somewhat narrower than those of both PTF12os and SN 2011dh. The width of the LC peak in $r$ is $\sim 35$ days for iPTF13bvn and $\sim 50$ days for PTF12os and SN 2011dh. This width was measured by finding the point on the rising part of the LC that is 1.5 mag fainter than the maximum, and then measuring the time until the same magnitude is reached on the declining part of the (interpolated) LC (again, see Fig. \[fig:lc\_early\]). Using the same measure, a similar difference is observed in the $i$ band. However, it is somewhat difficult to isolate the peaks in the LCs of SN 2011dh and PTF12os in the $i$ and $z$ bands. It is also difficult to assess when the LC peaks end and the LCs start to decline in a more linear fashion. For iPTF13bvn, the LC peaks are somewhat more easily discernible in both the $i$ and $z$ bands. ![Early-time $g$- and $r$-band LCs of PTF12os (squares) and iPTF13bvn (circles) along with SN 2011dh (dashed black lines). Corrections for extinction and distance have been applied according to Sect. \[sec:host\] and Sect. \[sec:extinction\].[]{data-label="fig:lc_early"}](figures/12os_13bvn_early_lcs_newdmod-eps-converted-to.pdf){width="8.5cm"} Another minor difference can be seen in the apparent lack of a plateau phase immediately following the peak (+50 d) in bands bluer than the $r$ band for iPTF13bvn compared to PTF12os. PTF12os shows a temporarily slower decline between approximately +50 d and +100 d in $g$ and a virtually constant magnitude in $B$. 
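The peak-width measure described above (the time spent within 1.5 mag of maximum light, on an interpolated LC) can be sketched as follows; the function is generic, and the parabolic light curve is purely synthetic, chosen so that the expected width is known analytically.

```python
import numpy as np

def peak_width(t, mag, dm=1.5):
    """Time between the points on the rising and declining branches that
    are dm mag fainter than maximum light, on an interpolated LC."""
    tt = np.linspace(t.min(), t.max(), 2000)  # fine time grid
    mm = np.interp(tt, t, mag)                # linear interpolation
    i_peak = int(np.argmin(mm))               # smaller magnitude = brighter
    level = mm[i_peak] + dm
    rise = tt[:i_peak + 1][mm[:i_peak + 1] <= level]
    fall = tt[i_peak:][mm[i_peak:] <= level]
    return fall.max() - rise.min()

# Synthetic parabolic LC peaking at +25 d; the analytic width at
# 1.5 mag below peak is 2 * 10 * sqrt(1.5), roughly 24.5 d.
t = np.arange(0.0, 51.0)
mag = 15.0 + ((t - 25.0) / 10.0) ** 2
width = peak_width(t, mag)
```

Applied to real photometry, the same function would return the $\sim 35$ d and $\sim 50$ d $r$-band widths quoted in the text.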
For PTF12os we measure the $g$-band decline rate in this phase to be $0.011\pm0.002$ mag d$^{-1}$, based on a linear (first-order polynomial) fit. For SN 2011dh we find a very similar decline rate of $0.013\pm0.001$ mag d$^{-1}$. For iPTF13bvn we measure $0.015\pm0.002$ mag d$^{-1}$. In the fit for iPTF13bvn, data points between +50 d and +80 d were used, since after +80 d there is a gap in our photometric coverage until +210 d. However, our late-time data points between +210 d and +300 d indicate it is likely that the same decline rate as measured between +50 d and +80 d continued up until 300 d past the explosion in the $g$ band. For the $B$ band we measure $0.000\pm0.002$ mag d$^{-1}$ for PTF12os, $0.010\pm0.001$ mag d$^{-1}$ for SN 2011dh, and $0.010\pm0.003$ mag d$^{-1}$ for iPTF13bvn. For SN 2011dh this $B$-band decline rate continued until around +125 d, whereafter the decline rate increased significantly. This behavior is consistent with our data on PTF12os. However, for iPTF13bvn we lack coverage in the $B$ band past +75 d. ![image](figures/12os_13bvn_colors_newdmod-eps-converted-to.pdf){width="18cm"} In the $U$ band, the data for both iPTF13bvn and PTF12os are not of sufficiently high quality to be able to discern whether there is a similar plateau as seen in the SN 2011dh data past +50 d. However, the photometric points for both PTF12os and iPTF13bvn are consistent with the photometry of SN 2011dh within the uncertainties. In the redder bands ($riz$), there is a similar trend as in the bluer bands in the decline rates between +50 d and +100 d, with PTF12os showing marginally slower decline rates. 
In the $riz$ bands we respectively measure $0.018\pm0.001$ mag d$^{-1}$, $0.018\pm0.001$ mag d$^{-1}$, and $0.012\pm0.001$ mag d$^{-1}$ for PTF12os; $0.022\pm0.002$, $0.020\pm0.002$, and $0.016\pm0.003$ mag d$^{-1}$ for SN 2011dh; and $0.022\pm0.001$, $0.021\pm0.002$, and $0.011\pm0.003$ mag d$^{-1}$ for iPTF13bvn, based on similar first-order polynomial fits as described above. While the widths of the LC peaks are somewhat different for iPTF13bvn and SN 2011dh, and iPTF13bvn lacks the plateau phase (at least in $g$) as discussed above, the decline rates become almost identical past +150 d in all of the bands where our data allow this comparison to be made ($gri$; see the dashed lines in Fig. \[fig:lc\_full\]). For PTF12os, both the LC peaks and the late-time decline rates are very similar to those of SN 2011dh, but owing to the slower declines immediately after the peak the fluxes in the $g$, $r$, and $i$ bands are somewhat higher past +150 d compared to both SN 2011dh and iPTF13bvn. To roughly estimate the post $+100$ d decline rates of PTF12os and iPTF13bvn in the individual bands (the dashed and colored lines in Fig. \[fig:lc\_full\]), we performed first-order polynomial fits to the photometric data. For PTF12os we fit the data obtained between +150 d and +220 d in $gri$ to avoid the plateaus in $g$ and $r$. Photometric data between +70 d and +300 d are used for iPTF13bvn. Based on these fits we find the late-time decline rate in $r$ to be $0.019\pm0.001$ mag d$^{-1}$ for iPTF13bvn, and for PTF12os we measure a decline of $0.016\pm0.002$ mag d$^{-1}$. In $g$ we measure $0.019\pm0.001$ mag d$^{-1}$ for iPTF13bvn and $0.017\pm0.001$ mag d$^{-1}$ for PTF12os. In the $i$ band we measure $0.016\pm0.005$ mag d$^{-1}$ for iPTF13bvn and $0.017\pm0.001$ mag d$^{-1}$ for PTF12os. These decline rates are very similar to the decline rates of SN 2011dh in the corresponding bands at +200 d (again, see Fig. \[fig:lc\_full\]). 
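The decline rates quoted above come from first-order polynomial fits over fixed epoch windows. A minimal sketch of such a fit using `numpy.polyfit` on synthetic data (the slope, scatter, and epochs below are placeholders, not measured values):

```python
import numpy as np

def decline_rate(t, mag, t_min, t_max):
    """Decline rate (mag/day) from a first-order polynomial fit
    over the chosen epoch range, as done for the tail decline rates."""
    sel = (t >= t_min) & (t <= t_max)
    slope, intercept = np.polyfit(t[sel], mag[sel], 1)
    return slope

# synthetic r-band tail declining at 0.018 mag/day with 0.01 mag scatter
rng = np.random.default_rng(0)
t = np.arange(50.0, 101.0, 5.0)
mag = 18.0 + 0.018 * (t - 50.0) + rng.normal(0.0, 0.01, t.size)
print(decline_rate(t, mag, 50.0, 100.0))  # close to the input 0.018
```

The quoted uncertainties on the slopes would come from the fit covariance (e.g., `np.polyfit(..., cov=True)`), which is omitted here for brevity.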
We note that we are by necessity fitting different time intervals for PTF12os and iPTF13bvn, since there is a large gap in the coverage of iPTF13bvn past +80 d and we lack photometry past +220 d for PTF12os. This makes direct comparisons of the decline rates difficult. However, from Fig. \[fig:lc\_full\], it is clear that PTF12os, overall, declined less from peak until at least +220 d compared to both SN 2011dh and iPTF13bvn, since the LC peaks are generally very similar in PTF12os while the late-time $gri$ photometry is significantly brighter. In the UV regime, the flux is comparable for PTF12os and SN 2011dh in both the $UVW1$ and $UVM2$ bands. For SN 2011dh a smooth decline was observed in the $UVM2$ band, while an initial decline followed by a peak was observed in the $UVW1$ band[^21]. Our data on PTF12os are not of sufficiently high quality to determine whether this SN shows similar behavior. The first point in the $UVW1$ band is not early enough, compared to the epochs where the initial decline was observed for SN 2011dh. Furthermore, the uncertainties of our $UVM2$-band data are too large to distinguish between a peak and a smooth decline in the observed LC. Compared to PTF12os and SN 2011dh, iPTF13bvn appears to have had a significantly lower UV flux. For this SN the first data point in the $UVW1$ band was obtained very early, at around +2 d, but we still do not see the initial decline followed by a rise as for SN 2011dh. This could be interpreted as iPTF13bvn having a more compact progenitor compared to SN 2011dh, and hence a faster UV cooling tail following shock breakout (see also Sect. \[sec:earlyphot\]). In the $UVM2$ band we only have two usable data points, and thus we cannot determine the LC shape. Finally, one should note that the UV regime is very sensitive to extinction corrections, and especially to the shape of the extinction law, which we simply assume here to be the same as for the MW (Sect. \[sec:extinction\]) for both objects.
This, or alternatively line blocking by metal lines, could also explain why the UV flux appears to be different in iPTF13bvn compared to PTF12os (and SN 2011dh). If the extinction law is too steep in the UV, the considerable extinction corrections we are using for PTF12os could result in an overestimated UV flux. Color evolution and blackbody fits {#sec:colors} ---------------------------------- A comparison of the color evolution in $B-z$, $g-r$, and $r-i$ for PTF12os, iPTF13bvn, and SN 2011dh is shown in Fig. \[fig:color\]. For these calculations the LCs have been interpolated to the dates of the bluest filter in each color. Since the multiband LCs are very similar, as previously discussed, all three SNe also appear similar in terms of the observed colors and color evolution. We note that this is in part by construction; we are matching the colors of PTF12os and iPTF13bvn to the colors of SN 2011dh between +10 d and +30 d, to estimate the extinction (Sect. \[sec:extinction\]). However, other methods to calculate the extinction for both objects are consistent with the values that we have adopted, and the color evolution for each SN relative to the values between +10 d and +30 d is in any case not affected by our assumptions about the extinction. The most apparent differences between these three SNe appear in the $B-z$ and $g-r$ colors. SN 2011dh temporarily shows a bluer color in $B-z$ by around 0.5 mag compared to PTF12os and iPTF13bvn, between +50 d and +70 d. In $g-r$ the color is almost identical among the SNe up until +30 d, after which the color of iPTF13bvn plateaus at a $\sim0.25$ mag bluer color of $g-r\approx0.8$ mag, indicating a significantly lower flux in the $g$ band, since the $r-i$ color continues to be similar also after +30 d. However, compared to the sample of SNe Ibc in , the $g-r$ color and time evolution in both PTF12os and iPTF13bvn appear consistent with those of other SNe Ib. 
The spread in the observed $g-r$ color in that sample after +30 d is significantly larger than the difference observed between PTF12os and iPTF13bvn. ![Blackbody temperature (top panel) and radius (bottom panel) of iPTF13bvn (red circles), PTF12os (blue circles), and SN 2011dh (gray circles).[]{data-label="fig:bb"}](figures/bradbtemp_newdmod-eps-converted-to.pdf){width="9cm"} ![image](figures/early_lc_fits_newdmod-eps-converted-to.pdf){width="18cm"} Another noteworthy difference is seen in the very early $g-r$ color of iPTF13bvn; up until approximately +4 d, a clear phase of gradual reddening in this color is observed (see the inset in Fig. \[fig:color\]). Similar behavior is seen to a lesser extent in the other colors. This could be evidence of a significant contribution from the cooling phase following shock breakout in the SN explosion, and it is discussed in more detail in Sect. \[sec:earlyphot\]. We have determined the BB parameters of PTF12os, iPTF13bvn, and SN 2011dh by fitting BBs to the spectral energy distribution (SED) derived from the $griz$ LCs (see Fig. \[fig:bb\]). For these BB calculations we have interpolated the flux in the other bands to the dates of the $g$-band measurements. We correct for the distance and extinction according to Sect. \[sec:host\] and Sect. \[sec:extinction\]. Both in terms of the BB temperature and the radius, all three SNe appear to be very similar, except for the very early temperature evolution where iPTF13bvn shows signs of cooling in the earliest data. The BB temperature peaks at $\sim 8500$ K for all three SNe, and the radii peak at $\sim 1.7\times10^{15}$ cm. We note that the peak radius and uncertainties generally become smaller for SN 2011dh if the $r$ band is excluded from the BB fits. This indicates that the SED significantly deviates from a BB for SN 2011dh in $r$, which could possibly be explained by the relatively strong signature from hydrogen in the early spectra (Sect. \[sec:spectra\]). 
We also note that the radius found for SN 2011dh by via fitting the $VIJHK$ bands peaks at $\sim 1.4\times10^{15}$ cm. That estimate should be more robust, since it covers a much larger wavelength range, and deviations from a BB in small parts of this range should not have as large an effect on the final result. A similar decrease in the BB radii of PTF12os and iPTF13bvn would be expected, given the similarity of all other LC bands. However, we do not have the needed wavelength coverage to perform that comparison. ![image](figures/bolometric_lcs_newdmod-eps-converted-to.pdf){width="17.999cm"} Early-time photometry {#sec:earlyphot} --------------------- For iPTF13bvn we have very early-time $r$-band photometry (at +0.6 d) from the P48, along with color information starting already from +1.7 d. In the early color evolution we observe a cooling trend up to around +4 d in $g-r$ (see the inset of Fig. \[fig:color\]). Our BB fits to the $griz$ filters (Fig. \[fig:bb\]) show temperatures declining from 8500 K at around +2 d to 7000 K at +4 d. Similar evolution is found when fitting BB SEDs to the earliest two spectra (see Sect. \[sec:spectra\]). In Fig. \[fig:early\_lc\_fit\] we show the early BB temperature evolution, an estimated early bolometric LC, and the early $gri$ photometry in detail. One possible interpretation of these data is that we are at early times observing significant flux from the cooling phase following shock breakout [@Piro:2012aa]. Based on this assumption we have performed fitting of models with different radii to the early $gri$-band luminosity, the temperature evolution, and the early bolometric luminosity estimated from the BB temperature and radius using the Stefan-Boltzmann law. Our models are calculated according to @Piro:2012aa [their Eqs. 1 and 2, henceforth abbreviated the PN1 model], and we also let the explosion date of iPTF13bvn be a free parameter in the fitting procedure. 
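The bolometric luminosity estimated from the BB temperature and radius uses the Stefan-Boltzmann law, $L = 4\pi R^2 \sigma T^4$. A minimal illustration with the peak values quoted earlier (note that peak $T$ and peak $R$ do not occur at the same epoch, and a full blackbody integrates more flux than the optical bands alone, so this combination overestimates the observed quasi-bolometric peak):

```python
import math

SIGMA_SB = 5.670e-5  # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4

def bolometric_luminosity(R_cm, T_K):
    """L = 4*pi*R^2*sigma*T^4 for a spherical blackbody photosphere."""
    return 4.0 * math.pi * R_cm**2 * SIGMA_SB * T_K**4

# peak BB values quoted in the text: R ~ 1.7e15 cm, T ~ 8500 K
L = bolometric_luminosity(1.7e15, 8500.0)
print(f"{L:.2e} erg/s")
```

This evaluates to roughly $10^{43}$ erg s$^{-1}$, about an order of magnitude above the $Bgriz$ quasi-bolometric peaks quoted below, illustrating why the two quantities must not be conflated.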
We find that it is possible to fit the early temperature evolution with a model that is also consistent with the early photometry in the individual bands (the right panel of Fig. \[fig:early\_lc\_fit\]), by using a progenitor radius of 10 [[$\mathrm{R}_{\odot}$]{}]{} and an explosion date of $t_{\rm exp}=$ 2013 June 15.57. We note that after around +2 d, the luminosities in the individual bands start to greatly exceed what can be produced from a model based purely on shock-breakout cooling while still fitting the observed temperature (and estimated bolometric luminosity) simultaneously. We interpret this as the heating from radioactive nickel gradually becoming the dominant energy source. For these calculations we have adopted the ejecta mass and total kinetic energy from our hydrodynamical model described in Sect. \[sec:hydro\] and an opacity $\kappa=0.2$ cm$^2$g$^{-1}$. The radius we find here is significantly larger than what was found by [@2013ApJ...775L...7C]. This difference can be largely explained by a different approach; in [@2013ApJ...775L...7C] it was assumed that the luminosity from the cooling phase should reach the plateau phase at a luminosity limited by the first photometric point observed by the P48 at +0.6 d. Here we assume that the cooling is still significant at least up until around +2 d. We are also using explosion parameters (ejecta mass, total kinetic energy) from hydrodynamical modeling, while [@2013ApJ...775L...7C] adopted typical values for an SE SN. We note that the explosion date we find using this method is within 0.1 d of the explosion date estimate by [@2013ApJ...775L...7C], but around 0.3 d later than the explosion date derived from our hydrodynamical modeling. We also note that a radius on the order of 10 [R$_{\odot}$]{} immediately prior to the explosion of an SE SN is expected in a binary scenario [see, e.g., @Yoon:2010aa]. 
From hydrodynamical simulations, [@2014AJ....148...68B] have argued that the early-time observations of iPTF13bvn could be consistent with a significantly larger radius, possibly even as large as $150$ [[$\mathrm{R}_{\odot}$]{}]{}, if the explosion is assumed to have happened somewhat later than our estimate. That calculation was based only on the P48 $r$-band data. Using only the $r$ band and later explosion dates, we can also fit significantly larger radii. However, these models then become inconsistent with all of the remaining observations; they predict temperatures that are too high, resulting in poor fits to the bolometric LC. On the other hand, there is evidence that analytical shock-cooling models tend to predict unrealistically high temperatures when the radius becomes large [e.g., @2012ApJ...757...31B]. Lower temperatures in our models with larger radii and later explosion times could make them more consistent with the observed LCs. Thus, we do not consider the above radius estimate of $10$ [[$\mathrm{R}_{\odot}$]{}]{} as particularly robust, but it does seem unlikely that the progenitor would have had a significantly smaller radius. In the binary modeling by [@2016arXiv160405050E] a radius on the order of 50 [R$_{\odot}$]{} is found for the progenitor of iPTF13bvn. Finally, for PTF12os it is not possible to put any meaningful constraints on the progenitor radius based on a similar analysis, since the early P48 coverage does not start until around +5 d and we only have color information after $+10$ d. We stress the importance of obtaining early-time color information (ideally before +2 d) for constraining the progenitor radii of more SE SNe, following the above recipe or using hydrodynamical calculations. ![image](figures/early_spec_comp-eps-converted-to.pdf){width="18cm"} Quasi-bolometric light curves {#sec:bolometric} ----------------------------- We have constructed quasi-bolometric LCs by integrating the flux in the $Bgriz$ bands.
For these calculations we have corrected for the distance and extinction according to Sect. \[sec:host\] and Sect. \[sec:extinction\]. To determine the integrated fluxes we have interpolated the magnitudes in the other bands to the dates of the $r$-band measurements. We then converted the magnitudes into fluxes for each observed $r$-band epoch at the effective wavelengths of the filters and performed linear interpolation of these data at an even 1 Å spacing, followed by integration between the effective wavelengths of the $B$ and $z$ bands. We show a comparison of the $Bgriz$ quasi-bolometric LCs of PTF12os, iPTF13bvn, and SN 2011dh in Fig. \[fig:bol\_lcs\]. Qualitatively, these LCs are again very similar for these three SNe, with only minor differences, as expected from the very similar multiband photometry. Similar to what was seen for the $g$ band, PTF12os and iPTF13bvn rise to peak in 16–18 d, while SN 2011dh rises to peak in $\sim 20$ d. The widths of the LC peaks also differ slightly, with the LC peak of iPTF13bvn being somewhat narrower, as was also seen in some of the individual LC bands. The bolometric LC peak width correlates with the ejecta mass of the explosion [@1977ApJS...33..515F]; thus, this indicates that the ejecta mass could be slightly lower in iPTF13bvn compared with PTF12os and SN 2011dh, which should have very similar ejecta masses if only the LC peak widths are considered (but see Sect. \[sec:hydro\] for a more comprehensive analysis based on our hydrodynamical models, which also account for the observed expansion velocities). The ejecta mass of SN 2011dh was estimated to be $\sim 2$ [M$_{\odot}$]{}, from previous hydrodynamical modeling . The quasi-bolometric peak luminosities are remarkably similar, with iPTF13bvn peaking at the highest luminosity of $10^{42.05}$ erg s$^{-1}$, SN 2011dh peaking at $10^{42}$ erg s$^{-1}$, and PTF12os at $10^{41.95}$ erg s$^{-1}$. 
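The construction described above (magnitudes converted to fluxes at the filter effective wavelengths, linear interpolation on a 1 Å grid, then integration between the $B$ and $z$ effective wavelengths) can be sketched as follows. The effective wavelengths and flux values here are illustrative placeholders, not measurements from the paper:

```python
import numpy as np

def quasi_bolometric_flux(lam_eff, f_lambda):
    """Integrate F_lambda between the bluest and reddest effective
    wavelengths after linear interpolation on a 1 A grid, following
    the construction described in the text."""
    order = np.argsort(lam_eff)
    lam = np.asarray(lam_eff, dtype=float)[order]
    f = np.asarray(f_lambda, dtype=float)[order]
    grid = np.arange(lam[0], lam[-1] + 1.0, 1.0)   # 1 A spacing
    f_grid = np.interp(grid, lam, f)
    # trapezoidal integration over the interpolated SED
    return np.sum(0.5 * (f_grid[1:] + f_grid[:-1]) * np.diff(grid))

# illustrative Bgriz effective wavelengths (A) and fluxes (erg/s/cm^2/A)
lam_eff = [4353.0, 4770.0, 6231.0, 7625.0, 9134.0]
f_lam = [2.0e-15, 2.2e-15, 1.8e-15, 1.2e-15, 0.8e-15]
print(quasi_bolometric_flux(lam_eff, f_lam))
```

Multiplying the integrated flux by $4\pi d^2$ then gives the quasi-bolometric luminosity at the adopted distance $d$.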
The peak luminosity is strongly correlated with the amount of radioactive nickel ejected in the explosions. Thus, we find that these three SNe should have very similar nickel masses. In , the nickel mass of iPTF13bvn was found to be $\sim 0.05$ [M$_{\odot}$]{}, which is on the lower end of what is observed for typical SNe Ib/IIb. However, we have here adopted a distance modulus that is higher by 0.38 mag. The nickel mass of SN 2011dh is typical for a SN IIb. In the linear decline phase of the quasi-bolometric LCs, starting at approximately +50 d for all three SNe, the decline rates are very similar, when measured as linear (first-order polynomial) fits between +50 d and +70 d. We find $0.019\pm0.001$ mag d$^{-1}$ for iPTF13bvn, $0.019\pm0.002$ mag d$^{-1}$ for PTF12os, and $0.020\pm0.003$ mag d$^{-1}$ for SN 2011dh. After around +80 d, we observe a break in the LC of PTF12os toward a slower decline of $0.012\pm0.001$ mag d$^{-1}$, as measured by a linear fit between +85 d and +110 d. This could indicate a more efficient trapping of $\gamma$-rays in the SN ejecta of PTF12os, as this decline rate is close to the expected decay rate of $\sim 0.01$ mag d$^{-1}$ from the energy deposition of radioactive decay, assuming complete trapping. This behavior is not seen in the LC of SN 2011dh, for which the same decline as measured between +50 d and +70 d continues for a significantly longer time, at least until around +300 d. However, [@2015MNRAS.450.1295W] show that the late-time decline rates ($\gamma$-ray trapping efficiencies) among SE SNe can vary significantly. For iPTF13bvn, our photometric coverage in the $B$ and $z$ bands stops after +70 d, and thus we are not able to comment on the late-time $Bgriz$ quasi-bolometric LC behavior. However, in Sect. \[sec:lc\] we found that the $gri$-band decline rates are very similar for iPTF13bvn and SN 2011dh also at much later times.
When constructing a quasi-bolometric LC of iPTF13bvn using methods similar to ours, [@2014MNRAS.445.1932S] found a rather steep post-peak decline rate of $\sim0.03$ mag d$^{-1}$ between +40 d and +70 d. This is a steeper decline compared with most other SNe IIb/Ib [@2015MNRAS.450.1295W]. However, the decline rate found by [@2014MNRAS.445.1932S] depended heavily on a single bolometric epoch at $\sim$+70 d. Our quasi-bolometric LC of iPTF13bvn shows a slower decline rate, similar to that of SN 2011dh between +50 d and +70 d. Our result is based on multiple epochs of $Bgriz$ data during this phase, of which several are at around +70 d. Finally, we want to point out that although the quality of our photometry has allowed us to scrutinize minor differences in the multiband LCs, color properties, and quasi-bolometric LCs of these three SNe, the bottom line must still be that PTF12os, iPTF13bvn, and SN 2011dh are astonishingly similar photometrically. Spectroscopy {#sec:spectra} ============ In Sect. \[sec:lc\] we showed that PTF12os and iPTF13bvn are photometrically very similar. Only minor differences, both in terms of the filtered LCs and quasi-bolometric LCs, can be identified. Spectroscopically, the picture is similar, although somewhat more complex. PTF12os (Fig. \[fig:spec12os\]) and iPTF13bvn (Fig. \[fig:spec13bvn\]) qualitatively show the same spectral features and evolve in a similar way (see also Fig. \[fig:earlyspec\]). However, the spectra of iPTF13bvn generally show somewhat broader features, indicating faster expansion velocities. The spectra of PTF12os are more affected by host-galaxy contamination from the strong underlying H$\alpha$ region (Sect. \[sec:prog\_12os\]). The very early spectra (until +10 d) of iPTF13bvn are dominated by broad features, indicating high expansion velocities. We can identify emission from the 5876 Å line in the earliest spectrum of the SN obtained at +2.15 d.
Its absorption minimum indicates a velocity of 17,000–18,000 km s$^{-1}$, consistent with what was measured in the spectrum at +2.3 d by . For PTF12os, our spectroscopic coverage starts at +8.5 d, and we can therefore not make a comparison of the very early expansion velocities. However, the 5876 Å line shows a significantly slower velocity of only $7000$ km s$^{-1}$ compared to around 12,000 km s$^{-1}$ for iPTF13bvn in spectra obtained around +8 d. The iPTF13bvn spectrum at +2.15 d shows a significantly bluer continuum compared to the later spectra obtained at +2.3 d and +2.6 d. Blackbody fits to these three very early-time spectra yield temperatures of $8000\pm500$ K at +2.15 d, $7000\pm500$ K at +2.3 d, and $7500\pm500$ K at +2.6 d[^22] (Fig. \[fig:bb\_spec\]). This is consistent with the cooling trend observed in the optical LCs of the SN during this phase (Sect. \[sec:earlyphot\]). Note also that a decline in temperature from around 9000 K to 7000 K is predicted to occur between +2 d and +3 d in the semianalytical model for the early-time photometry we used to estimate the radius of the progenitor of this SN (Sect. \[sec:earlyphot\] and Fig. \[fig:early\_lc\_fit\]). Based on spectral comparisons with other SNe Ib, iPTF13bvn has been classified as a spectroscopically normal SN Ib both in the optical and the NIR . The emergence of $\lambda\lambda$5016, 5876, 6678, and 7065 absorption became clear after $\sim+15$ d. The velocities of the $\lambda\lambda$5876, 6678, 7065 lines along with the velocity of $\lambda$5169, based on the respective absorption minima, were measured by . PTF12os shows very similar evolution in the lines. In spectra taken after $+16$ d, the $\lambda\lambda$5016, 5876, 7065 lines can be clearly identified. $\lambda$6678 is difficult to distinguish in early-time spectra owing to strong host contamination, but it can be seen in our spectra taken between +35 d and +53 d.
Another minor difference between these SNe can be seen in the $\lambda$5016 line, which is clearly seen even in our earliest spectrum of PTF12os (taken at $+8.5$ d), while it appears somewhat later, after $+11$ d, in iPTF13bvn. A similar effect is visible in $\lambda$5169; after +10 d this line gradually starts to appear in iPTF13bvn, while it is already there in our earliest spectrum (at +8.5 d) of PTF12os. We have determined the $\lambda\lambda$5016, 5876, 6678, 7065 and $\lambda$5169 velocities in all of the spectra where measurements were possible for both PTF12os and iPTF13bvn. This also included remeasuring velocities in previously published spectra, and it was done by fitting Gaussians in small regions around the absorption minima of the lines and locating the minima in the fits. In some cases, where the galaxy contamination is very strong and reasonable Gaussian fits are not possible, the centers of the absorption features were measured by manual inspection of the spectra. While we attempted velocity measurements of the $\lambda$5016 absorption, we found that it is very contaminated in most spectra of both PTF12os and iPTF13bvn; thus, we have chosen to not include this line in our comparisons. We present the absorption-line velocity evolution of PTF12os and iPTF13bvn in Fig. \[fig:vel\_evol\]. ![Early-time spectra of iPTF13bvn obtained at +2.15 d (red) and +2.3 d (gray) together with blackbody fits (dashed lines, 8000 K in red and 7000 K in gray). The spectra have been normalized by their median values in a region around 6000 Å.[]{data-label="fig:bb_spec"}](figures/bb_fit_early_spec-eps-converted-to.pdf){width="9cm"} ![image](figures/12os_spec_evol_velocities_zoom3-eps-converted-to.pdf){width="18cm"} As an estimate for the photospheric expansion velocity, which is employed in our hydrodynamical modeling in Sect. 
\[sec:hydro\], we use the $\lambda$5169 line [see @Dessart:aa], which evolves from $\sim 10,000$ km s$^{-1}$ at +10 d to 5800 km s$^{-1}$ at +50 d in iPTF13bvn, and from $\sim 8000$ km s$^{-1}$ at +10 d to 4500 km s$^{-1}$ at +50 d in PTF12os. While PTF12os shows on average slower photospheric velocities, the temporal evolution of the velocities (based on this line) is extremely similar, and the decrease in the expansion velocity is identical within our uncertainties between +10 d and +25 d (see the dashed lines in Fig. \[fig:vel\_evol\]). Similar velocity differences as for $\lambda$5169 can be seen in the lines, with iPTF13bvn showing on average faster velocities compared to PTF12os. We also note that the velocity evolution of the lines in PTF12os appears relatively flat, with only the $\lambda$6678 showing a small but statistically significant decrease in velocity during the evolution of the SN[^23]. Note also that our spectral sequence starts at +8.5 d, and if the same time span were considered for iPTF13bvn, the velocity evolution would look similar. Earlier spectra of PTF12os would have been needed to investigate any constant-velocity evolution in the lines, as has been suggested for some peculiar SNe IIb by [@2014ApJ...792....7F]. Past +50 d, \[\] $\lambda\lambda$5577, 6300, 6364 emission lines along with the $\lambda$7774 triplet start to emerge in both PTF12os and iPTF13bvn, indicating that their ejecta are gradually becoming optically thin. In our latest spectrum of PTF12os, obtained at +215 d, weak signatures are still present, but the spectrum has become dominated by \[\] $\lambda\lambda$6300, 6364 and \[\] $\lambda\lambda$7291, 7323 emission. The \] $\lambda$4571 emission line is also present. In our late-time spectra past +250 d of iPTF13bvn, the picture is very similar. The \[\] $\lambda\lambda$6300, 6364 and \[\] $\lambda\lambda$7291, 7323 doublets dominate.
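The absorption-line velocities discussed above are measured by fitting Gaussians in small regions around the absorption minima and converting the fitted minimum to a Doppler velocity. A hedged sketch of this procedure on a synthetic trough (the line profile, wavelengths, and fit window below are illustrative, not data from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light in km/s

def absorption_velocity(wave, flux, lam_rest):
    """Fit an inverted Gaussian to an absorption trough and convert
    the fitted minimum to a blueshift velocity, mirroring the
    measurement procedure described in the text."""
    def gauss(x, a, mu, sig, c0):
        return c0 - a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    p0 = [flux.max() - flux.min(), wave[np.argmin(flux)], 30.0, flux.max()]
    popt, _ = curve_fit(gauss, wave, flux, p0=p0)
    return C_KMS * (lam_rest - popt[1]) / lam_rest

# synthetic 5876 A trough blueshifted to 5660 A (~11,000 km/s)
wave = np.linspace(5500.0, 5850.0, 200)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5660.0) / 40.0) ** 2)
print(absorption_velocity(wave, flux, 5876.0))
```

When strong host contamination prevents a reasonable Gaussian fit, the text notes that the absorption minima were instead located by manual inspection.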
Weak \] $\lambda$4571 emission can also be identified, although there is significant noise in this region. The \[\] lines started to emerge after +36 d in iPTF13bvn and after +45 d in PTF12os. In conclusion, the entire spectral evolution in the optical of PTF12os and iPTF13bvn is very similar, consistent with the comparable photometry (Sect. \[sec:lc\]). We will investigate in Sect. \[sec:nebular\] the late-time spectra in detail using nebular models. Near-infrared spectroscopy of iPTF13bvn {#sec:13bvnnir} --------------------------------------- In addition to the early-time NIR spectra (+3 d, +9 d, and +17 d) published by [@2013ApJ...775L...7C], we have obtained one later NIR spectrum at +79 d with the Magellan telescope in Chile. The NIR spectral sequence, along with comparisons to SN 2011dh and SN 2008ax [Type IIb; @2008MNRAS.389..955P; @2011MNRAS.413.2140T], are shown in Fig. \[fig:spec13bvnir\]. In our NIR spectrum of iPTF13bvn obtained at +79 d, in addition to the $\lambda$10,830 and $\lambda$20,590 lines previously identified by [@2013ApJ...775L...7C], emission features around 12,000 Å and 15,000 Å have emerged. These are likely signatures of emission from $\lambda$11,970 or $\lambda$12,000 and a blend of $\lambda\lambda$15,040, 15,048. These identifications are based on the modeling of the NIR spectra of SN 2008ax at +130 d by . Figure \[fig:spec13bvnir\] clearly shows that the spectral evolution in the NIR regime is very similar for iPTF13bvn and SN 2008ax. In the analysis by , $\lambda$12,000 from explosively synthesized silicon is preferred over $\lambda$11,970 for the line emerging at 12,000 Å. We also estimate that the velocities have decreased from $\sim 20,000$ km s$^{-1}$ in the early-time spectra to $\sim 15,000$ km s$^{-1}$ at +79 d, measured from the absorption minimum of the $\lambda$10,830 line. 
Early-time NIR spectra of SN 2011dh show strong signatures of hydrogen, as seen clearly by the presence of emission features at 12,820 Å and 10,050 Å (Fig. \[fig:spec13bvnir\]). is blended with strong emission of $\lambda$10,830. Note also that we do not have data for SN 2011dh in the region of owing to telluric absorption. For iPTF13bvn, no clear signatures of hydrogen can be seen. However, compared to an estimate of the continuum, found by heavily smoothing the spectra, there is a possible excess in both the +3 d and +9 d spectra around the line (Fig. \[fig:spec13bvnir\]). Furthermore, the NIR spectrum of the Type IIb SN 2008ax [+10 d; @2011MNRAS.413.2140T] is very similar to the spectrum of iPTF13bvn at +9 d, especially in the regions around the Paschen lines. This indicates that the observed NIR spectra of iPTF13bvn do not exclude the possibility of the presence of a small amount of hydrogen at early times. The line falls in a region with strong telluric absorption in all our NIR spectra of iPTF13bvn, so we are not able to draw conclusions about the presence of potential broad emission. We discuss the presence of hydrogen in the optical spectra of both iPTF13bvn and PTF12os in the following section. Hydrogen in early spectra {#sec:hydrogen} ------------------------- The similarity of PTF12os to iPTF13bvn would logically lead to a Type Ib classification for PTF12os. However, in our early-time spectra of PTF12os we can identify likely signatures of H$\alpha$, H$\beta$, and H$\gamma$ in absorption at $\sim 14,000$ km s$^{-1}$. This is illustrated by the dashed green lines in Fig. \[fig:spec12os\] and also by the dashed lines in Fig. \[fig:earlyspec\], where we illustrate a comparison of early-time spectra of PTF12os, iPTF13bvn, and SN 2011dh. The spectrum of PTF12os at +19 d is very similar to spectra of the Type IIb SN 2011dh obtained at similar epochs (see also the bottom panel of Fig. \[fig:spec12os\]).
In iPTF13bvn, no signatures of H$\beta$ or H$\gamma$ can be identified, but the spectra between +10 d and +25 d could still be consistent with the presence of H$\alpha$ since the spectral evolution is very similar to that of PTF12os, especially in this region; the absorption feature around 6300 Å is comparable in strength and it disappears after $\sim$+25 d in both SNe (see Fig. \[fig:halphahbeta\]). For PTF12os, based on the comparison to SN 2011dh, it appears very likely that this feature is due to hydrogen, and thus it could be argued that it should be the case also for iPTF13bvn. The early-time NIR spectra of iPTF13bvn (Fig. \[fig:spec13bvnir\]) are also very similar to NIR spectra of the Type IIb SN 2008ax around the Paschen lines, so these data do not exclude the presence of a hydrogen envelope, as already discussed in Sect. \[sec:13bvnnir\]. The lack of H$\beta$ (and H$\gamma$) signatures in iPTF13bvn may possibly be explained by the higher expansion velocities, making the lines more difficult to identify. In spectra of SN 2011dh, a clear P-Cygni signature from H$\alpha$ at $\sim 6300$ Å is seen in spectra obtained after +4 d. The strength of this feature compared to the continuum reached a maximum at around +25 d, whereafter it gradually weakened and disappeared after $\sim$+90 d, leading to the Type IIb classification. Spectral signatures of H$\beta$ and H$\gamma$ with similar behaviors could also be identified. Monte-Carlo-based spectral modeling was used by to estimate the mass of the hydrogen envelope of the progenitor of SN 2011dh as 0.01–0.04 [[$\mathrm{M}_{\odot}$]{}]{}, in agreement with the non-local thermodynamic equilibrium modeling by [@2011ApJ...742L..18A], who estimate the hydrogen mass in the envelope of SN 2011dh to be $0.024$ [M$_{\odot}$]{}. Based on the sequence shown in Fig. \[fig:earlyspec\], it is clear that much stronger hydrogen features developed in SN 2011dh compared with both PTF12os and iPTF13bvn. 
The absorption feature at 6300 Å is there at +8.5 d in PTF12os (our earliest spectrum of this SN), and it becomes clear at around +10 d in iPTF13bvn. In both SNe, the absorption disappears after around +25 d (compared to $\sim$+90 d in SN 2011dh). One interpretation of this spectral behavior is that we are seeing the signature of a hydrogen envelope around the progenitor, just as what is seen in SNe IIb such as SN 2011dh. If we assume that this absorption is caused by the H$\alpha$ line, we can estimate the expansion velocity of the hydrogen envelope by measuring the position of the absorption minimum using Gaussian fits. We find that H$\alpha$ in iPTF13bvn would be at $\sim 16,000$ km s$^{-1}$ at +10 d, about 2000 km s$^{-1}$ faster compared to PTF12os (left panel of Fig. \[fig:halphahbeta\] and also Fig. \[fig:vel\_evol\] for the velocity measurements). We have also investigated the region where we would expect H$\beta$ (right panel of Fig. \[fig:halphahbeta\]), and we find that for PTF12os there is absorption at $\sim 13,000$ km s$^{-1}$ in several of the early spectra before +25 d. The velocity evolutions of the H$\alpha$ and H$\beta$ lines are consistent with what was observed in SN 2011dh, except that PTF12os and especially iPTF13bvn exhibit higher velocities. Faster expansion velocities are expected, since the mass of the hydrogen envelopes appear to be smaller in PTF12os and iPTF13bvn compared with SN 2011dh, given the weaker spectral signatures. In the modeling effort by [@2012MNRAS.422...70H], it was found that as little as $0.024$ [M$_{\odot}$]{} of hydrogen in the envelope is enough for H$\beta$ (and a strong signal from H$\alpha$) to show up in the spectra of a SN IIb/Ib; thus, a hydrogen envelope mass upper limit of $\lesssim 0.02$ [[$\mathrm{M}_{\odot}$]{}]{} is a reasonable estimate for PTF12os (but probably rather high, given the very similar estimate for SN 2011dh, which shows much more long-lasting hydrogen signatures). 
For iPTF13bvn the hydrogen mass is likely lower, since we see no signs of H$\beta$. However, the higher velocities in combination with the fact that the H$\alpha$ absorption disappears at the same time as for PTF12os could be seen as an argument for at least a comparable hydrogen mass, since the emission from the same amount of hydrogen as in PTF12os should disappear earlier for a higher expansion velocity, assuming that the other physical parameters of the explosion are similar. Detailed physical modeling of the spectra of iPTF13bvn is needed to put a realistic upper limit on the hydrogen mass, especially since the modeling by [@2011MNRAS.414.2985D] found that even with an envelope mass as low as $0.001$ [M$_{\odot}$]{}, H$\alpha$ will still be present at +15 d — a very small amount of hydrogen is needed to produce a weak spectral signature at early times. In the spectral analysis of SE SNe by [@2015arXiv150506645P], it is also suggested that several other SNe Ib in the literature besides iPTF13bvn show weak signatures of H$\alpha$ in absorption at early times. This conclusion is also consistent with predictions from binary evolution [e.g., @2015PASA...32...15Y], where it is found that a small amount of hydrogen can remain in some cases for models otherwise tuned to produce SNe Ib. [@2013ApJ...767...71M] suggested that the Type IIb vs. Ib classification is time dependent. If the first spectra of PTF12os and iPTF13bvn had been obtained after +25 d, these objects would simply have been classified as SNe Ib. With knowledge of the early-time spectral evolution they instead appear very similar to the Type IIb SN 2011dh, although likely with a smaller amount of hydrogen. In the Type IIb SN 2008ax, the hydrogen signatures disappeared after $\sim$+45 d. There could indeed be a continuum between SNe IIb and SNe Ib, with SE SNe showing varying degrees of stripping of the progenitor. 
This would be in contrast with the previously suggested bimodal picture in which SNe IIb and SNe Ib come from different progenitor channels. However, recently [@2015arXiv151008049L] analyzed a large sample of SN IIb and SN Ib spectra, claiming that the strength of the H$\alpha$ absorption, as measured using the pseudoequivalent width (pEW), shows a bimodal distribution with a clear separation between SNe IIb and SNe Ib at all epochs, with SNe IIb showing stronger absorption. [@2015arXiv151008049L] also find that iPTF13bvn seems to be an outlier; in terms of the strength of the H$\alpha$ absorption it is comparable to the SNe IIb that exhibit the weakest hydrogen signatures. In Fig. \[fig:halphahbeta\] it is clear that the strength of H$\alpha$ is also similar for PTF12os at all epochs. Thus, it might indeed be that iPTF13bvn is a special case, and should perhaps be considered a SN IIb. More early-time spectra of SNe IIb/Ib, with well-determined explosion dates from early photometry, are needed to further investigate the nature of the hydrogen envelopes of their progenitors.
![Spectral evolution in velocity space around H$\alpha$ (left panel) and H$\beta$ (right panel) of iPTF13bvn (thick black line) and PTF12os (thin dark blue line).[]{data-label="fig:halphahbeta"}](figures/12os_spec_evol_hydrogen-eps-converted-to.pdf "fig:"){width="4cm"}![Spectral evolution in velocity space around H$\alpha$ (left panel) and H$\beta$ (right panel) of iPTF13bvn (thick black line) and PTF12os (thin dark blue line).[]{data-label="fig:halphahbeta"}](figures/12os_spec_evol_hydrogen_beta-eps-converted-to.pdf "fig:"){width="4cm"} ![image](figures/12os_13bvn_nebular_spec_evol_v2-eps-converted-to.pdf){width="18cm"}

Nebular spectroscopy and oxygen-mass constraints {#sec:nebular}
------------------------------------------------

Figure \[fig:neb\] shows a comparison of late-time spectra of PTF12os (around +215 d) and iPTF13bvn (at +251 d, +345 d, and +346 d), along with fits based on the nebular models by for SNe IIb. The predicted luminosity of the \[O I\] $\lambda\lambda$6300, 6364 doublet found by was previously used by to constrain the ZAMS mass of iPTF13bvn. This was based on nebular $r$-band photometry under the assumption that the $r$-band flux is dominated by the \[O I\] $\lambda\lambda$6300, 6364 doublet. An upper limit of $\sim 17$ [[$\mathrm{M}_{\odot}$]{}]{} on the ZAMS mass of iPTF13bvn was derived. Since then, have arrived at a similar limit on the ZAMS mass using independent nebular spectroscopy obtained at around +300 d. We find that our spectra obtained between +251 d and +346 d of iPTF13bvn give even stricter constraints. The same model that was found to be optimal for SN 2011dh produces good fits to the observed spectra of iPTF13bvn at these epochs; in particular, the \[O I\] $\lambda\lambda$6300, 6364 doublet is very well reproduced (see the middle and bottom panels of Fig. \[fig:neb\]). In this model the oxygen mass is $\sim 0.3$ [M$_{\odot}$]{}, which translates into a ZAMS mass of 12 [M$_{\odot}$]{}.
The Mg I\] $\lambda$4571 line is also present in our spectrum at +251 d, and the 12 [M$_{\odot}$]{} model roughly reproduces this line as well. To produce the model fits for iPTF13bvn we have flux-calibrated our nebular spectra using the extinction-corrected $r$-band flux, scaled the model by the nickel mass derived for iPTF13bvn (0.072 [M$_{\odot}$]{}; see Sect. \[sec:hydro\]), and scaled the model in time to match the epoch of our spectra according to the prescriptions of . We note that in the binary model for iPTF13bvn by [@2014AJ....148...68B], a relatively low oxygen mass is predicted, even though an initial mass of 20 [M$_{\odot}$]{} for the SN progenitor star is assumed. In a single-star setting, an oxygen mass around 2 [M$_{\odot}$]{} would be expected for such a star. However, in the model by [@2014AJ....148...68B], the mass transfer to the companion starts during core hydrogen burning, which results in a reduced oxygen mass in the core of the SN progenitor. Thus, our ZAMS estimate of 12 [M$_{\odot}$]{} should be treated with some caution, since the models by are based on the single-star nucleosynthesis models of [@2007PhR...442..269W], and binary models can arrive at a core with similar properties to our best-fitting model at the time of explosion for larger initial masses [see, e.g., @2015MNRAS.446.2689E; @Yoon:2010aa]. On the other hand, [@2016arXiv160405050E] have recently constructed a binary model for the iPTF13bvn system based on both pre-explosion and late-time (+740 d) *HST* data, where the progenitor mass of the SN is constrained to 10–12 [M$_{\odot}$]{}, which is in excellent agreement with the result from our nebular modeling. Following the same procedure as above, we have attempted to fit the same nebular model to PTF12os as for iPTF13bvn, but now scaled to the nickel mass for PTF12os (0.063 [M$_{\odot}$]{}; see Sect. \[sec:hydro\]).
However, this model does not produce sufficiently strong emission lines for this SN (see the gray line in the top panel of Fig. \[fig:neb\]). A model with a ZAMS mass of 13 [M$_{\odot}$]{} and otherwise identical parameters to the model used for iPTF13bvn gives decent matches to both the quasicontinuum and the emission lines of PTF12os (gray line in the top panel of Fig. \[fig:neb\]). On the other hand, the \[O I\] $\lambda\lambda$6300, 6364 doublet is still too weak. In Sect. \[sec:bolometric\] we found that while the quasibolometric LC peak is lower for PTF12os, the LC is declining more slowly compared to SN 2011dh past +80 d, and at +105 d the quasibolometric luminosity of both SNe is almost the same. At later times we lack coverage in the $B$ band; however, the fluxes in the $gri$ bands are almost the same in PTF12os at +215 d as in SN 2011dh. The slower decline of PTF12os implies more-efficient $\gamma$-ray trapping compared to SN 2011dh (and iPTF13bvn). If the nebular model were adjusted accordingly, the lines should be stronger for a lower nickel mass at late times. One way to achieve more-efficient $\gamma$-ray trapping is to simply increase the ZAMS mass of the model, resulting in increased ejecta and oxygen masses. In Fig. \[fig:neb\] we also show a model for a star with a ZAMS mass of 17 [M$_{\odot}$]{}. The \[O I\] $\lambda\lambda$6300, 6364 lines are significantly too strong in this model, and the ratio between this doublet and the \[Ca II\] $\lambda\lambda$7291, 7323 doublet is significantly off compared to the observation. Thus, it appears that a ZAMS mass of 17 [M$_{\odot}$]{} is too large for PTF12os. If we interpolate the model spectra as a function of the oxygen mass (using models 12C, 13G, and 17A of ), we find that the \[O I\] $\lambda\lambda$6300, 6364 lines are very well reproduced by a model with an oxygen mass of $\sim 0.8$ [M$_{\odot}$]{} or a ZAMS mass of 15 [M$_{\odot}$]{} (see the green line in the top panel of Fig. \[fig:neb\]).
We note that the core mass of a $15$ [M$_{\odot}$]{} model is still roughly consistent with the limits on the ejecta mass derived from our hydrodynamical model of PTF12os (Sect. \[sec:hydro\]). In terms of luminosity, the \[O I\] $\lambda\lambda$6300, 6364 doublet luminosity normalized by the energy released by the decaying nickel mass (0.063 [M$_{\odot}$]{}) is also significantly lower at both +140 d and +215 d in PTF12os than what is predicted for the model with a ZAMS mass of 17 [M$_{\odot}$]{}. We measure $L_{\mathrm{Norm}}\approx0.017$ at +140 d and $L_{\mathrm{Norm}}\approx0.026$ at +215 d, following the description of . These numbers are roughly comparable to what has been measured for SN 2008ax and SN 1993J at similar epochs. SN 2008ax is well matched by a model with a ZAMS mass of 13 [M$_{\odot}$]{}. We have also compared the models presented above to nebular models of iPTF13bvn (for +346 d) and PTF12os (for +215 d) calculated with the spectral code by [@2007ApJ...661..892M; @2010MNRAS.408...87M]. The results from these models are also consistent with low-mass ZAMS stars ($<17$ [M$_{\odot}$]{}) as the progenitors of these SNe. The best-fitting model of PTF12os has an oxygen mass of $0.73$ [M$_{\odot}$]{}, and the model of iPTF13bvn has an oxygen mass of $0.2$ [M$_{\odot}$]{}. In conclusion, we arrive at an upper limit of 0.3 [M$_{\odot}$]{} for the oxygen mass of iPTF13bvn and 0.8 [M$_{\odot}$]{} for PTF12os. This translates into an upper limit on the ZAMS mass of 12 [M$_{\odot}$]{} for iPTF13bvn and 15 [M$_{\odot}$]{} for PTF12os for single-star ZAMS models.

Hydrodynamical light-curve modeling and progenitor constraints {#sec:hydro}
==============================================================

Following our earlier work on iPTF13bvn, we have constructed a similar hydrodynamical model for PTF12os.
As input data we use the multiband $UVM2$[^24], $UB$, and $griz$ LCs of PTF12os, a rough explosion date estimate based on an exponential fit to the early-time $r$-band LC as an initial guess, and the measured photospheric velocities (Sect. \[sec:spectra\]), together with a grid of SN models constructed with the hydrodynamical code [hyde]{}. This is a one-dimensional code based on flux-limited diffusion, which follows the framework outlined by [@1977ApJS...33..515F]. To model PTF12os we have generated a new model grid, based on bare cores evolved to the verge of core collapse using [MESA]{} [@2010ascl.soft10083P], and exploded with the latest version of the [hyde]{} code. To allow direct comparisons we have also redone the fitting of iPTF13bvn and SN 2011dh to this same model grid. For the IR and UV regions, at epochs where we lack coverage in our datasets on PTF12os and iPTF13bvn, we use bolometric corrections derived from the bolometric LC of SN 2011dh. When we perform the fits to our model grid, we place equal weights on the diffusion phase of the LC ($+1$ d to $+40$ d), the early tail of the LC ($+40$ d to $+100$ d), and the photospheric velocity measurements (which are fitted up to $+40$ d only). From the best-fitting models, shown in the bottom panel of Fig. \[fig:bol\_lcs\] along with the estimated bolometric LCs, we derive the explosion parameters for PTF12os, iPTF13bvn, and SN 2011dh shown in Table \[tab:hydro\]. The results for both iPTF13bvn and SN 2011dh are consistent with previous findings. For PTF12os we derive a compact He core of $3.25^{+0.77}_{-0.56}$ [M$_{\odot}$]{}, which corresponds to an ejecta mass of $1.9^{+0.77}_{-0.56}$ [M$_{\odot}$]{} assuming that a 1.4 [M$_{\odot}$]{} compact remnant remained at the center of the explosion.
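The ejecta-mass bookkeeping above can be checked directly. The sketch below assumes, as stated, a fixed 1.4 [M$_{\odot}$]{} remnant with no uncertainty of its own, so the asymmetric He-core errors carry over to the ejecta mass unchanged:

```python
# Ejecta mass from a fitted He-core mass, assuming a fixed 1.4 Msun
# compact remnant; the remnant mass is treated as exact, so the
# asymmetric He-core uncertainties propagate to the ejecta mass unchanged.
M_REMNANT = 1.4  # [Msun]

def ejecta_mass(m_he_core, err_up, err_down, m_remnant=M_REMNANT):
    return m_he_core - m_remnant, err_up, err_down

# PTF12os: He core 3.25 (+0.77/-0.56) Msun -> ejecta ~1.9 (+0.77/-0.56) Msun
m_ej, up, down = ejecta_mass(3.25, 0.77, 0.56)
```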
We also find that $0.063^{+0.020}_{-0.011}$ [M$_{\odot}$]{} of $^{56}$Ni was synthesized in the explosion, which had a total kinetic energy of $0.54^{+0.41}_{-0.25}\times10^{51}$ erg, and that the nickel needs to be strongly mixed. If we let the explosion time vary by $\pm2.5$ d around the initial guess, we find the explosion epoch of the best-fitting model to be $t_{\rm exp}=2,455,933.0$ (JD), or 2012 Jan. 6.5 (UT). The errors reported here and in Table \[tab:hydro\] are propagated from the uncertainties in the observed quantities, but we do not take the degeneracy between the explosion parameters (e.g., ejecta mass vs. explosion energy) into account when determining the errors. Our models for all three SNe are qualitatively well fitted to the observed datasets, including both the overall LCs and the photospheric velocities (see Fig. \[fig:vel\_evol\] for the photospheric-velocity fits for PTF12os and iPTF13bvn). In agreement with the discussion of the quasibolometric LCs in Sect. \[sec:lc\], we find that iPTF13bvn and SN 2011dh have the highest nickel masses (0.072 and 0.075 [M$_{\odot}$]{}), followed by PTF12os (0.063 [M$_{\odot}$]{}). In terms of the ejecta mass, we find that PTF12os has the highest upper limit on the ejecta mass at 2.6 [M$_{\odot}$]{} and SN 2011dh the lowest at 2.4 [M$_{\odot}$]{} (see Table \[tab:hydro\]). However, within the uncertainties the differences are not significant. We do not find a clearly lower ejecta mass for iPTF13bvn, although the width of the bolometric LC peak is narrower. This is because the expansion velocities are higher: a higher photospheric expansion velocity leads to a faster evolution of the LC around peak for the same ejecta mass. There is also some indication that iPTF13bvn could have a higher explosion energy compared to SN 2011dh and PTF12os, but within the errors the difference is not significant.
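The statement that higher photospheric velocities speed up the LC evolution at fixed ejecta mass can be illustrated with an Arnett-style effective diffusion timescale, $\tau_m = \sqrt{2\kappa M_{\rm ej}/(\beta c v_{\rm sc})}$ with $\beta \approx 13.8$. This is not the [hyde]{} model itself; the opacity and velocities below are assumed, illustrative values:

```python
import math

# Arnett-style effective diffusion timescale for homologously expanding
# ejecta: tau_m = sqrt(2 * kappa * M_ej / (beta * c * v_sc)),
# with beta ~ 13.8 (Arnett 1982). kappa is an assumed mean opacity.
MSUN_G = 1.989e33  # solar mass [g]
C_CMS = 2.998e10   # speed of light [cm/s]
BETA = 13.8        # dimensionless constant of the Arnett solution
KAPPA = 0.07       # assumed mean optical opacity [cm^2/g]

def diffusion_timescale_days(m_ej_msun, v_sc_kms, kappa=KAPPA):
    m = m_ej_msun * MSUN_G
    v = v_sc_kms * 1.0e5  # km/s -> cm/s
    tau_s = math.sqrt(2.0 * kappa * m / (BETA * C_CMS * v))
    return tau_s / 86400.0

# Same ejecta mass, higher scale velocity -> faster (narrower) LC peak.
tau_slow = diffusion_timescale_days(2.0, 7000.0)
tau_fast = diffusion_timescale_days(2.0, 9000.0)
```

Since $\tau_m \propto v_{\rm sc}^{-1/2}$ at fixed $M_{\rm ej}$ and $\kappa$, the faster ejecta of iPTF13bvn narrow its LC peak without requiring a lower ejecta mass.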
Finally, past +70 d, the observed bolometric LC of PTF12os gradually starts to show a somewhat slower decline than what is seen in our best-fitting model. Models with slower late-time declines (higher ejecta mass, lower energy, or lower velocities) fit the rest of the data less well. Similar to what we found when analyzing the quasibolometric LCs in Sect. \[sec:bolometric\], this again indicates that the $\gamma$-ray trapping could be more efficient in PTF12os compared to iPTF13bvn and SN 2011dh.

[lllcc]{} SN 2011dh & $3.31^{+0.53}_{-0.57}$& $0.64^{+0.38}_{-0.31}$ & $0.075^{+0.028}_{-0.020}$& $1.05^{+0.08}_{-0.00}$\ & & & &\ PTF12os& $3.25^{+0.77}_{-0.56}$& $0.54^{+0.41}_{-0.25}$ & $0.063^{+0.020}_{-0.011}$ & $1.55^{+0.07}_{-0.16}$\ & & & &\ iPTF13bvn & $3.38^{+0.57}_{-0.50}$& $0.94^{+0.63}_{-0.43}$ & $0.072^{+0.024}_{-0.016}$ & $1.28^{+0.46}_{-0.00}$\

Discussion, Summary, and Conclusions {#sec:conclusions}
====================================

Our observational data on PTF12os and iPTF13bvn show that these two SNe are SE SNe, with explosion parameters typical of Type IIb or Ib SNe. Compared to the Type IIb SN 2011dh, the multiband light curves, the bolometric light curves, and the spectral evolution of these three SNe are qualitatively very similar. This is reflected in the output when we fit these SNe to our hydrodynamical model grid (Sect. \[sec:hydro\]). We classify PTF12os as a SN IIb, with only a small part ($\lesssim0.02$ [M$_{\odot}$]{}) of the hydrogen envelope of the progenitor star remaining prior to the explosion. This conclusion is based on the presence of absorption from several of the Balmer lines in our spectral sequence (Fig. \[fig:spec12os\]), and on the temporal evolution of the H$\alpha$ line. Compared to the sample studied by [@2015arXiv151008049L], the velocity of the H$\alpha$ absorption in PTF12os is on the higher end of the SN IIb sample, whereas the pEW of this absorption feature is on the lower end.
Compared to typical SNe IIb such as SN 2011dh, the signature from hydrogen is weaker in the spectra of PTF12os (Fig. \[fig:earlyspec\]), and it disappeared quite early at around +25 d. Signatures of H$\alpha$ lingered in the spectra of SN 2011dh until approximately +90 d. All of these measurements indicate a lower mass of hydrogen in the envelope surrounding the progenitor of PTF12os compared to the progenitor of SN 2011dh. However, detailed modeling is needed to quantify the hydrogen envelope mass. For iPTF13bvn the pEW of the H$\alpha$ absorption is close to that of PTF12os and it evolves in a similar way. This means that iPTF13bvn is among the SNe Ib with the largest pEW values in the sample studied by [@2015arXiv151008049L], comparable in pEW to the lower end of the SN IIb sample where PTF12os is also found. It is not completely unreasonable to assume that a similar amount of hydrogen was present in an envelope surrounding the progenitor of iPTF13bvn as for the progenitor of PTF12os. A similar mass of hydrogen as in PTF12os for iPTF13bvn would likely result in a radius of the progenitor prior to explosion that is significantly larger than the value of just a few solar radii reported by [@2013ApJ...775L...7C] by fitting the equations of [@Piro:2012aa] to the early $r$-band LC. When we fitted the temperature, the early bolometric LC, and the early $gri$ LCs of iPTF13bvn simultaneously using the same semianalytical framework, with the explosion parameters from our hydrodynamical model of iPTF13bvn as input, this resulted in a best-fitting radius for the progenitor on the order of 10 [R$_{\odot}$]{}. Hydrodynamical modeling by [@2014AJ....148...68B] is also consistent with a significantly larger radius for iPTF13bvn. 
The binary models by [@Yoon:2010aa] show that it is likely that some SE SNe, with progenitors very similar to those of hydrogen-free SNe Ib formed in binary systems, will end up with a small part of the hydrogen envelope remaining, and that these should spectroscopically be very similar to SNe IIb. It could be that iPTF13bvn is one of these atypical SNe Ib, and in that case it should perhaps be considered to be a SN IIb. It was also pointed out that iPTF13bvn is a clear outlier in terms of the H$\alpha$ absorption strength in the sample studied by [@2015arXiv151008049L]. However, we should note that we do not see any sign of the Balmer lines in the spectra of iPTF13bvn other than possibly H$\alpha$. Regarding our hydrodynamical modeling, there is always a concern that the ejecta masses of SNe IIb/Ib could be underestimated from the LC-peak widths, owing to significant fallback onto a central black hole. However, in such a picture it becomes difficult to explain the oxygen masses derived from the late-time \[O I\] $\lambda\lambda$6300, 6364 emission lines. If there is significant fallback, most of the oxygen should fall back into the black hole, but we find that the late-time oxygen emission of both PTF12os and iPTF13bvn is very consistent with low-ZAMS-mass stars. The rest of the explosion parameters we find are also very consistent with those of other SNe IIb and SNe Ib in the literature. Our study of the metallicity in the vicinity of PTF12os and iPTF13bvn shows that both of these SNe exploded in regions of comparable metallicity, with oxygen abundance very close to the solar value. A solar metallicity is consistent with the models by [@Yoon:2010aa] that predict a small amount of hydrogen remaining around some SNe Ib. [@2016arXiv160405050E] have also constructed a binary model for the iPTF13bvn system based on pre-explosion and late-time (+740 d) *HST* data, where the progenitor of the SN is constrained to a mass range of 10–12 [M$_{\odot}$]{}.
This is very similar to the mass-constraint from our nebular modeling, and thus it seems that a very consistent picture of this SN is emerging. These *HST* observations were likely still dominated by light from the SN. However, it is possible that in the near future the first direct detection of the light from a SN Ib binary companion will be possible via another set of *HST* observations. Finally, we stress again that the overall appearance of PTF12os and iPTF13bvn is very similar indeed. Investigating two such SE SNe in the same galaxy has allowed a detailed comparison of subtle differences, but generically both the LCs and the spectral evolution are astonishingly similar (and also to the well-studied SN 2011dh). We note that observations of polarization for both SN 2011dh [@2015MNRAS.453.4467M] and iPTF13bvn [@2015arXiv151002492R] have suggested significant asymmetry and line-of-sight effects. In terms of the overall observables for these SNe, such effects are not dominant. We gratefully acknowledge support from the Knut and Alice Wallenberg Foundation. The Oskar Klein Centre is funded by the Swedish Research Council. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We acknowledge help from Rob Fesen, Craig Wheeler, and Shazrene Mohamed with the SALT data, as well as Jeffrey Silverman for help with the Keck data. We thank Alastair Bruce for assistance with the WHT observing and data reduction. We especially thank Shri Kulkarni, Mansi Kasliwal, Yi Cao, Anna-Lisa de Cia, Jerod Parrent, Assaf Horesh, Tom Matheson, Melissa Graham, Dan Perley, Eric Bellm, Ofer Yaron, Yen-Chen Pan and Kelsey Clubb for help with observations and/or reductions within the PTF effort. 
We also acknowledge observers, organizers, and data reducers who participated in the BLP 2012P campaign, in particular Andrea Pastorello, Max Stritzinger, Cosimo Inserra, Flora Cellier-Holzem, Luca Borsato, Valerio Nascimbeni, Stefano Benetti, and Stefan Taubenberger. We thank Doug Leonard for discussions regarding complementary MLO data. This work is partly based on observations obtained with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain. We acknowledge the exceptional support we received from the NOT staff throughout this campaign. Also based in part on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof[i]{}sica de Canarias, in the island of La Palma. This work is based in part on observations from the LCOGT network. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. Research at Lick Observatory is partially supported by a generous gift from Google. CAFOS, AFOSC, and EFOSC2 data were taken within the European supernova collaboration involved in the ESO-NTT large programme 184.D-1140 led by Stefano Benetti. Partially based on observations collected at Copernico telescope (Asiago, Italy) of the INAF – Osservatorio Astronomico di Padova, and the 2.2 m Telescope of the Centro Astronomico Hispano-Aleman, Calar Alto, Spain. 
Based in part on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil). The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit[ä]{}t M[ü]{}nchen, and Georg-August-Universit[ä]{}t G[ö]{}ttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. Some of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT). This paper is partly based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundaci[ó]{}n Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. Partially based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. Based in part on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme 093.D-0199(A). We are grateful for the assistance of the staff members at all observatories where we obtained data. A.G.-Y. is supported by the EU/FP7 via ERC grant No. 307260, the Quantum Universe I-Core program by the Israeli Committee for Planning and Budgeting; by Minerva and ISF grants; by the Weizmann-UK making connections program; and by Kimmel and ARCHES awards. 
A.V.F.’s research is supported by NASA/[*HST*]{} grant AR-14295 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555; additional financial assistance was provided by the Christopher R. Redlich Fund, the TABASGO Foundation, and NSF grant AST-1211916. N.E.R. is supported by the PRIN-INAF 2014 with the project: Transient Universe: unveiling new types of stellar explosions with PESSTO. This work was partly supported by the European Union FP7 program through ERC grant number 320360. We acknowledge funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement n$^{\rm o}$ \[291222\]. ![\[fig:spec12os\]Spectral sequence of PTF12os (SN 2012P) (top panel). Comparison of the visible spectra of PTF12os to iPTF13bvn (Ib) and SN 2011dh (IIb) at approximately +20 d (bottom panel). Thick dashed lines mark the central wavelength of the marked emission lines at rest, except for the Balmer lines, which are marked by green dashed lines at $17,000$ km s$^{-1}$. Telluric features near the wavelengths 6860 Å and especially 7600 Å are present in some of the spectra.](figures/12os_spec_evol_full2016_v2-eps-converted-to.pdf){width="16cm"} ![\[fig:spec13bvn\]Spectral sequence of iPTF13bvn. Thick dashed lines mark the central wavelength of the marked emission lines at rest, except for the Balmer lines, which are marked by green dashed lines at $17,000$ km s$^{-1}$. Telluric features near the wavelengths 6860 Å and especially 7600 Å are present in some of the spectra.](figures/13bvn_spec_evol_full2016_v2-eps-converted-to.pdf){width="16cm"} ![\[fig:spec13bvnir\]Near-infrared spectral sequence of iPTF13bvn (black lines) compared to SN 2011dh (purple lines) and SN 2008ax (cyan lines) at similar epochs. The spectra have been normalized using the median and shifted by a constant between the epochs. 
The top panel shows the spectral sequence in wavelength space. In the bottom panels we highlight small sections in velocity space. The bottom-left panel displays the region around $\lambda$10,830; the dashed lines show a smoothed version of the previous spectra in the sequence to highlight the evolution of the line. The bottom-center panel emphasizes the region around Paschen $\beta$; the dashed lines show heavily smoothed versions of the spectra of iPTF13bvn, which should give a rough estimate of the continuum. The bottom-right panel reveals the region around $\lambda$20,590.](figures/13bvn_spec_evol_infrared-eps-converted-to.pdf){width="16cm"} [lrlrlrlrlrlrll]{} $ 0.57$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $18.61$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 0.62$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $18.62$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 1.57$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.61$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 1.62$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.55$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 1.71$ & $...$ & $...$ & $...$ & $...$ & $17.83$ & $0.01$ & $17.50$ & $0.01$ & $17.67$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 2.08$ & $...$ & $...$ & $17.91$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.16$ & $...$ & $...$ & $...$ & $...$ & $17.63$ & $0.01$ & $17.21$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.18$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.32$ & $0.01$ & $17.11$ & $0.04$ & $LCOGT$\ $ 2.33$ & $...$ & $...$ & $17.84$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.34$ & $...$ & $...$ & $...$ & $...$ & $17.57$ & $0.02$ & $17.12$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.36$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.37$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 2.39$ & $17.68$ 
& $0.06$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.42$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.19$ & $0.04$ & $LCOGT$\ $ 2.50$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.06$ & $0.01$ & $17.23$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 2.56$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.08$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 2.60$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.06$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 2.63$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.04$ & $0.01$ & $17.20$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 2.67$ & $...$ & $...$ & $...$ & $...$ & $17.49$ & $0.01$ & $...$ & $...$ & $17.18$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 3.03$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.86$ & $0.06$ & $LCOGT$\ $ 3.36$ & $...$ & $...$ & $17.48$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.39$ & $17.41$ & $0.07$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.41$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.75$ & $0.03$ & $LCOGT$\ $ 3.53$ & $...$ & $...$ & $17.45$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.54$ & $...$ & $...$ & $...$ & $...$ & $17.26$ & $0.01$ & $16.70$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.56$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.90$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 3.66$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.70$ & $0.04$ & $LCOGT$\ $ 4.33$ & $...$ & $...$ & $17.14$ & $0.02$ & $16.92$ & $0.01$ & $16.42$ & $0.02$ & $16.56$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 4.39$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.45$ & $0.05$ & 
$LCOGT$\ $ 6.30$ & $16.43$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 6.40$ & $...$ & $...$ & $16.57$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 6.41$ & $...$ & $...$ & $...$ & $...$ & $16.35$ & $0.01$ & $15.96$ & $0.02$ & $16.12$ & $0.02$ & $16.10$ & $0.04$ & $LCOGT$\ $ 8.40$ & $...$ & $...$ & $16.15$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 8.41$ & $...$ & $...$ & $...$ & $...$ & $15.99$ & $0.01$ & $15.66$ & $0.01$ & $15.79$ & $0.01$ & $15.74$ & $0.04$ & $LCOGT$\ $ 8.61$ & $...$ & $...$ & $...$ & $...$ & $15.96$ & $0.01$ & $15.68$ & $0.01$ & $15.81$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 9.42$ & $15.67$ & $0.05$ & $16.03$ & $0.01$ & $15.89$ & $0.01$ & $15.61$ & $0.01$ & $15.74$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 10.40$ & $...$ & $...$ & $15.99$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 10.41$ & $...$ & $...$ & $...$ & $...$ & $15.83$ & $0.01$ & $15.53$ & $0.01$ & $15.64$ & $0.01$ & $15.62$ & $0.02$ & $LCOGT$\ $ 10.59$ & $...$ & $...$ & $...$ & $...$ & $15.75$ & $0.01$ & $15.48$ & $0.01$ & $15.61$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 11.55$ & $...$ & $...$ & $...$ & $...$ & $15.68$ & $0.01$ & $15.41$ & $0.01$ & $15.53$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 12.24$ & $...$ & $...$ & $15.80$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 12.26$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.33$ & $0.02$ & $15.47$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 12.40$ & $...$ & $...$ & $15.89$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 12.41$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.36$ & $0.01$ & $15.46$ & $0.01$ & $15.38$ & $0.03$ & $LCOGT$\ $ 12.52$ & $...$ & $...$ & $...$ & $...$ & $15.63$ & $0.01$ & $15.34$ & $0.01$ & $15.46$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 13.50$ & $...$ & $...$ 
& $15.79$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 13.51$ & $...$ & $...$ & $...$ & $...$ & $15.63$ & $0.00$ & $15.29$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 13.52$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.41$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 13.52$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.28$ & $0.01$ & $15.42$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 13.53$ & $...$ & $...$ & $...$ & $...$ & $15.59$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 14.61$ & $...$ & $...$ & $...$ & $...$ & $15.56$ & $0.01$ & $15.24$ & $0.01$ & $15.36$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 15.54$ & $...$ & $...$ & $...$ & $...$ & $15.54$ & $0.01$ & $15.19$ & $0.01$ & $15.32$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 16.28$ & $...$ & $...$ & $15.79$ & $0.01$ & $15.55$ & $0.01$ & $15.16$ & $0.01$ & $15.27$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 16.38$ & $...$ & $...$ & $15.81$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 16.39$ & $...$ & $...$ & $...$ & $...$ & $15.54$ & $0.01$ & $15.15$ & $0.01$ & $15.27$ & $0.01$ & $15.22$ & $0.02$ & $LCOGT$\ $ 16.52$ & $...$ & $...$ & $...$ & $...$ & $15.53$ & $0.01$ & $15.16$ & $0.01$ & $15.26$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 17.38$ & $...$ & $...$ & $15.84$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 17.39$ & $...$ & $...$ & $...$ & $...$ & $15.61$ & $0.01$ & $15.17$ & $0.01$ & $15.27$ & $0.01$ & $15.14$ & $0.02$ & $LCOGT$\ $ 17.55$ & $...$ & $...$ & $...$ & $...$ & $15.55$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 18.55$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.12$ & $0.01$ & $15.20$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 18.57$ & $...$ & $...$ & $...$ & $...$ & $15.61$ & $0.01$ & $15.10$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 18.58$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & 
$...$ & $15.21$ & $0.04$ & $...$ & $...$ & $LCOGT$\ $ 19.57$ & $...$ & $...$ & $...$ & $...$ & $15.68$ & $0.00$ & $15.14$ & $0.01$ & $15.20$ & $0.01$ & $15.08$ & $0.03$ & $LCOGT$\ $ 20.56$ & $...$ & $...$ & $16.12$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 20.57$ & $...$ & $...$ & $...$ & $...$ & $15.73$ & $0.00$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 20.58$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.19$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 20.58$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.14$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 20.58$ & $...$ & $...$ & $...$ & $...$ & $15.72$ & $0.01$ & $15.15$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 20.58$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.22$ & $0.01$ & $15.01$ & $0.02$ & $LCOGT$\ $ 21.09$ & $16.37$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 21.19$ & $...$ & $...$ & $16.12$ & $0.02$ & $15.78$ & $0.02$ & $15.17$ & $0.03$ & $15.20$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 21.38$ & $...$ & $...$ & $16.18$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 21.39$ & $...$ & $...$ & $...$ & $...$ & $15.81$ & $0.01$ & $15.17$ & $0.01$ & $15.22$ & $0.01$ & $15.16$ & $0.02$ & $LCOGT$\ $ 21.53$ & $...$ & $...$ & $...$ & $...$ & $15.80$ & $0.01$ & $15.19$ & $0.01$ & $15.20$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 21.56$ & $...$ & $...$ & $16.25$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 21.57$ & $...$ & $...$ & $...$ & $...$ & $15.83$ & $0.01$ & $15.18$ & $0.01$ & $15.23$ & $0.01$ & $15.11$ & $0.03$ & $LCOGT$\ $ 22.52$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.24$ & $0.01$ & $15.22$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 22.52$ & $...$ & $...$ & $16.34$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & 
$LCOGT$\ $ 22.54$ & $...$ & $...$ & $...$ & $...$ & $15.89$ & $0.01$ & $15.25$ & $0.01$ & $15.24$ & $0.01$ & $15.13$ & $0.03$ & $LCOGT$\ $ 23.56$ & $...$ & $...$ & $...$ & $...$ & $16.02$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 24.03$ & $...$ & $...$ & $16.63$ & $0.02$ & $16.09$ & $0.01$ & $15.34$ & $0.01$ & $15.35$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 24.54$ & $...$ & $...$ & $...$ & $...$ & $16.18$ & $0.01$ & $15.41$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 26.52$ & $...$ & $...$ & $...$ & $...$ & $16.45$ & $0.01$ & $15.60$ & $0.01$ & $15.49$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 26.52$ & $...$ & $...$ & $17.06$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 26.54$ & $...$ & $...$ & $...$ & $...$ & $16.44$ & $0.01$ & $15.60$ & $0.01$ & $15.48$ & $0.01$ & $15.35$ & $0.03$ & $LCOGT$\ $ 27.18$ & $18.01$ & $0.10$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 27.52$ & $...$ & $...$ & $...$ & $...$ & $16.56$ & $0.01$ & $15.69$ & $0.01$ & $15.57$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 28.41$ & $...$ & $...$ & $17.26$ & $0.02$ & $16.67$ & $0.01$ & $15.77$ & $0.02$ & $15.62$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 28.52$ & $...$ & $...$ & $17.33$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 28.54$ & $...$ & $...$ & $...$ & $...$ & $16.69$ & $0.01$ & $15.79$ & $0.01$ & $15.64$ & $0.01$ & $15.43$ & $0.02$ & $LCOGT$\ $ 28.55$ & $...$ & $...$ & $...$ & $...$ & $16.71$ & $0.01$ & $15.79$ & $0.01$ & $15.65$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 29.51$ & $...$ & $...$ & $...$ & $...$ & $16.80$ & $0.01$ & $15.87$ & $0.01$ & $15.72$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 29.55$ & $...$ & $...$ & $...$ & $...$ & $16.82$ & $0.01$ & $15.90$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 29.56$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.40$ & $0.05$ & $LCOGT$\ $ 35.53$ & 
$18.48$ & $0.51$ & $18.05$ & $0.08$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 36.53$ & $...$ & $...$ & $18.00$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 40.54$ & $...$ & $...$ & $18.17$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 40.56$ & $...$ & $...$ & $...$ & $...$ & $17.54$ & $0.02$ & $16.61$ & $0.02$ & $16.38$ & $0.02$ & $15.84$ & $0.04$ & $LCOGT$\ $ 45.07$ & $...$ & $...$ & $18.24$ & $0.04$ & $17.66$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 45.13$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.67$ & $0.02$ & $16.47$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 45.52$ & $...$ & $...$ & $...$ & $...$ & $17.69$ & $0.01$ & $16.73$ & $0.01$ & $16.50$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 45.78$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.73$ & $0.03$ & $16.50$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 46.51$ & $...$ & $...$ & $...$ & $...$ & $17.74$ & $0.01$ & $16.76$ & $0.01$ & $16.52$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 47.51$ & $...$ & $...$ & $...$ & $...$ & $17.74$ & $0.01$ & $16.79$ & $0.01$ & $16.52$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 47.75$ & $...$ & $...$ & $...$ & $...$ & $17.74$ & $0.02$ & $...$ & $...$ & $16.62$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 48.49$ & $...$ & $...$ & $...$ & $...$ & $17.78$ & $0.01$ & $16.82$ & $0.01$ & $16.57$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 49.35$ & $...$ & $...$ & $18.38$ & $0.03$ & $17.73$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 49.37$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.86$ & $0.01$ & $16.58$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 49.48$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.98$ & $0.05$ & $LCOGT$\ $ 49.52$ & $...$ & $...$ & $...$ & $...$ & $17.79$ & $0.01$ & $16.85$ & $0.01$ & $16.60$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 50.49$ & $...$ & $...$ & $...$ & $...$ & $17.81$ 
& $0.01$ & $16.88$ & $0.01$ & $16.65$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 51.30$ & $...$ & $...$ & $18.38$ & $0.03$ & $17.77$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 51.31$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.91$ & $0.01$ & $16.68$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 51.47$ & $19.03$ & $0.49$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 51.48$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.04$ & $0.07$ & $LCOGT$\ $ 51.49$ & $...$ & $...$ & $18.45$ & $0.06$ & $17.84$ & $0.02$ & $16.93$ & $0.02$ & $16.69$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 52.29$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.08$ & $0.05$ & $LCOGT$\ $ 52.79$ & $...$ & $...$ & $18.38$ & $0.05$ & $17.74$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.48$ & $19.10$ & $0.34$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.49$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.04$ & $0.03$ & $LCOGT$\ $ 53.51$ & $...$ & $...$ & $18.54$ & $0.05$ & $17.89$ & $0.01$ & $17.00$ & $0.02$ & $16.67$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 53.75$ & $...$ & $...$ & $18.37$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.79$ & $...$ & $...$ & $...$ & $...$ & $17.79$ & $0.01$ & $17.00$ & $0.02$ & $16.72$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 54.39$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.00$ & $0.05$ & $LCOGT$\ $ 55.78$ & $...$ & $...$ & $18.33$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 56.40$ & $...$ & $...$ & $18.51$ & $0.07$ & $17.90$ & $0.02$ & $17.04$ & $0.02$ & $16.69$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 57.74$ & $...$ & $...$ & $18.44$ & $0.04$ & $17.84$ & $0.01$ & $17.06$ & $0.02$ & $16.80$ & 
$0.02$ & $...$ & $...$ & $LCOGT$\ $ 59.30$ & $19.37$ & $0.31$ & $18.42$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 59.37$ & $...$ & $...$ & $...$ & $...$ & $17.93$ & $0.01$ & $17.12$ & $0.02$ & $16.83$ & $0.02$ & $16.08$ & $0.03$ & $LCOGT$\ $ 59.74$ & $19.12$ & $0.35$ & $18.47$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 60.34$ & $...$ & $...$ & $18.54$ & $0.05$ & $17.90$ & $0.02$ & $17.12$ & $0.02$ & $16.88$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 63.44$ & $...$ & $...$ & $...$ & $...$ & $18.03$ & $0.04$ & $17.26$ & $0.03$ & $16.96$ & $0.03$ & $16.18$ & $0.03$ & $LCOGT$\ $ 65.36$ & $...$ & $...$ & $18.48$ & $0.06$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 66.45$ & $...$ & $...$ & $...$ & $...$ & $18.07$ & $0.05$ & $17.24$ & $0.03$ & $17.05$ & $0.03$ & $16.16$ & $0.03$ & $LCOGT$\ $ 67.73$ & $...$ & $...$ & $...$ & $...$ & $18.08$ & $0.07$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 67.74$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.09$ & $0.10$ & $...$ & $...$ & $LCOGT$\ $ 67.75$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.16$ & $0.07$ & $LCOGT$\ $ 69.49$ & $...$ & $...$ & $...$ & $...$ & $18.16$ & $0.01$ & $17.30$ & $0.01$ & $17.05$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 69.71$ & $...$ & $...$ & $18.62$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 69.74$ & $...$ & $...$ & $...$ & $...$ & $18.05$ & $0.01$ & $17.32$ & $0.01$ & $17.05$ & $0.01$ & $16.25$ & $0.03$ & $LCOGT$\ $ 70.30$ & $...$ & $...$ & $18.60$ & $0.05$ & $...$ & $...$ & $17.32$ & $0.01$ & $17.13$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 70.34$ & $...$ & $...$ & $...$ & $...$ & $18.10$ & $0.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 70.70$ & $...$ & $...$ & $...$ & $...$ & $18.10$ & $0.01$ & $17.39$ & $0.01$ & $17.09$ & $0.02$ & $16.30$ & $0.03$ & 
$LCOGT$\ $ 71.33$ & $...$ & $...$ & $18.51$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $17.08$ & $0.03$ & $16.25$ & $0.04$ & $LCOGT$\ $ 72.43$ & $...$ & $...$ & $...$ & $...$ & $18.06$ & $0.03$ & $17.39$ & $0.03$ & $17.16$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 74.47$ & $...$ & $...$ & $...$ & $...$ & $18.20$ & $0.02$ & $17.43$ & $0.02$ & $17.13$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 77.32$ & $...$ & $...$ & $...$ & $...$ & $18.16$ & $0.01$ & $17.48$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 77.33$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.13$ & $0.05$ & $...$ & $...$ & $LCOGT$\ $ 78.49$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.53$ & $0.01$ & $17.25$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 78.50$ & $...$ & $...$ & $...$ & $...$ & $18.23$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 81.05$ & $...$ & $...$ & $...$ & $...$ & $18.28$ & $0.02$ & $17.61$ & $0.02$ & $17.31$ & $0.01$ & $...$ & $...$ & $LCOGT$\ $ 82.05$ & $...$ & $...$ & $...$ & $...$ & $18.19$ & $0.02$ & $17.56$ & $0.02$ & $17.26$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 82.49$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.60$ & $0.01$ & $17.29$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 83.42$ & $...$ & $...$ & $...$ & $...$ & $18.29$ & $0.03$ & $17.67$ & $0.04$ & $17.32$ & $0.03$ & $...$ & $...$ & $LCOGT$\ $ 83.48$ & $...$ & $...$ & $...$ & $...$ & $18.37$ & $0.02$ & $17.63$ & $0.02$ & $17.33$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 86.05$ & $...$ & $...$ & $...$ & $...$ & $18.27$ & $0.03$ & $17.70$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 86.06$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.39$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 226.56$ & $...$ & $...$ & $...$ & $...$ & $21.09$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 226.57$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $20.35$ & $0.08$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 239.55$ & $...$ & $...$ & $...$ & 
$...$ & $21.38$ & $0.08$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 239.57$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $20.58$ & $0.09$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 262.48$ & $...$ & $...$ & $...$ & $...$ & $21.81$ & $0.10$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P200$\ $ 262.49$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $20.99$ & $0.20$ & $...$ & $...$ & $...$ & $...$ & $ P200$\ $ 262.50$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $20.07$ & $0.15$ & $...$ & $...$ & $ NOT$\ $ 293.79$ & $...$ & $...$ & $...$ & $...$ & $22.23$ & $0.31$ & $21.42$ & $0.21$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 320.34$ & $...$ & $...$ & $...$ & $...$ & $22.94$ & $0.29$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 320.35$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $21.71$ & $0.18$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 354.28$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $21.99$ & $0.28$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ [lrlrlrll]{} $ 2.08$ & $17.33$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.14$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.33$ & $17.21$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.39$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 2.41$ & $...$ & $...$ & $16.93$ & $0.02$ & $16.86$ & $0.02$ & $LCOGT$\ $ 3.03$ & $...$ & $...$ & $...$ & $...$ & $16.59$ & $0.05$ & $LCOGT$\ $ 3.36$ & $16.90$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.39$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.41$ & $...$ & $...$ & $16.55$ & $0.01$ & $16.43$ & $0.01$ & $LCOGT$\ $ 3.53$ & $16.87$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.58$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 3.65$ & $...$ & $...$ & $16.50$ & $0.02$ & $16.48$ & $0.04$ & $LCOGT$\ $ 4.33$ & $16.62$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 4.39$ & $...$ & $...$ & $16.21$ & 
$0.01$ & $16.16$ & $0.03$ & $LCOGT$\ $ 6.30$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 6.40$ & $16.08$ & $0.01$ & $15.78$ & $0.02$ & $15.72$ & $0.02$ & $LCOGT$\ $ 8.40$ & $15.75$ & $0.01$ & $15.48$ & $0.01$ & $15.40$ & $0.02$ & $LCOGT$\ $ 9.42$ & $15.59$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 9.44$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 10.40$ & $15.57$ & $0.01$ & $15.32$ & $0.01$ & $15.24$ & $0.01$ & $LCOGT$\ $ 12.24$ & $15.36$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 12.40$ & $15.38$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 13.50$ & $15.32$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 16.28$ & $15.24$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 16.38$ & $15.26$ & $0.01$ & $14.96$ & $0.01$ & $14.85$ & $0.01$ & $LCOGT$\ $ 17.38$ & $15.24$ & $0.01$ & $14.93$ & $0.01$ & $14.78$ & $0.01$ & $LCOGT$\ $ 18.57$ & $15.36$ & $0.04$ & $14.95$ & $0.02$ & $14.75$ & $0.04$ & $LCOGT$\ $ 20.56$ & $15.31$ & $0.01$ & $14.93$ & $0.01$ & $14.75$ & $0.01$ & $LCOGT$\ $ 21.09$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 21.19$ & $15.32$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 21.38$ & $15.33$ & $0.01$ & $14.92$ & $0.01$ & $14.77$ & $0.01$ & $LCOGT$\ $ 21.56$ & $15.38$ & $0.01$ & $14.96$ & $0.01$ & $14.75$ & $0.01$ & $LCOGT$\ $ 22.52$ & $15.44$ & $0.01$ & $15.02$ & $0.01$ & $14.78$ & $0.01$ & $LCOGT$\ $ 24.03$ & $15.61$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 26.52$ & $15.87$ & $0.01$ & $15.34$ & $0.01$ & $15.04$ & $0.02$ & $LCOGT$\ $ 27.18$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 28.41$ & $16.11$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 28.52$ & $16.12$ & $0.01$ & $15.52$ & $0.01$ & $15.20$ & $0.02$ & $LCOGT$\ $ 29.55$ & $...$ & $...$ & $...$ & $...$ & $15.30$ & $0.05$ & $LCOGT$\ $ 35.53$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 36.53$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & 
$LCOGT$\ $ 40.54$ & $16.95$ & $0.02$ & $16.27$ & $0.02$ & $15.82$ & $0.02$ & $LCOGT$\ $ 45.07$ & $17.06$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 49.35$ & $17.13$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 49.48$ & $...$ & $...$ & $...$ & $...$ & $15.98$ & $0.05$ & $LCOGT$\ $ 51.30$ & $17.25$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 51.47$ & $...$ & $...$ & $16.56$ & $0.05$ & $15.92$ & $0.13$ & $LCOGT$\ $ 51.49$ & $17.30$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 52.29$ & $...$ & $...$ & $16.62$ & $0.02$ & $16.07$ & $0.02$ & $LCOGT$\ $ 52.79$ & $17.22$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.48$ & $...$ & $...$ & $16.69$ & $0.03$ & $16.05$ & $0.02$ & $LCOGT$\ $ 53.51$ & $17.26$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.75$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 53.79$ & $17.25$ & $0.03$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 54.39$ & $...$ & $...$ & $16.62$ & $0.02$ & $16.06$ & $0.03$ & $LCOGT$\ $ 55.78$ & $17.25$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 56.40$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 57.74$ & $17.30$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 59.30$ & $17.38$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 59.31$ & $...$ & $...$ & $16.74$ & $0.02$ & $16.23$ & $0.02$ & $LCOGT$\ $ 59.74$ & $17.35$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $LCOGT$\ $ 59.75$ & $...$ & $...$ & $16.71$ & $0.02$ & $16.18$ & $0.02$ & $LCOGT$\ $ 60.34$ & $17.48$ & $0.03$ & $16.84$ & $0.01$ & $16.23$ & $0.02$ & $LCOGT$\ $ 65.36$ & $17.51$ & $0.03$ & $16.85$ & $0.02$ & $16.27$ & $0.06$ & $LCOGT$\ $ 67.73$ & $17.52$ & $0.02$ & $16.91$ & $0.02$ & $16.30$ & $0.04$ & $LCOGT$\ $ 69.71$ & $17.59$ & $0.02$ & $17.04$ & $0.02$ & $16.40$ & $0.03$ & $LCOGT$\ $ 70.30$ & $17.63$ & $0.02$ & $17.03$ & $0.02$ & $...$ & $...$ & $LCOGT$\ $ 70.35$ & $17.60$ & $0.02$ & $17.06$ & $0.01$ & $16.29$ & $0.03$ & $LCOGT$\ $ 71.33$ & $17.54$ & 
$0.02$ & $16.98$ & $0.03$ & $16.35$ & $0.04$ & $LCOGT$\ [lrlrlrlrlrlrll]{} $ 3.98$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.25$ & $0.03$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 6.98$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.72$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 7.69$ & $18.31$ & $0.13$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 7.96$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 10.09$ & $17.95$ & $0.09$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 11.93$ & $...$ & $...$ & $17.11$ & $0.02$ & $16.76$ & $0.02$ & $16.20$ & $0.02$ & $16.13$ & $0.04$ & $16.14$ & $0.07$ & $ P60$\ $ 12.95$ & $...$ & $...$ & $...$ & $...$ & $16.65$ & $0.01$ & $16.14$ & $0.01$ & $16.03$ & $0.01$ & $...$ & $...$ & $ P60$\ $ 14.03$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.13$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 15.99$ & $...$ & $...$ & $16.93$ & $0.02$ & $16.53$ & $0.01$ & $15.98$ & $0.03$ & $15.88$ & $0.02$ & $15.84$ & $0.02$ & $ P60$\ $ 18.17$ & $18.24$ & $0.10$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 18.52$ & $18.66$ & $0.14$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 18.93$ & $...$ & $...$ & $17.10$ & $0.03$ & $16.61$ & $0.02$ & $15.93$ & $0.01$ & $15.82$ & $0.02$ & $15.74$ & $0.04$ & $ P60$\ $ 19.03$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.96$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 20.03$ & $...$ & $...$ & $17.18$ & $0.02$ & $16.68$ & $0.01$ & $15.93$ & $0.00$ & $15.81$ & $0.01$ & $15.73$ & $0.02$ & $ P60$\ $ 20.52$ & $18.73$ & $0.16$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 20.93$ & $...$ & $...$ & $...$ & $...$ & $16.75$ & $0.01$ & $15.93$ & $0.01$ & $15.79$ & $0.02$ & 
$...$ & $...$ & $ P60$\ $ 21.70$ & $...$ & $...$ & $...$ & $...$ & $16.86$ & $0.04$ & $15.90$ & $0.05$ & $15.82$ & $0.03$ & $15.73$ & $0.05$ & $ LT$\ $ 21.92$ & $...$ & $...$ & $...$ & $...$ & $16.86$ & $0.05$ & $15.97$ & $0.09$ & $15.81$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 22.52$ & $18.97$ & $0.18$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $Swift$\ $ 24.92$ & $...$ & $...$ & $17.60$ & $0.06$ & $17.07$ & $0.02$ & $16.08$ & $0.01$ & $15.87$ & $0.02$ & $15.78$ & $0.03$ & $ P60$\ $ 25.91$ & $...$ & $...$ & $...$ & $...$ & $17.16$ & $0.01$ & $16.12$ & $0.01$ & $15.89$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 26.00$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.12$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 26.69$ & $...$ & $...$ & $...$ & $...$ & $17.24$ & $0.03$ & $16.11$ & $0.08$ & $15.92$ & $0.05$ & $15.81$ & $0.07$ & $ LT$\ $ 26.91$ & $...$ & $...$ & $17.92$ & $0.05$ & $17.28$ & $0.02$ & $16.17$ & $0.01$ & $15.93$ & $0.03$ & $15.83$ & $0.05$ & $ P60$\ $ 27.74$ & $...$ & $...$ & $...$ & $...$ & $17.34$ & $0.06$ & $16.19$ & $0.05$ & $15.96$ & $0.04$ & $15.82$ & $0.05$ & $ LT$\ $ 27.91$ & $...$ & $...$ & $18.11$ & $0.07$ & $17.37$ & $0.02$ & $16.23$ & $0.01$ & $15.96$ & $0.04$ & $15.87$ & $0.02$ & $ P60$\ $ 29.00$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.27$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 29.01$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.27$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 29.94$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.91$ & $0.02$ & $ P60$\ $ 30.66$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.01$ & $0.10$ & $15.89$ & $0.06$ & $ LT$\ $ 30.93$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $15.96$ & $0.02$ & $ P60$\ $ 31.66$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.45$ & $0.06$ & $16.10$ & $0.05$ & $15.82$ & $0.07$ & $ LT$\ $ 
33.73$ & $...$ & $...$ & $...$ & $...$ & $17.80$ & $0.08$ & $...$ & $...$ & $16.19$ & $0.07$ & $16.07$ & $0.09$ & $ LT$\ $ 33.89$ & $...$ & $...$ & $18.56$ & $0.14$ & $17.89$ & $0.05$ & $16.59$ & $0.03$ & $16.21$ & $0.03$ & $16.06$ & $0.04$ & $ P60$\ $ 33.92$ & $...$ & $...$ & $18.59$ & $0.21$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 35.96$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.64$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 40.86$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.45$ & $0.01$ & $...$ & $...$ & $ NTT$\ $ 41.87$ & $...$ & $...$ & $19.02$ & $0.09$ & $18.38$ & $0.04$ & $16.92$ & $0.02$ & $16.49$ & $0.04$ & $16.25$ & $0.05$ & $ P60$\ $ 41.92$ & $...$ & $...$ & $19.08$ & $0.08$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 42.87$ & $...$ & $...$ & $19.12$ & $0.05$ & $18.40$ & $0.02$ & $16.96$ & $0.01$ & $16.51$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 42.92$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.91$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 42.97$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.89$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 44.93$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $16.31$ & $0.02$ & $ P60$\ $ 45.99$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.01$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 46.05$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.02$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 46.73$ & $...$ & $...$ & $...$ & $...$ & $18.56$ & $0.06$ & $17.14$ & $0.03$ & $16.65$ & $0.03$ & $16.34$ & $0.04$ & $ LT$\ $ 48.99$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.13$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 49.04$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.11$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 50.85$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & 
$...$ & $...$ & $...$ & $16.38$ & $0.02$ & $ P60$\ $ 55.91$ & $...$ & $...$ & $19.40$ & $0.09$ & $18.71$ & $0.03$ & $17.34$ & $0.01$ & $16.90$ & $0.02$ & $16.46$ & $0.04$ & $ P60$\ $ 56.01$ & $...$ & $...$ & $19.45$ & $0.06$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 56.86$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.31$ & $0.03$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 56.90$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.30$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 57.61$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.32$ & $0.04$ & $...$ & $...$ & $16.52$ & $0.05$ & $ LT$\ $ 59.73$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $0.06$ & $16.95$ & $0.08$ & $...$ & $...$ & $ LT$\ $ 59.83$ & $...$ & $...$ & $19.61$ & $0.27$ & $18.79$ & $0.06$ & $17.45$ & $0.02$ & $16.99$ & $0.03$ & $16.52$ & $0.03$ & $ P60$\ $ 65.72$ & $...$ & $...$ & $...$ & $...$ & $18.81$ & $0.06$ & $...$ & $...$ & $...$ & $...$ & $16.61$ & $0.07$ & $ LT$\ $ 67.84$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.50$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 67.89$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.48$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 69.80$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.52$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 71.76$ & $...$ & $...$ & $...$ & $...$ & $18.95$ & $0.04$ & $17.56$ & $0.03$ & $...$ & $...$ & $16.70$ & $0.03$ & $ LT$\ $ 73.97$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.62$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 74.99$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.64$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 75.78$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.64$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 77.72$ & $...$ & $...$ & $...$ & $...$ & $19.03$ & $0.08$ & $17.73$ & $0.02$ & $17.31$ & $0.03$ & $16.77$ & $0.05$ & $ LT$\ $ 81.02$ & $...$ & 
$...$ & $...$ & $...$ & $...$ & $...$ & $17.75$ & $0.02$ & $...$ & $...$ & $...$ & $...$ & $ P48$\ $ 82.85$ & $...$ & $...$ & $19.54$ & $0.07$ & $19.03$ & $0.02$ & $17.82$ & $0.01$ & $17.43$ & $0.03$ & $16.79$ & $0.03$ & $ P60$\ $ 83.75$ & $...$ & $...$ & $...$ & $...$ & $19.05$ & $0.08$ & $17.81$ & $0.06$ & $17.43$ & $0.05$ & $...$ & $...$ & $ P60$\ $ 84.75$ & $...$ & $...$ & $19.49$ & $0.19$ & $19.04$ & $0.05$ & $17.82$ & $0.03$ & $17.46$ & $0.04$ & $16.84$ & $0.04$ & $ P60$\ $ 86.75$ & $...$ & $...$ & $...$ & $...$ & $19.06$ & $0.15$ & $17.87$ & $0.05$ & $17.48$ & $0.06$ & $16.88$ & $0.07$ & $ P60$\ $ 87.74$ & $...$ & $...$ & $19.56$ & $0.22$ & $19.08$ & $0.08$ & $17.88$ & $0.04$ & $17.50$ & $0.04$ & $...$ & $...$ & $ P60$\ $ 88.74$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.52$ & $0.15$ & $16.83$ & $0.13$ & $ P60$\ $ 89.74$ & $...$ & $...$ & $...$ & $...$ & $19.03$ & $0.13$ & $17.89$ & $0.05$ & $17.50$ & $0.04$ & $16.92$ & $0.05$ & $ P60$\ $ 89.75$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 90.74$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.49$ & $0.18$ & $...$ & $...$ & $ P60$\ $ 94.72$ & $...$ & $...$ & $19.50$ & $0.11$ & $19.11$ & $0.04$ & $17.98$ & $0.03$ & $17.60$ & $0.03$ & $16.99$ & $0.04$ & $ P60$\ $ 100.71$ & $...$ & $...$ & $...$ & $...$ & $19.18$ & $0.04$ & $18.05$ & $0.04$ & $17.67$ & $0.05$ & $17.05$ & $0.06$ & $ P60$\ $ 100.79$ & $...$ & $...$ & $19.68$ & $0.09$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.03$ & $0.03$ & $ P60$\ $ 101.70$ & $...$ & $...$ & $...$ & $...$ & $19.20$ & $0.05$ & $18.08$ & $0.03$ & $17.70$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 102.70$ & $...$ & $...$ & $...$ & $...$ & $19.19$ & $0.06$ & $18.07$ & $0.03$ & $17.70$ & $0.04$ & $...$ & $...$ & $ P60$\ $ 103.91$ & $...$ & $...$ & $...$ & $...$ & $19.21$ & $0.02$ & $18.12$ & $0.01$ & $17.74$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 104.74$ & $...$ & $...$ 
& $...$ & $...$ & $19.23$ & $0.02$ & $18.12$ & $0.01$ & $17.72$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 105.69$ & $...$ & $...$ & $...$ & $...$ & $19.25$ & $0.03$ & $18.10$ & $0.01$ & $17.76$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 106.69$ & $...$ & $...$ & $...$ & $...$ & $19.24$ & $0.02$ & $18.14$ & $0.01$ & $17.75$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 107.70$ & $...$ & $...$ & $...$ & $...$ & $19.25$ & $0.06$ & $...$ & $0.91$ & $17.76$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 108.78$ & $...$ & $...$ & $...$ & $...$ & $19.26$ & $0.06$ & $18.13$ & $0.03$ & $17.80$ & $0.05$ & $...$ & $...$ & $ P60$\ $ 115.57$ & $...$ & $...$ & $...$ & $...$ & $19.33$ & $0.05$ & $18.24$ & $0.04$ & $...$ & $...$ & $...$ & $...$ & $ NOT$\ $ 115.59$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $17.15$ & $0.09$ & $ NOT$\ $ 128.47$ & $...$ & $...$ & $...$ & $...$ & $19.49$ & $0.05$ & $18.40$ & $0.05$ & $18.00$ & $0.07$ & $17.56$ & $0.13$ & $ NOT$\ $ 139.64$ & $...$ & $...$ & $...$ & $...$ & $19.59$ & $0.03$ & $18.57$ & $0.03$ & $18.25$ & $0.02$ & $...$ & $...$ & $ GTC$\ $ 152.85$ & $...$ & $...$ & $...$ & $...$ & $19.78$ & $0.10$ & $18.72$ & $0.05$ & $18.50$ & $0.06$ & $...$ & $...$ & $ P60$\ $ 155.84$ & $...$ & $...$ & $20.53$ & $0.14$ & $19.81$ & $0.04$ & $18.79$ & $0.02$ & $18.54$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 156.82$ & $...$ & $...$ & $20.44$ & $0.13$ & $19.85$ & $0.04$ & $18.78$ & $0.02$ & $18.55$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 157.80$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $18.57$ & $0.02$ & $...$ & $...$ & $ P60$\ $ 157.81$ & $...$ & $...$ & $20.52$ & $0.11$ & $19.85$ & $0.03$ & $18.83$ & $0.01$ & $...$ & $...$ & $...$ & $...$ & $ P60$\ $ 158.86$ & $...$ & $...$ & $20.42$ & $0.13$ & $19.88$ & $0.05$ & $18.83$ & $0.02$ & $18.57$ & $0.05$ & $...$ & $...$ & $ P60$\ $ 160.82$ & $...$ & $...$ & $20.45$ & $0.09$ & $19.91$ & $0.02$ & $18.86$ & $0.01$ & $18.60$ & $0.03$ & $...$ & $...$ & $ P60$\ $ 168.83$ & $...$ & $...$ & 
$20.51$ & $0.19$ & $20.04$ & $0.04$ & $18.95$ & $0.04$ & $18.76$ & $0.10$ & $...$ & $...$ & $ P60$\
$ 171.82$ & $...$ & $...$ & $20.71$ & $0.17$ & $20.10$ & $0.06$ & $18.99$ & $0.03$ & $18.79$ & $0.04$ & $...$ & $...$ & $ P60$\
$ 177.83$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $18.90$ & $0.05$ & $...$ & $...$ & $ P60$\
$ 178.81$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $19.01$ & $0.15$ & $18.81$ & $0.17$ & $...$ & $...$ & $ P60$\
$ 179.80$ & $...$ & $...$ & $...$ & $...$ & $20.18$ & $0.14$ & $19.17$ & $0.09$ & $18.91$ & $0.10$ & $...$ & $...$ & $ P60$\
$ 182.81$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $19.15$ & $0.07$ & $18.95$ & $0.08$ & $...$ & $...$ & $ P60$\
$ 186.81$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $19.16$ & $0.13$ & $19.00$ & $0.09$ & $...$ & $...$ & $ P60$\
$ 214.54$ & $...$ & $...$ & $...$ & $...$ & $20.81$ & $0.12$ & $19.79$ & $0.06$ & $...$ & $...$ & $...$ & $...$ & $ P60$\
$ 214.54$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $...$ & $19.61$ & $0.16$ & $...$ & $...$ & $ NTT$\
[lllllll]{}
Phase & $U$ & $\sigma$ & $UVW1$ & $\sigma$ & $UVM2$ & $\sigma$\
JD $-$ 2,455,933.00 & \[mag\] & \[mag\] & \[mag\] & \[mag\] & \[mag\] & \[mag\]\
\
7.69 & 18.31 & 0.13 & 20.74 & 0.47 & 21.73 & 0.54\
10.09 & 17.95 & 0.09 & 20.08 & 0.27 & 21.29 & 0.39\
18.17 & 18.24 & 0.10 & 20.12 & 0.24 & 21.70 & 0.47\
18.52 & 18.66 & 0.14 & 20.76 & 0.40 & 21.68 & 0.46\
20.52 & 18.73 & 0.16 & 20.92 & 0.48 & 21.76 & 0.52\
22.52 & 18.97 & 0.18 & 20.79 & 0.41 & ... & ...\
\[tab:12osswift\]
[lllllll]{}
Phase & $U$ & $\sigma$ & $UVW1$ & $\sigma$ & $UVM2$ & $\sigma$\
JD $-$ 2,456,459.17 & \[mag\] & \[mag\] & \[mag\] & \[mag\] & \[mag\] & \[mag\]\
\
2.00 & ... & ... & 20.95 & 0.41 & ... & ...\
3.54 & 18.83 & 0.17 & ... & ... & ... & ...\
5.88 & 17.49 & 0.07 & 19.88 & 0.18 & ... & ...\
11.36 & 16.77 & 0.04 & 19.00 & 0.08 & 22.25 & 0.71\
14.56 & 16.98 & 0.05 & 19.37 & 0.11 & 22.15 & 0.66\
18.91 & 17.53 & 0.06 & 20.20 & 0.22 & ... & ...\
22.91 & 18.20 & 0.12 & 20.23 & 0.27 & ... & ...\
23.56 & ... & ... & ... & ... & ... & ...\
26.59 & 19.24 & 0.22 & ... & ... & ... & ...\
37.78 & 20.29 & 0.38 & ... & ... & ... & ...\
\[tab:13bvnswift\]
[lcccc]{}
14 Jan. & +8.5 & HET+LRS & 4148-10403\
17 Jan. & +11.5 & GEMINI N.+GMOS & 3500-9654\
18 Jan. & +12.5 & P200+DBSP & 3470-10200\
22 Jan. & +16.66 & ASIAGO 1.82m+AFOSC & 3882-8180\
25 Jan. & +19.75 & NOT+ALFOSC & 3560-9086\
26 Jan. & +20.5 & KPNO 4m+RC Spec & 3500-8450\
29 Jan. & +23.63 & ASIAGO 1.82m+AFOSC & 3892-8180\
02 Feb. & +27.5 & Lick 3m+Kast & 3500-10000\
10 Feb. & +35.69 & CA-2.2+CAFOS & 3877-10120\
15 Feb. & +40.88 & NTT+EFOSC & 3910-9295\
20 Feb. & +45.03 & Keck 1+LRIS & 3830-10250\
23 Feb. & +48.5 & Lick 3m+Kast & 3504-10152\
24 Feb. & +49.77 & NOT+ALFOSC & 3974-9153\
25 Feb. & +50.5 & NOT+ALFOSC & 3402-9129\
28 Feb. & +53.63 & CA-2.2+CAFOS & 3877-10120\
29 Apr. & +114.5 & Keck 1+LRIS & 3299-10250\
30 Apr. & +115.57 & NOT+ALFOSC & 3974-8999\
23 May & +138.66 & GTC+OSIRIS & 3600-10000\
08 Aug. & +214.5 & NTT+EFOSC & 3910-9295\
[lccccc]{}
17 June 2013 & +2.15 & SALT+Spectrograph & 3500-9000 & This work\
17 June 2013 & +2.3 & TNG+DOLORES & 3164-7987 & 1\
18 June 2013 & +2.6 & HET+LRS & 4000-11000 & 1\
18 June 2013 & +3.4 & Magellan Baade+FIRE & 9000-25000 & 1\
19 June 2013 & +3.7 & FTN+FLOYDS & 3179-11038 & 1\
20 June 2013 & +4.6 & FTS+FLOYDS & 3179-11038 & 1\
21 June 2013 & +5.6 & FTN+FLOYDS & 3179-11038 & 1\
22 June 2013 & +6.7 & FTN+FLOYDS & 3179-11038 & 1\
24 June 2013 & +8.7 & FTN+FLOYDS & 3179-11038 & 1\
24 June 2013 & +8.5 & Magellan Baade+FIRE & 9000-25000 & 1\
26 June 2013 & +10.4 & HET+LRS & 4172-10800 & 1\
27 June 2013 & +11.5 & APO+DIS & 3338-10039 & 1\
27 June 2013 & +11.7 & FTN+FLOYDS & 3179-11038 & 1\
01 July 2013 & +15.5 & HET+LRS & 4172-10800 & 1\
02 July 2013 & +16.7 & IRTF+SpeX & 9000-25000 & 1\
03 July 2013 & +17.7 & FTN+FLOYDS & 3179-11038 & This work\
05 July 2013 & +19.6 & P200+DBSP & 3200-10100 & 2\
09 July 2013 & +23.7 & FTN+FLOYDS & 3179-11038 & This work\
11 July 2013 & +25.7 & Keck 2+DEIMOS & 4450-9625 & 2\
18 July 2013 & +32.6 & FTN+FLOYDS & 3179-11038 & This work\
21 July 2013 & +36.3 & NOT+ALFOSC & 3601-9141 & 2\
02 Aug. 2013 & +47.8 & FTS+FLOYDS & 3179-11038 & This work\
04 Aug. 2013 & +49.6 & P200+DBSP & 3200-10100 & 2\
05 Aug. 2013 & +50.7 & FTS+FLOYDS & 3179-11038 & This work\
08 Aug. 2013 & +53.7 & FTS+FLOYDS & 3179-11038 & This work\
02 Sep. 2013 & +78.7 & Magellan Baade+FIRE & 9000-25000 & This work\
04 Sep. 2013 & +80.5 & P200+DBSP & 3200-10100 & 2\
09 Sep. 2013 & +85.6 & Keck 1+LRIS & 3200-10000 & 2\
21 Feb. 2014 & +250.6 & NOT+ALFOSC & 3301-9142 & This work\
26 May 2014 & +344.8 & Keck 2+DEIMOS & 4915-10134 & This work\
28 May 2014 & +346.4 & VLT+FORS2 & 3300-11000 & This work\
26 June 2014 & +376.3 & WHT+ISIS & 5400-9000 & This work\

[^1]: All dates in this paper are given in Universal Time (UT).

[^2]: SN 2012P was also independently discovered by amateur astronomers [@2012CBET.2993....1D].
[^3]: Throughout this paper, we adopt the convention that a plus sign followed by a time implies a time past the estimated explosion date. [^4]: We also use the astronomy & astrophysics package for [MATLAB]{} [@2014ascl.soft07005O]. [^5]: For P48 data this assumption is not true. To handle this, we cut out a small portion around the transient within which the PSF can be assumed to be constant for these reductions. [^6]: In practice, the result is that only sources that are detected in the science frame are used when determining the ZP in both the reference and the science frame. [^7]: For the data from the LT that we present in this paper, the ZP is in some cases difficult to determine in the science frames because of their low signal-to-noise ratio (S/N), resulting in ZP uncertainties up to 0.05 mag for some epochs. [^8]: We caution that for other fields, color-corrections could be needed when determining the ZPs. For other SNe, S-corrections could also be more important than what we find for the SNe in this paper. Convolving the normalized transmission curves of the filters mounted on the P60 along with standard Sloan filter-profiles on the spectra of PTF12os and iPTF13bvn typically results in S-corrections smaller than 0.1 mag. We also find that our data obtained from other telescopes are consistent with the P60 LCs at this level or better, when we have overlap between the LCs. [^9]: <http://www.weizmann.ac.il/astrophysics/wiserep/> [^10]: When the uncertainties are taken into account, this value is consistent with the kinematic distance of $23.9\pm1.7$ Mpc. [^11]: An older and more uncertain distance modulus of $\mu=31.76\pm0.38$ mag [@Tully:2009] has previously been used extensively in the literature for work on iPTF13bvn. 
However, we point out that the updated distance modulus estimate of $\mu=32.14\pm0.2$ mag by [@2013AJ....146...86T] is very close to the median value and standard deviation ($\mu=32.09\pm0.2$ mag) of all the distance modulus estimates reported on NED. [^12]: Total $B$-band magnitude from NED, not corrected for extinction. [^13]: Proposal ID 48-408, PI C. Fremling, conducted in service mode. [^14]: We note that in [@2013ApJ...775L...7C] the foreground (MW) extinction toward NGC 5806 was mixed up with the local extinction found at the position of iPTF13bvn. This mixup was also propagated to . [^15]: We use the LC data of SN 2011dh from . [^16]: GO-12888, PI S. Van Dyk, F555W. These images do not cover the explosion site of PTF12os. [^17]: 2015 June 26 (see Table \[tab:hst\]) and 2015 June 30, GO-13822, PI G. Folatelli, F225W and F814W. [^18]: http://www.astro.washington.edu/users/becker/v2.0/hotpants.html [^19]: Obtained on 2013 Sep. 2.37, GO-12888, PI S. Van Dyk. [^20]: Due to our choice of $\mu=32.14\pm0.2$ mag, these absolute magnitudes are brighter compared to previous studies. However, the upper limits considered by e.g. [@2015MNRAS.446.2689E] are consistent within our uncertainties. In [@2016arXiv160405050E] the higher distance modulus is also fully taken into account. [^21]: We note that the peak observed in the $UVW1$ band for SN 2011dh is likely a result of leakage from redder wavelengths. [^22]: We note that since the spectra are actually not smooth BBs, these fits come with large systematic uncertainties. [^23]: We also find a decline over time in the few velocity measurements that were possible for $\lambda$5016 in PTF12os, but this line is strongly contaminated, as previously discussed. [^24]: We choose to use only the $UVM2$ band to estimate the UV contributions, since this [*Swift*]{}/UVOT band is least affected by leakage from redder wavelengths.
--- abstract: 'The recently discovered Galactic X-ray transient XTE J1752$-$223 entered its first known outburst in 2010, emitting from the X-ray to the radio regimes. Its general X-ray properties were consistent with those of a black hole candidate in various spectral states, when ejection of jet components is expected. To verify this, we carried out very long baseline interferometry (VLBI) observations. The measurements were carried out with the European VLBI Network (EVN) and the Very Long Baseline Array (VLBA) at four epochs in 2010 February. The images at the first three epochs show a moving jet component that is significantly decelerated by the last epoch, when a new jet component appears that is likely to be associated with the receding jet side. The overall picture is consistent with an initially mildly relativistic jet, interacting with the interstellar medium or with swept-up material along the jet. The brightening of the receding ejecta at the final epoch can be well explained by initial Doppler deboosting of the emission in the decelerating jet.' author: - | J.Yang$^{1}\thanks{E-mail: yang@jive.nl}$, C.Brocksopp$^{2}$, S.Corbel$^{3}$, Z.Paragi$^{1,4}$, T.Tzioumis$^{5}$ and R.P. Fender$^{6}$\ $^{1}$Joint Institute for VLBI in Europe, Postbus 2, 7990 AA Dwingeloo, The Netherlands\ $^{2}$Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK\ $^{3}$Université Paris 7 Denis Diderot and Service d’Astrophysique, UMR AIM, CEA Saclay, F-91191 Gif-sur-Yvette, France\ $^{4}$MTA Research Group for Physical Geodesy and Geodynamics, POB 91, H-1521 Budapest, Hungary\ $^{5}$Australia Telescope National Facility, CSIRO, P.O. Box 76, Epping, NSW 1710, Australia.\ $^{6}$School of Physics and Astronomy, University of Southampton, Highfield, Southampton, SO17 1BJ, UK date: 'Accepted 2010 September 7. 
Received 2010 August 27; in original form 2010 July 23' title: | A decelerating jet observed by the EVN and VLBA\ in the X-ray transient XTE J1752$-$223 --- \[firstpage\] stars: individual: XTE J1752$-$223 – stars: variable: others – ISM: jets and outflows – radio continuum: stars – X-rays: binaries. Introduction ============ It is clear from the literature of the past three decades that, for almost every black hole X-ray transient (XRT) observed at radio wavelengths, a radio counterpart has been discovered [@fen06]. In a small number of sources, the ejecta have been resolved and monitored as they travel away from the central source. Thus, it has been possible on rare occasions to measure proper motions, sometimes at apparently super-luminal velocities: e.g. GRS 1915$+$105 [@mir94; @fen99], GRO J1655$-$40 [@tin95; @hje95], GX 339$-$4 [@cor10], and even detect the only known instances of a Galactic parsec-scale jet being decelerated in the ISM: XTE J1550$-$564 [@cor02; @kaa03] and XTE J1748$-$288 [@hje98; @mil08]. Jet deceleration has also been investigated in GRS 1915$+$105, although no conclusive evidence was found [@mil07]. The X-ray transient XTE J1752$-$223 was discovered by the *Rossi X-ray Timing Explorer* (*RXTE*) on 2009 October 23 [@mar09] at the start of its first known X-ray outburst. It showed a long and gradual rise at X-ray energies, whilst remaining spectrally hard. The X-ray source later evolved, became softer [@hom10; @bro10a] and entered a spectral state commonly associated with jet ejection events [@fen04; @fen09]. The outburst has been well monitored by the *Monitor of All-sky X-ray Image* [*MAXI*, @nak10], *RXTE* [@sha10], and *Swift* [@cur10] at X-ray energies. All these X-ray observations show that XTE J1752$-$223 is likely to be a black hole transient. 
Following the activation of XTE J1752$-$223, we initiated the ATCA (Australia Telescope Compact Array) radio observations and discovered the radio counterpart with a flux density of $\sim 2$ mJy at 5.5 GHz [@bro09]. The later ATCA monitoring observations showed that the radio source entered a series of flares, peaking at 5 – 20 mJy (Brocksopp et al. in prep.), after the transition from X-ray hard to soft state. To detect the potential ejecta and study their evolution, we carried out an EVN (European VLBI Network) experiment and three follow-up VLBA experiments at 5 GHz in 2010 February. In this paper, we present the results of these VLBI (Very Long Baseline Interferometry) observations. Rapid EVN and VLBA Observations =============================== The performed VLBI experiments are summarised in Table \[tab1\]. The low declination and potentially weak flux of XTE J1752$-$223 were possible problems for VLBI observations. Therefore, to quickly resolve these concerns, we initiated an e-VLBI experiment with the western European telescopes [@szo08]. The EVN experiment used 1024 Mbps data rate (16 channels, 16 MHz per channel, 2 bit sampling, dual polarisation). The real-time correlation was done with the Earth orientation parameters (EOP) predicted from the EOP model of one day earlier. We applied 2-second integration time, and 16 frequency points per channel. The participating stations were Medicina, Yebes, Torun, Onsala, and Westerbork. In the EVN experiment, we used the following J2000 coordinate: $\mathrm{RA}=17^\mathrm{h}52^\mathrm{m}15\fs095$, $\mathrm{DEC}~=-22\degr20\arcmin32\farcs782$ (positional uncertainty: $\sigma=0\farcs3$), which was determined by the ATCA observations [@bro09]. To obtain fringe-fitting solutions and a reference point, we scheduled a nearby ($0\fdg8$) phase-referencing source: PMN J1755$-$2232 ($\mathrm{RA}=17^\mathrm{h}55^\mathrm{m}26\fs285$, $\mathrm{DEC}=-22\degr32\arcmin10\farcs593$, J2000, position error $\sigma\sim15$ mas). 
As these sources have low elevation ($<30\degr$) in Europe, we used a short cycle time: 160 seconds on the target and 80 seconds on the reference source. We also observed a strong and compact source (NRAO 530) as the fringe finder and bandpass calibrator. We successfully detected a radio source, consistent with an ejection from the black hole candidate [@bro10b], during the EVN experiment and then performed three follow-up VLBA experiments. We used the same phase-referencing source, cycle time, and observing frequency. The recording data rate was 512 Mbps (16 channels, 8 MHz per channel, 2 bit sampling, dual polarisation). There were eight VLBA telescopes available at the first epoch, nine at the other two epochs. The data were correlated with the same parameters as the EVN experiment. We also performed an EVN experiment in March and two VLBA experiments in April to study other ejection events and attempt to detect the core. These additional results will be presented by Brocksopp et al. in a more general paper. VLBI Data Calibration ===================== We used the NRAO software [AIPS]{} (Astronomical Image Processing System) to perform the initial calibrations. The *a-priori* amplitude calibration was done with measured system temperatures and antenna gain curves. We corrected the EOP model for the VLBA data before any phase calibrations. We followed the same procedure for both the EVN and VLBA data reduction. (1) We corrected the parallactic angle. (2) We ran the global fringe-fitting for NRAO 530 with a half-minute solution interval and then solved for the instrumental bandpass. (3) With the bandpass solutions, we re-ran the fringe-fitting to solve for the instrumental phase and delay using two minutes of data on NRAO 530. (4) We ran the fringe-fitting to solve for the phase, the fringe rate, and the delay for the calibrators with a solution interval of the scan length ($\sim1$ minute). On the long baselines to Mauna Kea (Mk) and St. 
Croix (Sc), there were no solutions found for PMN J1755$-$2232 as it has much lower correlation amplitude ($<10$ mJy) most likely due to scatter broadening. In view of this problem, we removed both Sc and Mk. The phase wrapped slowly (fringe rate $<5$ milliHz). The solutions of PMN J1755$-$2232 were then transferred to XTE J1752$-$223 by linear interpolation. (5) The data were averaged in each IF and then split into single-source files after all the corrections were applied. The reference source PMN J1755$-$2232 was imaged by circular Gaussian model fitting and self-calibration in [Difmap]{}[@she94]. The source was well represented by a circular Gaussian model with a size of $4.2$ mas and a total flux density of $\sim200$ mJy at 5 GHz. Finally, we self-calibrated the $u$–$v$ data with the model and applied the amplitude and phase solutions to both sources in AIPS. ![image](fig1a.eps){width="71.00000%"} ![image](fig1b.eps){width="27.00000%"}\ -------- ---------- ------- ------------------ ------------------ ------------------- ----------------------- ------------------ ------------------ -------------------- Exp. Date Array $N_\mathrm{ant}$ $T_\mathrm{obs}$ $S_\mathrm{peak}$ $\sigma_\mathrm{rms}$ $b_\mathrm{maj}$ $b_\mathrm{min}$ $\phi_\mathrm{pa}$ dd/mm/yy (hour) (mJy/b) (mJy/b) (mas) (mas) ($\degr$) RY001 11/02/10 EVN 5 1.2 2.32 0.21 14.5 6.1 $-3$ BB290A 18/02/10 VLBA 6 3.0 0.77 0.072 10.0 10.0 0 BB290B 23/02/10 VLBA 7 6.0 0.60 0.068 10.0 10.0 0 BB290C 26/02/10 VLBA 7 6.0 0.37 0.057 10.0 10.0 0 -------- ---------- ------- ------------------ ------------------ ------------------- ----------------------- ------------------ ------------------ -------------------- : The summary of the image parameters of Fig. \[fig1\]. 
[]{data-label="tab1"} The columns give: (1) experiment code; (2) date; (3) array name; (4) number of telescopes used; (5) total observing time; (6) peak flux density; (7) off-source noise level; (8 – 10) sizes of the major and minor axes of the restoring beam and its position angle.\ VLBI detection of XTE J1752$-$223 ================================= The imaging results for the X-ray transient XTE J1752$-$223 are shown in the left panel of Fig. \[fig1\]. The top small panels from left to right are the EVN image of 2010 February 11 and the VLBA images of 2010 February 18 and 23. The background large image is the VLBA image of 2010 February 26. The cross in each image marks the position of component A at the first epoch ($\mathrm{RA}=17^\mathrm{h}52^\mathrm{m}15\fs06370$, $\mathrm{DEC}=-22\degr20\arcmin31\farcs9838$, J2000). At the bottom of the large background image, there is another jet component marked as component B at an angular separation of $488$ mas from component A and at a position angle consistent with the moving direction of component A. The related map parameters are listed in Table \[tab1\]. Component A, surrounded by the beam pattern, is clearly seen with a peak brightness of 2.32 mJy beam$^{-1}$ in the dirty map at the first epoch, when natural weighting is used. After removing component A with a circular Gaussian model, we notice that there may be at least one more jet component. One candidate is located at angular separation 18.7 mas, position angle $-84\fdg0$; the other at angular separation 70.6 mas and position angle $-36\fdg3$. Both candidates have a peak brightness $\sim0.91$ mJy beam$^{-1}$ ($\sim4\sigma_\mathrm{rms}$) using natural weighting. In Fig. \[fig1\]a, the two candidates appear as the second positive contours. If either candidate is removed by circular Gaussian model fitting, the other also becomes faint. If we reduce the contribution of the long baselines, both become brighter and show a small peak ($\sim5$%) brightness difference. 
Due to the limited sensitivity and $u$ – $v$ coverage during the 1.2-hour observations, neither component can be unambiguously identified as a true jet component. However, there is evidence for extended emission from the source, as the total restored flux density is much lower than that ($\sim16$ mJy at 5.5 GHz) measured by the ATCA (Brocksopp et al. in prep.). In the follow-up VLBA observations, higher resolution and sensitivity were achieved with more telescopes and longer observing times. To image the extended source, we used natural weighting again. Because of the resolved structure and the decaying peak flux density, the source is only seen clearly in the dirty map with the synthesised beam ($16.2\times3.9$ mas at position angle $-15\fdg6$) at the first VLBA epoch. However, the large-scale beam pattern around the faint source could also be easily identified at the later two epochs. If we taper the long baselines, use the short baselines only, or increase the image pixel size, the source becomes significantly brighter in the dirty map at the later two epochs. Neither of the suspected ejecta candidates in the EVN image is seen again in the VLBA images 7 days later. Because the diffuse emission cannot be well restored by clean components, Gaussian models were used in making all the VLBI images of Fig. \[fig1\]. Due to the limited SNR (6 – 12), circular rather than elliptical Gaussian model fitting was adopted to reduce the number of free parameters. Table \[tab2\] lists the best-fit parameters of the circular Gaussian model. To show the motion of component A, we take the position of component A measured at the first epoch as the reference origin. 
The random position error was estimated by $\frac{\sqrt{b_\mathrm{maj}b_\mathrm{min}}}{2\mathrm{SNR}}$, where $b_\mathrm{maj}$ and $b_\mathrm{min}$ are the size of the major and minor axes of the used restoring beam, $\mathrm{SNR}$ is the signal to noise ratio ($\frac{S_\mathrm{peak}}{\sigma_\mathrm{rms}}$) listed in Column (7) of Table \[tab2\]. Note that the rather large systematic position error from the reference source will not affect our proper motion measurements. The fitted size has the same random error as the position for each component. Since the measured sizes ($\geq8$ mas) are much larger than that (4.2 mas) of the reference source, they should be very close to the true size of the ejecta. At the second epoch, we notice that component A shows an elongated structure in the East-West direction and the eastern side is brighter than the western side. This brightness distribution caused a slightly different position angle of the component, compared to what is measured at later epochs. The VLBI flux density errors are usually $\sim5\%$. ------- --------- --------------- ----------------- ------- ------- ------ Comp. MJD Separation Position Angle Size Flux SNR (day) (mas) ($^\circ$) (mas) (mJy) A 55238.4 $0\pm0.4$ 0 7.9 4.35 11.0 A 55245.6 $57.4\pm0.5$ $-41.18\pm0.90$ 13.8 2.20 10.7 A 55250.6 $85.5\pm0.6$ $-50.14\pm0.63$ 19.0 2.32 8.8 A 55253.6 $100.6\pm0.8$ $-49.23\pm0.70$ 13.9 1.05 6.1 B 55253.6 $387.9\pm0.8$ $128.08\pm0.18$ 11.9 0.86 6.4 ------- --------- --------------- ----------------- ------- ------- ------ : The circular Gaussian model fitting results of the detected jet components in the X-ray transient XTE J1752$-$223.[]{data-label="tab2"} \ Gradual jet deceleration ======================== The angular separation of component A versus time is shown in the right panel of Fig. \[fig1\]. We take the position and the time of component A measured at the first epoch as the reference origin. 
We fit these data points to the following proper motion model: $$\label{eq1} r=r_0+\mu_0 t - 0.5\dot\mu t^2$$ where $r$ is the angular separation; $t$ is the observing time; $r_0$ and $\mu_0$ are the angular separation and the proper motion at $t=0$; and $\dot\mu$ is the apparent deceleration rate. The dotted straight line and the solid curve represent the uniform proper motion model ($\dot\mu=0$) and the proper motion model with the deceleration rate ($\dot\mu\neq0$), respectively. The best-fit parameters are listed in Table \[tab3\]. The model of $\dot\mu=0$ gives an average proper motion of $\bar\mu=\mu_0=6.90\pm0.05$ mas day$^{-1}$ with a reduced $\chi^2=118.4$. The degrees of freedom (DoF) are listed in the last column. The model of $\dot\mu\neq0$ gives $\mu_0=9.15\pm0.15$ mas day$^{-1}$ at MJD 55238.4 and a deceleration rate of $\dot\mu=0.34\pm0.02$ mas day$^{-2}$ with a reduced $\chi^2=3.9$. With the deceleration rate, component A has a proper motion of 4.0 mas day$^{-1}$ at the last epoch. It is clear that the deceleration rate should be included in the proper motion model. Jet deceleration was also found in XTE J1550$-$564 using *Chandra* observations [@cor02; @kaa03] and XTE J1748$-$288 with VLA observations [@hje98; @mil08]. Compared with them, the observed deceleration in XTE J1752$-$223 is free from the blending of multiple jet components that can be caused by low resolution [@hje95]. If there were a sequence of ejecta whose flux densities decayed more rapidly with increasing distance from the core, the cluster of components could mimic a decreasing proper motion. In our case, these VLBI observations have a resolution of $<10$ mas and can clearly identify individual ballistic ejecta with a time resolution of less than one day, assuming an initial proper motion of 10 mas day$^{-1}$. XTE J1752$-$223 is the second known case of gradual jet deceleration, although on a much smaller scale ($\sim100$ mas) than the first case of XTE J1550$-$564. 
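As a cross-check of Eq. (\[eq1\]) (a sketch, not the fitting code used for the paper), the separations and errors for component A in Table \[tab2\] can be fitted with a weighted quadratic in a few lines of Python:

```python
import numpy as np

# Days since the first epoch (MJD 55238.4) and angular separations of
# component A with 1-sigma errors, transcribed from Table 2.
t     = np.array([0.0, 7.2, 12.2, 15.2])      # day
r     = np.array([0.0, 57.4, 85.5, 100.6])    # mas
sigma = np.array([0.4, 0.5, 0.6, 0.8])        # mas

# Fit r(t) = r0 + mu0*t - 0.5*mudot*t^2 as a weighted quadratic;
# polyfit returns coefficients from highest degree down.
a, mu0, r0 = np.polyfit(t, r, 2, w=1.0 / sigma)
mudot = -2.0 * a                               # apparent deceleration rate

print(f"mu0   ~ {mu0:.2f} mas/day")            # about 9 mas/day
print(f"mudot ~ {mudot:.2f} mas/day^2")        # about 0.3 mas/day^2
print(f"mu at last epoch ~ {mu0 - mudot * 15.2:.1f} mas/day")  # about 4 mas/day
```

This simple least-squares sketch recovers the quoted $\mu_0=9.15\pm0.15$ mas day$^{-1}$ and $\dot\mu=0.34\pm0.02$ mas day$^{-2}$ to within a few per cent; the exact values and errors depend on the weighting scheme.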
As with the previous two sources, the jet deceleration in XTE J1752$-$223 is most likely due to interaction with the external dense interstellar medium (ISM) or the residual slowly-moving ejecta from the previous ejection along the jet path. ---------------- ---------------- ------------------ ------------------ -------------- ----- Model $r_{_0}$ $\mu_{_0}$ $\dot\mu$ $\chi^2$/DoF DoF (mas) (mas day$^{-1}$) (mas day$^{-2}$) $\dot\mu\neq0$ $0.06\pm0.41 $ $9.15\pm0.15$ $0.34\pm0.02$ 3.9 1 $\dot\mu=0$ $2.16\pm0.39 $ $6.90\pm0.05$ 0 118.4 2 ---------------- ---------------- ------------------ ------------------ -------------- ----- : Best-fit parameters using the proper motion models with and without the deceleration rate. []{data-label="tab3"} Component B: the receding ejecta? ================================= We interpret component A as an approaching jet component since it is the only component detected at the four epochs and our VLBI observations were performed just after the associated radio flare reached its peak flux density. The ATCA observations (Brocksopp et al., in prep.) show that it had a decaying flux density and a fairly stable and steep spectral index: $\alpha=-1$ ($S_\nu\propto\nu^{\alpha}$) between 5.5 and 9 GHz, i.e. there was no indication of another ejection event during our VLBI observations. Thus, the possibility that components A and B detected at the last three epochs are associated with a different ejection event can be ruled out. Besides the proper motion, component A shows a hint of expansion. The evolution of its size is displayed in Fig. \[fig2\]. The first three data points give an expansion speed of $0.9\pm0.1$ mas day$^{-1}$ with reduced $\chi^2=1.1$. The expansion speed is much slower than its proper motion, indicating that its expansion was also significantly confined. 
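The linear expansion fit, and the birth-date estimate from its x-intercept, can be reproduced in the same way (a minimal sketch using the fitted sizes from Table \[tab2\]; an unweighted fit is used here, so the result differs slightly from the quoted weighted values):

```python
import numpy as np

# Fitted sizes of component A at the first three epochs (Table 2).
mjd  = np.array([55238.4, 55245.6, 55250.6])
size = np.array([7.9, 13.8, 19.0])             # mas

# Linear expansion: size(t) = s0 + v_exp * (t - t0), with t0 the first epoch.
v_exp, s0 = np.polyfit(mjd - mjd[0], size, 1)
print(f"expansion speed ~ {v_exp:.1f} mas/day")  # ~0.9 mas/day

# Extrapolating back to zero size gives the earliest possible birth date;
# the text quotes MJD 55229.7 (2010 Feb. 2) from the weighted fit.
birth_mjd = mjd[0] - s0 / v_exp
print(f"earliest birth date ~ MJD {birth_mjd:.1f}")
```

The unweighted sketch lands within a day of the quoted MJD 55229.7, which is all the precision the extrapolation argument requires.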
Because component A shows an increasing size and a decaying peak brightness, its size estimation at a later stage is limited by the image sensitivity and the lack of short baselines. For this reason, the last data point was omitted in the linear fitting. According to the evolution of component A, a jet component is expected to have a more compact structure at an earlier stage. Compared with the size (7.9 mas) of component A measured at the first epoch, component B shows a much larger size (11.9 mas). This indicates that component B is most likely an evolved component, which was ejected on the receding jet side at the same time as component A. According to the expansion speed, component A was ejected 8.7 days before the first epoch, i.e. at MJD $55229.7$ (2010 Feb. 2), which is represented by the x-intercept in Fig. \[fig2\]. Since component A may have had a significantly larger expansion speed and a stronger Doppler boosting effect if it was unhindered at this earlier stage, the inferred birth date should be regarded as the earliest possible one. Although such extrapolation is not guaranteed, the inferred date is at the beginning of the initial rising stage of the associated radio flare in the ATCA radio light curve (Brocksopp et al. in prep.). The average separation speed of the pair of components is $\bar\mu_\mathrm{app}+\bar\mu_\mathrm{rec}\geq20.4$ mas day$^{-1}$ if they were indeed ejected on the inferred birth date. Since $\bar\mu_\mathrm{app}\geq\bar\mu_\mathrm{rec}$, it follows that $\bar\mu_\mathrm{app}\geq10.2$ mas day$^{-1}$, which is significantly higher than the average proper motion (6.9 mas day$^{-1}$) measured during our observations. Thus, component A had already been significantly decelerated before our VLBI observations. If the jet expansion is linear and symmetric on both sides, the ratio of the approaching and receding component sizes is [e.g. 
@mil04]: $$\label{eq2} \frac{R_\mathrm{app}(t_\mathrm{app})}{R_\mathrm{rec}(t_\mathrm{rec})}=\frac{t_\mathrm{app}}{t_\mathrm{rec}}=\frac{1+\bar\beta(0,t_\mathrm{app})\cos\theta}{1-\bar\beta(0,t_\mathrm{rec})\cos\theta}$$ where $t_\mathrm{app}$ and $t_\mathrm{rec}$ are the intrinsic times at which light leaves the approaching and receding jet components respectively, such that it arrives at the telescope at the same observing time, $\bar\beta$ is the average jet speed in units of light speed $c$ and $\theta$ is the inclination angle of the jet axis. By a linear extrapolation, the approaching jet component has a size of 21.3 mas at the fourth epoch. If there is no deceleration, i.e. $\bar\beta(0,t_\mathrm{app}) = \bar\beta(0,t_\mathrm{rec})=\beta$, we obtain $\beta\cos\theta=0.3$, which requires $\beta\geq0.3$ and $\theta\leq73\degr$. Note that the size of the receding jet component detected for the first time is not likely to be affected by over-resolution, since it is much younger than the approaching jet component ($t_\mathrm{rec}\sim0.5t_\mathrm{app}$). Given the jet deceleration and $t_\mathrm{rec}\leq t_\mathrm{app}$ at the same telescope time, it follows that $\bar\beta(0,t_\mathrm{rec})\geq0.3$ and $\bar\beta(0,t_\mathrm{app})\leq0.3$. Therefore, we can take $0.3c$ as the lower limit of the jet birth speed in the case of the jet deceleration. The ratio of the flux densities measured at $R_\mathrm{app}=R_\mathrm{rec}$ ($t_\mathrm{app}=t_\mathrm{rec}$, i.e. free from the intrinsic luminosity evolution) for a pair of discrete jet components is [@mir99]: $$\label{eq3} \frac{S_\mathrm{app}}{S_\mathrm{rec}} = \left( \frac{1+\beta\cos\theta}{1-\beta\cos\theta} \right)^{3-\alpha}$$ The receding jet component had a size of $R_\mathrm{rec}=11.9$ mas at the last epoch. The corresponding observing time for the approaching jet component is at MJD 55242.9 (between the first two epochs). 
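The inversions of Eqs. (\[eq2\]) and (\[eq3\]) are simple enough to verify numerically (a sketch with the sizes quoted above and the flux-density ratio limit discussed below; $\alpha=-1$ and no deceleration are assumed):

```python
import math

# Eq. (2) with no deceleration: R_app/R_rec = (1 + b)/(1 - b), b = beta*cos(theta)
size_ratio = 21.3 / 11.9                 # extrapolated R_app over measured R_rec
b = (size_ratio - 1.0) / (size_ratio + 1.0)
theta_max = math.degrees(math.acos(b))   # beta <= 1 forces theta <= acos(b)
print(f"beta*cos(theta) ~ {b:.2f}")      # ~0.28, rounded to 0.3 in the text
print(f"theta <= {theta_max:.0f} deg")   # close to the 73 deg quoted for b = 0.3

# Eq. (3) with alpha = -1: S_app/S_rec = ((1 + b)/(1 - b))**4.
# Inverting the observational limit S_app/S_rec <= 5:
q = 5.0 ** 0.25
b_max = (q - 1.0) / (q + 1.0)
print(f"flux-ratio limit gives beta*cos(theta) <= {b_max:.2f}")  # ~0.2
```

The two constraints bracket $\beta\cos\theta$ between the early (size-based) value of $\sim0.3$ and the later (flux-based) limit of $\sim0.2$, consistent with deceleration on both jet sides.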
At $t_\mathrm{app}=t_\mathrm{rec}$, the radio core is inferred to be at the centre: angular separation $\sim174$ mas and position angle $\sim-130\degr$. The radio core was not detected during any of the VLBI epochs but, since all four observations took place during the X-ray soft state, this is to be expected, according to the unified model of @fen04 [@fen09]. The radio position confirms that its optical counterpart is *Swift*-UVOT source A [@cur10]. If we take the flux density at the first epoch as the upper limit of $S_\mathrm{app}$, then $\frac{S_\mathrm{app}}{S_\mathrm{rec}}\leq5$ and $\beta\cos\theta\leq0.2 < \bar\beta(0,t_\mathrm{rec})\cos\theta$, in agreement with the jet deceleration scenario on both sides. The observed flux density from the receding jet is deboosted by a factor $\delta_\mathrm{rec}^{3-\alpha}$, where $\delta_\mathrm{rec}=(1-\beta^2)^{0.5}(1+\beta\cos\theta)^{-1}$. Because of the jet deceleration, the receding ejecta is less beamed away from our line of sight, and thus appears relatively brighter. Note that the non-detection of the receding jet component at the first epoch may also be because it was still at an earlier stage ($t_\mathrm{rec}\sim0.5t_\mathrm{app}$ if $\beta\cos\theta=0.3$) and its flux density was still much lower than the peak flux density of the radio flare. The caveat in the above argument is that component B might not have followed the same luminosity evolution model as component A. It is also possible that the brightening of the receding ejecta was due to sudden jet-cloud interaction, as in the case of XTE J1748$-$288 [@hje98; @mil08]. ![Evolution of the size of component A. The solid straight line shows the linear fit to the first three data points. []{data-label="fig2"}](fig2.eps "fig:"){height="3.2cm"}\ Conclusions =========== In this paper, we present the results of the first VLBI observations of the new Galactic black hole candidate XTE J1752$-$223 during its first known outburst. 
With EVN and VLBA observations at four epochs in 2010 February, we imaged its radio counterpart at 5 GHz. We detect an ejected component at the first three epochs and find that its proper motion shows significant deviation from the uniform proper motion model and requires a deceleration rate of $0.34\pm0.02$ mas day$^{-2}$. In the jet deceleration scenario, its proper motion decelerates from 9.2 mas day$^{-1}$ at the first epoch to 4.0 mas day$^{-1}$ at the last epoch. It also shows slow but detectable variation of its transverse size, indicating that its expansion is also significantly confined. This is the first time that a Galactic jet is found to be gradually decelerating on the hundred milliarcsecond scale. The discovery provides strong evidence for the existence of significant interaction around the jet at an early stage of its evolution. In addition to the approaching jet component, we detect another jet feature at the last epoch, which is most likely associated with the receding ejecta. Using the birth date (around 2010 February 2) inferred from the ATCA radio light curve, we conclude that the jet deceleration must have started well before our VLBI observations. Furthermore, we interpret the detection of the receding ejecta as a result of the reduced Doppler deboosting effect caused by the jet deceleration on the receding side and give a lower limit of $0.3c$ for the jet birth speed assuming symmetric jet motion. It has been reported by @sha10 that the distance, estimated by the spectral-timing correlation scaling technique, is around 3.5 kpc. Thus, the proper motion observed at the first epoch would correspond to an apparent jet speed of $\sim0.2c$, in agreement with our results (but note that the technique is very model dependent). Acknowledgments {#acknowledgments .unnumbered} =============== We thank the EVN PC Chair, T. Venturi, and the VLBA Proposal Selection Committee for prompt approval of our ToO requests. 
e-VLBI developments in Europe were supported by the EC DG-INFSO funded Communication Network Developments project EXPReS. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The European VLBI Network is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils. [99]{} Brocksopp C. et al., 2009, Astronomer’s Telegram, 2278 Brocksopp C. et al., 2010, Astronomer’s Telegram, 2400 Brocksopp C. et al., 2010, Astronomer’s Telegram, 2438 Corbel S. et al., 2002, Sci, 298, 196 Corbel S. et al., 2010, Astronomer’s Telegram, 2745 Curran P.A. et al., 2010, MNRAS, arXiv:1007.5430 Fender R.P. et al., 1999, MNRAS, 304, 865 Fender R.P. et al., 2004, MNRAS, 355, 1105 Fender R.P., 2006, in Lewin W.H.G., van der Klis M., eds., Compact Stellar X-ray Sources, Cambridge Univ Press, Cambridge, p. 381 Fender R.P. et al., 2009, MNRAS, 396, 1370 Hjellming R.M., Rupen M.P., 1995, Nat, 375, 464 Hjellming R.M. et al., 1998, BAAS, 30, 1405 Homan J., 2010, Astronomer’s Telegram, 2387 Kaaret P. et al., 2003, ApJ, 582, 945 Markwardt C.B. et al., 2009, Astronomer’s Telegram, 2258 Mirabel I.F., Rodríguez L.F., 1994, Nat, 371, 46 Mirabel I.F., Rodríguez L.F., 1999, ARA&A, 37, 409 Miller-Jones J.C.A. et al., 2004, ApJ, 603, L21 Miller-Jones J.C.A. et al., 2007, MNRAS, 375, 1087 Miller-Jones J.C.A. et al., 2008, in Bandyopadhyay R.M. et al., eds, A Population Explosion: The Nature and Evolution of X-ray Binaries in Diverse Environments, AIP Conf. Proceedings, 1010, 50 Nakahira S. et al., 2010, PASJ, arXiv:1007.0801 Shepherd M.C. et al., 1994, BAAS, 26, 987 Shaposhnikov N. et al., 2010, ApJ, arXiv:1008.0597 Szomoru A., 2008, in Proceedings of the 9th EVN Symposium on The Role of VLBI in the Golden Age for Radio Astronomy and EVN Users Meeting, Proceedings of Science, PoS040 Tingay S.J. et al., 1995, Nat, 374, 141 \[lastpage\]
--- abstract: 'After describing the Schrödinger functional for standard and improved gluon and quark actions, we present results for the non-perturbative clover coefficients of the SW quark action coupled to the Wilson plaquette action for $\beta \geq 5.7$, as well as the Lüscher-Weisz one-loop tadpole improved gauge action, both in the quenched approximation.' address: 'SCRI, Florida State University, Tallahassee, FL 32306-4130, USA' author: - 'R.G. Edwards${}^{\rm a}$, U.M. Heller${}^{\rm a}$ and T.R. Klassen' title: | FSU-SCRI-97-117, September 1997\ The Schrödinger Functional and Non-Perturbative Improvement[^1][^2] --- Introduction {#sec:intro} ============ The high cost of lattice QCD simulations has revitalized interest in the (on-shell) improvement program. Within the Symanzik [@Sym] approach, which we will follow, the use of one-loop (or even classical) and tadpole [@TI] improved gauge actions has led to much smaller scaling violations on coarse lattices than for the standard Wilson plaquette action. Numerous studies of the static potential, thermodynamics, heavy quarks in either relativistic or non-relativistic frameworks, and glueballs (the latter on anisotropic lattices) have demonstrated this. References can be found in the LATTICE proceedings of the last few years. The improvement of quark actions is much harder. For Wilson-type quark actions, which we will consider here, this is ultimately due to the doubler problem. At least at the quantum level one incurs $\Ord(a)$ violations of chiral symmetry, which have turned out to be quite large.
A great step forward was recently taken by the ALPHA collaboration [@ALPHA], which used the Schrödinger functional and the demand that the PCAC relation hold at small quark masses to eliminate [*all*]{} on-shell $\Ord(a)$ errors for Sheikholeslami-Wohlert (SW) [@SW] quarks coupled to the Wilson gauge action. Various renormalization constants of axial and vector currents were also calculated non-perturbatively. The success of improved gauge actions on coarse lattices has motivated us to consider the non-perturbative $\Ord(a)$ improvement of quark actions coupled to improved gauge actions. In the process we have also reconsidered the case of the SW action coupled to the Wilson gauge action and extended the determination of the $\Ord(a)$ coefficient to coarser lattices than in [@ALPHA]. Although we are also in the process of determining the improvement coefficients of various currents, we will here concentrate on the $\Ord(a)$ improvement coefficient of the [*action*]{}. Details of the general theoretical setup and our motivation can be found in [@TKSF]; our results will be described in detail in future publications [@EHK]. $\Ord(a)$ and $\Ord(a^2)$ Improvement ===================================== For Wilson-type quark actions we have to introduce second order derivative and clover terms to eliminate doublers without introducing classical $\Ord(a)$ errors. On the quantum level, on an isotropic lattice, these two terms are still the only ones that exist at $\Ord(a)$. We write them as $a r\, (\sum_\mu \De_\mu + \half \om \sigF)$. One of the coefficients $r$, $\om$ can be adjusted at will by a field transformation. It is convenient to fix the Wilson parameter $r$; to eliminate all $\Ord(a)$ violations of chiral symmetry we then have to tune the clover coefficient $\om$ as a function of the gauge coupling.
Note that the $\Ord(a)$ terms in the action break chiral but not rotational symmetry (at this order), whereas the leading $\Ord(a^2)$ errors, that already exist at the classical level, show the opposite behavior; they break rotational but not chiral symmetry. For this reason the $\Ord(a)$ and leading $\Ord(a^2)$ terms can essentially be tuned independently (cf. [@AKL_LAT97]). By the same token, one can argue that one indeed [*should*]{} tune the $\Ord(a)$ and (leading) $\Ord(a^2)$ terms to eliminate the violations of both chiral and rotational symmetry. Eliminating the leading $\Ord(a^2)$ errors in a quark action leads to the D234 actions [@D234]. As for gauge actions, using classical and tadpole improvement at $\Ord(a^2)$ seems to almost completely eliminate the violation of rotational symmetry [@D234]. So far we have discussed isotropic lattices. Anisotropic lattices, with a smaller temporal than spatial lattice spacing, are of great interest for studies of heavy particles (glueballs, heavy quarks, hybrids) and thermodynamics. Improvement is more complicated for actions on such lattices. After considering the most general field redefinitions up to $\Ord(a)$, one sees [@TKSF] that two more parameters have to be tuned for on-shell improvement of a quark action up to $\Ord(a)$. One already appears at $\Ord(a^0)$, namely, a “bare velocity of light” that has to be tuned to restore space-time exchange symmetry (by, say, demanding that the pion have a relativistic dispersion relation for small masses and momenta). The other is at $\Ord(a)$; the two terms that have to be tuned at this order can be chosen to be the temporal and spatial parts of the clover term. Although the general methods sketched here should eventually be useful also for the anisotropic case, we will in the following restrict ourselves to isotropic lattices. Chiral Symmetry Restoration =========================== Consider QCD with (at least) two flavors of mass-degenerate quarks. 
The idea [@ALPHA] for determining the clover coefficient is that chiral symmetry will hold only if its Ward identity is satisfied as a [*local operator equation*]{}. In Euclidean space this means that the PCAC relation between the iso-vector axial current and the pseudo-scalar density, $$\label{PCAC}
\langle\, \partial_\mu A_\mu^b(x)\; \O \,\rangle \;=\; 2m\, \langle\, P^b(x)\; \O \,\rangle\,,$$ should hold for all operators $\O$, global boundary conditions, $x$ (as long as $x$ is not in the support of $\O$), etc. More precisely, it should hold [*with the same mass*]{} $m$ up to $\Ord(a^2)$ errors (which are quantum errors for (classically and tadpole) $a^2$ improved actions). This will only be the case for the correct value of the clover coefficient $\om$. Several issues have to be addressed before this idea can be implemented in practice. First of all, even though here we can ignore the multiplicative renormalization of $A_\mu^b$ and $P^b$, there is an additive correction to $A_\mu^b$ at $\Ord(a)$, $$\label{AP}
P^b(x) \;\equiv\; \psib(x)\, \ga_5\, \half\tau^b\, \psi(x)\,, \qquad
A^b_\mu(x) \;\equiv\; \psib(x)\, \ga_\mu \ga_5\, \half\tau^b\, \psi(x) \;+\; a\, c_A\, \partial_\mu P^b(x)\,.$$ The determination of $\om$ is therefore tied in with that of $c_A$. We will see later how to handle this. Note that $\om$ and $c_A$ have an $\Ord(a)$ ambiguity (at least at the quantum level); different improvement conditions will give somewhat different values for $\om$ and $c_A$. Instead of assigning an error to $\om$ and $c_A$ one should choose a specific, “reasonable” improvement condition — the associated $\Ord(a^2)$ errors in observables are guaranteed to extrapolate away in the continuum limit. For various conceptual reasons it is preferable to impose the PCAC relation at zero quark mass. Due to zero modes this is not possible with periodic boundary conditions (BCs); the quark propagator would diverge. Another reason to abandon periodic BCs is that to be sensitive to the value of $\om$ it would be highly advantageous to have a background field present; it couples directly to the clover term.
The Schrödinger functional provides a natural setting to implement these goals. The Schrödinger Functional ========================== The phrase “Schrödinger Functional” (SF) refers to quantum field theory with Dirichlet, i.e. fixed, BCs [@LNWW]. In the following we will always use periodic BCs in space (extent $L$) and fixed BCs in time (extent $T$). For finite $T$ the Dirac operator has a gap of order $1/T$ even for vanishing quark mass, at least at weak coupling. Furthermore, by choosing different BCs at “opposite ends of the universe” one induces a chromo-electric classical background field. In implementing the SF on the lattice the main point is to understand exactly how to impose fixed BCs on the gauge and quark fields. In particular, we must be able to do so for improved actions. For details we refer to [@TKSF]; here we just mention some salient features: 1\.  The main difference between the Wilson and $\Ord(a^2)$ improved gauge actions is that for the latter the “boundary” consists of a double layer of time slices. To avoid boundary errors larger than those of the bulk action, one must, already at the classical level, assign loops at the boundary special “temporal weight factors” that depend on the temporal extent of the loop (cf. figs. 1 and 2). The classical values of the weight factors are easy to understand from elementary calculus formulas (e.g. the trapezoidal rule explains the factors of $\half$ in the Wilson case of fig. 1). When using the SF as a tool to tune coefficients in a local action (or current), it is fortunately not necessary to know the exact quantum values of the boundary coefficients: the local Ward identities have to hold independent of global effects at the boundary. 2\.  If the boundary values of the gauge field at the top and bottom of the universe commute, the following is a solution of the lattice field equations for [*any*]{} gauge action: $\bU_0(x) = 1$, and $$\bU_k(x) \;=\; \exp\Big( \frac{a}{T}\,\big[\, x_0\, C_k' \,+\, (T-x_0)\, C_k \,\big] \Big)\,.$$
(The boundary values of the gauge field can be read off from the above by evaluating it at $x_0=0$ and $T$ for Wilson, respectively, $x_0=0,a$ and $T,T+a$ in the improved case.) The question is if the above background field $\bU$ is the [*unique*]{} (up to gauge equivalence) absolute minimum of the classical action for given boundary values. Uniqueness is important, e.g. for perturbative calculations. A theorem establishing uniqueness holds in the Wilson case [@LNWW], if the $C_k$, $C_k'$ parameterizing the boundary values satisfy certain conditions. In the improved case it has been checked using simulated annealing that uniqueness holds under the same conditions [@TKSF]. 3\.  To impose consistent fixed boundary conditions for the fermion fields, it is sufficient to consider the projector structure of the field equations (more precisely, it is only the projector structure in the [*time*]{} direction that matters). For an action with the same projector structure as the standard Wilson quark action, one has to specify $P^+ \psi(x)$, $\psib(x)P^-$ at the (inner) lower boundary in figs. 1 and 2, and $P^- \psi(x)$, $\psib(x)P^+$ at the (inner) upper boundary. Here $P^\pm \equiv \half (1\pm \ga_0)$. For an improved quark action with the appropriate projector structure [@TKSF] one has to specify the same components on both the inner and outer boundary layers in fig. 2. 4\.  One of the very useful ideas in applications of the SF is that of [*quark boundary fields*]{} [@ALPHA]. They are defined as functional derivatives, within the path integral, with respect to the boundary values specified in the previous paragraph (which are then set to zero). For the improved case one can actually define two sets of quark boundary fields. It turns out that if one defines them with respect to the [*outer*]{} boundary values, then most formulas relevant for our application of the SF are identical for improved and standard actions. 
The boundary fields corresponding to the above boundary values will be denoted as $\bar{\zeta}(\x), \zeta(\x), \bar{\zeta}'(\x), \zeta'(\x)$; the first pair being the [*lower*]{}, the second the [*upper*]{} boundary fields. Details of Non-Perturbative Tuning ================================== With $\O^b =a^6 \sum_{\y\z} \bar\zeta(\y)\, \gamma_5 \frac{1}{2}\tau^b\zeta(\z)$, in terms of the lower boundary fields, and $$f_X(x_0) \;\equiv\; -\frac{1}{V} \sum_{b,\x}\, \langle\, X^b(x)\; \O^b \,\rangle\,, \qquad X^b = A^b_0,\, P^b\,,$$ the PCAC relation becomes $$m(x_0) \;\equiv\; r(x_0) + a\, c_{A}\, s(x_0) \;=\; \widetilde{m} + \Ord(a^2)\,,$$ where $\widetilde{m}$ differs from $m$ by irrelevant multiplicative renormalization factors, and $$r(x_0) \;\equiv\; \frac{\nabla_0 f_A(x_0)}{2 f_P(x_0)}\,, \qquad s(x_0) \;\equiv\; \frac{\De_0 f_P(x_0)}{2 f_P(x_0)}\,.$$ Here $\nabla_0$ and $\De_0$ are standard first and second order lattice derivatives, respectively (in the improved case one actually has to use an improved first order derivative to be consistent). Similarly, $f'_X, m', r', s'$ are defined in terms of the upper boundary fields $\zeta', \bar\zeta'$. From $\,m(y_0) \stackrel{!}{=} m'(z_0)\,$ one obtains an estimator of $c_A$: $$\hat{c}_A(y_0,z_0) \;\equiv\; -\frac{1}{a}\; \frac{r(y_0)- r'(z_0)}{s(y_0) - s'(z_0)}\,.$$ In terms of a suitable $\hat{c}_A$ we now have two different estimates of the current quark mass: $$\label{MMprime}
M(x_0) \;\equiv\; r(x_0) + a\, \hat{c}_A\, s(x_0)\,, \qquad
M'(x_0) \;\equiv\; r'(x_0) + a\, \hat{c}_A\, s'(x_0)\,.$$ Their equality in the presence of a suitable background field [@ALPHA; @EHK] will be our improvement condition for $\om$. More precisely, we demand $$\Delta M(x_0) \;\equiv\; M(x_0) - M'(x_0) \;\stackrel{!}{=}\; \Delta M^{(0)}(x_0)$$ for some well-chosen $x_0$; here the superscript $(0)$ denotes the higher order (and small) tree-level value of the quantity in question. In practice one measures the required correlators in a simulation for several trial values of $\om$, and interpolates to find the zero crossing of $\Delta M - \Delta M^{(0)}$. This determines the non-perturbative value of $\om$.
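The tuning procedure just described is easy to mechanize. The following sketch is our own illustration (the numbers fed in are made-up placeholders, not simulation data): it implements the estimator $\hat{c}_A$, the mass difference $\Delta M$, and the linear interpolation to the zero crossing:

```python
def c_A_hat(r, rp, s, sp, a=1.0):
    """Estimator c_A = -(1/a) * (r - r') / (s - s') from the relation above."""
    return -(r - rp) / (s - sp) / a

def delta_M(r, rp, s, sp, cA, a=1.0):
    """Difference of the two current-mass estimates,
    Delta M = (r + a cA s) - (r' + a cA s')."""
    return (r + a * cA * s) - (rp + a * cA * sp)

def zero_crossing(omegas, dMs):
    """Linearly interpolate the zero crossing of Delta M(omega), using the
    observed (approximate) linearity of Delta M in the clover coefficient."""
    for (w1, d1), (w2, d2) in zip(zip(omegas, dMs), zip(omegas[1:], dMs[1:])):
        if d1 * d2 <= 0:
            return w1 + d1 * (w2 - w1) / (d1 - d2)
    raise ValueError("no sign change among trial values")

# made-up trial values of the clover coefficient and measured Delta M:
print(zero_crossing([1.5, 1.7], [0.02, -0.01]))  # ~1.633
```

Note that, by construction, $\Delta M$ evaluated with $\hat{c}_A$ obtained from the same pair of points vanishes identically; in practice one uses independent $(y_0, z_0)$ and $x_0$.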
Results ======= The results we describe in the following all refer to the SW action on either Wilson or one-loop tadpole improved glue [@LW; @Alf] (which we will refer to as “LW glue”). We will always work in the quenched approximation on isotropic lattices. Below we use some $\hat{c}_A(y_0,y_0)$ as the estimator of $c_A$ defined in the previous section. We then denote $\Delta M(x_0)$ by $\Delta M(x_0,y_0)$. We used lattices $T\cdot L^3=15\cdot 8^3$ or $12\cdot 6^3$ for Wilson glue, and $14\cdot 8^3$ for LW glue. After some study we decided to use $\Delta M(12,4)$ and $\Delta M(9,3)$, respectively, in the improvement condition for $\om$ in the Wilson case. In the improved case we chose $\Delta M(11,5)$. Typically we generated $1000-2500$ configurations for each gauge coupling considered. Even though the SF alleviates problems due to zero modes, it turns out that on coarse lattices fluctuations still lead to accidental zero modes at vanishing quark mass (“exceptional configurations”). Fortunately, it turns out that the mass dependence of the non-perturbative $\om$ is so weak that one can safely determine it at larger mass values. In this manner we have extended the non-perturbative clover coefficient obtained by the ALPHA collaboration for $\beta\geq 6.0$ to $\beta \geq 5.7$. In fig. 3 we illustrate the weak mass (and volume) dependence of $\om$ for $\beta = 5.7$ Wilson glue. We have also checked that the mass dependence is weak for $\beta = 5.85$ and $6.0$. The same can be seen for LW glue in fig. 4, which furthermore demonstrates the linearity of $\Delta M$ as a function of $\om$ (all results shown in fig. 4 were calculated using the same gauge configurations). For future use of improved quark actions it is advisable to present the non-perturbative clover coefficient as a definite function of the bare gauge coupling.
Combining our Wilson results for $\beta=5.7, 5.85$ and $6.0$ with those of the ALPHA collaboration and one-loop perturbation theory we obtain the parameterization ($g^2 = 6/\beta$) $$\label{cswWil}
\om(g^2) \;=\; \frac{1-0.6050\, g^2 -0.1684\, g^4}{1 -0.8709\, g^2}\,, \qquad g^2 < 1.06\,.$$ We were able to accommodate our $\beta=5.7, 5.85$ clover coefficients and the value for $\beta=6.2$ from [@ALPHA] in our curve only in a slightly unsatisfactory manner (extending the Padé in either the numerator or denominator does not help). This issue is under investigation. In the interim this curve should be regarded as preliminary. For the case of LW glue our current data are parameterized well by $$\label{cswLW}
\om_{\rm LW}(g^2) \;=\; \frac{1-0.3590\, g^2}{1 -0.4784\, g^2}\,, \qquad g^2 < 1.55\,.$$ (The one-loop coefficient is presently not known analytically in this case.) The relation between the coefficient of the plaquette term in the LW action, $\beta_{\rm LW}$, and the bare coupling $g^2$ is, to one loop [@LW], $g^2 = 10/(\beta_{\rm LW} - 1.422)$. For larger couplings than those covered by this parameterization it seems that the non-perturbative $\om_{\rm LW}$ determined with the SF rises dramatically. This is currently under investigation. It is interesting to compare the non-perturbative clover coefficients obtained for Wilson and LW glue at the same physical scale. The string tension has been measured for both of these actions, so we will use it to set the scale. We would like a [*curve*]{} parameterizing the string tension $\si$ as a function of the coupling. It is known that the two (or three) loop running of the coupling does [*not*]{} properly describe $\si$’s lattice spacing dependence (nor that of other observables) for the couplings we are interested in. However, as pointed out in [@Allton], this is not to be expected, since $\si$ has discretization errors, $g^{2n} a^2, a^4, \ldots$ ($n=0$ for Wilson glue, $n=2$ for LW glue), that should be taken into account.
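For convenience, the two Padé parameterizations and the one-loop $\beta_{\rm LW}$–$g^2$ relation quoted above can be packaged as follows (a sketch; the function names are ours):

```python
def csw_wilson(g2):
    """Non-perturbative clover coefficient for Wilson glue (preliminary
    fit quoted in the text); valid for g^2 < 1.06."""
    assert g2 < 1.06
    return (1 - 0.6050 * g2 - 0.1684 * g2**2) / (1 - 0.8709 * g2)

def csw_lw(g2):
    """Clover coefficient for one-loop tadpole-improved LW glue;
    valid for g^2 < 1.55."""
    assert g2 < 1.55
    return (1 - 0.3590 * g2) / (1 - 0.4784 * g2)

def g2_from_beta_lw(beta_lw):
    """One-loop relation between the LW plaquette coefficient and g^2."""
    return 10.0 / (beta_lw - 1.422)

print(csw_wilson(6.0 / 6.0))   # g^2 = 1 (beta = 6.0), ~1.755
```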
We therefore try to parameterize $(a \sqrt{\si})(g)$ as $$(a \sqrt{\si})(g) \;=\; f(g^2)\, \Big( 1 + c_2\, g^{2n}\, \hat{a}(g)^2 + c_4\, \hat{a}(g)^4 \Big)\,,$$ in terms of the three fit parameters $\sqrt{\si}/\La$, $c_2$, $c_4$. Here $\hat{a}(g)\equiv f(g^2)/f(1)$, in terms of the universal two-loop function $$f(g^2) \;=\; (b_0 g^2)^{-b_1/(2 b_0^2)}\, \exp\Big( -\frac{1}{2 b_0\, g^2} \Big)\,.$$ This works very well (for details see [@EHK]), and the clover coefficients are presented as a function of the lattice spacing in fig. 5. It is interesting to observe that for $a > 0.05$ fm both the tadpole and the non-perturbative $\om$’s are pretty much linear in $a$, at least up to about $0.15$ fm. Furthermore, the [*differences*]{} between the non-perturbative and the tadpole values are essentially linear down to the smallest couplings considered for both cases (of course, for sufficiently small couplings the differences should be of order $g^2$). Conclusions and Outlook ======================= We have shown that the Schrödinger functional and the non-perturbative elimination of $\Ord(a)$ errors in a Wilson-type quark action can be successfully extended to improved (gauge) actions. By establishing that the non-perturbative clover coefficient has a very weak mass dependence we were able to determine it for lattice spacings significantly above $0.1$ fm, for both Wilson and improved gauge actions. We are currently investigating different definitions of the axial current improvement coefficient $c_A$. This is necessary if one wants to determine it on coarse lattices, where some definitions lead to a $c_A$ of rapidly increasing magnitude (at least when using Wilson glue). Once the $c_A$ determination is completed we plan to determine the other current normalization and improvement coefficients [@ALPHA]. There are many other situations in which the non-perturbative elimination of the $\Ord(a)$ violations of chiral symmetry is important, the most obvious examples being full QCD and D234 quarks on anisotropic lattices (for the study of heavy quarks).
We hope that the ultimate outcome of this and future studies will be the ability to perform accurate continuum extrapolations from much coarser, and therefore cheaper, lattices than hitherto possible. [9]{} K. Symanzik, Nucl. Phys. B226 (1983) 187. G.P. Lepage and P.B. Mackenzie, Phys. Rev. D48 (1993) 2250. M. Lüscher et al, Nucl. Phys. B491 (1997) 323, 344. B. Sheikholeslami and R. Wohlert, Nucl. Phys. B259 (1985) 572. T.R. Klassen, [hep-lat/9705025]{}, Nucl. Phys. B, in press. R.G. Edwards, U.M. Heller and T.R. Klassen, in preparation. M. Alford, T.R. Klassen and G.P. Lepage, these proceedings. M. Alford, T.R. Klassen and G.P. Lepage, Nucl. Phys. B496 (1997) 377; Nucl. Phys. B (Proc. Suppl.) 47 (1996) 370; 53 (1997) 861. M. Lüscher, R. Narayanan, P. Weisz, and U. Wolff, Nucl. Phys. B384 (1992) 168. M. Lüscher and P. Weisz, Comm. Math. Phys. 97 (1985) 59; Phys. Lett. 158B (1985) 250. M. Alford et al, Phys. Lett. B361 (1995) 87. C. Allton, [hep-lat/9610016]{}. [^1]: Based on talks by R.G.E. and T.R.K. [^2]: Work supported by DOE grants DE-FG05-85ER250000 and DE-FG05-96ER40979.
--- abstract: 'DP-coloring is a generalization of list coloring, which was introduced by Dvořák and Postle \[J. Combin. Theory Ser. B 129 (2018) 38–54\]. Zhang \[Inform. Process. Lett. 113 (9) (2013) 354–356\] showed that every planar graph with neither adjacent triangles nor 5-, 6-, 9-cycles is 3-choosable. Liu showed that every planar graph without 4-, 5-, 6- and 9-cycles is DP-3-colorable. In this paper, we show that every planar graph with neither adjacent triangles nor 5-, 6-, 9-cycles is DP-3-colorable, which generalizes these results. Yu gave three Bordeaux-type results by showing that (i) every planar graph with the distance of triangles at least three and no 4-, 5-cycles is DP-3-colorable; (ii) every planar graph with the distance of triangles at least two and no 4-, 5-, 6-cycles is DP-3-colorable; (iii) every planar graph with the distance of triangles at least two and no 5-, 6-, 7-cycles is DP-3-colorable. We also give two Bordeaux-type results in the last section: (i) every plane graph with neither 5-, 6-, 8-cycles nor triangles at distance less than two is DP-3-colorable; (ii) every plane graph with neither 4-, 5-, 7-cycles nor triangles at distance less than two is DP-3-colorable.' author: - | Mengjiao Rao Tao Wang[^1]\ [Institute of Applied Mathematics]{}\ [Henan University, Kaifeng, 475004, P. R. China]{} title: 'DP-3-coloring of planar graphs without certain cycles' --- Introduction ============ All graphs considered in this paper are finite, simple and undirected. A planar graph is a graph that can be embedded into the plane so that its edges meet only at their ends. A plane graph is a particular embedding of a planar graph into the plane. We set a plane graph $G = (V, E, F)$ where $V, E, F$ are the sets of vertices, edges, and faces of $G$, respectively. A vertex $v$ and a face $f$ are [**incident**]{} if $v \in V(f)$. Two faces are [**normally adjacent**]{} if the intersection of these two faces is an edge. 
Call $v\in V(G)$ a $k$-vertex, or a $k^{+}$-vertex, or a $k^{-}$-vertex if its degree is equal to $k$, or at least $k$, or at most $k$, respectively. The notions of a $k$-face, a $k^{+}$-face and a $k^{-}$-face are similarly defined. A proper $k$-coloring of a graph $G$ is a mapping $f$: $V(G) \longrightarrow[k]$ such that $f(u)\neq f(v)$ whenever $uv\in E(G)$, where $[k]=\{1, 2, \dots, k\}$. The smallest integer $k$ such that $G$ has a proper $k$-coloring is called the [**chromatic number**]{} of $G$, denoted by $\chi(G)$. Vizing [@MR0498216], and independently Erdős, Rubin, and Taylor [@MR593902] introduced list coloring as a generalization of proper coloring. A [**list assignment**]{} $L$ gives each vertex $v$ a list of available colors $L(v)$. A graph $G$ is [**$L$-colorable**]{} if there is a proper coloring $\phi$ of $G$ such that $\phi(v) \in L(v)$ for each $v\in V(G)$. A graph $G$ is $k$-choosable if $G$ is $L$-colorable for each $L$ with $|L(v)| \geq k$. The minimum integer $k$ such that $G$ is $k$-choosable is called the [**list-chromatic number**]{} $\chi_{\ell}(G)$. For ordinary coloring, since every vertex has the same color set $[k]$, the operation of vertex identification is allowed. For list coloring, since the vertices may have different lists, it is impossible to identify vertices. To overcome this difficulty, Dvořák and Postle [@MR3758240] introduced DP-coloring under the name “correspondence coloring”, showing that every planar graph without cycles of lengths 4 to 8 is 3-choosable. \[DEF1\] Let $G$ be a simple graph and $L$ be a list-assignment for $G$. For each vertex $v \in V(G)$, let $L_{v} = \{v\} \times L(v)$; for each edge $uv \in E(G)$, let $\mathscr{M}_{uv}$ be a matching between the sets $L_{u}$ and $L_{v}$, and let $\mathscr{M} = \bigcup_{uv \in E(G)}\mathscr{M}_{uv}$, called the [**matching assignment**]{}. The matching assignment is called [**$k$-matching assignment**]{} if $L(v) = [k]$ for each $v \in V(G)$. 
A [**cover**]{} of $G$ is a graph $H_{L, \mathscr{M}}$ (simply write $H$) satisfying the following two conditions: 1. the vertex set of $H$ is the disjoint union of $L_{v}$ for all $v \in V(G)$; 2. the edge set of $H$ is the matching assignment $\mathscr{M}$. Note that the matching $\mathscr{M}_{uv}$ is not required to be a perfect matching between the sets $L_{u}$ and $L_{v}$, and possibly it is empty. The induced subgraph $H[L_{v}]$ is an independent set for each vertex $v \in V(G)$. Let $G$ be a simple graph and $H$ be a cover of $G$. An [**$\mathscr{M}$-coloring**]{} of $H$ is an independent set $\mathcal{I}$ in $H$ such that $|\mathcal{I} \cap L_{v}| = 1$ for each vertex $v \in V(G)$. The graph $G$ is [**DP-$k$-colorable**]{} if for any list assignment $|L(v)| \geq k $ and any matching assignment $\mathscr{M}$, it has an $\mathscr{M}$-coloring. The [**DP-chromatic number $\chi_{\mathrm{DP}}(G)$**]{} of $G$ is the least integer $k$ such that $G$ is DP-$k$-colorable. We mainly concentrate on DP-coloring of planar graph in this paper. Dvořák and Postle [@MR3758240] noticed that $\chi_{\mathrm{DP}}(G)\leq5$ if $G$ is a planar graph, and $\chi_{\mathrm{DP}}(G)\leq3$ if $G$ is a planar graph with girth at least five. Liu [@MR3886261] gave some sufficient conditions for a planar graph to be DP-$3$-colorable which extends the $3$-choosability of such graphs. A planar graph is DP-$3$-colorable if it satisfies one of the following conditions: 1. it contains no $\{3, 6, 7, 8\}$-cycles. 2. it contains no $\{3, 5, 6\}$-cycles. 3. it contains no $\{4, 5, 6, 9\}$-cycles. 4. it contains no $\{4, 5, 7, 9\}$-cycles. 5. the distance of triangles is at least two and it contains no $\{5, 6, 7\}$-cycles. If $a$ and $b$ are distinct values from $\{6, 7, 8\}$, then every planar graph without cycles of lengths $\{4, a, b, 9\}$ is DP-$3$-colorable. 
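To make the cover and $\mathscr{M}$-coloring definitions concrete, here is a toy example of our own (not taken from the cited papers): on $C_4$ with $2$-lists, identity matchings simply encode ordinary proper $2$-coloring, while "twisting" the matching on a single edge destroys every $\mathscr{M}$-coloring. This is the standard reason why even cycles, although $2$-choosable, are not DP-$2$-colorable.

```python
from itertools import product

def has_M_coloring(n, matchings, k=2):
    """Brute-force test for an M-coloring of the n-cycle with lists [k].
    matchings[i] is the set of matched colour pairs (c_i, c_{i+1}) on the
    edge v_i v_{i+1 mod n}; an independent transversal of the cover must
    avoid every matched pair."""
    for colours in product(range(k), repeat=n):
        if all((colours[i], colours[(i + 1) % n]) not in matchings[i]
               for i in range(n)):
            return True
    return False

identity = {(c, c) for c in range(2)}      # (u, c) matched to (v, c)
swap     = {(c, 1 - c) for c in range(2)}  # (u, c) matched to (v, 1 - c)

# identity matchings on C_4 = ordinary proper 2-colouring: colourable
print(has_M_coloring(4, [identity] * 4))           # True
# twisting a single edge creates an odd-cycle-like obstruction
print(has_M_coloring(4, [identity] * 3 + [swap]))  # False
```

The twisted instance fails because three identity edges force the colours to alternate, while the twisted edge demands equality, a parity contradiction on a $4$-cycle.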
Every planar graph without 3-, 7- and 8-cycles is DP-3-colorable. Zhang and Wu [@MR2159447] showed that every planar graph without 4-, 5-, 6- and 9-cycles is 3-choosable. Zhang [@MR3033651] generalized this result by showing that every planar graph with neither adjacent triangles nor 5-, 6- and 9-cycles is 3-choosable. Liu [@MR3886261] showed that every planar graph without 4-, 5-, 6- and 9-cycles is DP-3-colorable. In this paper, we first extend these results by showing the following theorem. [theorem]{}[MRESULTa]{}\[MRESULTa\] Every plane graph with neither adjacent triangles nor 5-, 6- and 9-cycles is DP-$3$-colorable. The [**distance of two triangles**]{} $T$ and $T'$ is defined as the value $\min\{\dist(x, y) : x \in T \mbox{ and } y \in T'\}$, where $\dist(x, y)$ is the distance of the two vertices $x$ and $y$. In general, we use $\dist^{\triangledown}$ to denote the minimum distance of two triangles in a graph. Yin and Yu [@MR3954054] gave the following Bordeaux condition for planar graphs to be DP-3-colorable. \[YY\] A planar graph is DP-$3$-colorable if it satisfies one of the following two conditions: 1. the distance of triangles is at least three and it contains no $\{4, 5\}$-cycles. 2. the distance of triangles is at least two and it contains no $\{4, 5, 6\}$-cycles. Theorem \[YY\] implies the following new results on 3-choosability. A planar graph is 3-choosable if it satisfies one of the following conditions: 1. the distance of triangles is at least three and it contains no $\{4, 5\}$-cycles. 2. the distance of triangles is at least two and it contains no $\{4, 5, 6\}$-cycles.
[|c|c|c|c|c|c|c|c|c|r|r|]{} $\dist^{\triangledown}$&$3$&$4$&$5$&$6$&$7$&$8$&$9$&$10$&list-3-coloring&DP-3-colorable\ &&& & & & & & &Thomassen, 1995 [@MR1328294]&Dvořák, Postle, 2018 [@MR3758240]\ && &&& & & & &Lam, Shiu, Song, 2005 [@MR2137572]&Liu 2019 [@MR3886261]\ && & &&& & & &Dvořák 2010 [@MR2680225]&\ && & &&& && &Zhang, Xu, 2004 [@MR2038769]&\ && & &&&& & &Lidický, 2009 [@MR2527000]&Liu 2019 [@MR3886261]\ && & & &&& & &Dvořák 2009 [@MR2552620]&Li, Li, 2019+ [@Li2019b]\ & &&& & &&&&Dvořák, Postle, 2018 [@MR3758240]&\ & &&&& & && &Zhang, Wu, 2005 [@MR2159447]&Liu 2019 [@MR3886261]\ & &&& && && &Zhang, Wu, 2004 [@MR2098488]&Liu 2019 [@MR3886261]\ & &&& && & &&Zhang, 2012 [@MR2976370]&\ & &&& & &&& &Wang, Lu, Chen, 2010 [@MR2558977]&\ & && &&& && &Wang, Lu, Chen, 2008 [@MR2381380]&Liu 2019 [@MR3969021]\ & && && &&& &Shen, Wang, 2007 [@MR2349483]&Liu 2019 [@MR3969021]\ & && & &&&& &Wang, Wu, Shen, 2011 [@MR2746963]&Liu 2019 [@MR3969021]\ & &&& & && &&Wang, Wu, 2011 [@MR2856858]&\ $\geq 3$& &&& & & & & &derived from [@MR3954054]&Yin, Yu, 2019 [@MR3954054]\ $\geq 2$& &&&& & & & &derived from [@MR3954054]&Yin, Yu, 2019 [@MR3954054]\ $\geq 2$& & &&&& & & &Li, Chen, Wang, 2016 [@MR3558005]&Liu 2019 [@MR3886261]\ $\geq 2$& & &&& && & &Zhang, Sun, 2008 [@MR2428692]&this paper\ $\geq 3$& & &&& & & &&Zhang, 2016 [@MR3492654]&\ $\geq 3$& && & && && &Li, Wang, 2016 [@Li2016]&\ $\geq 2$& &&& && & & &Han, 2009 [@Han2009]&this paper\ The following are two Bordeaux-type results on $3$-choosability. \[ZHS\] Every plane graph with neither 5-, 6-, 8-cycles nor triangles at distance less than two is 3-choosable. \[Han\] Every plane graph with neither 4-, 5-, 7-cycles nor triangles at distance less than two is 3-choosable. In the last section, we give two Bordeaux-type results on DP-3-coloring; the first one improves Theorem \[ZHS\] and the second one improves Theorem \[Han\].
[theorem]{}[MRESULTb]{}\[MRESULTb\] Every plane graph with neither 5-, 6-, 8-cycles nor triangles at distance less than two is DP-3-colorable. \[NO457\] Every plane graph with neither 4-, 5-, 7-cycles nor triangles at distance less than two is DP-3-colorable. It is observed that every $k$-degenerate graph is DP-$(k + 1)$-colorable, so Theorem \[NO457\] can be derived from the following Theorem \[2D\]. [theorem]{}[MRESULTc]{}\[2D\] Every plane graph with neither 4-, 5-, 7-cycles nor triangles at distance less than two is 2-degenerate. For more results on DP-coloring of planar graphs, we refer the reader to [@MR3983123; @Lu2019; @MR3881665; @MR3802151; @Li2019a; @MR3996735]. If $uv$ is incident with a $7^{+}$-face and a $4^{-}$-face, then we say $uv$ [**controls**]{} the $4^{-}$-face. Similarly, if $uv$ is on a $7^{+}$-cycle and a $4^{-}$-cycle, then we say $uv$ [**controls**]{} the $4^{-}$-cycle. A vertex $v$ on a $7^{+}$-face $f$ is [**rich**]{} to $f$ if none of the two incident edges on $f$ controls a $4^{-}$-face, [**semi-rich**]{} if exactly one of the two incident edges on $f$ controls a $4^{-}$-face, and [**poor**]{} if they control two $4^{-}$-faces. A $3$-vertex $v$ is [**weak**]{} if $v$ is incident with a 3-face, [**semi-weak**]{} if $v$ is incident with a 4-face, and [**strong**]{} if $v$ is incident with no $4^-$-face. For a face $f \in F$, if all the vertices on $f$ in a cyclic order are $v_{1}$, $v_{2}$, $\dots$, $v_{k}$, then we write $f=v_{1}v_{2}\dots v_{k}$, and call $f$ a $\big(d(v_{1}), d(v_{2}), \dots, d(v_{k})\big)$-face. A face is called a [**$k$-regular face**]{} if every vertex incident with it is a $k$-vertex. A ($d_{1}$, $d_{2}$, $\dots$, $d_{t}$)-path $v_{1}v_{2}\dots v_{t}$ on a face $g$ is a set of consecutive vertices along the facial walk of $g$ such that $d(v_{i})=d_{i}$ and the vertices are all distinct. The notions $d^{+}$ (and $d^{-}$) are defined similarly, for $d(v_{i})\geq d_{i}$ (or $d(v_{i})\leq d_{i}$).
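The observation that every $k$-degenerate graph is DP-$(k+1)$-colorable rests on the minimum-degree deletion order: coloring back along that order, each vertex sees at most $k$ already-colored neighbors, whose matchings forbid at most $k$ of its $k+1$ available colors. A minimal sketch of the degeneracy computation (our own helper, with graphs represented as adjacency dictionaries):

```python
def degeneracy(adj):
    """Degeneracy of a graph given as {vertex: set(neighbours)}: the
    maximum, over the min-degree deletion order, of the degree of the
    vertex at the moment it is deleted."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    deg = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # a minimum-degree vertex
        deg = max(deg, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return deg

# a path (tree) is 1-degenerate, a cycle is 2-degenerate
path  = {0: {1}, 1: {0, 2}, 2: {1}}
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(degeneracy(path), degeneracy(cycle))   # 1 2
```

Reversing the deletion order gives the greedy DP-coloring order alluded to above.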
Preliminary =========== In this short section, some preliminary results are given; these results can be used separately elsewhere. Liu [@MR3886261] showed that the “nearly $(k - 1)$-degenerate” subgraph is reducible for DP-$k$-coloring. \[DP-GREEDY\] Let $k \geq 3$, $F$ be a subgraph of $G$ and $G' = G - V(F)$. If the vertices of $F$ can be ordered as $v_{1}, v_{2}, \dots, v_{t}$ such that the following hold: 1. $d_{G'}(v_{1}) < d_{G'}(v_{t})$; 2. $d_{G}(v_{t}) \leq k$ and $v_{1}v_{t} \in E(G)$; 3. for each $2 \leq i \leq t - 1$, $v_{i}$ has at most $k - 1$ neighbors in $G - \{v_{i+1}, v_{i+2}, \dots, v_{t}\}$, then any DP-$k$-coloring of $G'$ can be extended to a DP-$k$-coloring of $G$. We give a more specific reducible “nearly $(k - 1)$-degenerate” configuration for DP-$3$-coloring. \[Reducible\] Suppose that $G$ is not DP-$3$-colorable but every subgraph with fewer vertices is DP-$3$-colorable. Let $\mathcal{C}$ be an $m$-cycle $v_{1}v_{2}\dots v_{m}$, let $X = \{\,i : d(v_{i}) = 4,\ 1 \leq i \leq m\,\}$ and $E^{+} = \{\,v_{i}v_{i+1} : i \in X\,\} \cup \{v_{m}v_{1}\}$. If $v_{m}$ is a $3$-vertex and $v_{m}v_{1}$ controls a $3$-cycle $v_{m}v_{1}u$ or a $4$-cycle $v_{m}v_{1}uw$, then $G$ contains no configuration satisfying all the following conditions: 1. every edge $e$ in $E^{+}$ controls a $4^{-}$-cycle $C_{e}$; 2. all the vertices on $\mathcal{C}$ and the vertices on cycles controlled by $E^{+}$ are distinct; 3. every vertex on $\mathcal{C}$ is a $4^{-}$-vertex; 4. every vertex on cycles controlled by $E^{+}$ but not on $\mathcal{C}$ is a $3$-vertex; 5. the vertex $u$ has a neighbor neither on $\mathcal{C}$ nor on the cycles controlled by $E^{+}$. Suppose to the contrary that there exists such a configuration.
For the path $P = v_{1}v_{2}\dots v_{m}$, replace each edge $v_{i}v_{i+1}$ in $E(P) \cap E^{+}$ by the other part of the controlled cycle, and append $v_{m}u$ (when $v_{m}v_{1}u$ is the controlled $3$-cycle) or $v_{m}wu$ (when $v_{m}v_{1}uw$ is the controlled $4$-cycle) at the end; this yields a path starting at $v_{1}$ and ending at $u$. This path trivially corresponds to a sequence of vertices $v_{1}, \dots, v_{m}, (w), u$. It is easy to check that this sequence satisfies the conditions of Lemma \[DP-GREEDY\] with $k = 3$, a contradiction. A graph is [**minimal non-DP-$k$-colorable**]{} if it is not DP-$k$-colorable but every subgraph with fewer vertices is DP-$k$-colorable. The following structural results for minimal non-DP-$k$-colorable graphs are consequences of theorems in [@Lu2019a]. \[delta\] If $G$ is a minimal non-DP-$k$-colorable graph, then $\delta(G) \geq k$. \[NONDP\] Let $G$ be a graph and $F$ be a $2$-connected induced subgraph of $G$ with $d_{G}(v) = k$ for all $v \in V(F)$. If $G$ is a minimal non-DP-$k$-colorable graph, then $F$ is a cycle or a complete graph. Proof of the first main result ========= Recall the first main result. Let $G$ be a counterexample to the theorem with the fewest vertices. Thus, it is a minimal non-DP-$3$-colorable graph, and 1. $G$ is connected; 2. $G$ is a plane graph without adjacent triangles and 5-, 6-, 9-cycles; 3. $G$ is not DP-3-colorable; 4. any subgraph with fewer vertices is DP-3-colorable. A [**poor face**]{} is a $10$-face incident with ten $3$-vertices and controlling one 4-face and four 3-faces. A [**bad face**]{} is a $10$-face incident with ten $3$-vertices and controlling five 3-faces. A [**bad vertex**]{} is a $3$-vertex on a bad face. A [**bad edge**]{} is an edge on the boundary of a bad face. A [**special face**]{} is a $(3, 3, 3, 3, 3, 4, 3, 3, 4, 3)$-face controlling six 3-faces.
A [**semi-special face**]{} is a $(3, 3, 3, 3, 3, 4, 3, 3, 4, 3)$-face controlling five 3-faces and one $4$-face, where the $4$-face is next to a $(3, 3)$-path. An illustration of these faces is in the figure. [Figures: a poor face, a bad face, a special face, a semi-special face, and the two configurations of a $7$-face adjacent to a $4^{-}$-face.] We can easily obtain the following structural result. \[B\] Let $f$ be a 10-face bounded by a cycle in $G$. If $f$ is incident with ten $3$-vertices and it controls a $4^{-}$-face, then the controlled $4^{-}$-face is incident with at least one $4^{+}$-vertex. By \[B\] and the definitions of poor faces and bad faces, we have the following consequences. \[FORBIDDEN\] 1. There are no adjacent poor faces. 2. There are no adjacent bad faces. 3. No poor face is adjacent to a bad face. The following structural results will be frequently used. \[LS\] 1. \[a\] Every $7^{-}$-cycle is chordless. 2. \[b\] No $3$-cycle is adjacent to a $6^{-}$-cycle. 3.
\[c\] Every 7-face is adjacent to at most one $4^{-}$-face; the only possible situation is shown in the figure. Consequently, no bad face is adjacent to a $7$-face. 4. \[d\] No $8$-face is adjacent to a $3$-face; no $9$-face is adjacent to a $3$-face. 5. \[e\] There are no adjacent $6^{-}$-faces; thus every $3$-vertex is incident with at most one $4^{-}$-face. \[a\] If a $7^{-}$-cycle has a chord, then the chord splits it into two shorter cycles, which yields two adjacent triangles, a $5$-cycle, or a $6$-cycle, a contradiction. \[c\] Let $f$ be a 7-face and $C$ be its boundary. (i) Suppose that $C$ is a cycle. If $w_{1}w_{2}w_{3}w_{4}$ is on the boundary and $w_{2}w_{3}$ is incident with a $4$-face $u_{1}w_{2}w_{3}u_{4}$, then neither $u_{1}$ nor $u_{4}$ is on $C$ because $C$ is chordless and $\delta(G) \geq 3$, but then $C$ and $u_{1}w_{2}w_{3}u_{4}$ form a $9$-cycle, a contradiction. Suppose that $f$ is adjacent to two $3$-faces $uvw$ and $u'v'w'$ with $uv, u'v'$ on $C$. If $w = w'$, then there are two adjacent triangles or a $5$-cycle, a contradiction; and if $w \neq w'$, then there is a $9$-cycle, a contradiction. (ii) Suppose that $C$ is not a cycle; thus it consists of a $3$-cycle and a $4$-cycle. Hence, $f$ cannot be adjacent to any $3$-face by \[b\]. If $f$ is adjacent to a $4$-face, then the situation can only be the one shown in the figure. Therefore, $f$ is adjacent to at most one $4^{-}$-face. \[d\] If an $8$-face is bounded by a cycle, then it cannot be adjacent to a $3$-face, otherwise they form a $9$-cycle or an $8$-cycle with two chords, a contradiction. Suppose that the boundary of an $8$-face is not a cycle but it is adjacent to a $3$-face. By \[b\], the boundary of the $8$-face must contain a $7^{+}$-cycle, but this is impossible. Since there is no $9$-cycle, the boundary of a $9$-face is not a cycle. Suppose the boundary of a $9$-face is adjacent to a $3$-face. By \[b\], the boundary of the $9$-face must contain a $7^{+}$-cycle, but this is impossible. \[e\] Since there is no $6$-cycle, the boundary of a $6$-face consists of two triangles.
It’s easy to check that there are no adjacent $6^{-}$-faces. \[4-poor\] Each $(3, 3, 3^{+}, 4^{+})$-face $f$ is adjacent to at most one poor face. Since every poor face is incident with ten $3$-vertices, $f$ can only be adjacent to poor faces via $(3, 3)$-edges. Let $f=v_{1}v_{2}v_{3}v_{4}$ with $d(v_{1})=d(v_{2})=3$, $d(v_{3})\geq3$ and $d(v_{4})\geq4$. If $d(v_{3})\geq4$, then $f$ is incident with exactly one $(3, 3)$-edge, and then it is adjacent to at most one poor face. Suppose that $d(v_{3})=3$ and $f$ is adjacent to two poor faces $f_{1}$ and $f_{2}$ via $v_{1}v_{2}$ and $v_{2}v_{3}$. Since $v_{2}$ is a $3$-vertex, the poor face $f_{1}$ is adjacent to the poor face $f_{2}$, but this contradicts \[FORBIDDEN\]. \[BAD-SPECIAL\] Each bad face is adjacent to at most two special faces. Let $f=v_{1}v_{2}\dots v_{10}$ be a bad face controlling five $3$-faces $v_{1}v_{2}u_{1}, v_{3}v_{4}u_{3}, v_{5}v_{6}u_{5}, v_{7}v_{8}u_{7}, v_{9}v_{10}u_{9}$. Suppose that $f$ is adjacent to $f_{i}$ via edge $v_{i}v_{i+1}$ for $1 \leq i \leq 10$, where the subscripts are taken modulo $10$. Suppose to the contrary that $f$ is adjacent to at least three special faces. Then there exist two special faces $f_{m}$ and $f_{n}$ such that $|m - n| = 2$ or 8, where $\{m, n\} \subset \{2, 4, 6, 8, 10\}$. Without loss of generality, assume that $f_{2}$ and $f_{4}$ are the two special faces. By \[B\] and the definition of special faces, $d(u_{1}) = d(u_{3}) = 4$. Let $x_{3}$ and $x_{4}$ be the neighbors of $u_{3}$ other than $v_{3}$ and $v_{4}$. Since $f_{2}$ and $f_{4}$ are special faces, we have that $x_{3}x_{4} \in E(G)$ and $d(x_{3}) = d(x_{4}) = 3$, but this contradicts \[Reducible\]. \[BADFACE\] Suppose that $f$ is a $10^{+}$-face and it is not a bad face. Let $t$ be the number of incident bad edges, with $t \geq 1$. Then $3t \leq d(f)$. Moreover, if $d(f) > 3t$, then $f$ is incident with at least $(t + 1)$ $4^{+}$-vertices (repeated vertices are counted according to their number of appearances on the boundary).
Suppose that $f$ is adjacent to a bad face through $uv$. Let $x$ be the neighbor of $u$ on $f$ and $y$ be the neighbor of $v$ on $f$. Then $u$ and $v$ are bad vertices and the faces controlled by $f$ through $xu$ and $vy$ are all 3-faces. By \[B\] and the definition of bad faces, $d(x) \geq 4$ and $d(y) \geq 4$. It is observed that two bad edges are separated by at least two other edges along the boundary of $f$; this implies that $3t \leq d(f)$. By the above discussion, every bad vertex has a $4^{+}$-neighbor along the boundary. Since $3t < d(f)$, there are two bad edges separated by at least two $4^{+}$-vertices, thus $f$ is incident with at least $(t + 1)$ $4^{+}$-vertices. To prove the theorem, we are going to use the discharging method. Define the initial charge function $\mu(x)$ on $V \cup F$ to be $\mu(v)=d(v)-6$ for $v\in V$ and $\mu(f)=2d(f)-6$ for $f\in F$. By Euler’s formula, we have the following equality, $$\sum_{v\in V(G)}(d(v)-6) + \sum_{f\in F(G)}(2d(f)-6) = -12.$$ We design suitable discharging rules to change the initial charge function $\mu(x)$ to the final charge function $\mu'(x)$ on $V\cup F$ such that $\mu'(x)\geq0$ for all $x\in V\cup F$; this leads to a contradiction and completes the proof. The following are the needed discharging rules. 1. Each $4$-face sends $\frac{1}{2}$ to each incident $3$-vertex. 2. Each $6$-face sends $1$ to each incident vertex. 3. Each $7$-face sends $\frac{3}{2}$ to each incident semi-rich $3$-vertex, and $1$ to each other incident vertex. 4. Each $8$-face sends $\frac{5}{4}$ to each incident vertex. 5. Each $9$-face sends $\frac{4}{3}$ to each incident vertex. 6. Suppose that $v$ is a $3$-vertex incident with a $10^{+}$-face $f$ and two other faces $g$ and $h$. 1. If $v$ is incident with three $5^{+}$-faces, then $f$ sends $1$ to $v$. 2. If $v$ is incident with a $4$-face, then $f$ sends $\frac{5}{4}$ to $v$; 3.
If $f$ is a bad face, $g$ is a $3$-face and $h$ is not a special face, then $f$ sends $\frac{4}{3}$ to $v$ and $h$ sends $\frac{5}{3}$ to $v$. 4. Otherwise, $f$ sends $\frac{3}{2}$ to $v$. 7. Let $v$ be a $4$-vertex on a $10^{+}$-face $f$. 1. If $v$ is a rich vertex or a poor vertex of $f$, then $f$ sends $1$ to $v$. 2. Otherwise, $f$ sends $\frac{1}{2}$ to $v$. 8. Let $v$ be a 5-vertex on a $10^{+}$-face $f$. 1. If $v$ is incident with two $4^{-}$-faces, then $f$ sends $\frac{1}{3}$ to $v$. 2. If $v$ is incident with exactly one $4^{-}$-face, then $f$ sends $\frac{1}{4}$ to $v$. 3. Otherwise, $f$ sends $\frac{1}{5}$ to $v$. 9. Each $(3, 3, 3^{+}, 4^{+})$-face sends $\frac{1}{2}$ to each adjacent poor face. 10. Each $(3, 4, 3^{+}, 4^{+})$-face and $(3, 4, 4^{+}, 3^{+})$-face sends $\frac{1}{4}$ to each adjacent semi-special face. It remains to check that the final charge of every element in $V \cup F$ is nonnegative. For every vertex $v \in V$, $\mu'(v) \geq 0$. By \[delta\], $G$ has no $2^{-}$-vertices. If $v$ is a $6^{+}$-vertex, then $\mu'(v) \geq \mu(v) = d(v)-6\geq0$. We may assume that $3 \leq d(v)\leq 5$. Suppose that $v$ is a $3$-vertex. By \[e\], $v$ is incident with at most one $4^{-}$-face. If $v$ is incident with no $4^{-}$-face, then it receives at least $1$ from each incident face, and then $\mu'(v) \geq 3 - 6 + 3 \times 1 = 0$. If $v$ is incident with a $4$-face, then it receives at least $\frac{5}{4}$ from each incident $7^{+}$-face, and then $\mu'(v) \geq 3 - 6 + 2 \times \frac{5}{4} + \frac{1}{2} = 0$. If $v$ is incident with a $3$-face and a $7$-face, then the other incident face is not a bad face by \[c\], and then $\mu'(v) = 3 - 6 + 2 \times \frac{3}{2} = 0$. If $v$ is incident with a $3$-face and two $10^{+}$-faces, then $\mu'(v) \geq 3 - 6 + \min\big\{\frac{4}{3} + \frac{5}{3}, 2 \times \frac{3}{2}\big\} = 0$. Suppose that $v$ is a $4$-vertex. By \[e\], $v$ is incident with at most two $4^{-}$-faces.
If $v$ is incident with exactly one $4^{-}$-face, then $\mu'(v)\geq 4 - 6 + 2 \times \frac{1}{2} + 1 = 0$. If $v$ is incident with two $4^{-}$-faces, then $\mu'(v) \geq 4 - 6 + 2 \times 1 = 0$. If $v$ is incident with no $4^{-}$-face, then $\mu'(v) \geq 4 - 6 + 4 \times 1 > 0$. Suppose that $v$ is a 5-vertex. By \[e\], $v$ is incident with at most two $4^{-}$-faces. If $v$ is incident with no $4^{-}$-face, then it receives at least $\frac{1}{5}$ from each incident $5^{+}$-face, and $\mu'(v)\geq 5 - 6 + 5 \times \frac{1}{5} = 0$. If $v$ is incident with exactly one $4^{-}$-face, then it receives at least $\frac{1}{4}$ from each incident $5^{+}$-face, and $\mu'(v) \geq 5 - 6 + 4 \times \frac{1}{4} = 0$. If $v$ is incident with two $4^{-}$-faces, then it receives at least $\frac{1}{3}$ from each incident $5^{+}$-face, and $\mu'(v) \geq 5 - 6 + 3 \times \frac{1}{3} = 0$. For every face $f \in F$, $\mu'(f) \geq 0$. If $f$ is a 3-face, then $\mu'(f)=\mu(f) = 0$. Suppose that $f$ is a 4-face. If $f$ is incident with four $3$-vertices, then $\mu'(f) = 2 - 4 \times \frac{1}{2} = 0$. If $f$ is incident with exactly one $4^{+}$-vertex, then it is adjacent to at most one poor face by \[4-poor\], and then $\mu'(f) \geq 2 - 3 \times \frac{1}{2} - \frac{1}{2} = 0$. If $f$ is a $(3, 3, 4^{+}, 4^{+})$-face, then it is adjacent to at most one poor face and at most two semi-special faces, and then $\mu'(f) \geq 2 - 2 \times \frac{1}{2} - \frac{1}{2} - 2 \times \frac{1}{4} = 0$. If $f$ is a $(3, 4^{+}, 3, 4^{+})$-face, then it sends at most $\frac{1}{4}$ to each adjacent face, and $\mu'(f) \geq 2 - 2 \times \frac{1}{2} - 4 \times \frac{1}{4} = 0$. If $f$ is incident with exactly three $4^{+}$-vertices, then it is adjacent to at most two semi-special faces, and $\mu'(f) \geq 2 - \frac{1}{2} - 2 \times \frac{1}{4} > 0$. If $f$ is incident with four $4^{+}$-vertices, then $\mu'(f) = \mu(f) = 2$. If $f$ is a $6$-face, then $\mu'(f) = 6 - 6 \times 1 = 0$. Suppose that $f$ is a 7-face.
By \[c\], $f$ is adjacent to at most one $4^{-}$-face. If $f$ is adjacent to a $4^{-}$-face (see the figure), then $f$ is incident with at most two semi-rich $3$-vertices, which implies that $\mu'(f) \geq 8 - 2 \times \frac{3}{2} - 5 \times 1 = 0$. If $f$ is not adjacent to any $4^{-}$-face, then $f$ sends $1$ to each incident vertex, and $\mu'(f) = 8 - 7 \times 1 > 0$. If $f$ is an $8$-face, then $\mu'(f) = 10 - 8 \times \frac{5}{4} = 0$. If $f$ is a $9$-face, then $\mu'(f) = 12 - 9 \times \frac{4}{3} = 0$. Suppose that $f$ is an $11^{+}$-face. Let $t$ be the number of incident bad edges. Hence, $f$ is incident with exactly $2t$ bad vertices. By \[BADFACE\], $f$ is incident with at least $t$ $4^{+}$-vertices. Thus, $\mu'(f) \geq 2d(f) - 6 - 2t \times \frac{5}{3} - t \times 1 - (d(f) - 3t) \times \frac{3}{2} = \frac{1}{2} d(f) - 6 + \frac{t}{6}$. If $d(f) \geq 12$, then $\mu'(f) \geq 12 \times \frac{1}{2} - 6 + \frac{t}{6}\geq0$. So it suffices to consider $10$-faces and $11$-faces. [**Suppose that $f$ is an $11$-face**]{}. (i) $t=0$. It follows that $f$ is not incident with any bad vertex, and it sends at most $\frac{3}{2}$ to each incident vertex. If $f$ is incident with a $4^{+}$-vertex, then $\mu'(f) \geq 16 - 10 \times \frac{3}{2} - 1 = 0$. Suppose that $f$ is a $3$-regular face. By \[e\], every vertex on $f$ is incident with at most one $4^{-}$-face. Since $d(f)$ is odd, $f$ must be incident with a rich $3$-vertex. This implies that $\mu'(f) \geq 16 - 10 \times \frac{3}{2} - 1 = 0$. (ii) $t \geq 1$. It follows that $f$ is incident with exactly $2t$ bad vertices and at least $(t+1)$ $4^{+}$-vertices, and then $\mu'(f) \geq 16 - 2t \times \frac{5}{3} - (t + 1) \times 1 - (11 - (3t + 1)) \times\frac{3}{2} = \frac{t}{6} > 0$. [**Finally we may assume that $f$ is a 10-face**]{}. If $f$ is a special face, then $\mu'(f) = 14 - 8 \times \frac{3}{2} - 2 \times 1 = 0$.
If $f$ is a bad face, then it is adjacent to at most two special faces by \[BAD-SPECIAL\], which implies that $\mu'(f) \geq 14 - 4 \times \frac{3}{2} - 6 \times \frac{4}{3} = 0$. So we may assume that $f$ is neither a bad face nor a special face. By \[BADFACE\], $t \leq \lfloor \frac{d(f)}{3} \rfloor = 3$. $\bullet\,\, \bm{t = 0}$. It follows that $f$ is not incident with any bad vertex. Hence, $f$ sends at most $\frac{3}{2}$ to each incident $3$-vertex, at most $1$ to each incident $4$-vertex, and at most $\frac{1}{3}$ to each incident $5$-vertex. If $f$ is incident with a $5^{+}$-vertex, then $\mu'(f) \geq 14 - 9 \times \frac{3}{2} - \frac{1}{3} > 0$. If $f$ is incident with at least two $4$-vertices, then $\mu'(f) \geq 14 - 8 \times \frac{3}{2} - 2 \times 1 = 0$. So we may assume that $f$ is incident with at most one $4$-vertex and no $5^{+}$-vertices. If $f$ is incident with a semi-rich $4$-vertex, then $\mu'(f) \geq 14 - 9 \times \frac{3}{2} - \frac{1}{2} = 0$. If $f$ is incident with a rich $4$-vertex and nine $3$-vertices, then at least one of the incident $3$-vertices is rich, and then $\mu'(f) \geq 14 - 1 - 8 \times \frac{3}{2} - 1 = 0$. If $f$ is incident with a poor $4$-vertex, then there exists a rich $3$-vertex incident with $f$, and $\mu'(f) \geq 14 - 1 - 8 \times \frac{3}{2} - 1 = 0$. Suppose that $f$ is incident with ten $3$-vertices. If $f$ is adjacent to at most four $4^{-}$-faces, then $\mu'(f) \geq 14 - 8 \times \frac{3}{2} - 2 \times 1 = 0$. If $f$ is adjacent to at least two $4$-faces, then $\mu'(f) \geq 14 - 6 \times \frac{3}{2} - 4 \times \frac{5}{4} = 0$. If $f$ is adjacent to four $3$-faces and one $4$-face, then $f$ must be a poor face and the $4$-face must be a $(3, 3, 3^{+}, 4^{+})$-face, and then $\mu'(f) = 14 - 8 \times \frac{3}{2} - 2 \times \frac{5}{4} + \frac{1}{2} = 0$. If $f$ is adjacent to five $3$-faces, then it is a bad face, so we are done. $\bullet\,\, \bm{t = 1}$.
It follows that $f$ is incident with exactly two bad vertices and at least two $4^{+}$-vertices. If $f$ is incident with a rich $3$-vertex or at least three $4$-vertices, then $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 3 \times 1 - 5 \times \frac{3}{2} > 0$. If $f$ is incident with a $5^{+}$-vertex, then $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 1 - \frac{1}{3} - 6 \times \frac{3}{2} > 0$. If $f$ is incident with a semi-rich $4$-vertex, then $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 1 - \frac{1}{2} - 6 \times \frac{3}{2} > 0$. So we may assume that $f$ is incident with two poor $4$-vertices and eight semi-rich $3$-vertices. Thus $f$ is a $(3, 4, 3, 3, 4, 3, 3, 3, 3, 3)$-face $w_{1}w_{2} \dots w_{10}$, where $w_{3}w_{4}$ is a bad edge, each of $w_{2}w_{3}$ and $w_{4}w_{5}$ controls a $3$-face, and each of $w_{1}w_{2}$ and $w_{5}w_{6}$ controls a $4^{-}$-face. If $f$ controls at least one $4$-face by a $(3, 3)$-edge, then $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 2 \times 1 - 2 \times \frac{5}{4} - 4 \times \frac{3}{2} > 0$. So we may further assume that each of $w_{7}w_{8}$ and $w_{9}w_{10}$ controls a $3$-face. If each of $w_{1}w_{2}$ and $w_{5}w_{6}$ controls a $4$-face, then $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 2 \times 1 - 2 \times \frac{5}{4} - 4 \times \frac{3}{2} > 0$. If each of $w_{1}w_{2}$ and $w_{5}w_{6}$ controls a $3$-face, then $f$ must be a special face, a contradiction. If exactly one of $w_{1}w_{2}$ and $w_{5}w_{6}$ controls a $4$-face, then $f$ must be a semi-special face. In this case, the controlled $4$-face must be a $(3, 4, 3^{+}, 4^{+})$-face or a $(3, 4, 4^{+}, 3^{+})$-face, thus $\mu'(f) \geq 14 - 2 \times \frac{5}{3} - 2 \times 1 - \frac{5}{4} - 5\times\frac{3}{2} + \frac{1}{4} \geq 0$. $\bullet\,\, \bm{t = 2}$. It follows that $f$ is incident with exactly four bad vertices and at least three $4^{+}$-vertices.
If $f$ is incident with at least four $4^{+}$-vertices, then $\mu'(f) \geq 14 - 4 \times \frac{5}{3} - 4 \times 1 - (10 - 4 - 4) \times \frac{3}{2} > 0$. Thus, $f$ is incident with exactly four bad vertices and exactly three $4^{+}$-vertices. If there is a semi-rich $4^{+}$-vertex, then $\mu'(f) \geq 14 - 4 \times \frac{5}{3} - 2 \times 1 - \frac{1}{2} - 3 \times \frac{3}{2} > 0$. Therefore, the three $4^{+}$-vertices are all poor, so there must be a rich $3$-vertex. This implies that $\mu'(f) \geq 14 - 4 \times \frac{5}{3} - 3 \times 1 - 2 \times \frac{3}{2} - 1 > 0$. $\bullet\,\, \bm{t = 3}$. It follows that $f$ is incident with six bad vertices and four $4^{+}$-vertices. Thus, $\mu'(f) \geq 14 - 6 \times \frac{5}{3} - 4 \times 1 = 0$. Distance of triangles at least two ================================== In this section, we give two Bordeaux-type results on plane graphs in which the distance between triangles is at least two. Plane graphs without 5-, 6- and 8-cycles ---------------------------------------- [Figures: a special $7$-face and a poor $7$-face.] Recall the second main result. Suppose to the contrary that $G$ is a counterexample with the number of vertices as small as possible. Thus, $G$ is a minimal non-DP-$3$-colorable graph.
A 7-face $f$ is [**special**]{} if $f$ is incident with six semi-weak $3$-vertices and a poor $4$-vertex; see the figure. A 7-face $f$ is [**poor**]{} if $f$ is incident with six semi-weak $3$-vertices and a strong $3$-vertex; see the figure. \[NO-ADJ\] 1. \[1\] There are no $5$-faces and no $6$-faces. 2. \[2\] A 3-face cannot be adjacent to an $8^{-}$-face. 3. \[3\] There are no adjacent $6^{-}$-faces. Every $5$-face is bounded by a $5$-cycle, but there are no $5$-cycles in $G$; this implies that there are no $5$-faces in $G$. Since there are no $6$-cycles in $G$, the boundary of a $6$-face would consist of two triangles, and the distance between these triangles would be zero, a contradiction. Therefore, there are no $5$-faces and no $6$-faces in $G$. It’s easy to check that every $7^{-}$-cycle is chordless. Let $f$ be a $3$-face. If $f$ is adjacent to a $4$-face $g$, then they form a $5$-cycle with a chord, a contradiction. Suppose that $f$ is adjacent to a 7-face $g$. Then $g$ may be bounded by a cycle or by a closed walk with a cut vertex. If $g$ is bounded by a cycle and it is adjacent to $f$, then these two cycles form an 8-cycle with a chord, a contradiction. If the boundary of $g$ contains a cut vertex, then the boundary consists of a 3-cycle and a 4-cycle, and neither the $3$-cycle nor the $4$-cycle can be adjacent to the $3$-face $f$. If $g$ is an 8-face, then the boundary of $g$ consists of two 4-cycles, or two triangles and a cut edge, but no edge on such a boundary can be adjacent to the $3$-face $f$. By the hypothesis and the fact that a $3$-face cannot be adjacent to an $8^{-}$-face, it suffices to prove that there are no adjacent $4$-faces. Since every $4$-cycle has no chords, two adjacent $4$-faces must form a $6$-cycle with a chord, a contradiction. Each poor 7-face is adjacent to three $(3, 3, 3^{+}, 4^{+})$-faces. Suppose that $f = w_{1}w_{2}\dots w_{7}$ is a poor $7$-face and it is adjacent to a $4$-face $g = u_{1}w_{2}w_{3}u_{4}$.
Since $f$ is incident with seven $3$-vertices, it must be bounded by a $7$-cycle. Since every $7$-cycle is chordless, we have that $\{u_{1}, u_{4}\} \cap \{w_{1}, w_{2}, \dots, w_{7}\} = \emptyset$. The subgraph induced by $\{w_{1}, w_{2}, \dots, w_{7}\} \cup \{u_{1}, u_{4}\}$ is $2$-connected, and it is neither a complete graph nor a cycle. By \[NONDP\], $g$ must be incident with a $4^{+}$-vertex. Applying the same argument to a special $7$-face, we get the following result. \[Special\] If $f$ is a special $7$-face and it controls a $(3, 3, 3, 3)$-face, then each of the faces controlled by the $(3, 4)$-edges has at least two $4^{+}$-vertices. Define the initial charge function $\mu(x)$ on $V \cup F$ to be $\mu(v)=d(v)-6$ for $v\in V$ and $\mu(f)=2d(f)-6$ for $f\in F$. By Euler’s formula, we have the following equality, $$\sum_{v\in V(G)}(d(v)-6) + \sum_{f\in F(G)}(2d(f)-6) = -12.$$ We give some discharging rules to change the initial charge function $\mu(x)$ to the final charge function $\mu'(x)$ on $V\cup F$ such that $\mu'(x)\geq0$ for all $x\in V\cup F$; this leads to a contradiction. The following are the discharging rules. 1. \[Ru-1\] Each 4-face sends $\frac{1}{2}$ to each incident $3$-vertex. 2. \[Ru-2\] Each $7^+$-face sends $\frac{3}{2}$ to each incident weak $3$-vertex, $\frac{5}{4}$ to each incident semi-weak $3$-vertex, and 1 to each incident strong $3$-vertex. 3. \[Ru-3\] Each $7^+$-face sends 1 to each incident poor $4$-vertex, $\frac{3}{4}$ to each incident semi-rich $4$-vertex, and $\frac{1}{2}$ to each incident rich $4$-vertex. 4. \[Ru-4\] Each $7^+$-face sends $\frac{1}{3}$ to each incident 5-vertex. 5. \[Ru-5\] Each $(3, 3, 3^{+}, 4^{+})$-face sends $\frac{1}{4}$ through each $(3, 3)$-edge to an adjacent poor $7$-face or special $7$-face. 6. \[Ru-6\] Each $(3, 4, 3^{+}, 4^{+})$-face and $(3, 4, 4^{+}, 3^{+})$-face sends $\frac{1}{4}$ through each $(3, 4)$-edge to an adjacent special $7$-face.
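For completeness, the displayed charge identity is the standard consequence of Euler's formula $|V| - |E| + |F| = 2$ together with the handshake identities $\sum_{v \in V} d(v) = \sum_{f \in F} d(f) = 2|E|$:

```latex
\begin{align*}
\sum_{v\in V}(d(v)-6) + \sum_{f\in F}(2d(f)-6)
  &= 2|E| - 6|V| + 4|E| - 6|F| \\
  &= -6\bigl(|V| - |E| + |F|\bigr) = -12.
\end{align*}
```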
It remains to check that the final charge of every element in $V\cup F$ is nonnegative. For every vertex $v\in V$, $\mu'(v)\geq 0$. By \[delta\], $G$ has no $2^{-}$-vertices. If $v$ is a $6^{+}$-vertex, then it is not involved in the discharging procedure, hence $\mu'(v) = \mu(v) = d(v) - 6 \geq 0$. Next, we may assume that $3 \leq d(v) \leq 5$. Suppose that $v$ is a $3$-vertex. If $v$ is incident with no $4^{-}$-face, then it is incident with three $7^{+}$-faces, and then $\mu'(v) = \mu(v) + 3 \times 1 = 0$. If $v$ is incident with a 3-face, then the other two incident faces are $9^{+}$-faces by \[2\], and then $\mu'(v) = \mu(v) + 2 \times \frac{3}{2} = 0$. If $v$ is incident with a 4-face, then the other two incident faces are $7^{+}$-faces by \[3\], and then $\mu'(v) = \mu(v) + 2 \times \frac{5}{4} + \frac{1}{2} = 0$. Suppose that $v$ is a $4$-vertex. By \[3\], $v$ is incident with at most two $4^-$-faces. If $v$ is incident with no $4^{-}$-face, then it is incident with four $7^{+}$-faces, and then $\mu'(v) = \mu(v) + 4 \times \frac{1}{2} = 0$. If $v$ is incident with exactly one $4^-$-face, then $\mu'(v) = \mu(v) + 2 \times \frac{3}{4} + \frac{1}{2} = 0$. If $v$ is incident with exactly two $4^-$-faces, then $\mu'(v) = \mu(v) + 2 \times 1 = 0$. Suppose that $v$ is a 5-vertex. By \[3\], $v$ is incident with at most two $4^-$-faces. Therefore, it is incident with at least three $7^{+}$-faces, and $\mu'(v) \geq \mu(v) + 3 \times \frac{1}{3} = 0$. For every face $f\in F$, $\mu'(f) \geq 0$. Since the distance between triangles is at least two, each $k$-face is adjacent to at most $\lfloor\frac{k}{3}\rfloor$ triangular faces; thus each $k$-face is incident with at most $2\lfloor\frac{k}{3}\rfloor$ weak $3$-vertices. As observed above, $G$ has no $5$-faces or $6$-faces. If $f$ is a 3-face, then it is not involved in the discharging procedure, and then $\mu'(f) = \mu(f) = 0$. Suppose that $f$ is a 4-face.
If $f$ is incident with four $3$-vertices, then $\mu'(f) = \mu(f) - 4 \times \frac{1}{2} = 0$. If $f$ is incident with exactly one $4^{+}$-vertex, then $f$ sends at most $\frac{1}{4}$ through each incident $(3, 3)$-edge by \[Ru-5\], and then $\mu'(f) \geq \mu(f) - 3 \times \frac{1}{2} - 2 \times \frac{1}{4} = 0$ by \[Ru-1\] and \[Ru-5\]. If $f$ is incident with at least two $4^{+}$-vertices, then $\mu'(f) \geq \mu(f) - 2 \times \frac{1}{2} - 4 \times \frac{1}{4} = 0$ by \[Ru-1\], \[Ru-5\] and \[Ru-6\]. If $f$ is an 8-face, then it is not adjacent to any $3$-face and it sends at most $\frac{5}{4}$ to each incident vertex, and then $\mu'(f) \geq \mu(f) - 8 \times \frac{5}{4} = 0$. [Figure: a $9$-face adjacent to three $3$-faces, where one of the six vertices on the triangles is a $4^{+}$-vertex.] Suppose that $f$ is a $9$-face. Recall that $f$ is incident with at most six weak $3$-vertices. If $f$ is incident with exactly six weak $3$-vertices, then $f$ sends at most $1$ to each other incident vertex, and then $\mu'(f) \geq \mu(f) - 6 \times \frac{3}{2} - (9 - 6) \times 1 = 0$. If $f$ is incident with exactly five weak $3$-vertices, then $f$ must be adjacent to three $3$-faces and one of the six incident vertices on the triangles must be a $4^+$-vertex (see the figure), and then $\mu'(f) \geq \mu(f) - 5 \times \frac{3}{2} - 1 - \frac{5}{4} - 2 \times 1 > 0$. If $f$ is incident with exactly four weak $3$-vertices and at least one $4^{+}$-vertex, then $\mu'(f) \geq \mu(f) - 4 \times \frac{3}{2} - 1 - (9 - 4 - 1) \times \frac{5}{4} = 0$.
If $f$ is incident with exactly four weak $3$-vertices and no $4^{+}$-vertex, then $f$ is incident with at least one strong $3$-vertex and at most four semi-weak $3$-vertices, and then $\mu'(f) \geq \mu(f) - 4 \times \frac{3}{2} - 4 \times \frac{5}{4} - 1 = 0$. If $f$ is incident with at most three weak $3$-vertices, then $\mu'(f) \geq \mu(f) - 3 \times \frac{3}{2} - (9 - 3) \times \frac{5}{4} = 0$. If $f$ is a $10^+$-face, then $\mu'(f) \geq \mu(f) - 2 \times \lfloor\frac {d(f)}{3}\rfloor \times \frac{3}{2} - \left(d(f) - 2 \times \lfloor\frac {d(f)}{3}\rfloor\right)\times \frac{5}{4}\geq0$. Suppose that $f$ is a 7-face. By \[2\], $f$ is not incident with any weak $3$-vertex. It is observed that $f$ is incident with at most six semi-weak $3$-vertices. If some incident vertex receives at most $\frac{1}{2}$ from $f$, then $\mu'(f) \geq \mu(f) - \frac{1}{2} - (7 - 1) \times \frac{5}{4} = 0$. So we may assume that $f$ is incident with seven $4^-$-vertices and no rich $4$-vertex. If $f$ is incident with at most four semi-weak $3$-vertices, then $\mu'(f) \geq \mu(f) - 4 \times \frac{5}{4} - 3 \times 1 = 0$. So we may further assume that $f$ is incident with at least five semi-weak $3$-vertices and at most two $4$-vertices. If $f$ is incident with two semi-rich $4$-vertices, then $\mu'(f) = \mu(f) - 2 \times \frac{3}{4} - (7 - 2) \times \frac{5}{4} > 0$. If $f$ is incident with a semi-rich $4$-vertex and a poor $4$-vertex, then $\mu'(f) = \mu(f) - \frac{3}{4} - 1 - (7 - 2) \times \frac{5}{4} = 0$. It is impossible that $f$ is incident with five semi-weak $3$-vertices and two poor $4$-vertices. In the following, assume that $f$ is incident with at most one $4$-vertex and at least five semi-weak $3$-vertices. If $f$ is incident with a semi-rich $4$-vertex, then it is incident with at most five semi-weak $3$-vertices, and then $\mu'(f) \geq \mu(f) - \frac{3}{4} - 5 \times \frac{5}{4} - 1 = 0$.
Suppose that $f$ is incident with a poor $4$-vertex; then $f$ must be incident with six semi-weak $3$-vertices, so $f$ is a special 7-face. If $f$ controls two $(3, 3, 3^{+}, 4^{+})$-faces through $(3, 3)$-edges, then $\mu'(f) = \mu(f) - 1 - 6 \times \frac{5}{4} + 2 \times \frac{1}{4} = 0$. So we may assume that $f$ controls at least one $(3, 3, 3, 3)$-face. By , $f$ controls two $4$-faces incident with at least two $4^{+}$-vertices through $(3, 4)$-edges, thus $\mu'(f) = \mu(f) - 1 - 6 \times \frac{5}{4} + 2 \times \frac{1}{4} = 0$. Finally, we may assume that $f$ is incident with seven $3$-vertices. In this case, $f$ can only be a poor face. By , $f$ is incident with three $(3, 3, 3^{+}, 4^{+})$-faces, thus $\mu'(f) = \mu(f) - 1 - 6 \times \frac{5}{4} + 3 \times \frac{1}{4} > 0$. Plane graphs without 4-, 5- and 7-cycles ---------------------------------------- The third main result can be derived from the following theorem on degeneracy. Suppose that $G$ is a plane graph satisfying all the hypotheses but with minimum degree at least three. Without loss of generality, we may assume that $G$ is connected. \[NOADJ\] 1. \[NOADJ-1\] There are no $4$-, $5$- or $7$-faces. Every 6-face is bounded by a 6-cycle. 2. \[NOADJ-2\] A 3-face cannot be adjacent to a $7^{-}$-face. \[NOADJ-1\] Since every $4$-face must be bounded by a $4$-cycle, and there are no $4$-cycles in $G$, there are no $4$-faces in $G$. Similarly, there are no $5$-faces in $G$. Since there are no $7$-cycles in $G$, no $7$-face is bounded by a cycle; the boundary of a $7$-face would then have to consist of a triangle and a 4-cycle, which contradicts the absence of $4$-cycles. If the boundary of a $6$-face is not a cycle, then it must consist of two triangles, and the distance between these two triangles is zero, a contradiction. Therefore, every $6$-face is bounded by a $6$-cycle. \[NOADJ-2\] It is easy to check that every $8^{-}$-cycle is chordless.
Since no two triangles are at distance less than two, there are no two adjacent $3$-faces. Every $6$-face is bounded by a 6-cycle and it is chordless, thus a $3$-face cannot be adjacent to a $6$-face, for otherwise they would form a 7-cycle with a chord, a contradiction. Define the initial charge function $\mu(x)$ on $V \cup F$ to be $\mu(v)=d(v)-6$ for $v\in V$ and $\mu(f)=2d(f)-6$ for $f\in F$. By Euler’s formula, we have the following equality, $$\sum_{v\in V(G)}(d(v)-6) + \sum_{f\in F(G)}(2d(f)-6) = -12.$$ Next, we define some discharging rules to change the initial charge function $\mu(x)$ to the final charge function $\mu'(x)$ on $V\cup F$ such that $\mu'(x)\geq0$ for all $x\in V\cup F$. This leads to a contradiction, which completes the proof. 1. \[Ru-2-\] Each $6^+$-face sends $1$ to each incident strong $3$-vertex, $\frac{1}{2}$ to each incident rich $4$-vertex, $\frac{1}{4}$ to each incident $5$-vertex. 2. \[Ru-3-\] Each $8^+$-face sends $\frac{3}{2}$ to each incident weak $3$-vertex, $\frac{3}{4}$ to each incident semi-rich $4$-vertex. It remains to check that the final charge of every element in $V\cup F$ is nonnegative. For every vertex $v\in V$, $\mu'(v)\geq0$. If $v$ is a $6^{+}$-vertex, then it is not involved in the discharging procedure, hence $\mu'(v) = \mu(v) = d(v) - 6 \geq 0$. We may assume that $3 \leq d(v) \leq 5$. Since no two triangles are at distance less than two, every vertex is incident with at most one 3-face. Suppose that $v$ is a $3$-vertex. If $v$ is not incident with any $3$-face, then it is incident with three $6^{+}$-faces, and then $\mu'(v) = \mu(v) + 3 \times 1 = 0$. If $v$ is incident with a 3-face, then the other two incident faces are $8^{+}$-faces by \[NOADJ-2\], and then $\mu'(v) = \mu(v) + 2 \times \frac{3}{2} = 0$. Suppose that $v$ is a $4$-vertex. If $v$ is not incident with any $3$-face, then it is incident with four $6^{+}$-faces, and then $\mu'(v) = \mu(v) + 4 \times \frac{1}{2} = 0$.
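The global charge identity above can be checked numerically on a concrete example. The sketch below is our own illustration (not from the paper); the cube graph $Q_3$, with 8 vertices of degree 3 and 6 quadrilateral faces in its standard plane embedding, is used purely as a test case:

```python
# Verify the initial charge identity
#     sum_{v} (d(v) - 6) + sum_{f} (2 d(f) - 6) = -12
# on one concrete plane graph. The cube graph Q3 is an illustrative choice;
# the identity holds for every connected plane graph, since
# sum d(v) = sum d(f) = 2E and Euler's formula gives V - E + F = 2.
vertex_degrees = [3] * 8   # every vertex of Q3 has degree 3
face_lengths = [4] * 6     # all 6 faces of Q3 are 4-faces

total_charge = (sum(d - 6 for d in vertex_degrees)
                + sum(2 * ell - 6 for ell in face_lengths))
print(total_charge)  # -24 + 12 = -12
```

Because the total charge is fixed at $-12$ while the discharging rules are shown to leave every individual final charge nonnegative, a minimal counterexample cannot exist.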
If $v$ is incident with a $3$-face, then $\mu'(v) = \mu(v) + 2 \times \frac{3}{4} + \frac{1}{2} = 0$. Suppose that $v$ is a 5-vertex. Since $v$ is incident with at most one $3$-face, it is incident with at least four $6^{+}$-faces, so $\mu'(v) \geq \mu(v) + 4 \times \frac{1}{4} = 0$. For every face $f\in F$, $\mu'(f) \geq 0$. Note that there are no $4$-, $5$- or $7$-faces. Since a 3-face $f$ is not involved in the discharging procedure, we have that $\mu'(f) = \mu(f) = 0$. By \[NOADJ-2\], every $6$-face $f$ is adjacent only to $6^{+}$-faces, so it is incident with no weak $3$-vertices and sends at most $1$ to each of its six incident vertices, thus $\mu'(f) \geq \mu(f) - 6 \times 1 = 0$. Suppose that $f$ is a $d$-face with $d \geq 8$. Since the distance between triangles is at least two, we have that $f$ is adjacent to at most $\lfloor\frac {d}{3}\rfloor$ triangular faces, thus it is incident with at most $2\times\lfloor\frac {d}{3}\rfloor$ weak $3$-vertices. Hence, $\mu'(f) \geq 2d - 6 - 2\times\lfloor\frac {d}{3}\rfloor \times \frac{3}{2} - (d - 2\times\lfloor\frac {d}{3}\rfloor) \times 1 = d - 6 - \lfloor\frac {d}{3}\rfloor \geq 0$. [**Acknowledgments.**]{} This work was supported by the Natural Science Foundation of China and partially supported by the Fundamental Research Funds for Universities in Henan (YQPY20140051). [10]{} L. Chen, R. Liu, G. Yu, R. Zhao and X. Zhou, [DP]{}-4-colorability of two classes of planar graphs, Discrete Math. 342 (11) (2019) 2984–2993. Z. Dvořák, B. Lidický and R. Škrekovski, Planar graphs without 3-, 7-, and 8-cycles are 3-choosable, Discrete Math. 309 (20) (2009) 5899–5904. Z. Dvořák, B. Lidický and R. Škrekovski, 3-choosability of triangle-free planar graphs with constraints on 4-cycles, SIAM J. Discrete Math. 24 (3) (2010) 934–945. Z. Dvořák and L. Postle, Correspondence coloring and its application to list-coloring planar graphs without cycles of lengths 4 to 8, J. Combin. Theory Ser. B 129 (2018) 38–54. P. Erdős, A. L. Rubin and H.
Taylor, Choosability in graphs, in: Proceedings of the [W]{}est [C]{}oast [C]{}onference on [C]{}ombinatorics, [G]{}raph [T]{}heory and [C]{}omputing ([H]{}umboldt [S]{}tate [U]{}niv., [A]{}rcata, [C]{}alif., 1979), Congress. Numer., XXVI, Utilitas Math., Winnipeg, Man., 1980, pp. 125–157. Y. Han, A note on 3-choosability of planar graphs, J. Xinjiang Univ. Nat. Sci. 26 (3) (2009) 281–283. S.-J. Kim and K. Ozeki, A sufficient condition for [DP]{}-4-colorability, Discrete Math. 341 (7) (2018) 1983–1986. P. C. B. Lam, W. C. Shiu and Z. M. Song, The 3-choosability of plane graphs of girth 4, Discrete Math. 294 (3) (2005) 297–301. R. Li and T. Wang, [DP]{}-4-coloring of planar graphs with some restrictions on cycles, arXiv:1909.08511, <https://arxiv.org/abs/1909.08511>. X. Li, M. Chen and Y. Wang, On 3-choosability of planar graphs without 5-, 6- or 7-cycles, Adv. Math. (China) 45 (4) (2016) 491–499. X. Li and X. Li, Planar graphs without 3-, 7- and 8-cycles are [DP]{}-3-colorable, submitted. X. Li and Y. Wang, A note on 3-choosability of planar graphs, J. Zhejiang Norm. Univ. Nat. Sci. 39 (1) (2016) 13–17. B. Lidický, On 3-choosability of plane graphs having no 3-, 6-, 7- and 8-cycles, Australas. J. Combin. 44 (2009) 77–86. R. Liu and X. Li, Every planar graph without 4-cycles adjacent to two triangles is [DP]{}-4-colorable, Discrete Math. 342 (3) (2019) 623–627. R. Liu and X. Li, Every planar graph without adjacent cycles of length at most 8 is 3-choosable, European J. Combin. 82 (2019) 102995, 10. R. Liu, S. Loeb, M. Rolek, Y. Yin and G. Yu, D[P]{}-3-coloring of planar graphs without 4, 9-cycles and cycles of two lengths from $\{6,7,8\}$, Graphs Combin. 35 (3) (2019) 695–705. R. Liu, S. Loeb, Y. Yin and G. Yu, [DP]{}-3-coloring of some planar graphs, Discrete Math. 342 (1) (2019) 178–189. F. Lu, Q. Wang and T. 
Wang, 3-choosable planar graphs with some precolored vertices and no $5^{-}$-cycles normally adjacent to $8^{-}$-cycles, arXiv:1908.04902, <https://arxiv.org/abs/1908.04902>. F. Lu, Q. Wang and T. Wang, Cover and variable degeneracy, arXiv:1907.06630, <https://arxiv.org/abs/1907.06630>. L. Shen and Y. Wang, A sufficient condition for a planar graph to be 3-choosable, Inform. Process. Lett. 104 (4) (2007) 146–151. C. Thomassen, [$3$]{}-list-coloring planar graphs of girth [$5$]{}, J. Combin. Theory Ser. B 64 (1) (1995) 101–107. V. G. Vizing, Coloring the vertices of a graph in prescribed colors, Diskret. Analiz. 29 (1976) 3–10, 101. Y. Wang, H. Lu and M. Chen, A note on 3-choosability of planar graphs, Inform. Process. Lett. 105 (5) (2008) 206–211. Y. Wang, H. Lu and M. Chen, Planar graphs without cycles of length 4, 5, 8 or 9 are 3-choosable, Discrete Math. 310 (1) (2010) 147–158. Y. Wang and Q. Wu, Planar graphs without cycles of length 4, 5, 8 or 10 are 3-choosable, Adv. Appl. Math. Sci. 10 (3) (2011) 297–305. Y. Wang, Q. Wu and L. Shen, Planar graphs without cycles of length 4, 7, 8, or 9 are 3-choosable, Discrete Appl. Math. 159 (4) (2011) 232–239. Y. Yin and G. Yu, Planar graphs without cycles of lengths 4 and 5 and close triangles are [DP]{}-3-colorable, Discrete Math. 342 (8) (2019) 2333–2341. H. Zhang, A sufficient condition for a toroidal graph to be 3-choosable, Ars Combin. 105 (2012) 193–203. H. Zhang, Corrigendum to “[O]{}n 3-choosability of planar graphs with neither adjacent triangles nor 5-, 6- and 9-cycles” \[[I]{}nformation [P]{}rocessing [L]{}etters 110 (24) (2010) 1084–1087\], Inform. Process. Lett. 113 (9) (2013) 354–356. H. Zhang, A note on 3-choosability of planar graphs related to [M]{}ontanssier’s conjecture, Canad. Math. Bull. 59 (2) (2016) 440–448. H. Zhang and Z. Sun, On 3-choosability of planar graphs without certain cycles, Inform. Process. Lett. 107 (3-4) (2008) 102–106. H. Zhang and B. 
Xu, On 3-choosability of plane graphs without 6-, 7- and 9-cycles, Appl. Math. J. Chinese Univ. Ser. B 19 (1) (2004) 109–115. L. Zhang and B. Wu, Three-choosable planar graphs without certain small cycles, Graph Theory Notes N. Y. 46 (2004) 27–30. L. Zhang and B. Wu, A note on 3-choosability of planar graphs without certain cycles, Discrete Math. 297 (1-3) (2005) 206–209. [^1]: [Corresponding author: wangtao@henu.edu.cn; iwangtao8@gmail.com]{}
--- abstract: 'We present a kinematic analysis of the dense molecular gas in the central 200 parsecs of the nearby galaxy NGC1097, based on Cycle 0 observations with the Atacama Large Millimeter/sub-millimeter Array (ALMA). We use the HCN(4–3) line to trace the densest interstellar molecular gas ($n_{\rm{H}_2} \sim 10^8$ cm$^{-3}$), quantify its kinematics, and estimate an inflow rate for the molecular gas. We find a striking similarity between the ALMA kinematic data and the analytic spiral inflow model that we have previously constructed based on ionized gas velocity fields on larger scales. We are able to follow dense gas streaming down to 40 pc distance from the supermassive black hole in this Seyfert 1 galaxy. In order to fulfill marginal stability, we deduce that the dense gas is confined to a very thin disc, and we derive a dense gas inflow rate of $0.09$ [M$_\odot$yr$^{-1}$]{} at 40 pc radius. Combined with previous values from the  and CO gas, we calculate a combined molecular and ionized gas inflow rate of $\sim 0.2$ [M$_\odot$yr$^{-1}$]{} at 40 pc distance from the central supermassive black hole of NGC1097.' author: - 'Kambiz Fathi$^{1,2}$' - 'Andreas A. Lundgren$^{3}$' - 'Kotaro Kohno$^{4,5}$' - 'Nuria Piñol-Ferrer$^{1}$' - 'Sergio Martín$^{6}$' - 'Daniel Espada$^{7}$' - 'Evanthia Hatziminaoglou$^{8}$' - 'Masatoshi Imanishi$^{9}$' - 'Takuma Izumi$^{4}$' - 'Melanie Krips$^{10}$' - 'Satoki Matsushita$^{11}$' - 'David S. Meier$^{12,13}$' - 'Naomasa Nakai$^{14}$' - 'Kartik Sheth$^{15}$' - 'Jean Turner$^{16}$' - 'Glenn van de Ven$^{17}$' - 'Tommy Wiklind$^{3}$' title: ALMA follows streaming of dense gas down to 40 pc from the supermassive black hole in NGC1097 --- Introduction {#intro} ============ The central region of the nearby barred Seyfert 1 galaxy NGC1097 displays a number of intriguing morphological and kinematic features.
At $\sim 1$ kpc, an almost circular ring-like feature marks the transition between the prominent $R\sim 8$ kpc galactic bar and the relatively diffuse region interior to the ring. The bar hosts two prominent dust lanes, both originating at around the Corotation radius of the bar [@npf13], cutting through the inner ring and transforming into nuclear spirals that continue down to $\sim3.5$ pc distance from the active nucleus [@lou01; @f06]. The dust lanes are accompanied by diffuse ionized gas revealing clear kinematic signatures of bar-induced gas inflow over the entire face of the galaxy [@npf13]. Neutral gas and ionized gas data cubes [@o89; @f06] have confirmed the ‘abundant’ presence of these two phases of the interstellar medium across the central kpc radius. However, interferometric molecular gas maps show emission confined to the nuclear ring and the central 200–300 pc radius [@k03; @h08; @h11; @h12]. Moreover, [@npf11] showed that the bulk of the interstellar gas at the centre of NGC1097 (like in many other galaxies) is in the molecular phase, and therefore, a detailed analysis of the central gas concentration needs to account for the different physical conditions. The interplay between the different phases provides crucial clues to understanding the energies involved in redistributing the gas in a way that leads to the observed phase transition efficiencies. The discovery of broad ($\sim 10000$ km/s) double-peaked H$\alpha$ emission lines by @sb93 makes NGC1097 also an ideal laboratory for studying the fate of the gas accumulated in the centers of active galactic nuclei (AGN). At a distance of 14.5 Mpc (i.e., $\sim 70$ pc/$\arcsec$), this galaxy is also suitable for high-resolution studies of the physical processes that cause the material/fuel to lose its angular momentum and fall toward the AGN [e.g., @sb03].
Although it is straightforward to transport gas down to the central kpc and induce enhanced star formation, it is more difficult to make the gas reach the smaller scales (a few pc) required to fuel an AGN. In rotating systems, perturbations can cause the potential to become non-axisymmetric, and torques exerted by the subsequent non-axisymmetric features are able to drive material toward the centre [@schwarz84]. [@st99] have argued that magnetic stress may aid the infalling gas to complete the last few parsecs down to the central supermassive black hole (SMBH). To build a realistic scenario for the fate of the gas that is piling up around an AGN to eventually fuel it, one has to make a detailed analysis of the distribution and kinematics of multiple phases of the interstellar gas in the region of interest. Based on ionized gas kinematic maps (van de Ven & Fathi 2010, hereafter [**vdVF**]{}, and Piñol-Ferrer et al. 2013), we have derived a concise picture for NGC1097, according to which the gravitational perturbation that once gave rise to the formation of the prominent bar drives the evolution of structure inside the bar as well as the outer spiral arms. The outer spiral arms extend from the Corotation radius to beyond the Outer Lindblad Resonance radius of the main galactic bar. The circumnuclear ring once formed at the location of the Outer Inner Lindblad resonance [@npf13], and has likely migrated toward the centre of the galactic gravitational potential [@rt03; @vdv09]. Inside the ring, the non-circular velocities are consistent with the presence of two spiral arms (in morphology). The nuclear arms are well disguised in optical images, and several image-enhancement techniques have led different authors to argue for different numbers of arms [e.g., @lou01; @d09 and vdVF]. Most of the disagreements concern the inner $\sim 100$ pc, where increased dust content may have distorted both images and kinematic measurements in the optical and near infrared.
Here we present a quantitative analysis of the kinematics of the densest molecular gas within the central kpc radius of NGC1097 based on Atacama Large Millimeter/sub-millimeter Array observations of hydrogen cyanide, HCN(4–3) (Proposal 2011.0.00108.S, PI: Kohno). We compare the ALMA kinematic data with a dynamical model that we have previously constructed based on two-dimensional spectroscopic data of ionized gas. ![image](./Fig1.jpg){width="98.00000%"} The data {#sec:data} ======== NGC1097 was observed with ALMA on November 5 and 6, 2011, with 14 and 15 antennas, respectively. Our Band 7 observations targeted the HCN(4–3) line at a rest frequency of 354.505 GHz with an original channel spacing of 488.28125 kHz. To increase the signal strength, the data were binned by a factor of 20, leading to an effective channel width of 9.77 MHz ($\sim 8.3$ km s$^{-1}$). The primary beam was $18\farcs1$ with a synthesized beam of $1\farcs50\times 1\farcs20$ ($\sim 105 \times 84$ pc) at a position angle of $-72.4\deg$, sampled at $0\farcs3$/pix. All data specifications are described in [@i13]. Figure \[fig:1\] shows the integrated intensity map of HCN(4–3) with the velocity moment maps and models, described below. At this resolution the spiral arms are difficult to see in the integrated intensity map, which sums all velocities, but are easier to detect in the kinematics. The HCN(4–3) kinematics were derived in two ways. The simple Moment 1 map (intensity-weighted mean velocity map) was cross-checked against Gaussian fits to each individual spectrum. We found no significant signatures of non-Gaussianity; however, the Gaussian fits resulted in a generally noisier velocity field. We use the Moment 1 maps and exclude all pixels for which the corresponding spectrum has amplitude-over-noise $A/N<20$ (see Fig. \[fig:1\]). We find almost no signal outside the area illustrated in Fig. \[fig:1\] at lower $A/N$ values.
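The Moment 1 construction and the $A/N$ cut described above can be sketched in a few lines. This is a hedged illustration on a synthetic data cube, not the actual reduction pipeline; the 20$\sigma$ amplitude threshold is the only number taken from the text, and all other values are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic position-position-velocity cube: one Gaussian line per pixel,
# centroid varying smoothly across the map (a stand-in for real data).
vel = np.linspace(1100.0, 1500.0, 64)                       # km/s channel axis
vc_true = 1300.0 + np.add.outer(np.linspace(-60, 60, 32),
                                np.linspace(-60, 60, 32))   # km/s per pixel
sigma_noise = 0.5
cube = 20.0 * np.exp(-0.5 * ((vel[:, None, None] - vc_true) / 30.0) ** 2)
cube += rng.normal(0.0, sigma_noise, cube.shape)

# Moment 0 and Moment 1 (intensity-weighted mean velocity) per pixel.
mom0 = cube.sum(axis=0)
mom1 = (cube * vel[:, None, None]).sum(axis=0) / mom0

# Keep only pixels whose peak amplitude-over-noise exceeds 20.
a_over_n = cube.max(axis=0) / sigma_noise
mom1_masked = np.where(a_over_n >= 20.0, mom1, np.nan)
```

On real data the noise level would be estimated from line-free channels, but the masking logic is the same.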
This lack of signal at larger radii confirms that the dense gas is confined to a small region around the AGN in NGC1097 [@h12]. The ratio between the Einstein coefficient and collisional rates for the HCN(4–3) transition results in an estimated critical density of a few times $10^7$ to $10^8$ cm$^{-3}$ with a small dependency on temperature. Our calculation is consistent with previously reported densities of $n_{\rm{H}_2}\sim 10^8$ cm$^{-3}$ at 40 K kinetic temperature [e.g., @choi00; @t07]. Such high densities are also consistent with the deep obscuration expected toward the center of a galaxy [e.g., @k96; @s10]. To constrain the large-scale gas kinematics, we have used a mosaic of Fabry-Perot interferometric observations at $0\farcs83$ spatial sampling, covering a $7\arcmin\times 7\arcmin$ field at 15 km s$^{-1}$ spectral resolution [@d08; @npf13]. To further look into the central few hundred parsecs, we used a two-dimensional velocity field, at $0\farcs1$ spatial sampling and 85 km s$^{-1}$ spectral resolution, from the Gemini South Telescope’s Integral Field Unit, covering the inner $7\arcsec\times 15\arcsec$ [@f06]. The combination of the two sets of data is imperative, as together they provide a coherent dynamical model for NGC1097 from $\sim 20$ kpc down to $\sim 100$ pc from its central SMBH. Kinematic Parameters {#sec:analysis} ==================== We assume that the galactic disc is predominantly rotating and apply the prescription used in vdVF to quantify the velocity field shown in the top middle panel of Fig. \[fig:1\]. We divide the observed velocity field into concentric rings, each containing $>7$ pixels (this sets the limit for the innermost radius to $0\farcs55 \lesssim 40$ pc). We fix the inclination at $35\deg$ and apply a $\chi^2$ minimization to obtain the central coordinates $(x_0, y_0)$, systemic velocity $V_{\rm sys}$ and position angle PA of the disc.
After fixing these parameters, we make the final $\chi^2$ fit to the desired modes (here up to and including third order) of the Fourier decomposition mathematically formulated as $$V_{\rm los} = V_{\rm sys} + \sum_{n=1}^k [c_n(r) \cos n\theta + s_n(r) \sin n\theta] \sin i . \label{eq:vlos}$$ We apply Eq. (\[eq:vlos\]) to the HCN(4–3) velocity field and find an agreement between the kinematic and photometric center to within one pixel (see Fig. \[fig:1\]). Each ring is populated over more than 225 degrees in azimuth; hence we do not need to assume any level of symmetry in the observed non-circular motions. We derive a systemic velocity of $1300\pm48$ km s$^{-1}$ and a kinematic position angle of $147\pm6\deg$. In Fig. \[fig:1\] we illustrate the different stages of the fits described here, and a three-fold symmetry can be found in the non-circular velocity field (as predicted for this region in NGC1097 by vdVF). The average final residual velocities are 10 km s$^{-1}$, indicating that we have reproduced 95% of the observed velocity features with the Fourier decomposition method. Fitting Gaussians to each spectrum gives us the average velocity uncertainty for all the pixels at 15 km s$^{-1}$. These errors are used to derive the uncertainties for the derived kinematics by means of Monte Carlo simulations. Repeated application to the Gaussian-randomized velocity field yields the uncertainties on the Fourier parameters. Our simulations show that the average rotation curve uncertainty is 20% [see also @f05]. Similarly, the higher Fourier term uncertainties have been calculated, and we plot the first, second and third Fourier terms of the HCN(4–3) data together with those derived from the GMOS data in Fig. \[fig:3\]. ![Comparing the quantified kinematics of the ionized gas in the optical (crosses) and the HCN(4–3) ALMA data (filled circles) to the analytic model prediction presented in vdVF.
The spiral model is not a fit to either of the two data sets shown here, but a carefully chosen set of spiral parameters that best match the GMOS data. The HCN(4–3) data points are simple overplots of the new ALMA data showing a striking agreement with the model predictions. The model does not contain $c_2$ or $s_2$ terms (see Eqs (A12) and (A13) in vdVF).[]{data-label="fig:3"}](./Fig2.jpg){width="49.00000%"} Analysis ======== The ionized gas velocity Fourier terms were modeled by vdVF, who constrained the detailed structure of the nuclear spiral arms and the associated gas inflow from kinematic data (solid curves in the lower panels of Fig. \[fig:3\]). Their nuclear spiral structure is consistent with a weak perturbation in the gravitational potential due to a two-arm logarithmic spiral (in morphology) with a pitch angle of $52 \pm 4\deg$ derived directly from the Fourier expansion of the model velocity field. Similarly large pitch angles in the very central parts of galaxies have also been modeled by [@yy06]. Furthermore, the data points within the innermost $\sim 100$ pc radius analyzed by vdVF displayed the largest errors in the third Fourier terms, as a consequence of dust contamination. We test the effect of beam smearing on the derived kinematic parameters by artificially smoothing by a factor of two, and find that the Fourier terms remain virtually unchanged. A notable effect of beam smearing is that it may lead to an incorrect kinematic center, which in turn may cause uncertainties in the third Fourier terms. [@w00] found that smearing of 10 could produce third Fourier terms of up to 10 km s$^{-1}$. This is less than our error bars. In light of these tests and following the discussion in [@w00 Chapter 2], it is unlikely that the innermost values would be affected by beam smearing. In Fig. \[fig:3\], we overplot the ALMA HCN(4–3) velocity Fourier components on those derived from the GMOS data.
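The ring-by-ring harmonic expansion of Eq. (\[eq:vlos\]) amounts to an ordinary linear least-squares fit. The sketch below is our own illustration on a synthetic ring, with made-up harmonic amplitudes; only the systemic velocity of 1300 km s$^{-1}$ and the inclination of $35\deg$ are taken from the text:

```python
import numpy as np

def fit_fourier_ring(theta, v_los, inc_deg, k=3):
    """Least-squares fit of
        V_los = V_sys + sum_n [c_n cos(n theta) + s_n sin(n theta)] sin(i)
    on a single ring. Returns V_sys and the arrays c[0..k-1], s[0..k-1]
    (i.e., the n = 1..k harmonic amplitudes)."""
    sini = np.sin(np.radians(inc_deg))
    cols = [np.ones_like(theta)]
    for n in range(1, k + 1):
        cols.append(np.cos(n * theta) * sini)
        cols.append(np.sin(n * theta) * sini)
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, v_los, rcond=None)
    return coef[0], coef[1::2], coef[2::2]

# Synthetic ring: pure rotation (c1) plus a weak third harmonic (s3).
theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
sini = np.sin(np.radians(35.0))
v = 1300.0 + (220.0 * np.cos(theta) + 30.0 * np.sin(3 * theta)) * sini

v_sys, c, s = fit_fourier_ring(theta, v, inc_deg=35.0)
```

With noise-free input the fit recovers the injected $c_1$ and $s_3$ amplitudes exactly; on real rings, error bars would come from Monte Carlo resampling as described in the text.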
The top panel reveals a very good agreement between the rotation curve ($\sim c_1$ in Eq. 1) of the HCN(4–3) and that of the H$\alpha$ gas in the central 200 pc. Hence, due to the agreement of their overall kinematics, the bulk rotation of the dense gas is co-planar with the ionized interstellar gas. Further light can be shed on the dynamical behaviour of the HCN(4–3) by considering the higher Fourier terms. Figure \[fig:3\] illustrates the analytic spiral model that has led to the kinematic derivation of the pitch angle of the nuclear spirals with associated gas inflow rates [vdVF and @npf11]. A simple overplot of the ALMA data (filled circles) displays the striking agreement between the relatively non-contaminated HCN(4–3) kinematic behaviour and the model predictions. [*The analytic spiral model predicts the HCN(4–3) velocity Fourier terms down to $0\farcs55 \lesssim 40$ pc.*]{} Besides the $n=1$ and $n=3$ Fourier terms in the velocity field, the non-circular motions also contain marginal $n=2$ terms. vdVF argued that the $n=2$ terms in the H$\alpha$ were most likely due to dust contamination and possible shocks associated with the gas streaming along the nuclear spiral arms [see also @v04]. This seems to be confirmed here, with the HCN(4–3) second terms being consistent with zero inside a 100 pc radius. ![Disc scale height over radius $h/R$ as a function of radius for the HCN(4–3) emitting gas at $n_{\rm{H}_2}\sim 10^8$ cm$^{-3}$ (solid curve). For comparison, we also illustrate this parameter for the ionized gas (dotted curve) and the molecular counterparts at lower densities (dashed curve).[]{data-label="fig:4"}](./Fig3.jpg){width="49.00000%"} Dense Gas Streaming Down to $\lesssim 40$ pc From the SMBH ========================================================== We use the derived $c_1$ curve to measure the dynamical mass, assuming a thin disc model and velocity uncertainties of $20\%$.
We derive the mass within the central 100 pc to be $3.5^{+1.6}_{-2.0}\times 10^8$ [M$_\odot$]{}, and inside the 40 pc radius, $8.0^{+2.9}_{-3.5}\times10^6$ [M$_\odot$]{}. These masses are based on velocity measurements well outside the sphere of influence of the SMBH [$\sim 13$ pc, @p72]. Even so, in theory, a significant contribution from the SMBH to the rotation velocities is expected at 40 pc radius. However, the synthesized beam size of the ALMA data almost entirely smears out the effect of the SMBH at this radius, and therefore, a rotation curve at higher spatial resolution is needed to measure the dynamical mass of the SMBH. Furthermore, it should be noted that our mass estimates from the observed cold molecular gas rotation curve are not corrected for the contribution of non-circular motions in the $c_1$ curve (see Fig. 3 in vdVF). Thus, these are likely lower mass limits. The prescription for deriving mass inflow rates along the nuclear spirals has been presented in vdVF. They combined the resulting inflow velocity corresponding to the model spiral arm parameters with the gas density in the spiral arms inferred from  emission line ratios. They calculated the ionized gas inflow rate as a function of radius, reaching $0.033$ [M$_\odot$yr$^{-1}$]{} at a distance of 100 pc from the central SMBH. The inflow rate was later refined by [@npf11], who used CO gas measurements to derive a molecular gas inflow rate one order of magnitude higher for a marginally stable disc model. Using this formalism, [@npf11] derived the CO gas inflow of $0.3$ [M$_\odot$yr$^{-1}$]{} at 100 pc radius. As we compare the HCN(4–3) kinematics with our previous results for ionized and CO gas [vdVF, @npf11], we set the sound speed at $10$ km s$^{-1}$ and derive the dense gas scale height (Fig. \[fig:4\]) using the observed epicyclic frequency. We then use the kinematic parameters from the analytic spiral model to trace the gas inflow down to the resolution limit ($0\farcs55 \lesssim 40$ pc).
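As a back-of-envelope cross-check (ours, not the paper's), the quoted enclosed masses can be inverted into the circular velocities they imply through the rotation-dominated estimate $M(<R) \simeq v_c^2 R / G$:

```python
G = 4.301e-3  # Newton's constant in pc (km/s)^2 / Msun

def enclosed_mass(v_kms, r_pc):
    """Order-of-magnitude enclosed mass M(<R) = v^2 R / G for a thin,
    rotation-dominated disc."""
    return v_kms ** 2 * r_pc / G

def circular_velocity(m_sun, r_pc):
    """Inverse relation: circular velocity implied by an enclosed mass."""
    return (G * m_sun / r_pc) ** 0.5

# Velocities implied by the masses quoted in the text (illustrative only;
# the paper goes the other way, from the measured c1 curve to the mass).
v100 = circular_velocity(3.5e8, 100.0)  # ~123 km/s at 100 pc
v40 = circular_velocity(8.0e6, 40.0)    # ~29 km/s at 40 pc
```

This neglects the non-circular contribution to $c_1$ mentioned above, which is why the quoted masses are lower limits.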
The data set at hand does not allow us to investigate the density difference between the nuclear spiral arms and the inter-arm region in NGC1097. We assume a $10\%$ overdensity, similar to the upper limit of the observed value from the  doublet, and in agreement with numerical simulations [@es00]. We derive an HCN(4–3) inflow rate of $0.3$ [M$_\odot$yr$^{-1}$]{} at 100 pc, and $0.09$ [M$_\odot$yr$^{-1}$]{} at 40 pc from the SMBH (see Fig. \[fig:5\]). The marginal stability criterion of [@r94] ensures that if the gas reaches higher densities, it will be confined to a thinner disc. Hence, changing the gas density by two orders of magnitude will not change the mass inflow rate (Fig. \[fig:4\]). However, accounting for the minimum density variation (arm versus inter-arm) derived from the  line ratios (i.e., $\delta \rho = 5\%$), the inflow rate would decrease by a factor of 2 (Fig. \[fig:5\]). The critical value $0.01~\dot{M}_{\rm Edd}$ for the transition between LINER and Sy1 galaxies was found by @ho05, with a scatter of over three orders of magnitude. The value that we have calculated here for the dense gas streaming at 40 pc from the SMBH in NGC1097 corresponds to ${\dot{M}}\sim 0.033$ $\dot{M}_{\rm Edd}$, where the Eddington accretion rate is computed for a SMBH mass of $1.2 \times 10^8$ [M$_\odot$]{} [based on the central stellar velocity dispersion adopted from @l06 and the $M_\bullet$–$\sigma$ relation of Tremaine et al. 2002]. ![Dense gas inflow rate derived from the analytic spiral model of vdVF as a function of galactocentric radius.
Changing the molecular gas density by two orders of magnitude does not change the inflow rate curves; however, the arm versus inter-arm overdensity $\delta \rho$ changes the inflow rate as illustrated by the solid and dashed curves.[]{data-label="fig:5"}](./Fig4.jpg){width="49.00000%"} Conclusions =========== We have presented a detailed kinematic analysis of the dense interstellar gas in the circumnuclear region of the nearby Seyfert 1 galaxy NGC1097. We used ALMA Band 7 high-resolution observations of the HCN(4–3) emission line, which is postulated to trace $n_{\rm{H}_2}\sim 10^8$ cm$^{-3}$ at 40 K kinetic temperature. While visual signatures of a rotation pattern dominate the observed velocity field, we have successfully applied Fourier decomposition of the velocity field to unveil the kinematic signatures of two prominent spiral arms (in morphology). The number of nuclear spiral arms in NGC1097 has been subject to controversy, and in past years it has become clear that the presence of strong obscuration by dust at the very centre of this galaxy has complicated matters. We argue here that using velocity information from the HCN(4–3) clarifies matters in an unprecedented way. We have found a striking agreement between the kinematics of the HCN(4–3) and an analytic spiral model that we previously built using ionized gas kinematic data at similar spatial resolution. The new ALMA data confirm that the spiral arms have a pitch angle of $52\pm4\deg$, down to $\lesssim 40$ pc from the SMBH in NGC1097. We note that some studies have found that the HCN emission could arise from radiative and vibrational excitation [@ld96; @i04; @mt12]. The relative contribution of the radiation pumping could then decrease the gas density with which the HCN is associated, and we note that an assumed density decrease by two orders of magnitude will yield $h/R < 0.01$.
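The Eddington-normalized rates quoted for NGC1097 can be reproduced with a short estimate. This is our own sketch; the radiative efficiency $\eta = 0.1$ is an assumed standard value, not stated in the text:

```python
# Eddington accretion rate for the quoted black-hole mass, assuming a
# standard radiative efficiency eta = 0.1 (our assumption).
M_BH = 1.2e8                # Msun, from the M-sigma relation as quoted
L_EDD_PER_MSUN = 1.26e38    # Eddington luminosity per solar mass, erg/s
C_CM_S = 2.998e10           # speed of light, cm/s
SEC_PER_YR = 3.156e7
G_PER_MSUN = 1.989e33

eta = 0.1
l_edd = L_EDD_PER_MSUN * M_BH                          # erg/s
mdot_edd_gs = l_edd / (eta * C_CM_S ** 2)              # g/s
mdot_edd = mdot_edd_gs * SEC_PER_YR / G_PER_MSUN       # Msun/yr, ~2.7

# Dense-gas inflow of 0.09 Msun/yr at 40 pc, in Eddington units:
ratio = 0.09 / mdot_edd                                # ~0.034
```

The result is consistent with the quoted $\dot{M}\sim 0.033\,\dot{M}_{\rm Edd}$ for the dense gas at 40 pc, to within rounding.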
For a thin disc, we have then measured the mass inside the 100 pc radius to be $3.5^{+1.6}_{-2.0}\times 10^8$ [M$_\odot$]{}, and inside the 40 pc radius, $8.0^{+2.9}_{-3.5}\times10^6$ [M$_\odot$]{}. The derived mass at 40 pc agrees with the upper limit of $3.0\times10^7$ [M$_\odot$]{} derived from the observed line flux following the procedure described in @gs04. We have used a constant arm versus inter-arm overdensity of 10% and kinematically derived a dense gas inflow of $0.3$ [M$_\odot$yr$^{-1}$]{} at 100 pc and $0.09$ [M$_\odot$yr$^{-1}$]{} at 40 pc radius. In combination with our previously derived values from the ionized and CO gas, we calculate a molecular and ionized gas infall of $0.6$ [M$_\odot$yr$^{-1}$]{} at 100 pc and $\sim 0.2$ [M$_\odot$yr$^{-1}$]{} at 40 pc distance from the central SMBH of NGC1097. This inflow corresponds to ${\dot{M}}\sim 0.066 \dot{M}_{\rm Edd}$ onto a black hole in NGC1097 with a mass of $1.2 \times 10^8$ [M$_\odot$]{}. From the current data, it is not clear how much, if any, of the observed HCN(4–3) is radiatively excited by the active nucleus. In the presence of radiative excitation, the dense gas scale heights that we present here will be lower limits. Notwithstanding, the gas inflow rates remain unchanged. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== We thank the referee whose comments improved our manuscript. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00108.S (PI: Kohno). ALMA is a partnership of ESO, NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Sergio Martín is cofunded under Marie Curie Actions of the EC (FP7-COFUND). Satoki Matsushita is supported by NSC 100-2112-M-001-006-MY3 of Taiwan. 
Kambiz Fathi acknowledges support from the Swedish Research Council & the Swedish Royal Academy of Sciences’ Crafoord Prize Foundation. Choi, M. et al. 2000, ApJ, 538, 738 Davies, R. I. et al. 2009, ApJ, 702, 114 Dicaire, I. et al. 2008, MNRAS, 385, 553 Englmaier, P., Shlosman, I. 2000, ApJ, 528, 677 Fathi, K. et al. 2006, ApJ, 641, L25 Fathi, K. et al. 2005, MNRAS, 364, 773 Gao, Y, Solomon, P. M. 2004, ApJSS, 152, 63 Ho, L. C. 2005, Ap&SS, 300, 219 Hsieh, P-Y et al. 2012, ApJ, 747, 90 Hsieh, P-Y et al. 2011, ApJ, 736, 129 Hsieh, P-Y et al. 2008, ApJ, 683, 70 Imanishi, M. et al. 2004, AJ, 128, 2037 Izumi, T. et al. 2013, PASJ, submitted Kohno, K. et al. 2003, PASJ, 55, L1 Kohno, K. et al. 1996, ApJ, 461, L29 Lepp, S., Dalgarno, A. 1996, A&A, 306, L21 Lewis, K. T., Eracleous, M. 2006, ApJ, 642, 711 Lou, Y.-Q., Yuan, C., Fan, Z., Leon, S. 2001, ApJL, 553, L35 Meier, D. S., Turner, J. 2012, ApJ, 755, 104 Ondrechen, M. P., van der Hulst, J. M., Hummel, E. 1989, ApJ, 342, 39 Peebles, P. J. E. 1972, ApJ, 178, 371 Piñol-Ferrer, N. et al. 2013, MNRAS, submitted Piñol-Ferrer, N., Fathi, K., Lundgren, A. A., van de Ven, G. 2011, MNRAS, 414, 529 Regan M. W., Teuben P., 2003, ApJ, 582, 723 Romeo, A. 1994, A&A, 286, 799 Sakamoto, K. et al. 2010, ApJ, 725, L228 Schwarz, M. P. 1984, MNRAS, 209, 93 Sheth, K., Teuben, P. J. 1999, Nature 397, 298 Storchi-Bergmann, T. et al. 2003, ApJ, 598, 956 Storchi-Bergmann, T., Baldwin, J. A., Wilson, A. S. 1993, ApJ, 410, L11 Takakuwa, S. et al. 2007, ApJ, 662, 431 Tremaine, S. et al. 2002, ApJ, 574, 740 Valotto, C., Giovanelli, R. 2004, AJ, 128, 115 van de Ven, G., Chang, P. 2009, ApJ, 697, 619 van de Ven, G., Fathi, K. 2010, ApJ, 723, 767 [**(vdVF)**]{} Wong, T. 2000, PhD thesis, University of California, Berkeley Yuan, C., Yang, C-C. 2006, ApJ, 644, 180 \[lastpage\]
--- abstract: 'The gap equation for fermions in a version of thermal $QED$ in three dimensions is studied numerically in the Schwinger-Dyson formalism. The interest in this theory has been recently revived since it has been proposed as a model of high-temperature superconductors. We include wave-function renormalization in our equations and use a non-bare vertex. We then have to solve a system of two integral equations by a relaxation algorithm. Fermion and photon self-energies varying independently with energy and momentum are used, which should produce more accurate results than in the previous literature. The behaviour of the theory with increasing temperature and number of fermion flavours is then carefully analyzed.' address: | Institut für Theoretische Physik, Technische Universität München\ James-Franck-Strasse, D-85748 Garching, Germany author: - Georg Triantaphyllou title: ' Fermion mass-function in thermal $\tau_{3}-QED3$ ' --- THE PROBLEM =========== The study of $QED$ in three dimensions during the past years has revealed very interesting physics in this theory [@bunch]-[@volo]. At finite temperature, a version of $QED$, i.e. $\tau_{3}-QED$, has been proposed as a model of high-temperature superconductivity [@Mavro]. The behaviour of the theory with increasing temperature $T$ and number of fermion flavours $N_{f}$ was recently studied numerically in the Schwinger-Dyson formalism in a bare-vertex approximation [@georg]. This approximation corresponds to a particular truncation of the Schwinger-Dyson hierarchy, and its effects, along with wave-function renormalization effects, could be important [@Penni], especially in connection with the behaviour of the theory when the number of fermion flavours is varied. This has led to several interesting studies at zero-temperature trying to include these effects [@Nash]-[@Maris], in order to solve the controversy on whether there is a critical number of fermion flavours beyond which there is no mass generation. 
At finite temperature, similar studies have so far used several approximations that we will relax in this study. The first purpose of this work is to provide an accurate value for the ratio $r=2M(0,0)/k_{B}T_{c}$ [@Mavro], where $M(0,0)$ is the fermion mass function at zero momentum and energy and $T_{c}$ is the critical temperature beyond which there is no mass generation. Apart from being theoretically very interesting, this quantity is of direct physical relevance since it can be compared to corresponding values measured in certain high-temperature superconductors. The second purpose is to provide a reliable phase diagram of the theory with respect to $T$ and $N_{f}$. We continue to consider momentum- and energy-dependent self-energies, but we include wave-function renormalization effects and a non-bare vertex. This complicates the study considerably and constitutes a non-trivial step forward, since instead of having only one gap equation, a system of two integral equations has to be solved. We continue to neglect the imaginary parts of the photon polarization functions and of the fermion self-energy for simplicity, as other studies have done so far [@instant]-[@ian2]. Their inclusion, even though in principle necessary, would double the number of coupled equations to be solved, which would render the numerical study too complicated for our algorithm and the computer power at hand. THE EQUATIONS ============= We will employ the Schwinger-Dyson formalism in order to gain information on the momentum-dependent fermion self-energy and the non-perturbative physics behind it. 
The Schwinger-Dyson equation for the fermion self-energy is given by $$S^{-1}(p) = S_{0}^{-1}(p) - e^{2}\int \frac{d^{3}k}{(2\pi)^{3}} \gamma^{\mu}S(q)\Delta_{\mu\nu}(k)\Gamma^{\nu}(p,q)$$ where $q = p - k$, $e$ is the dimensionful gauge coupling of the theory, which we will take to be constant throughout this study, $\Delta_{\mu\nu}$ is the photon propagator with $\mu, \nu = 0, 1, 2$, $\Gamma^{\nu}$ is the full photon-fermion vertex, $\gamma^{\mu}$ is a four-dimensional representation of the $\gamma$-matrices, $S_{0}$ is the bare fermion propagator, and the finite-temperature fermion propagator in the real-time formalism is given by $$\begin{aligned} S(p)& =& \left(\left(1+A(p)\right)p\!\!\!/ + \Sigma(p)\right) \times \left( \frac{1}{\left(1+A(p)\right)^{2}p^{2} + \Sigma^{2}(p)}\right. \nonumber \\ & &- \left. \frac{2\pi\delta((1+A(p))^{2}p^{2}+\Sigma^{2}(p))} {e^{\beta|p_{0}|}+1}\right), \end{aligned}$$ where $\beta = 1/k_{B}T$, $A(p)$ is the wave-function renormalization function, $\delta$ is the usual Dirac delta function, and we have made a rotation to Euclidean space. Note that we avoid the matrix form that the propagator has in this formalism, since the Schwinger-Dyson equation that we have written down directly involves only a one-loop diagram, so complications due to the field-doubling problem do not arise [@Land]. However, it has to be noted that a more careful treatment involving the imaginary parts of the self-energies would involve the full matrix propagators [@Smilga]. The resulting equations would in that case unfortunately be too complicated to solve even numerically, without serious truncations. Moreover, due to the broken Lorentz invariance at finite temperature, the wave-function renormalization could in principle affect the $p_{0}$ and $|{\bbox}{p}|$ propagator components differently, i.e. we should replace $(1+A(p))p\!\!\!/$ by $((1+A(p))\gamma^{0}+a)p_{0} + (1+B(p))\gamma^{i}p_{i}$ with $i = 1, 2$. 
For simplicity we will restrict ourselves to situations where $a=0$, which correspond to a zero chemical potential, and we will work in the approximation where $A(p) = B(p)$ also for non-zero temperatures as done in similar studies [@ian2]. For the vertex $\Gamma^{\nu}(p,q)$ we use the ansatz $\Gamma^{\nu}(p,q) = \left(1+A(q)\right)\gamma^{\nu}$, where $A(q)$ is the same wave-function renormalization function appearing in the fermion propagator. It is unfortunately not symmetric in the vertex momenta, but its use simplifies the numerical algorithm considerably. Even though this vertex does not satisfy [*a priori*]{} the Ward-Takahashi identities, it is expected to incorporate the basic qualitative features of a non-perturbative vertex at zero temperature when used in a Schwinger-Dyson context [@Miranski]. It has also been used in a finite-temperature case [@ian2], supported by the qualitative agreement of the results of this ansatz with the ones obtained by more elaborate treatments [@Penni]. Furthermore, in the problem of the present study it gives results similar to a symmetrized vertex [@georg2]. Moreover, the photon propagator in the Landau gauge is given by [@Mavro] $$\Delta_{\mu\nu}(k) = \frac{Q_{\mu\nu}}{k^{2}+\Pi_{L}(k)} +\frac{P_{\mu\nu}}{k^{2}+\Pi_{T}(k)}$$ where $$\begin{aligned} Q_{\mu\nu}&=&(\delta_{\mu 0}-k_{\mu}k_{0}/k^{2})\frac{k^{2}}{{\bbox}{k}^2} (\delta_{\nu 0} - k_{\nu}k_{0}/k^{2}) \nonumber \\ P_{\mu\nu}&=&\delta_{\mu i}(\delta_{ij} - k_{i}k_{j}/{\bbox}{k}^2)\delta_{\nu j}\end{aligned}$$ with $i, j = 1, 2$, and where we neglect its temperature-dependent delta-function part since it is expected to give a vanishingly small contribution [@georg], [@ian1], [@delref]. 
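The algebra behind this decomposition of the photon propagator — transversality $k^{\mu}Q_{\mu\nu} = k^{\mu}P_{\mu\nu} = 0$, idempotency, mutual orthogonality, and the fact that $Q + P$ is the full transverse projector $\delta_{\mu\nu} - k_{\mu}k_{\nu}/k^{2}$ — can be verified numerically. The following is our own sanity check of these properties, not part of the original computation:

```python
import random

def projectors(k):
    """Longitudinal (Q) and transverse (P) projectors for a Euclidean
    three-momentum k = (k0, k1, k2), as defined in the text:
      Q_{mu nu} = (d_{mu 0} - k_mu k_0/k^2)(k^2/|k_s|^2)(d_{nu 0} - k_nu k_0/k^2)
      P_{mu nu} = d_{mu i}(d_{ij} - k_i k_j/|k_s|^2) d_{nu j},  i, j = 1, 2."""
    k2 = sum(c * c for c in k)        # k^2
    ks2 = k[1]**2 + k[2]**2           # |bold k|^2 (spatial part)
    m = [(1.0 if mu == 0 else 0.0) - k[mu] * k[0] / k2 for mu in range(3)]
    Q = [[(k2 / ks2) * m[a] * m[b] for b in range(3)] for a in range(3)]
    P = [[0.0] * 3 for _ in range(3)]
    for i in (1, 2):
        for j in (1, 2):
            P[i][j] = (1.0 if i == j else 0.0) - k[i] * k[j] / ks2
    return Q, P

def matmul(A, B):
    return [[sum(A[a][c] * B[c][b] for c in range(3)) for b in range(3)]
            for a in range(3)]

def maxdiff(A, B):
    return max(abs(A[a][b] - B[a][b]) for a in range(3) for b in range(3))

random.seed(0)
k = [random.uniform(0.5, 2.0) for _ in range(3)]
k2 = sum(c * c for c in k)
Q, P = projectors(k)

# Transversality: k^mu Q_{mu nu} = k^mu P_{mu nu} = 0
print(max(abs(sum(k[a] * Q[a][b] for a in range(3))) for b in range(3)))
print(max(abs(sum(k[a] * P[a][b] for a in range(3))) for b in range(3)))
# Idempotency, orthogonality, and completeness on the transverse subspace:
T = [[(1.0 if a == b else 0.0) - k[a] * k[b] / k2 for b in range(3)]
     for a in range(3)]
print(maxdiff(matmul(Q, Q), Q), maxdiff(matmul(P, P), P))
print(maxdiff([[Q[a][b] + P[a][b] for b in range(3)] for a in range(3)], T))
```

All printed residuals vanish to machine precision for generic $k$.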
The longitudinal and transverse photon polarization functions $\Pi_{L}$ and $\Pi_{T}$ are given explicitly in [@georg] and taken from [@ian1], where they are calculated in a massless-fermion approximation and where wave-function renormalization and vertex effects cancel only for specific vertex choices [@Maris]. One should in principle couple the expressions for $\Pi_{L,T}$ in our system of integral equations in a self-consistent manner, but unfortunately this would render our numerical algorithm too complicated. Identifying the parts of this equation with the same spinor structure, we reduce the problem to that of a system of two three-dimensional integral equations involving two functions varying independently with $p_{0}$ and $|{\bbox}{p}|$. The equations take the following form: $$\begin{aligned} &&M(p_{0},|{\bbox}{p}|)=\frac{\alpha}{N_{f}(1+ A(p_{0},|{\bbox}{p}|))}\int \frac{dk_{0}|{\bbox}{k}|d|{\bbox}{k}|d\theta}{(2\pi)^{3}} \times \nonumber \\&&\nonumber \\ && \times \frac{ M(q_{0},|{\bbox}{q}|)}{q^{2}+ M^{2}(q_{0},|{\bbox}{q}|)} \sum_{P=L,T} \frac{1}{k^{2}+\Pi_{P}(k_{0},|{\bbox}{k}|)} \nonumber \\ && \nonumber \\ &&-\frac{\alpha}{N_{f}(1+ A(p_{0},|{\bbox}{p}|))} \int\frac{|{\bbox}{k}|d|{\bbox}{k}|d\theta}{(2\pi)^{2}} \frac{M(E,|{\bbox}{q}|)} {2E(e^{\beta E} + 1)} \times \nonumber \\&&\nonumber \\&&\times \sum_{\epsilon=1,-1}\sum_{P=L,T} \frac{1}{(p_{0}-\epsilon E)^{2}+{\bbox}{k}^{2}+ \Pi_{P}(p_{0}-\epsilon E,|{\bbox}{k}|)} \nonumber \\&&\nonumber \\ &&A(p_{0},|{\bbox}{p}|)=\frac{\alpha}{N_{f}p^{2}}\int \frac{dk_{0}|{\bbox}{k}|d|{\bbox}{k}|d\theta}{(2\pi)^{3}} \frac{1}{q^{2}+ M^{2}(q_{0},|{\bbox}{q}|)} \times \nonumber \\&&\nonumber \\ &&\times \left(\frac{Q(p_{0},{\bbox}{p},k_{0},{\bbox}{k})}{k^{2}+ \Pi_{L}(k_{0},|{\bbox}{k}|)} +\frac{P(p_{0},{\bbox}{p},k_{0},{\bbox}{k})}{k^{2} +\Pi_{T}(k_{0},|{\bbox}{k}|)} \right) \nonumber \\&&\nonumber \\ &&- \frac{\alpha}{N_{f}p^{2}} \int\frac{|{\bbox}{k}|d|{\bbox}{k}|d\theta}{(2\pi)^{2}} \frac{1} {2E(e^{\beta E} + 1)} 
\times \nonumber \\&&\nonumber \\&&\times \sum_{\epsilon=1,-1}\left( \frac{Q(p_{0},{\bbox}{p},p_{0}-\epsilon E,{\bbox}{k})}{(p_{0}-\epsilon E)^{2} +{\bbox}{k}^{2}+ \Pi_{L}(p_{0}-\epsilon E,{\bbox}{k})} + \right. \nonumber \\&&\nonumber\\ &&+\left. \frac{P(p_{0},{\bbox}{p},p_{0}-\epsilon E,{\bbox}{k})}{(p_{0}-\epsilon E)^{2} +{\bbox}{k}^{2}+ \Pi_{T}(p_{0}-\epsilon E,{\bbox}{k})}\right), \label{eq:fingap}\end{aligned}$$ where $\alpha = e^{2}N_{f}$, and it is more convenient to work with the mass function $M(p_{0},|{\bbox}{p}|)= \Sigma(p_{0},|{\bbox}{p}|)/(1+ A(p_{0},|{\bbox}{p}|))$. We also sum over the photon polarizations $P=L, T$ and over the two roots of the delta function by introducing $\epsilon=1,-1$. The quantity $E$ is approximated by the relation $E^{2} \approx |{\bbox}{q}|^{2} + M^{2}(0,0)$ [@georg], where use of the delta-function property $\delta(ax)=\delta(x)/|a|$ has been made. Furthermore, the functions $Q$ and $P$ are given by $$\begin{aligned} Q(p_{0},{\bbox}{p},k_{0},{\bbox}{k})&=& 2\left(p_{0}-\frac{(pk)k_{0}}{k^{2}}\right) \frac{k^{2}}{{\bbox}{k}^2}\left(q_{0}-\frac{(qk)k_{0}}{k^{2}}\right) \nonumber\\ P(p_{0},{\bbox}{p},k_{0},{\bbox}{k})&=& 2\left({\bbox}{p}\;{\bbox}{q} -\frac{({\bbox}{p}{\bbox}{k})({\bbox}{k}{\bbox}{q})}{{\bbox}{k}^2} \right) -pq. \end{aligned}$$ One can easily check that $Q + P = - 2(pk)(kq)/k^{2}$, which would reproduce the result of [@ian2] if one takes $\Pi_{L}(k)=\Pi_{T}(k)=\Pi(k)$ and switches to imaginary-time formalism. After inspecting the equations, we note that the vertex ansatz we chose makes the integral giving the function $A(p_{0},|{\bbox}{p}|)$ depend only on the function $M$ and independent of $A$. On the other hand, the equation for $M(p_{0},|{\bbox}{p}|)$ has to be solved self-consistently, and it actually always accepts, apart from the solutions we will seek, the trivial solution as well. 
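Before turning to the numerics, the logic of a relaxation solve for such a self-consistent equation can be illustrated on a drastically simplified toy problem. The kernel, coupling, and grid below are invented for illustration and are not the actual Eq. (\[eq:fingap\]); the point is only that fixed-point iteration with under-relaxation settles on the nontrivial solution when seeded away from the trivial one, which remains a fixed point throughout.

```python
# Toy one-dimensional gap equation, iterated with under-relaxation:
#     M(p) = LAM * Int_0^1 dk  k^2 M(k) / ((k^2 + M(k)^2) * max(p, k))
# The kernel and coupling are schematic stand-ins for the full
# two-function system solved in the text.

N = 100                          # momentum grid points on (0, 1]
OMEGA = 0.5                      # under-relaxation parameter
LAM = 1.0                        # schematic coupling
grid = [(i + 0.5) / N for i in range(N)]
dk = 1.0 / N

M = [0.1] * N                    # nonzero seed; M = 0 would stay at 0
for it in range(1000):
    rhs = [LAM * dk * sum(k * k * Mk / ((k * k + Mk * Mk) * max(p, k))
                          for k, Mk in zip(grid, M))
           for p in grid]
    resid = max(abs(r - m) for r, m in zip(rhs, M))
    M = [(1 - OMEGA) * m + OMEGA * r for m, r in zip(M, rhs)]
    if resid < 1e-9:
        break

print(f"after {it} iterations: M at p_min = {M[0]:.4f}, at p_max = {M[-1]:.4f}")
```

The iteration converges to a monotonically falling $M(p)$ with $M(p_{\rm min})$ of order a few tenths for this coupling, mimicking how the physical solution is selected over the trivial one.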
An analytical study of such a system would not be possible without severe approximations, so we proceed to the numerical solution of the equations given above. NUMERICAL RESULTS ================= The numerical method used to solve the system of equations is the same as the one presented in [@georg]. The physical quantities of interest do not vary substantially with our ultra-violet (UV) cut-off, since at the effective UV cut-off $\alpha$ of the theory they are already negligibly small [@georg2]. The solution at $T=0$ and $N_{f}=2$ for the functions $\Sigma(p_{0},|{\bbox}{p}|)$ and $ - A(p_{0},|{\bbox}{p}|)$ is given in Figs. 1 and 2 respectively. The general variation of these functions with momentum does not change with increasing temperature or varying $N_{f}$, even though the overall scale of $\Sigma$ drops fast with $N_{f}$. The function $\Sigma(p_{0},|{\bbox}{p}|)$ falls as expected with increasing momentum, and is of the same form as the mass function $M(p_{0},|{\bbox}{p}|)$. The function $A(p_{0},|{\bbox}{p}|)$ always lies in the range between $-1$ and $0$ as required [@Claude], tends to zero for increasing momenta, and is of the same form and magnitude as the approximate form used in [@ian2]. For a given number of fermion flavours $N_{f}$, when the temperature exceeds some critical value there is no solution for the fermion mass function other than the trivial one. The ratio $r$ is concentrated around $r \approx 10.6$, which is comparable to the values obtained in [@georg], where $A(p_{0},|{\bbox}{p}|)$ was neglected. The ratio $r$ found is also comparable to typical values obtained in Ref. [@ian2], which includes wave-function renormalization effects but uses several approximations which we were able to bypass in this study. We confirm therefore that adding these effects does not influence the behaviour of the theory in a significant way. 
We have to note moreover that our $r$ values are somewhat larger than the value $r \approx 8$ measured for some high-temperature superconductors [@Schles]. However, we could be overestimating this ratio because of a possibly poor convergence of the algorithm for temperatures close to the critical one. At zero temperature, this theory is also known to exhibit an interesting behaviour with the number of fermions $N_{f}$. In Fig. 3 we plot the zero-momentum and zero-temperature fermion mass function with respect to $N_{f}$. We fit the data with the exponential curve $e^{-1.48N_{f}}/13.25$. At $N_{f} \approx 3.35$, the mass function is still roughly four times larger than the cut-off. When $N_{f} \gtrsim 3.35$, our algorithm does not converge and the mass function falls quickly below the IR cut-off. This behaviour could indicate that $N_{f} \approx 3.35$ is some critical point beyond which dynamical mass generation is impossible. The value of $N_{f}$ we find is remarkably close to the one quoted in the numerical study of [@Maris], and it is also close to our previous result [@georg]. A similar study [@Nash], which includes a calculation of the fermion-field anomalous dimension to second order in $1/N_{f}$, predicts a critical value $N_{f} \approx 3.28$, which is only slightly larger than the theoretical prediction neglecting wave-function renormalization, $N_{f} = \frac{32}{\pi^{2}} \approx 3.24$ [@appel], and quite close to the value we find numerically. In Fig. 4 we plot the phase diagram of the theory with respect to $N_{f}$ and $k_{B}T$. It separates the two regions of the parameter space which either allow or do not allow dynamical mass generation. The choice of an exponential fitting curve was made only to describe “phenomenologically" the general tendency of the data and to provide a measure of the $r$-ratio independent of $N_{f}$; it is reminiscent of the results in Ref. [@Penni] but with a somewhat steeper slope. 
However, there are also studies that predict a non-analytic behaviour of $\Sigma$ for $N_{f}$ near its critical value [@appel]. The lack of convergence of the algorithm and the fall of the mass function below the IR cut-off, however, do not allow us to test the precise behaviour of the theory near $N_{f} \approx 3.35$, and it is also clear that the fitting curves should not be extrapolated to $N_{f}$ larger than this value. DISCUSSION ========== We were able to solve a system of two coupled integral equations for the fermion mass function and wave-function renormalization in a finite-temperature version of three-dimensional $QED$ by applying a numerical relaxation technique. One main result is an $r$-ratio of about 10.6, which is close to previous numerical studies, confirming that including wave-function renormalization, in conjunction with a particular non-bare fermion-photon vertex, does not affect the theory in a significant way. The other important result is the existence of a possibly critical fermion flavour number of roughly 3.35, which is also consistent with some theoretical expectations and other numerical results. This is the first time that such a study includes the momentum and energy dependence of the fermion and photon self-energies and of the wave-function renormalization in a gap equation with a non-bare vertex, which allows a more reliable description of the behaviour of the theory. We estimate the numerical uncertainty of the values quoted, which comes mainly from the convergence criteria imposed, at about $\pm 10\%$. The next step for future studies should be the inclusion of the imaginary parts of the self-energies in the equations and the relaxation of the approximation $A(p)=B(p)$, which could in principle influence the results of this investigation. I thank N. Mavromatos, P. Henning and A. Smilga for useful discussions. This research is supported by an [*Alexander von Humboldt Fellowship*]{}. R.D. Pisarski, [*Phys. 
Rev.*]{} [**D 29**]{}, 2423 (1984); T. Appelquist, M. Bowick, D. Karabali and L.C.R. Wijewardhana, [*Phys. Rev.*]{} [**D 33**]{}, 3704 (1986); E. Dagotto, J.B. Kogut and A. Kocic, [*Phys. Rev. Lett.*]{} [**62**]{}, 1083 (1989); E. Dagotto, A. Kocic and J.B. Kogut, [*Nucl. Phys.*]{} [**B 334**]{}, 279 (1990); D. Atkinson, P.W. Johnson and P. Maris, [*Phys. Rev.*]{} [**D 42**]{}, 602 (1990). T. Appelquist, D. Nash and L.C.R. Wijewardhana, [*Phys. Rev. Lett.*]{} [**60**]{}, 2575 (1988). V.P. Gusynin, V.A. Miranskii and A.V. Shpagin, hep-th/9802136. N. Dorey and N.E. Mavromatos, [*Nucl. Phys.*]{} [**B 386**]{}, 614 (1992). G. Triantaphyllou, [*Phys. Rev.*]{} [**D 58**]{}, 065006 (1998). M. R. Pennington and D. Walsh, [*Phys. Lett.*]{} [**B 253**]{}, 246 (1991); D.C. Curtis, M.R. Pennington and D. Walsh, [*Phys. Lett.*]{} [**B 295**]{}, 313 (1992). D. Nash, [*Phys. Rev. Lett.*]{} [**62**]{}, 3024 (1989). V.P. Gusynin, A.H. Hams and M. Reenders, [*Phys. Rev.*]{} [**D 53**]{}, 2227 (1996). P. Maris, [*Phys. Rev.*]{} [**D 54**]{}, 4049 (1996). N. Dorey and N.E. Mavromatos, [*Phys. Lett.*]{} [**B 256**]{}, 163 (1991); I.J.R. Aitchison, N. Dorey, M. Klein-Kreisler and N.E. Mavromatos, [*Phys. Lett.*]{} [**B 294**]{}, 91 (1992). I.J.R. Aitchison, [*Z. Phys.*]{}, [**C 67**]{}, 303 (1995). I.J.R. Aitchison and M. Klein-Kreisler, [*Phys. Rev.*]{} [**D 50**]{}, 1068 (1994). N. P. Landsman, C.G. van Weert, [*Phys. Rep.*]{} [**145**]{}, 145 (1987); R. Kobes, G.W. Semenoff, [*Nucl. Phys.*]{} [**B 260**]{}, 714 (1985) and [**B 272**]{}, 329 (1988); L. Dolan and R. Jackiw, [*Phys. Rev.*]{} [**D 9**]{}, 3320 (1974). For interesting reviews, see A.V. Smilga, [*Phys. Rept.*]{} [**291**]{}, 1 (1997); P.A. Henning, [*Phys. Rept.*]{} [**253**]{}, 235 (1995). M. R. Pennington and S.P. Webb, BNL Report No. BNL-40886, 1988 (unpublished); D. Atkinson, P. W. Johnson and M.R. Pennington, BNL Report No. BNL-41615, 1988 (unpublished). G. 
Triantaphyllou, [*Technische Universität München*]{} Preprint No TUM-HEP-310/98, August 1998. A. Kocic, [*Phys. Lett.*]{} [**B 189**]{}, 449 (1987); T.S. Evans, R.J. Rivers, [*Z. Phys.*]{} [**C 40**]{}, 293 (1988). See for instance C. Itzykson and J.-B. Zuber, [*Quantum Field Theory*]{}, McGraw-Hill 1985. Z. Schlesinger, R.T. Collins, D.L. Kaiser and F. Holtzberg, [*Phys. Rev. Lett.*]{} [**59**]{}, 1958 (1987).
--- abstract: 'We compare the nuclear correction factors from neutrino deep-inelastic scattering (DIS) with the ones coming from a standard analysis of nuclear parton distribution functions (nPDF). We focus on a discrepancy between the most precise neutrino DIS data from NuTeV and the nuclear PDF coming from the analysis of charged lepton DIS and Drell-Yan data.' author: - | [*K.Kovařík*]{}\ Institute for Theoretical Physics, Karlsruhe Institute of Technology, Karlsruhe, 76128, Germany bibliography: - 'kovarik\_karol\_npdf.bib' title: 'Update and Comparison of Nuclear Parton Distribution Functions and Neutrino DIS.' --- Introduction ============ An indispensable ingredient of any prediction for a process measured at a hadron collider such as the LHC is the set of parton distribution functions (PDFs). Because of the importance of PDFs, many groups perform and update global analyses of PDFs for protons [@Ball:2009mk; @Martin:2009iq; @Nadolsky:2008zw] and for nuclei [@Hirai:2007sx; @Eskola:2009uj; @deFlorian:2011fp]. Proton PDFs are determined not only from data taken on protons but also from data taken on nuclear targets, mainly deuterium, but also heavy nuclei such as lead and iron in the case of neutrino DIS. Neutrino DIS data are sensitive to the strange quark content of the proton and complement newly available LHC data from $W$- or $Z$-boson production. In order to include the neutrino DIS data in a global fit to help constrain the proton PDF, we have to apply a nuclear correction factor. The nuclear correction factor can be obtained either from a specific model of nuclear interactions or from an analysis of nuclear parton distribution functions (NPDF) based on experimental data. Here, we discuss the compatibility of neutrino DIS data with the nuclear correction factors obtained from NPDF analysis, focusing on the neutrino DIS data from the NuTeV experiment. 
Nuclear correction factors from nuclear PDF =========================================== Nuclear correction factors are in general defined as a ratio of an observable in a nuclear process and the same observable in a process involving protons. In the following, we discuss two nuclear correction factors both related either to the $F_2$ structure function in neutrino DIS $$R_{CC}^\nu(F_2;x,Q^2)\simeq \frac{d^A+\bar{u}^A+\ldots}{d^{A,0}+\bar{u}^{A,0}+\ldots}\,, \label{eq:rcc}$$ or to the $F_2$ structure function in charged lepton DIS $$\begin{aligned} && R_{NC}^{e,\mu}(F_2;x,Q^2)\simeq \frac{[d^A + \bar{d}^A + \ldots]+ 4 [u^A + \bar{u}^A+\ldots]}{[d^{A,0} + \bar{d}^{A,0} + \ldots] +4 [u^{A,0} + \bar{u}^{A,0}+\ldots]}\,. \label{eq:rnc}\end{aligned}$$ The superscript $`0'$ stands for using the free nucleon PDFs $f_i^{p,n}(x,Q)$ as given below in Eq. (\[eq:pdf\]). Nuclear correction factors such as those defined by Eqs. \[eq:rcc\],\[eq:rnc\] can be either extracted from the data or calculated using the extracted parton distribution functions. Here we use the nuclear PDF from [@Schienbein:2009kk] and [@Kovarik:2010uv] where the parameterizations of the nuclear parton distributions of partons in bound protons at the input scale of $Q_0=1.3$ GeV are $$\begin{aligned} x\, f_{k}(x,Q_{0}) &=& c_{0}x^{c_{1}}(1-x)^{c_{2}}e^{c_{3}x}(1+e^{c_{4}}x)^{c_{5}}\,,\\ \nonumber \bar{d}(x,Q_{0})/\bar{u}(x,Q_{0}) &=& c_{0}x^{c_{1}}(1-x)^{c_{2}}+(1+c_{3}x)(1-x)^{c_{4}}\,,\end{aligned}$$ where $f_k=u_{v},d_{v},g,\bar{u}+\bar{d},s,\bar{s}$ and $\bar{u},\bar{d}$ are a generalization of the parton parameterizations in free protons used in the CTEQ proton analysis [@Pumplin:2002vw]. 
To account for different nuclear targets, the coefficients $c_k$ are made functions of the nucleon number $A$, $$c_{k}\to c_{k}(A)\equiv c_{k,0}+c_{k,1}\left(1-A^{-c_{k,2}}\right),\ k=\{1,\ldots,5\}\,.\label{eq:Adep}$$ From the input distributions, we can construct the PDFs for a general $(A,Z)$-nucleus $$f_{i}^{(A,Z)}(x,Q)=\frac{Z}{A}\ f_{i}^{p/A}(x,Q)+\frac{(A-Z)}{A}\ f_{i}^{n/A}(x,Q),\label{eq:pdf}$$ where we relate the distributions of a bound neutron, $f_{i}^{n/A}(x,Q)$, to those of a bound proton by isospin symmetry. In the analysis, the same standard kinematic cuts $Q>2\ {\rm GeV}$ and $W>3.5\ {\rm GeV}$ were applied as in [@Pumplin:2002vw], and we obtain a fit with $\chi^{2}/{\rm dof}$ of 0.946 to 708 data points with 32 free parameters (for further details see [@Schienbein:2009kk]). ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ from the global NPDF analysis compared with the corresponding data for an iron target.[]{data-label="Rnc:fig"}](kovarik_karol_npdf_fig1a "fig:"){width="49.00000%"} ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ from the global NPDF analysis compared with the corresponding data for an iron target.[]{data-label="Rnc:fig"}](kovarik_karol_npdf_fig1b "fig:"){width="49.00000%"} In Fig. \[Rnc:fig\] (solid line), we show how the result of our global analysis of charged lepton data translates into nuclear correction factors and how these compare with experimental data. As first observed in [@Schienbein:2007fs], the $R_{CC}^\nu(F_2;x,Q^2)$ correction factor, calculated using Eq. (\[eq:rcc\]) with parton densities from the fit to the charged lepton nuclear data, does not describe the NuTeV data well. This raises the question of whether including neutrino DIS data in the global analysis corrects this behavior without spoiling the $R_{NC}^{e,\mu}(F_2;x,Q^2)$ correction factor, which fits the charged lepton DIS and DY data well. 
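The parameterization and the full-nucleus combination above can be sketched in a few lines. The numerical coefficients below are invented for illustration and are not the fit values; only the functional forms of Eqs. (\[eq:Adep\]) and (\[eq:pdf\]) are taken from the text.

```python
import math

def c_of_A(c0, c1, c2, A):
    """A-dependence of a fit coefficient, Eq. (eq:Adep):
    c_k(A) = c_{k,0} + c_{k,1}(1 - A^{-c_{k,2}}).
    For A = 1 this reduces to the free-proton value c_{k,0}."""
    return c0 + c1 * (1.0 - A ** (-c2))

def xf(x, c):
    """Input-scale shape: x f = c0 x^c1 (1-x)^c2 e^(c3 x) (1 + e^c4 x)^c5."""
    return (c[0] * x**c[1] * (1 - x)**c[2]
            * math.exp(c[3] * x) * (1 + math.exp(c[4]) * x)**c[5])

def nucleus_pdf(x, f_p, f_n, A, Z):
    """Eq. (eq:pdf): f^(A,Z) = (Z/A) f^{p/A} + ((A-Z)/A) f^{n/A}."""
    return (Z / A) * f_p(x) + (A - Z) / A * f_n(x)

# Illustrative (made-up) coefficients for bound-proton u_v and d_v; the
# bound-neutron distributions follow from isospin symmetry: u^n = d^p, d^n = u^p.
uv_p = lambda x: xf(x, (1.7, 0.55, 3.0, 1.2, -2.0, 0.5)) / x
dv_p = lambda x: xf(x, (0.9, 0.50, 4.0, 1.0, -2.0, 0.5)) / x

A, Z = 56, 26    # iron
x = 0.1
uv_A = nucleus_pdf(x, uv_p, dv_p, A, Z)   # u_v per nucleon in the (A, Z) nucleus
dv_A = nucleus_pdf(x, dv_p, uv_p, A, Z)
print(f"u_v({x}) = {uv_A:.3f}, d_v({x}) = {dv_A:.3f} per nucleon")
```

A useful consistency property is that an isoscalar target ($A = 2Z$, e.g. deuterium) simply averages the bound proton and neutron distributions.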
Neutrino DIS ============ To analyze the possible discrepancy between the nuclear correction factor $R_{CC}^\nu(F_2;x,Q^2)$ from the fit to charged lepton data and the neutrino DIS data, we have included the NuTeV and Chorus neutrino DIS cross-section data in the global fit. The 3134 neutrino DIS cross-section data points would clearly dominate the 708 charged lepton data points in the global fit. That is why we have introduced a weight $w$ for the neutrino data and set up a series of fits in order to find a compromise fit. The $\chi^{2}/{\rm dof}$ for each compromise fit with a different weight of the neutrino DIS data is listed in Tab. \[tab:compr\]. Each global fit with a different weight results in a different nuclear correction factor, and in Fig. \[weight:fig\] we see that the weight is a suitable parameter which interpolates between the fit using only charged lepton data and the fit using only neutrino data (for further details see [@Kovarik:2010uv]). $w$ $ \chi^{2}_{l^{\pm}A}$ (/pt) $\chi^{2}_{\nu A}$ (/pt) total $\chi^{2}$(/pt) ---------- ------------------------------ -------------------------- ----------------------- $0$ 638 (0.90) - 638 (0.90) $1/7$ 645 (0.91) 4710 (1.50) 5355 (1.39) $1/2$ 680 (0.96) 4405 (1.40) 5085 (1.32) $1$ 736 (1.04) 4277 (1.36) 5014 (1.30) $\infty$ - 4192 (1.33) 4192 (1.33) : Summary table of a family of compromise fits. \[tab:compr\] ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ for all fits in Tab. \[tab:compr\].[]{data-label="weight:fig"}](kovarik_karol_npdf_fig2a "fig:"){width="49.00000%"} ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ for all fits in Tab. \[tab:compr\].[]{data-label="weight:fig"}](kovarik_karol_npdf_fig2b "fig:"){width="49.00000%"} In order to decide how well the compromise fits describe the data, we use the $\chi^2$ goodness-of-fit criterion used in [@Stump:2001gu; @Martin:2009iq]. 
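The entries of Tab. \[tab:compr\] can be cross-checked directly. One assumption is made explicit in the snippet: the weight $w$ enters only the minimized objective (schematically $\chi^{2}_{l^{\pm}A} + w\,\chi^{2}_{\nu A}$), while the quoted totals are plain unweighted sums of the two subset $\chi^{2}$ values, which agree with the table up to rounding of the individually quoted numbers.

```python
# Sanity check of the compromise-fit table: totals and per-point values.
N_LEPTON, N_NEUTRINO = 708, 3134

table = {                     # w: (chi2_lA, chi2_nuA, quoted total)
    "1/7": (645, 4710, 5355),
    "1/2": (680, 4405, 5085),
    "1":   (736, 4277, 5014),
}

for w, (chi2_l, chi2_nu, quoted) in table.items():
    total = chi2_l + chi2_nu
    per_pt = total / (N_LEPTON + N_NEUTRINO)
    # agreement with the quoted total up to rounding of the subset values
    print(f"w={w}: {chi2_l}+{chi2_nu} = {total} (quoted {quoted}), chi2/pt = {per_pt:.2f}")
```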
We consider a fit a good compromise if its $\chi^2$ for both data subsets, the charged lepton DIS and DY data and the neutrino DIS data, is within the 90% confidence level of the fits to only charged lepton or only neutrino data. We define the 90th percentile $\xi_{90}$, used to set the 90% confidence level, by $$\label{xi90} \int_0^{\xi_{90}}P(\chi^2,N)d\chi^2 = 0.90\,,$$ where $N$ is the number of degrees of freedom and $P(\chi^2, N)=\frac{(\chi^2)^{N/2-1}e^{-\chi^2/2}}{2^{N/2}\Gamma(N/2)}$ is the probability distribution. We can then assign a 90% confidence level error band to the $\chi^2$ of the fits to the charged lepton DIS and DY data and to the neutrino DIS data. By looking at the overall $\chi^{2}/{\rm dof}$ values and at the plots in Fig. \[weight:fig\], one might conclude that the global fit with $w=1/2$ describes both data sets well enough to constitute a compromise fit. The conclusion changes, however, when we inspect the separate contributions to the $\chi^{2}/{\rm dof}$ from the different experiments. The change in the global $\chi^{2}$ is mostly due to the change in the $\chi^{2}$ of DIS on iron, which makes all the compromise fits incompatible. ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ where neutrino data were included with uncorrelated systematic errors.[]{data-label="uncorr:fig"}](kovarik_karol_npdf_fig3a "fig:"){width="49.00000%"} ![Nuclear correction factors $R_{NC}^{e,\mu}(F_2;x,Q^2)$ and $R_{CC}^\nu(F_2;x,Q^2)$ where neutrino data were included with uncorrelated systematic errors.[]{data-label="uncorr:fig"}](kovarik_karol_npdf_fig3b "fig:"){width="49.00000%"} The conclusion about the incompatibility of the fits rests on the NuTeV data having small errors, as can be demonstrated by neglecting the correlations in the systematic errors, which results in a compatible fit of all the data (see Fig. \[uncorr:fig\]). 
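The percentile defined in Eq. (\[xi90\]) can be evaluated without special-function libraries via the Wilson–Hilferty cube-root approximation to the $\chi^{2}$ distribution. This is our own numerical shortcut, not the method of the analysis; it is accurate to a fraction of a unit for the large $N$ relevant here.

```python
import math

def xi90(n_dof, z90=1.2816):
    """Wilson-Hilferty approximation to the 90th percentile xi_90 of the
    chi^2 distribution with n_dof degrees of freedom, i.e. the solution
    of Int_0^{xi_90} P(chi^2, N) dchi^2 = 0.90; z90 is the standard
    normal 90th percentile."""
    t = 2.0 / (9.0 * n_dof)
    return n_dof * (1.0 - t + z90 * math.sqrt(t)) ** 3

# 90% bands for the sizes of the two data subsets used in the fit:
for n in (708, 3134):
    print(f"N = {n}: xi_90 = {xi90(n):.1f}  (xi_90/N = {xi90(n)/n:.3f})")
```

For $N = 708$ this gives $\xi_{90} \approx 757$ and for $N = 3134$ about $3236$, i.e. the allowed fractional excess over $\chi^2/N = 1$ shrinks roughly like $1/\sqrt{N}$.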
Conclusion
==========

A thorough global NPDF analysis of the combined charged lepton and neutrino data leads us to conclude that there is no good compromise description of both data sets simultaneously. The differences appear in the low and intermediate $x$ regions, where the neutrino DIS (NuTeV) data do not show a shadowing effect as strong as the one seen in the charged lepton data. The inability to describe all the data within one consistent framework poses problems for including the NuTeV data in proton PDF analyses.
---
abstract: 'We construct a simply connected $2-$complex $C$ embeddable in $3-$space such that for any embedding of $C$ in $\mathbb S^3$, any edge contraction yields a minor of the $2-$complex that is not embeddable in $3-$space. We achieve this by proving that every edge of $C$ forms a nontrivial knot in every embedding of $C$ in $\mathbb S^3$.'
author:
- 'Johannes Carmesin[^1]'
- 'Lyuben Lichev[^2]'
bibliography:
- 'Bibliography.bib'
title: ' <span style="font-variant:small-caps;">New Constructions related to the Polynomial Sphere Recognition Problem</span>'
---

Introduction
============

Given a triangulation of a compact $3-$manifold, is there a polynomial time algorithm to decide whether this $3-$manifold is homeomorphic to the $3-$sphere? This is the Polynomial Sphere Recognition Problem. The problem has fascinated many mathematicians. Indeed, in 1992, Rubinstein proved that there is an algorithm that decides whether a given compact triangulated $3-$manifold is homeomorphic to the $3-$sphere. This was simplified by Thompson [@MR1295555].[^3] It has been shown by Schleimer [@Sch11] that this problem lies in NP, and by Zentner [@Zen16] that it lies in co-NP provided the generalised Riemann Hypothesis holds. These results suggest that there might actually be a polynomial algorithm for Sphere Recognition. The Polynomial Sphere Recognition Problem is polynomially equivalent to the following combinatorial version (this follows for example by combining [@JC2] and [@3space5]). Given a $2-$complex $C$ whose first homology group $H_1(\mathbb{Q})$ over the rationals is trivial, is there a polynomial time algorithm that decides whether $C$ can be embedded in $3$-space (that is, the $3-$sphere or equivalently the $3-$dimensional euclidean space $\mathbb{R}^3$)? In this paper, we provide new constructions that demonstrate some of the difficulties of this embedding problem. A naive approach towards this embedding problem is the following.
Let a $2-$complex $C$ with $H_1(\mathbb{Q})=0$ be given.

1. Find an edge $e$ of $C$ such that if $C$ is embeddable, then $C/e$ is embeddable. (For example, if $e$ is not a loop and $C$ is embeddable, then $C/e$ is embeddable. If $C$ can be embedded in such a way that some edge $e'$ is embedded as a trivial knot, then there also is an edge $e$ such that $C/e$ is embeddable.)

2. By induction, get an embedding of the smaller $2-$complex $C/e$. Then use the embedding of $C/e$ to construct an embedding of $C$.

We will show that this strategy cannot work. More precisely, we prove the following.

\[Thm 1\] There is a simply connected $2-$complex $C$ embeddable in $3-$space such that every edge $e$ forms a nontrivial knot in any embedding of $C$ and $C/e$ is not embeddable.

Our construction is quite flexible and can easily be modified to give an infinite set of examples. It seems to us that the Polynomial Sphere Recognition Problem should be difficult for constructions similar to ours. More precisely, we offer the following open problem.

\[Question 2\] Given a $2-$complex $C$ with $H_1(\mathbb{Q})=0$, is there a polynomial time algorithm that decides whether there is an integer $n$ such that the $2-$complex $C$ can be obtained from the $n \times n \times n$ cuboid complex $Z^3[n]$ by contracting a spanning tree and deleting faces?

In a sense, the $2-$complexes constructed in this paper are even more obscure than embeddable $2-$complexes that are contractible but not collapsible or shellable; see [@MR3639606] for constructions of such examples and further references in this direction. The remainder of this paper has the following structure. In Section \[sec2\] we give basic definitions and state one theorem and two lemmas, which together imply the main result, Theorem \[Thm 1\]. In the following three sections, we prove these facts, one section for each. We finish by mentioning some follow-up questions in Section \[sec6\].
Precise statement of the results {#sec2}
================================

We begin by giving a rough sketch of the construction of a $2-$complex satisfying Theorem \[Thm 1\]. For a $2-$complex $C$ we denote by $Sk_1(C)$ the $1-$skeleton of $C$. We define the concept of *cuboid graphs*. Let $n_1, n_2, n_3$ be nonnegative integers. We define the sets $$\label{Vc} V_c := \{(x,y,z)\in \mathbb Z^3\hspace{0.4em} |\hspace{0.4em} 0\leq x\leq n_1; 0\leq y\leq n_2; 0\leq z\leq n_3\}$$ and $$E_c := \{((x_1, y_1, z_1), (x_2, y_2, z_2))\hspace{0.4em}|\hspace{0.4em} |x_1-x_2| + |y_1-y_2| + |z_1-z_2| = 1\}.$$ We call the graph $G_c = (V_c, E_c)$ *the cuboid graph of size $n_1\times n_2\times n_3$*. We refer to the given embedding of the graph $G_c$ in $\mathbb R^3$ as the *canonical embedding of the cuboid graph $G_c$*. We define *the cuboid complex $C_c = (V_c, E_c, F_c)$ of size $n_1\times n_2\times n_3$* as the $2-$complex obtained from the cuboid graph of size $n_1\times n_2\times n_3$ with faces attached to every $4-$cycle. Again we refer to the embedding of the cuboid complex $C_c$ in $\mathbb R^3$ obtained from the canonical embedding of the cuboid graph $G_c$ by adding straight faces on each of its $4-$cycles as the *canonical embedding of the cuboid complex $C_c$*. It induces a natural metric on $C_c$. This allows us in particular to refer to the vertices of $C_c$ by giving their cartesian coordinates in the canonical embedding of $C_c$. Consider the cuboid complex $C$ of size $(2n+1)\times n\times n$ for some large $n$. We shall construct a tree $T'$ with edges contained in the faces of $C$ and vertices coinciding with the vertices of $C$. It will have the additional property that every fundamental cycle of the tree $T'$, seen as a spanning tree of the graph $T'\cup Sk_1(C)$, is knotted in a nontrivial way in every embedding of $C$ in $3-$space. We will use the edges of $T'$ which do not belong to the $1-$skeleton of $C$ to subdivide some of the faces of $C$.
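The cuboid graph is easy to generate by machine. The following sketch (our own illustration, not part of the paper) builds $G_c$ directly from the definitions of $V_c$ and $E_c$ and checks the expected counts $|V_c| = (n_1+1)(n_2+1)(n_3+1)$ and $|E_c| = n_1(n_2+1)(n_3+1) + (n_1+1)n_2(n_3+1) + (n_1+1)(n_2+1)n_3$.

```python
from itertools import product

def cuboid_graph(n1, n2, n3):
    """Cuboid graph of size n1 x n2 x n3: vertices are the integer points
    of [0,n1] x [0,n2] x [0,n3]; edges join points at Manhattan distance 1."""
    V = list(product(range(n1 + 1), range(n2 + 1), range(n3 + 1)))
    E = [(u, v) for i, u in enumerate(V) for v in V[i + 1:]
         if sum(abs(a - b) for a, b in zip(u, v)) == 1]
    return V, E

V, E = cuboid_graph(2, 1, 1)
print(len(V), len(E))  # 12 vertices, 20 edges for the 2 x 1 x 1 cuboid
```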
This will produce a simply connected $2-$complex $C'$. Then, by contraction of the spanning tree $T'$ of the $1-$skeleton of $C'$, we obtain the $2-$complex $C''$ with only one vertex and a number of loop edges. We shall show that the $2-$complex $C''$ has the following properties:

- It is simply connected.

- It is embeddable in $3-$space in a unique way up to homeomorphism, and in this embedding each of its edges is knotted in a nontrivial way.

- For every edge $e$ of $C''$, the $2-$complex $C''/e$ obtained by contraction of $e$ in $C''$ does not embed in $3-$space.

To formalise these ideas, we begin with a definition.

\[Def 1\] Let $C_1$ be a $2-$complex with an embedding $\iota_1$ in $3-$space. Let $T_1$ be a spanning tree of the $1-$skeleton of $C_1$. The tree $T_1$ is *entangled with respect to $\iota_1$* if, for any edge $e_1$ of $C_1$ outside $T_1$, the fundamental cycle of $e_1$ is a nontrivial knot in the embedding $\iota_1$. Moreover, if $T_1$ is entangled with respect to every embedding of the $2-$complex $C_1$ in $3-$space, we say that $T_1$ is *entangled*.

Let $C = (V, E, F)$ be the cuboid complex of size $(2n+1)\times n\times n$ for $n$ at least 20. The last condition might seem artificial. It is a sufficient condition for the existence of a special type of path constructed later in the proof of Lemma \[spine\]. If it confuses the reader, one may simply take $n$ sufficiently large until the end of the paper.

\[Thm 2\] There exists a simply connected $2-$complex $C' = (V', E', F')$ with an entangled spanning tree $T'$ of the $1-$skeleton of $C'$ with the following properties:

- $C'$ is constructed from $C$ by subdividing some of the faces of size four of the $2-$complex $C$ into two faces of size three. We refer to the edges participating in these subdivisions as *diagonal edges*.

- $T'$ contains all diagonal edges. Moreover, every fundamental cycle of $T'$ in the $1-$skeleton of $C'$ contains three consecutive diagonal edges in the same direction (i.e.
collinear in the embedding of $C'$ induced by the canonical embedding of $C$). We remark that, indeed, adding the appropriate subdividing edges as line segments contained in the faces of the $2-$complex $C$ within its canonical embedding induces an embedding of $C'$. We call this induced embedding the *canonical embedding of $C'$*. Having a $2-$complex $C'$ and an entangled spanning tree $T'$ of the $1-$skeleton of $C'$ satisfying the conditions of Theorem \[Thm 2\], we construct our main object of interest as follows. Let the $2-$complex $C''$ be obtained after contraction of the tree $T'$ in $C'$, i.e. $C'' = C'/T'$. We now make a couple of observations.

\[Ob1\] In an embedding $\iota$ of a $2-$complex $C$ in $3-$space, every edge that is not a loop can be geometrically contracted within the embedding.

Let $e$ be an edge of $C$ that is not a loop. Consider a tubular neighbourhood $D_e$ of $\iota(e)$ in $\mathbb S^3$ such that $\iota(C)\cap D_e$ is connected in $D_e$. Now we can contract $\iota(e)$ within $D_e$. This is equivalent to contracting $e$ within $\iota$ while keeping $\iota(C) \cap (\mathbb S^3\backslash D_e)$ fixed.

Thus the $2-$complex $C''$ will be embeddable in $3-$space.

\[Ob2\] Contraction of any edge in a simply connected $2-$complex (even one forming a loop) produces a simply connected $2-$complex.

Contraction of edges in a $2-$complex does not modify its topology. In particular, the property of being simply connected is preserved under edge contraction.

Thus by construction the $2-$complex $C''$ will be simply connected.\ The next lemma is proved in Section \[Section4\].

\[sec4lemma\] Every embedding of the $2-$complex $C''$ in $3-$space is obtained from an embedding of the $2-$complex $C'$ by contracting the spanning tree $T'$.

Notice that Observation \[Ob1\] ensures that contraction of $T'$ can be done within any given embedding of $C'$ in $3-$space. The next lemma is proved in Section \[Section5\].
\[sec5lemma\] For every edge $e''$ of $C''$, the $2-$complex $C''/e''$ does not embed in $3-$space.

Admitting the results of Section \[sec2\] stated so far, we now prove Theorem \[Thm 1\]. We show that $C''$ satisfies Theorem \[Thm 1\]. Firstly, we have by Observation \[Ob2\] that $C''$ is a simply connected $2-$complex. Secondly, by Observation \[Ob1\] we have that $C''$ is embeddable in $3-$space, but by Lemma \[sec5lemma\], for every edge $e''$ of $C''$ the $2-$complex $C''/e''$ is not embeddable in $3-$space. Finally, let $\iota''$ be an embedding of $C''$ in $3-$space and let $e''$ be an edge of $C''$. The edge $e''$ corresponds to an edge $e'$ of $C'$ not in $T'$. By Lemma \[sec4lemma\], $\iota''$ originates from an embedding $\iota'$ of the $2-$complex $C'$. But by Theorem \[Thm 2\] the tree $T'$ is entangled, so the fundamental cycle of $e'$ in the embedding of $T'$ induced by $\iota'$ forms a nontrivial knot. As contracting $T'$ within $\iota'$ preserves the knot type of its fundamental cycles, $\iota''(e'')$ is a nontrivial knot. Thus every edge of $C''$ forms a nontrivial knot in each of the embeddings of $C''$ in $3-$space, which finishes the proof of Theorem \[Thm 1\].

We now turn to several important definitions.

\[Def 2.2\] A *connected sum* of two knots is an operation defined on their disjoint union as follows. See Fig. \[Steps\].

1. Consider a planar projection of each knot and suppose these projections are disjoint.

2. Find a rectangle in the plane for which one pair of opposite sides are arcs along the two knots but which is otherwise disjoint from the knots, and such that the arcs of the knots on the sides of the rectangle are oriented around the boundary of the rectangle in the same direction.

3. Now join the two knots together by deleting these arcs from the knots and adding the arcs that form the other pair of sides of the rectangle.
We remark that the definition of the connected sum of two knots is independent of the choice of planar projection in the first step and of the choice of rectangle in the second step, in the sense that the knot type of the resulting knot is uniquely defined. By abuse of language, we will often call the knot obtained by performing the operation of connected sum on $K$ and $K'$ the connected sum of the knots $K$ and $K'$. This new knot will be denoted $K\# K'$ in the sequel. In the proof of Lemma \[L 3.6\] we rely on the following well-known fact.

![The operation of connected sum between two disjoint knots. The figure illustrates the three steps in the definition. Source: Wikipedia.[]{data-label="Steps"}](Steps.png)

\[L 2.6\]([@ZH], [@Kos]) A connected sum of two knots is trivial if and only if each of them is trivial.

Let $\phi: \mathbb S^1 \longrightarrow \mathbb S^3$ be an embedding of the unit circle in $3-$space. The knot $\phi(\mathbb S^1)$ is *tame* if there exists an extension of $\phi$ to an embedding of the solid torus $\mathbb S^1 \times D^2$ into the $3-$sphere. Here, $D^2$ is the closed unit disk. We call the image of this extension in the $3-$sphere a *thickening* of the knot. We remark that the image of a tame knot under a homeomorphism of the $3-$sphere is again a tame knot. In this paper we consider only piecewise linear knots, which are tame. The following definitions can be found in [@JC1]. The graph $G$ is *$k-$connected* if it has at least $k+1$ vertices and for every set of $k-1$ vertices $\{v_1, v_2, \dots v_{k-1}\}$ of $G$, the graph obtained from $G$ by deleting the vertices $v_1, v_2, \dots v_{k-1}$ is connected. A *rotation system of the graph $G$* is a family $(\sigma_v)_{v\in V(G)}$, where $\sigma_v$ is a cyclic orientation of the edges incident with the vertex $v$ in $G$. Given a rotation system $(\sigma_v)_{v\in V(G)}$ of the graph $G$, for every vertex $v$ in $G$ we call $\sigma_v$ the *rotator* of $v$.
A rotation system of a graph $G$ is *planar* if it induces a planar embedding of $G$. Let $G' = (V', E')$ and $G'' = (V'', E'')$ be two disjoint graphs. Let $v'$ and $v''$ be vertices of equal degrees in $G'$ and $G''$ with neighbours $(u'_1, \dots, u'_k)$ and $(u''_1, \dots, u''_k)$ respectively. We define a bijection $\varphi$ between $(u'_1, \dots, u'_k)$ and $(u''_1, \dots, u''_k)$ by $$\forall i\leq k,\hspace{0.4em} \varphi(u'_i) = u''_i.$$ The *vertex sum of $G'$ and $G''$ at $v'$ and $v''$ over $\varphi$* is a graph $G$ obtained from the disjoint union of $G'$ and $G''$ by deleting $v'$ from $G'$ and $v''$ from $G''$ and adding the edges $(u'_i, u''_i)_{1\leq i\leq k}$. We sometimes abuse the term vertex sum to refer to the operation itself. We say that an edge $e'$ of the graph $G'$ is *inherited* by the vertex sum $G$ from the graph $G'$ if its two endvertices are both different from $v'$. Likewise, a vertex $u'$ of the graph $G'$ is *inherited* by the vertex sum $G$ from the graph $G'$ if it is different from $v'$. Thus $e'$ (respectively $u'$) can be viewed both as an edge (respectively vertex) of $G$ and as an edge (respectively vertex) of $G'$. See Fig. \[VertexSum\]. Moreover, consider graphs $G'$ and $G''$ with rotation systems $(\sigma_u')_{u'\in V'}$ and $(\sigma''_{u''})_{u''\in V''}$ and vertices $v'$ in $G'$ and $v''$ in $G''$ with rotators $\sigma_{v'} = (u'_1, \dots, u'_k)$ and $\sigma_{v''} = (u''_1, \dots, u''_k)$ respectively. There is a bijection $\phi$ between the rotators of $v'$ and $v''$, defined up to the choice of a vertex from $(u''_i)_{1\leq i\leq k}$ for the image $\phi(u'_1)$. Once this $u''_j$ is determined, we construct the edges $(u'_iu''_{(i+j-1 \mod k)})_{1\leq i\leq k}$. This gives the vertex sum of $G'$ and $G''$ at $v'$ and $v''$ over $\phi$.

![Vertex sum of $G'$ and $G''$ at $v'$ and $v''$ respectively. The edge $u'_1u'_k$ is inherited by the vertex sum from $G'$.[]{data-label="VertexSum"}](VertexSum.png)

We now give a couple of definitions for $2-$complexes.
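The vertex sum is easy to carry out mechanically. The sketch below is our own illustration: graphs are encoded as sets of $2-$element frozensets, the matched neighbour lists of $v'$ and $v''$ are passed as ordered lists, and the operation is checked on two disjoint $4-$cycles, whose vertex sum at two degree-2 vertices is a $6-$cycle.

```python
def vertex_sum(E1, v1, nbrs1, E2, v2, nbrs2):
    """Vertex sum of two disjoint graphs at v1 and v2: delete v1 and v2,
    keep the inherited edges, and join nbrs1[i] to nbrs2[i]
    (the bijection between the two neighbourhoods)."""
    assert len(nbrs1) == len(nbrs2)
    inherited = {e for e in E1 if v1 not in e} | {e for e in E2 if v2 not in e}
    new_edges = {frozenset((a, b)) for a, b in zip(nbrs1, nbrs2)}
    return inherited | new_edges

# Two disjoint 4-cycles a1-a2-a3-p-a1 and b1-b2-b3-q-b1.
C1 = {frozenset(e) for e in [("a1", "a2"), ("a2", "a3"), ("a3", "p"), ("p", "a1")]}
C2 = {frozenset(e) for e in [("b1", "b2"), ("b2", "b3"), ("b3", "q"), ("q", "b1")]}
G = vertex_sum(C1, "p", ["a1", "a3"], C2, "q", ["b1", "b3"])
```

The result has six edges and every vertex has degree two, i.e. it is a $6-$cycle, as expected.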
Let $C_1 = (V_1, E_1, F_1)$ be a $2-$complex and let $v$ be a vertex in $C_1$. The *link graph $L_v(C_1)$ at $v$ in $C_1$* is the incidence graph between the edges and faces incident with $v$ in $C_1$. See Fig. \[Link\]. A *rotation system of the $2-$complex $C_1$* is a family $(\sigma_e)_{e\in E_1}$, where $\sigma_e$ is a cyclic orientation of the faces incident with the edge $e$ of $C_1$. A rotation system of a $2-$complex $C_1$ induces a rotation system on each of its link graphs $L_v(C_1)$ by restriction to the edges incident with the vertex $v$. A rotation system of a $2-$complex is *planar* if all of the induced rotation systems on the link graphs at the vertices of $C_1$ are planar.

![The link graph at the vertex $v$ is given in black.[]{data-label="Link"}](LinkGraph.png)

Proof of Theorem \[Thm 2\]
==========================

In this section we work with the cuboid complex $C$ of size $(2n+1)\times n\times n$ for $n\geq 20$. From this point up to and including Lemma \[L 3.6\], we suppress the map from the cuboid complex $C$ to its canonical embedding from the notation. We define the subcomplex $C_{[a,b]}$ of $C$ as the intersection of $C$ with the strip $\{a\leq x\leq b\}\subset \mathbb R^3$. If $a=b$, we write $C_{[a]}$ instead of $C_{[a,a]}$. As the $1-$skeleton of $C$ is a connected bipartite graph, it has a unique proper vertex $2-$colouring in black and white (up to exchanging the two colours). This colouring is depicted in Fig. \[F1\]. We fix this colouring.\ **Sketch of a construction of $T'$ and $C'$ from Theorem \[Thm 2\].** We define an *overhand path* as a piecewise linear path in $3-$space such that by connecting its endvertices via a line segment we obtain a copy of the trefoil knot. We construct a path called the *spine*, contained in the faces of the $2-$complex $C$, that consists roughly of two consecutive overhand paths. See Fig. \[Kn\].
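For a cuboid graph, the proper $2-$colouring just fixed can be described explicitly: the two colour classes are the parity classes of the coordinate sum $x+y+z$ (which class is called white is a separate convention). The sketch below, our own illustration, checks that this colouring is proper and that the two endvertices of a diagonal edge (which differ by 1 in exactly two coordinates) receive the same colour.

```python
from itertools import product

def colour(v):
    """The two colour classes of the bipartite cuboid graph are the parity
    classes of x + y + z; which class is 'white' is fixed by convention."""
    return sum(v) % 2

def colouring_is_proper(n1, n2, n3):
    """Check that adjacent vertices (Manhattan distance 1) differ in colour."""
    dims = (n1, n2, n3)
    for v in product(range(n1 + 1), range(n2 + 1), range(n3 + 1)):
        for i in range(3):
            w = list(v)
            w[i] += 1
            if w[i] <= dims[i] and colour(v) == colour(tuple(w)):
                return False
    return True
```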
The spine contains two of the edges of $C$, serving as transitions between vertices of different colours; all remaining edges in the spine are diagonal edges (these diagonal edges are not actually edges of $C$ but straight-line segments subdividing a face of $C$ of size four into two triangles; the endvertices of a diagonal edge always have the same colour). More precisely, the spine starts with a short path $P_1$, which we later call a *starting segment*, of three white and two black vertices. We call its last black vertex $A$. See Fig. \[F2\]. The vertex $A$ also serves as the starting vertex of the first overhand path $P_2$, which is entirely contained in the subcomplex $C_{[n+2, 2n+1]}$ and uses only diagonal edges. The ending vertex $B$ of $P_2$ is connected via a path $P_3$ of three diagonal edges in the same direction to a black vertex $A'$. The vertex $A'$ serves as the starting vertex of the second overhand path $P_4$. The path $P_4$ uses only diagonal edges and is entirely contained in the subcomplex $C_{[0, n-1]}$ of $C$. Finally, the ending vertex $B'$ of $P_4$ is the beginning of a short path $P_5$ of two black and three white vertices. We later call $P_5$ an *ending segment*. Visually, $P_5$ is obtained from the starting segment $P_1$ by performing a central symmetry. The spine is obtained by joining together the paths $P_1, P_2, P_3, P_4$ and $P_5$ in this order. See Fig. \[Kn\]. Moreover, we construct the spine $P$ so that no two non-consecutive vertices in $P$ are at distance 1 in the euclidean metric of $\mathbb R^3$. Recall that diagonal edges in $C$ subdivide faces of size four of $C$ into two faces of size three. By adding further diagonal edges, we extend the spine $P$ to a tree $T'$ whose vertex set is the set of vertices of $C$. Thus the tree $T'$ is a spanning tree of the graph $Sk_1(C)\cup T'$ obtained from the $1-$skeleton of $C$ by adding the edges of $T'$.
We will show that we can choose the diagonal edges in the previous step so that for any edge $e$ of $C$ not in $T'$, the fundamental cycle of $e$ in $T'$ contains either the path $P_2$ from $A$ to $B$ or the path $P_4$ from $A'$ to $B'$. Both of these paths have the structure of an overhand path. Finally, we obtain the $2-$complex $C'$ from $C$ by subdividing the faces of $C$ that contain diagonal edges of $T'$ by those diagonal edges. This $2-$complex $C'$ is simply connected. This completes the informal description of the construction of $C'$ and $T'$.\

![The canonical embedding of the cuboid complex of size $5\times 2\times 2$ together with the proper $2-$colouring of its vertices.[]{data-label="F1"}](grid3D.png)

[**Formal construction of $T'$ and $C'$.**]{} We do it in three steps.\ We call a piecewise linear path contained in $C$ a *facial path* if:

- It does not meet the edges of $C$ in points different from their endvertices.

- It does not contain any vertex of $V(C)$ more than once.

- Every pair of consecutive vertices of $C$ with respect to the order induced by the path are not neighbours in the $1-$skeleton of $C$.

- The parts of the path between two consecutive vertices of $C$ are embedded as single line segments contained in single faces of $C$.

Informally, a facial path is a path of diagonal edges without repetition of vertices. We remark that the diagonal edges are not edges of $C$. We recall that an *overhand path* is a piecewise linear path in $3-$space such that after joining its endvertices by a line segment we obtain a copy of the trefoil knot. The next definition is technical. Informally, doubly knotted paths look like the black subpath between $A$ and $B'$ in Fig. \[Kn\], up to rescaling. A facial path $P$ in $C$ is *doubly knotted* if there exist vertices $A$, $B$, $A'$ and $B'$, appearing in that order on the facial path, satisfying all of the following. 1. the subpaths $APB$ and $A'PB'$ are disjoint and have each the structure of overhand paths; 2.
each of the subpaths $APB$ and $A'PB'$ contains three consecutive diagonal edges in the same direction, i.e. collinear in (the canonical embedding of) $C$; 3. the intersection of the facial path with the half-space $n+2 \leq x$ is exactly the subpath $APB$; 4. the intersection of the facial path with the half-space $x\leq n-1$ is exactly the subpath $A'PB'$; 5. the intersection of the facial path with the strip $n-1< x < n+2$ is exactly the subpath $BPA'$ (this time without the endvertices $A'$ and $B$ themselves). A *starting segment* is a piecewise linear path made of three diagonal edges and one edge of $C$ joining vertices with coordinates $((x,y,z), (x+1,y+1,z), (x+1,y,z+1), (x+2,y,z+1), (x+3,y+1,z+1))$ in this order. We call the vertex $(x,y,z)$ the *starting vertex* of the starting segment. We remark that every starting segment is characterised by its starting vertex. See Fig. \[F2\]. Likewise, an *ending segment* is a piecewise linear path made of three diagonal edges and one edge of $C$ joining vertices with coordinates $((x,y,z), (x+1,y+1,z), (x+2,y+1,z), (x+2,y,z+1), (x+3,y+1,z+1))$. Again we call the vertex $(x,y,z)$, which indeed characterises the ending segment, the *starting vertex* of the ending segment. We remark that starting segments, ending segments and doubly knotted paths are not defined up to rotation but as explicit sets of vertices and edges (either diagonal edges or edges of $C$). Hence their concatenation is possible only in a unique way. This allows us to define a *spine* as a path constructed by concatenating a starting segment, a doubly knotted path and an ending segment in this order. Spines have roughly the form of the path given in Fig. \[Kn\]. We call the doubly knotted path participating in a spine the *basis* of this spine.

\[spine\] There exists a spine.

We construct a spine $P$ as a concatenation of five shorter paths. A rough sketch illustrating the construction can be found in Fig. \[Kn\].
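The coordinate lists defining starting and ending segments can be checked mechanically. The sketch below (our own illustration) classifies each step of the two segments by how many coordinates change, and confirms that each segment consists of exactly three diagonal edges and one edge of $C$.

```python
def step_types(path):
    """Classify each step of a lattice path: 'edge' if the two consecutive
    vertices differ by 1 in exactly one coordinate (an edge of C),
    'diagonal' if they differ by 1 in exactly two coordinates
    (a diagonal of a face of size four)."""
    types = []
    for u, v in zip(path, path[1:]):
        d = sorted(abs(a - b) for a, b in zip(u, v))
        if d == [0, 0, 1]:
            types.append("edge")
        elif d == [0, 1, 1]:
            types.append("diagonal")
        else:
            types.append("other")
    return types

x, y, z = 0, 0, 0  # any starting vertex
starting = [(x, y, z), (x+1, y+1, z), (x+1, y, z+1), (x+2, y, z+1), (x+3, y+1, z+1)]
ending = [(x, y, z), (x+1, y+1, z), (x+2, y+1, z), (x+2, y, z+1), (x+3, y+1, z+1)]
```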
Recall that $C$ is of size $(2n+1)\times n\times n$ for $n\geq 20$. Let us colour the vertex with coordinates $(n-1, 0, 0)$ white. This uniquely defines the (proper) $2-$colouring of the vertices of $C$ in black and white. The construction of the spine begins with a starting segment $P_1$ with starting vertex $(n-1, 0, 0)$. We denote by $O$ the vertex with coordinates $(n, 0, 1)$ (which is white, as it has even distance from the vertex $(n-1, 0, 0)$) and by $A$ the vertex with coordinates $(n+2, 1, 1)$ (which is black). See Fig. \[F2\].

![The starting segment $P_1$ is given by concatenating the four edges coloured in black (these are three diagonal edges and one edge of $C$).[]{data-label="F2"}](grid3Dsmall.png)

![The subpath $P_4$ of the spine $P$ on the left and the subpath $P_2$ on the right. Both $P_2$ and $P_4$ are overhand facial paths contained in two cuboid subcomplexes of $C$ of size $12\times 12\times 12$. Only a few vertices necessary for the construction of the paths are depicted.[]{data-label="FH"}](FinalHope.png)

Next, denote by $B$ the vertex with coordinates $(n+2, 9,9)$. We build an overhand facial path $P_2$ of black vertices with abscissas (i.e. first coordinates) at least $n+3$, except its first vertex $A$ and its last vertex $B$, which have abscissas exactly $n+2$. We define $P_2$ to be the facial path given in the right part of Fig. \[FH\], which is embedded in the faces of the cuboid subcomplex of $C$ of size $12\times 12\times 12$ with $A$ being its closest vertex to the origin[^4]. We remark that in the figure only the vertices important for the construction of the path are depicted. Denote by $A'$ the vertex with coordinates $(n-1, 9, 6)$. We construct a facial path $P_3$ consisting of three diagonal edges in the same direction connecting the black vertex $B$ to the black vertex $A'$. Next, let $B'$ be the vertex with coordinates $(n-1, 17, 14)$.
We build an overhand facial path $P_4$ of black vertices with abscissas at most $n-2$, except the first vertex $A'$ and the last vertex $B'$, which have abscissas exactly $n-1$. We define $P_4$ to be the facial path given in the left part of Fig. \[FH\], which is embedded in the faces of the cuboid subcomplex of $C$ of size $12\times 12\times 12$ with $B'$ being its farthest vertex from the origin[^5]. Once again, only the vertices important for the construction of the path are depicted. We call $P_{[2,4]}$ the facial path between $A$ and $B'$ constructed by concatenating $P_2, P_3$ and $P_4$ in this order. It is doubly knotted by construction and will serve as the basis of the spine $P$. Next, we construct an ending segment $P_5$ with starting vertex $B'$. This is possible as $n\geq 20$. Let $O'$ be the first white vertex in $P_5$, with coordinates $(n + 1, 18, 14)$. Visually, $P_5$ is obtained by a central symmetry of the starting segment in Fig. \[F2\]. The spine $P$ is finally obtained by concatenating the starting segment $P_1$, the doubly knotted path $P_{[2,4]}$ and the ending segment $P_5$ in this order.

We introduce the context of our next lemma. Fix three positive integers $x_1,y_1,z_1$ and let $C_1 = (V_1, E_1, F_1)$ be the cuboid complex of size $x_1\times y_1\times z_1$. Its $1-$skeleton is a connected bipartite graph, so it admits a unique $2-$colouring up to exchanging the two colours. We fix this colouring in black and white, where the vertex $(0, 0, 0)$ is white for concreteness. Moreover, from now up to the end of Observation \[ob 3.3\] we suppress the map from the cuboid complex $C_1$ to its canonical embedding from the notation, just like we did with the cuboid complex $C$. Let $G_b = (V_{1,b}, E(G_b))$ be a forest, where $V_{1,b}$ is the set of black vertices of $C_1$ and $E(G_b)$ is a subset of the set $E_{1,b}$ of diagonal edges with two black endvertices in $C_1$. Likewise, let $V_{1,w}$ be the set of white vertices of $C_1$ and $E_{1,w}$ be the set of diagonal edges with two white endvertices in $C_1$.
Finally, let $I_1\subset E_{1,w}$ be the set of diagonal edges with two white endvertices intersecting an edge of $G_b$ in an internal point.

\[connected\] The graph $(V_{1,w}, E_{1,w}\backslash I_1)$ is connected.

We argue by contradiction. Suppose that the graph $(V_{1,w}, E_{1,w}\backslash I_1)$ is not connected. This means that there is a cuboid subcomplex $K$ of $C_1$ of size $1\times 1\times 1$ (i.e. a unit cube) with white vertices not all in the same connected component of $(V_{1,w}, E_{1,w}\backslash I_1)$. Suppose that the vertex of $K$ closest to $(0, 0, 0)$ is white and let $(w_1, w_2, w_3)$ be its coordinates (the case when this vertex is black is treated analogously). Then, if the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains none of $(w_1 + 1, w_2 + 1, w_3), (w_1 + 1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3 + 1)$, then the black diagonal edges $(w_1 + 1, w_2, w_3)(w_1, w_2 + 1, w_3)$, $(w_1 + 1, w_2, w_3)(w_1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3)(w_1, w_2, w_3 + 1)$ are present in $E(G_b)$. See the left part of Fig. \[SE\]. This contradicts the fact that $G_b$ is a forest. If the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains exactly one of the white vertices $(w_1 + 1, w_2 + 1, w_3), (w_1 + 1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3 + 1)$, we may assume by symmetry that this is the vertex $(w_1 + 1, w_2 + 1, w_3)$. Then the black diagonal edges $(w_1, w_2 + 1, w_3)(w_1 + 1, w_2 + 1, w_3 + 1)$, $(w_1 + 1, w_2 + 1, w_3 + 1)(w_1 + 1, w_2, w_3)$, $(w_1 + 1, w_2, w_3)(w_1, w_2, w_3 + 1)$ and $(w_1, w_2, w_3 + 1)(w_1, w_2 + 1, w_3)$ are present in $E(G_b)$. See the right part of Fig. \[SE\]. Again, this contradicts the fact that $G_b$ is a forest. It follows that the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains at least two of the other three white vertices in $K$.
By symmetry this holds for every white vertex in $K$, which contradicts our initial assumption that not all white vertices of $K$ are in the same connected component of the graph $(V_{1,w}, E_{1,w}\backslash I_1)$. This proves the lemma.

![On the left, the case when the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains no other white vertex in $K$. On the right, the case when the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains only the white vertex with coordinates $(w_1 + 1, w_2 + 1, w_3)$ in $K$.[]{data-label="SE"}](SpineExtension.png)

\[ob 3.3\] Every forest that is a subgraph of a connected graph $G$ can be extended to a spanning tree of $G$.

The spanning tree can be obtained from the forest by repeatedly adding edges that do not close a cycle, for as long as such an edge exists.

From now up to and including Lemma \[spanning\_tree\_extend\], we denote by $V_w$ or $V_b$ the set of white or black vertices of $C$, respectively. By $E_w$ or $E_b$ we denote the set of diagonal edges with two white or black endvertices in $C$, respectively. We construct a graph $G_{b, centre}$ as follows.

1. Consider the restriction $\Tilde{G}_{b, centre}$ of the graph $(V_b, E_b)$ to the vertex set of $C_{[n, n+1]}$.

2. Delete the vertices of $\Tilde G_{b, centre}$ participating in $P_1$ and $P_5$. There are only two of them - the first and the last black vertices of $P$.

3. Delete the edges of $\Tilde G_{b, centre}$ crossing an edge in $E(P)\cap E_w$. Again, there are only two of them - these are the diagonal edges crossing the second and the second-to-last edges of $P$.

To summarise, the graph $G_{b, centre}$ is obtained from the graph $\Tilde G_{b, centre}$ by deleting edges and vertices as specified in steps 2 and 3.

\[middle\] The graph $G_{b, centre}$ is connected.
Notice that the restriction of $G_{b, centre}$ to $C_{[n]}$ has exactly two connected components, one of which consists of the vertex $(n, 0, 0)$ only, and the restriction of $G_{b, centre}$ to $C_{[n+1]}$ is a connected graph. Now, it remains to see that the edges $(n, 0, 0)(n+1, 1, 0)$ and $(n+1, 1, 0)(n, 1, 1)$ are present in $G_{b, centre}$. \[spanning\_tree\_extend\] Let $P$ be a spine. There is a set of diagonal edges extending $P$ to a tree $T'$ containing all vertices of $C$ with the following properties: - $T'$ uses only diagonal edges except two edges of $C$, one in the starting segment and one in the ending segment of the spine. - Every fundamental cycle of $T'$ as a spanning tree of $(V, E\cup E(T'))$ contains at least one of the paths $P_2$ from $A$ to $B$ or $P_4$ from $A'$ to $B'$ in $P$. In particular: - If $xy$ is an edge in $E\backslash E(T')$ with white vertex $x$ in $C_{[n+1, 2n+1]}$, then the fundamental cycle of the edge $xy$ in $T'$ contains the subpath $P_4$ of $P$. - If $xy$ is an edge in $E\backslash E(T')$ with white vertex $x$ in $C_{[0, n]}$, then the fundamental cycle of the edge $xy$ in $T'$ contains the subpath $P_2$ of $P$. Like in the proof of Lemma \[spine\], we denote by $P_1$ the starting segment of $P$, by $P_3$ the path in $P$ from $B$ to $A'$ and by $P_5$ the ending segment of $P$. The graph $G_{b, right}$ is the induced subgraph of the graph $(V_b, E_b)$ with vertex set $V_b\cap C_{[n+2, 2n+1]}$. The graph $G_{b, right}$ is connected and contains the path $P_2$. By Observation \[ob 3.3\] the path $P_2$ can be extended to a spanning tree of $G_{b, right}$. We choose one such spanning tree and denote it by $T^b_1$. Similarly the graph $G_{b, left}$ is the induced subgraph of the graph $(V_b, E_b)$ with vertex set $V_b\cap C_{[0, n-1]}$. The graph $G_{b, left}$ is connected and contains the path $P_4$. Again, by Observation \[ob 3.3\] the path $P_4$ can be extended to a spanning tree of $G_{b, left}$. 
We choose one such spanning tree and denote it by $T^b_2$. The black vertices of $C$ not covered by $P$, $T^b_1$ and $T^b_2$ are the ones of $G_{b, centre}$. The graph $G_{b, centre}$ is connected by Observation \[middle\]. We apply Observation \[ob 3.3\] to the forest consisting of the second diagonal edge $e$ of the path $P_3$. Note that this forest is included in $G_{b, centre}$. We conclude that there is a spanning tree of $G_{b, centre}$ containing $e$. Choose one such spanning tree and denote it by $T^b_3$. Thus, the restriction of $P\cup T^b_1\cup T^b_2\cup T^b_3$ to $(V_b, E_b)$ forms a spanning tree of $(V_b, E_b)$, which we call $T^b$. (Indeed, it is connected as the set $P$ intersects all the other three trees in the union and the union is acyclic and contains all black vertices by construction.) Let $I$ be the set of diagonal edges with two white endvertices in $C$ intersecting an edge of $T^b$. As $T^b$ is a tree, the induced subgraph of $T^b$ obtained by restricting to the vertex set of $C_{[n+1, 2n+1]}$ is a forest. We apply Lemma \[connected\] with $C_1 = C_{[n+1, 2n+1]}$ and $I_1$ the subset of $I$ consisting of those edges with both endvertices in $C_{[n+1, 2n+1]}$ to deduce that the induced subgraph of the graph $(V_w, E_w\backslash I)$ obtained by restricting to the vertex set of $C_{[n+1, 2n+1]}$ forms a connected graph that we call $G_{w, right}$. By Observation \[ob 3.3\] there is a spanning tree of $G_{w, right}$, which contains the last two diagonal edges of the ending segment $P_5$ of the spine $P$. We choose one such tree and call it $T^w_1$. Similarly, as $T^b$ is a tree, the induced subgraph of $T^b$ obtained by restricting to the vertex set of $C_{[0, n]}$ is a forest.
We apply Lemma \[connected\] with $C_1 = C_{[0, n]}$ and $I_1$ the subset of $I$ with both endvertices in $C_{[0, n]}$ to deduce that the induced subgraph of the graph $(V_w, E_w\backslash I)$ obtained by restricting to the vertex set of $C_{[0, n]}$ forms a connected graph. We call that connected graph $G_{w, left}$. By Observation \[ob 3.3\] there is a spanning tree of $G_{w, left}$, which contains the first two diagonal edges of the starting segment $P_1$ of the spine $P$. We choose one such tree and call it $T^w_2$. We define $T' = P\cup T^b\cup T^w_1\cup T^w_2$. We denote its vertex set by $V$ and its edge set by $E(T')$. $T'$ is a tree, and hence a spanning tree of the graph $(V, E\cup E(T'))$. We now prove that every fundamental cycle of $T'$ contains at least one of the paths $P_2$ from $A$ to $B$ and $P_4$ from $A'$ to $B'$ in the spine $P$. All of the edges in $E\backslash E(T')$ have one white and one black endvertex. We treat edges with white endvertex in $C_{[n+1, 2n+1]}$ and edges with white endvertex in $C_{[0,n]}$ separately. Choose an edge $xy$ in $E\backslash E(T')$ with white endvertex $x$. If $x$ is a vertex of $C_{[n+1, 2n+1]}$, then $x$ is a vertex of $T^w_1$. This means that $y$ has abscissa at least $n$ and is a vertex of one of the graphs $P_1$, $P_2$, $P_3$, $T^b_1$ or $T^b_3$. Thus the fundamental cycle of the edge $xy$ in $T'$ contains $P_4$ by construction. Similarly, if $x$ is a vertex of $C_{[0,n]}$ and is consequently covered by $T^w_2$, then $y$ must belong to one of the graphs $P_3$, $P_4$, $P_5$, $T^b_2$ or $T^b_3$. It follows that the fundamental cycle of the edge $xy$ in $T'$ contains $P_2$ by construction, which finishes the proof. ![An approximate scheme of a spine.[]{data-label="Kn"}](knots.png) We now subdivide some of the faces of $C$ by using the edges of $T'$ with both endvertices of the same colour. This defines the $2-$complex $C' = (V', E', F')$.
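The fundamental cycles used throughout this argument are computed in the standard way: the fundamental cycle of a non-tree edge $xy$ with respect to a spanning tree is the tree path from $x$ to $y$ closed up by the edge $xy$ itself. A minimal sketch on a hypothetical toy tree (unrelated to the complexes of this paper):

```python
# The fundamental cycle of a non-tree edge xy in a spanning tree T is the
# tree path from x to y together with the edge xy itself.
from collections import deque

def tree_path(tree_adj, x, y):
    """Vertex path from x to y in a tree given as an adjacency dict."""
    parent = {x: None}
    queue = deque([x])
    while queue:  # BFS to record parent pointers
        u = queue.popleft()
        for w in tree_adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    path = [y]
    while path[-1] != x:  # walk back from y to x
        path.append(parent[path[-1]])
    return path[::-1]

def fundamental_cycle(tree_adj, non_tree_edge):
    x, y = non_tree_edge
    return tree_path(tree_adj, x, y)  # the closing edge yx completes the cycle

# Toy spanning tree 0-1-2-3 of a 4-cycle; non-tree edge (0, 3).
T = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert fundamental_cycle(T, (0, 3)) == [0, 1, 2, 3]
```

Every edge outside the tree determines exactly one such cycle; this is the notion of fundamental cycle appearing in Lemma \[spanning\_tree\_extend\].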
As subdivisions of faces do not change the topological properties of the $2-$complex, $C'$ is a simply connected $2-$complex. Let us call the embedding of $C'$ in $3-$space obtained after subdivisions of faces of the canonical embedding of $C$ the *canonical embedding of $C'$*. In the following lemma we prove that every fundamental cycle of $T'$ as a spanning tree of the $1-$skeleton of $C'$ forms a nontrivial knot in the canonical embedding of $C'$. In other words, we prove that $T'$ is entangled with respect to the canonical embedding of $C'$. \[L 3.6\] Every fundamental cycle of the spanning tree $T'$ forms a nontrivial knot in the canonical embedding of $C'$. All of the edges of $C'$ not in $T'$ have one white and one black endvertex. We treat edges with white endvertex with abscissa at least $n+1$ and edges with white endvertex with abscissa at most $n$ separately. Let $e = xy$ be an edge of $C'$ not in $T'$ with white endvertex $x$. If $x$ has abscissa at least $n+1$, then the fundamental cycle $o_e$ of $e$ contains the path $P_4$ by Lemma \[spanning\_tree\_extend\]. Thus, we can decompose the knot formed by the embedding of the fundamental cycle $o_e$ induced by the canonical embedding of $C'$ as a connected sum of the knot $K$, containing $e$, the line segment between $A'$ and $B'$ and the paths in $T'$ between $y$ and $A'$ and between $B'$ and $x$, and the knot $K'$, containing only the line segment between $A'$ and $B'$ and $P_4$. See . As $K'$ is a nontrivial knot, the connected sum $K \# K'$ is a nontrivial knot by Lemma \[L 2.6\]. This proves that the present embedding of $o_e$ forms a nontrivial knot.
In the case when $x$ has abscissa at most $n$, the fundamental cycle $o_e$ of $e$ contains the path $P_2$ by Lemma \[spanning\_tree\_extend\], so its embedding, induced by the canonical embedding of $C'$, can be decomposed in a similar fashion as a connected sum of the knot $K$, containing $e$, the line segment between $A$ and $B$ and the paths in $T'$ between $x$ and $A$ and between $B$ and $y$, and the knot $K'$, containing only the line segment between $A$ and $B$ and $P_2$. Once again by Lemma \[L 2.6\] $K \# K'$ is a nontrivial knot because $K'$ is a nontrivial knot. Thus $T'$ is entangled with respect to the canonical embedding of $C'$. ![$\gamma_e$ is a connected sum of $K$ and $K'$.[]{data-label="k1k2"}](K1K2.png) We continue with the proof of . Our next goal will be to prove the following lemma: \[embedC’\] The $2-$complex $C'$ has a unique embedding in $3-$space up to homeomorphism. As the $2-$complex $C'$ is obtained from the cuboid complex $C$ by subdividing some of the faces of $C$, the two complexes are topologically equivalent. Therefore in the sequel we work with $C$ rather than $C'$ to avoid technicalities that have to do with the diagonal edges, which are irrelevant for the proof of Lemma \[embedC’\]. From ([@JC1], Section 4) combined with Lemma \[iso\] we know that every simply connected and locally $3-$connected[^6] simplicial complex embeddable in $\mathbb S^3$ has a unique embedding in $3-$space up to homeomorphism. One may be tempted to apply this result to the simply connected $2-$complex $C$ directly. Although the link graphs at most of its vertices are $3-$connected, this does not hold for all of them. For example, the link graph at the vertex with coordinates $(1,0,0)$ in the canonical embedding of $C$ is equal to the complete graph $K_4$ minus an edge. It is easy to see that this graph can be disconnected by deleting the two vertices of degree 3. 
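The last claim can be checked mechanically. The sketch below verifies by brute force that $K_4$ minus an edge remains connected after deleting any single vertex (so it is $2-$connected) but is disconnected by deleting its two vertices of degree 3 (a toy computation, independent of the complexes in this paper):

```python
# Brute-force check that K4 minus an edge is 2-connected but not 3-connected.

def connected(vertices, edges):
    """True if the induced subgraph on `vertices` is connected (or empty)."""
    vertices = set(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:  # depth-first search
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return seen == vertices

V = [0, 1, 2, 3]
E = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # K4 minus the edge 01

assert connected(V, E)
assert all(connected(set(V) - {v}, E) for v in V)  # 2-connected
assert not connected(set(V) - {2, 3}, E)           # deleting both degree-3 vertices disconnects
```

Here the vertices 2 and 3 have degree 3; removing both leaves the nonadjacent pair 0, 1, exactly as in the link graph at $(1,0,0)$ discussed above.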
Another obstacle comes from the link graphs at the “corner vertices” of $C$ (take $(0,0,0)$ for example), which are equal to $K_3$ and are therefore only $2-$connected. Our goal now will be to construct a $2-$complex, which contains $C$ as a subcomplex and is moreover embeddable in $3-$space, simply connected and locally $3-$connected at the same time. Roughly speaking, the construction consists of packing $C$ (seen in its canonical embedding) with one layer of unit cubes to obtain a cuboid complex of size $(2n+3)\times (n+2)\times (n+2)$ containing $C$ in its interior, and then contracting all edges and faces disjoint from $C$. The formal construction goes as follows, see Figure \[3-connFIG\]. Let $C^+$ be the cuboid complex of size $(2n+3)\times (n+2)\times (n+2)$. Let $\iota^+$ be its canonical embedding. The restriction of $\iota^+$ to the cuboid $[1,2n+2]\times [1,n+1]\times [1,n+1]$ is the canonical embedding of $C$ (translated by the vector $(1,1,1)$). Thus we view $C$ as a subcomplex of $C^+$. \[Ob3\] The $2-$complex $C^+$ is simply connected. Let us contract all edges and faces of $C^+$ disjoint from $C$ to a single vertex $t$. By Observations \[Ob2\] and \[Ob3\] this produces a simply connected $2-$complex $C^t$. \[3-conn\] The link graph at the vertex $t$ of the $2-$complex $C^t$ is $3-$connected. Let us consider the embedding $\iota^t$ of the $2-$complex $C^t$ in $\mathbb S^3$ in which $\iota^t(t) = \infty$, $\iota^t_{|C = C^t\backslash \{t\}}$ is the canonical embedding of $C$ in $3-$space and for every face $f$ of $C^t$, $\iota^t(f)$ is included in some affine plane of $\mathbb R^3\cup \{\infty\}$. From this embedding of $C^t$ we deduce that the link graph at $t$ in $C^t$ can be embedded in $\mathbb R^3$ as follows. Consider the integer points (i.e. the points with three integer coordinates) on the boundary of the cuboid $\iota^t(C)$.
Construct a copy of the $1-$skeleton of each side of $\iota^t(C)$ by translating it by an outward vector of length one orthogonal to this side. Then, add an edge between every pair of vertices, which are the images of the same integer point on the boundary of the cuboid $\iota^t(C)$ under two different translations. In other words, we add edges between the pairs of integer points in $\mathbb R^3$, which are in the copies of two different sides of the cuboid and at Euclidean distance $\sqrt{2}$. See . ![The link graph at $t$ in $C^t$. Here $n=2$. The copies of all six sides are depicted in black while the edges added between copies of two different sides are coloured in light grey.[]{data-label="3-connFIG"}](3-conn.png) We easily verify now that in the graph constructed above there are at least three vertex-disjoint paths between every two vertices (indeed, there are always four such paths). By Menger’s theorem the link graph at $t$ in $C^t$ is then $3-$connected. The *double wheel graph* is the graph on six vertices, which is the complement of a perfect matching. We denote it by $W^2$. \[locally3-conn\] The $2-$complex $C^t$ is locally $3-$connected. The link graph at $t$ in $C^t$ is $3-$connected by Lemma \[3-conn\]. The link graphs at all other vertices are all equal to the double wheel graph, which is $3-$connected as well, which proves the claim. Now, by Observation \[Ob3\], Corollary \[locally3-conn\], Lemma \[iso\] and ([@JC1], Section 4) we deduce that $C^t$, just like any other simply connected and locally $3-$connected $2-$complex embeddable in $3-$space, has a unique embedding in $\mathbb S^3$ up to homeomorphism. \[uniqueC\] The $2-$complex $C$ has a unique embedding in $3-$space up to homeomorphism. Let $\iota$ be an embedding of $C$ in $3-$space.
Consider the subcomplex $C_1$ of $C$ induced by the vertices of $C$ with coordinates (taken with respect to the canonical embedding of $C$) in the set $$\Big\{(x, y, z)| \hspace{0.4em} x\in \{0, 2n+1\}\Big\}\bigcup \Big\{(x, y, z)| \hspace{0.4em} y\in \{0, n\}\Big\}\bigcup \Big\{(x, y, z)| \hspace{0.4em} z\in \{0, n\}\Big\}.$$ These are roughly the “boundary vertices” of $C$ in its canonical embedding. Thus $\iota(C_1)$ is a piecewise linear embedding of the $2-$sphere in $3-$space. Now notice that $\mathbb S^3\backslash \iota(C_1)$ has two connected components. Moreover, as $\iota(C)\backslash \iota(C_1)$ is connected, it must lie entirely in one of the two connected components of $\mathbb S^3\backslash \iota(C_1)$. Adding a vertex $t$ to the connected component disjoint from $\iota(C)$ allows us to construct an embedding of $C^t$ in $3-$space. By the above, this embedding is unique up to homeomorphism of $\mathbb S^3$. We deduce that $C$ also has a unique embedding in $3-$space up to homeomorphism of $\mathbb S^3$. We are ready to prove Lemma \[embedC’\]. Every embedding of the $2-$complex $C'$ comes from an embedding of $C$ by subdividing some of the faces of $C$ with the edges of $T'$. By Corollary \[uniqueC\] there is a unique embedding of $C$ in $3-$space up to homeomorphism. Thus $C'$ has a unique embedding in $3-$space up to homeomorphism as well. Towards the proof of , we prove the following lemma: \[main L\] Every cycle $o$ of $C'$ that is a nontrivial knot in the canonical embedding of $C'$ is a nontrivial knot in any embedding of $C'$. First we need one more lemma and a corollary. \[L 3.8\] Let $\psi: \mathbb S^3\longrightarrow \mathbb S^3$ be a homeomorphism of the $3-$sphere. Let $\gamma$ be a trivial knot in $\mathbb S^3$. Then the knot $\psi(\gamma)$ is trivial. As $\gamma$ is a trivial knot, it has a thickening whose complement is homeomorphic to a solid torus. We call this thickening $D$.
By the Solid Torus Theorem (see [@Al] or [@JR]) the complement of $D$ – that is, $\mathbb S^3\backslash D$ – is a solid torus. As $\psi$ is a homeomorphism, the image $\psi(\gamma)$ of the knot $\gamma$ is a knot. By intersecting the thickening $D$ of $\gamma$ with the inverse image of a thickening of the knot $\psi(\gamma)$ if necessary, we may assume that $\psi(D)$ is additionally a thickening of the knot $\psi(\gamma)$. The restriction of the homeomorphism $\psi$ to the knot complement $\mathbb S^3\backslash D$ is a homeomorphism to $\mathbb S^3\backslash \psi(D)$. Thus these two knot complements are homeomorphic. By the Gordon-Luecke Theorem [@GL], it follows that the knots $\gamma$ and $\psi(\gamma)$ have the same knot type. Thus the knot $\psi(\gamma)$ must be trivial. \[cor 3.9\] The image of a nontrivial knot in $\mathbb S^3$ by a homeomorphism $\psi$ of the $3-$sphere is a nontrivial knot. This is the contrapositive of Lemma \[L 3.8\] applied to $\psi^{-1}$. We are now ready to prove Lemma \[main L\]. This is a direct consequence of Lemma \[embedC’\] and Corollary \[cor 3.9\]. We are now able to complete the proof of . It remains to prove that the spanning tree $T'$ of the $1-$skeleton of $C'$ is entangled (recall that this means that each of its fundamental cycles forms a nontrivial knot in any embedding of $C'$ in $3-$space). Consider an edge $e$ in $E'\backslash E(T')$. By Lemma \[L 3.6\] the fundamental cycle $o_e$ of $e$ in $T'$ is nontrivially knotted in the canonical embedding of $C'$. By Lemma \[embedC’\] any two embeddings of $C'$ in $3-$space are homeomorphic, so applying Lemma \[main L\] to $o_e$ gives that $o_e$ forms a nontrivial knot in every embedding of $C'$ in $3-$space. As this holds for every edge in $E'\backslash E(T')$ the proof of is complete. Proof of Lemma \[sec4lemma\] {#Section4} ============================ Let us consider the $2-$complex $C'$ and the spanning tree $T'$ of the $1-$skeleton of $C'$ as in .
We recall that the $2-$complex $C'' = (V'', E'', F'')$ is obtained by contraction of the spanning tree $T'$ of the $1-$skeleton of $C'$. Let us consider an embedding $\iota'$ of the $2-$complex $C'$ in $3-$space. By Observation \[Ob1\] contractions of edges with different endvertices preserve embeddability and can be performed within $\iota'$. Therefore contracting the edges of the tree $T'$ one by one within $\iota'$ induces an embedding $\iota''$ of $C''$ in which every edge forms a nontrivial knot. The goal of this section is to justify that every embedding of $C''$ in $3-$space can be obtained this way.\ We recall that for a $2-$complex $C _1 = (V_1, E_1, F_1)$, the *link graph* $L_v(C _1)$ at the vertex $v$ in $C_1$ is the incidence graph between edges and faces incident with $v$ in $C _1$. Below we aim to show that every planar rotation system of the $2-$complex $C''$ arises from a planar rotation system of the $2-$complex $C'$. We begin by proving that contractions of edges of a $2-$complex commute with each other. \[comm\] Let $e_1, e_2, \dots , e_k$ be edges of a $2-$complex $C_1$. The link graphs at the vertices of the $2-$complex $C_1/\{e_1, e_2, \dots e_k\}$ do not depend on the order in which the edges $e_1, e_2, \dots , e_k$ are contracted. It is sufficient to observe that the $2-$complex $C_1/\{e_1, e_2,\dots, e_k\}$ is well defined and does not depend on the order of contraction of the edges $e_1, e_2, \dots, e_k$. \[L 4.3\] Let $C _1 = (V_1, E_1, F_1)$ be a locally $2-$connected $2-$complex and let $e$ be an edge of $C _1$ that is not a loop. Then every planar rotation system of the $2-$complex $C _1/e$ is induced by a planar rotation system of $C _1$. Let $e = xy$ for $x, y\in V_1$. As the link graphs at $x$ and $y$ are $2-$connected, the vertices corresponding to the edge $e$ in the two link graphs $L_x(C _1)$ and $L_y(C _1)$ are not cutvertices. 
Under these conditions ([@JC1], Lemma 2.2) says that every planar rotation system of $C _1/e$ is induced by a planar rotation system of $C _1$. \[2-conn’\] Subdivisions of $2-$connected graphs are $2-$connected. Let $G$ be a $2-$connected graph and $G'$ be a subdivision of $G$. Let $v'$ be a vertex of $G'$. If the vertex $v'$ is present in $G$, then $G\backslash v'$ can be obtained from $G'\backslash v'$ by a sequence of edge contractions, so in particular $G'\backslash v'$ is connected. If the vertex $v'$ is not present in $G$ and participates in the subdivision of the edge $e$ of $G$, then $G\backslash e$ can be obtained from $G'\backslash v'$ by a sequence of edge contractions, so $G'\backslash v'$ is connected. We now state and prove an easy but crucial observation. \[2-conn\] The $2-$complexes $C$ and $C'$ are locally $2-$connected. As the link graphs at the vertices of $C'$ are subdivisions of the link graphs at the vertices of $C$ (to construct $C'$ we only add new edges subdividing already existing faces of $C$), by Observation \[2-conn’\] it is sufficient to prove the observation for the $2-$complex $C$. By *degree of a vertex $v$* in $C$ we mean the number of edges of $C$ incident to $v$. The link graphs at the vertex $v$ of $C$ are equal to: - The double wheel graph $W^2$ if $v$ is of degree 6. - $W^2\backslash w$, where $w$ is any vertex of $W^2$, if $v$ is of degree 5. - $K_4\backslash e$, where $e$ is any edge of the complete graph $K_4$, if $v$ is of degree 4. - The complete graph $K_3$, if $v$ is of degree 3. As each of these graphs is $2-$connected, the $2-$complex $C$ is locally $2-$connected. \[main cor\] Every planar rotation system of $C''$ is induced by a planar rotation system of $C'$. As contractions of edges commute by Lemma \[comm\], the order of contraction of the edges of the tree $T'$ is irrelevant. 
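The commutativity provided by Lemma \[comm\] can be illustrated at the level of graphs: contracting a set of edges just merges endvertices, so the quotient does not depend on the order of contraction. A hypothetical toy sketch (the $2-$complex operation additionally rewrites face boundaries, which this graph-level sketch omits):

```python
# Order-independence of edge contractions, checked on a toy graph.
from itertools import permutations

def contract_all(vertices, edges, to_contract):
    """Quotient of (vertices, edges) by contracting to_contract; loops dropped."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v

    for u, v in to_contract:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    # Represent each vertex by its full equivalence class, so the result
    # is canonical and independent of union-find internals.
    cls = {v: frozenset(w for w in vertices if find(w) == find(v)) for v in vertices}
    return frozenset(frozenset((cls[u], cls[v])) for u, v in edges
                     if cls[u] != cls[v])

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Contract two overlapping edges in either order: the quotient is the same.
quotients = {contract_all(V, E, order) for order in permutations([(0, 1), (1, 2)])}
assert len(quotients) == 1
```

The quotient is determined by the partition of the vertex set into the connected components of the contracted edge set, which is manifestly order-independent.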
We know that the $2-$complex $C'$ is locally $2-$connected and by ([@JC1], Lemma 3.4) we also know that vertex sums of $2-$connected graphs are $2-$connected. From these two facts we deduce that the assumptions of Lemma \[L 4.3\] remain satisfied after each contraction. Thus we use Lemma \[L 4.3\] inductively by performing consecutive contractions of the edges of the spanning tree $T'$ of the $1-$skeleton of $C'$, which proves the corollary. \[iso\] Let $\iota$ and $\iota'$ be two embeddings of a locally connected and simply connected $2-$complex in $3-$space with the same planar rotation systems. Then there is a homeomorphism $\psi$ of the $3-$sphere such that the concatenation of $\iota$ and $\psi$ is $\iota'$.[^7] Consider thickenings[^8] $D$ and $D'$ of the embeddings $\iota$ and $\iota '$. As these embeddings are assumed to be piecewise linear, $D$ and $D'$ are well defined up to homeomorphism. Moreover, as the planar rotation systems of $\iota$ and $\iota '$ coincide, $D$ and $D'$ are homeomorphic. We denote the homeomorphism between $D$ and $D'$ by $\psi$. Firstly, as the image of the boundary of $D$ under $\psi$ is the boundary of $D'$, $\psi$ induces a bijection between the connected components of $\mathbb S^3 \backslash D$ and the connected components of $\mathbb S^3 \backslash D'$. More precisely, the connected component $B$ of $\mathbb S^3 \backslash D$ corresponds to the connected component $B'$ of $\mathbb S^3 \backslash D'$ for which $\psi (\partial B) = \partial B'$. Secondly, as the $2-$complex $C$ is simply connected and locally connected, all connected components of $\mathbb S^3 \backslash D$ and of $\mathbb S^3 \backslash D'$ have boundaries homeomorphic to the $2-$sphere. See for example Theorem 6.8 in [@JC2]. By Alexander’s Theorem every connected component is homeomorphic to the $3-$ball. Fix a pair $(B, B')$ as above. By a trivial induction argument it is sufficient to extend $\psi$ from $D\cup B$ to $D'\cup B'$.
After performing an isotopy if necessary, we may assume that $B$ and $B'$ are convex. Choosing some $b\in B$ and $b'\in B'$, we construct a homeomorphism $\overline \psi : B \longrightarrow B'$ by setting $\overline \psi (b + \lambda (x-b)) = b' + \lambda (\psi(x) - b')$ for all $\lambda \in [0,1)$ and all $x\in \partial B$. Thus, $\psi \cup \overline \psi$ gives the required homeomorphism from $D\cup B$ to $D'\cup B'$. We are ready to prove Lemma \[sec4lemma\] saying that every embedding of $C''$ in $3-$space is obtained from an embedding of $C'$ by contracting the tree $T'$. Consider an embedding $\iota''$ of the $2-$complex $C''$ in $3-$space with planar rotation system $\Sigma''$. By Corollary \[main cor\] $\Sigma''$ is induced by a planar rotation system $\Sigma'$ of $C'$. As the $2-$complex $C'$ is simply connected and has a planar rotation system $\Sigma'$, by ([@JC2], Theorem 1.1) it has an embedding $\iota'$ in $3-$space with rotation system $\Sigma'$. Contraction of the tree $T'$ in the $2-$complex $C'$ produces an embedding of $C''$ with planar rotation system $\Sigma''$, which is homeomorphic to $\iota''$ by Lemma \[iso\]. This proves Lemma \[sec4lemma\]. We conclude this section with two consequences of Lemma \[sec4lemma\]. \[cor 4.5\] The $2-$complex $C''$ has a unique embedding in $3-$space up to homeomorphism. By Lemma \[embedC’\] there is a unique embedding of $C'$ in $\mathbb S^3$ up to homeomorphism. By Lemma \[sec4lemma\] we conclude that there is a unique embedding of $C''$ in $\mathbb S^3$ up to homeomorphism as well. \[cor 4.6\] Every embedding of the $2-$complex $C''$ in $3-$space contains only edges forming nontrivial knots. Let $\iota''$ be an embedding of $C''$. By Lemma \[sec4lemma\] there is an embedding $\iota'$ of $C'$ in $3-$space, which induces $\iota''$. Let $e''$ be an edge of $C''$. It corresponds to an edge $e'$ of $C'$, which is not in $T'$.
As the tree $T'$ is entangled, the embedding of the fundamental cycle of $e'$ in $T'$ induced by $\iota'$ forms a nontrivial knot. It remains to notice that this knot must have the same knot type as $\iota''(e'')$. Thus for every embedding $\iota''$ of $C''$ in $3-$space and every edge $e''$ of $C''$ we have that $\iota''(e'')$ is a nontrivial knot. Proof of Lemma \[sec5lemma\] {#Section5} ============================ The remainder of this paper is dedicated to the proof of Lemma \[sec5lemma\], which will be implied by the following lemma. \[rem\_lem\] For every edge $e''$ of $C''$ the link graph of $C''/e''$ at its unique vertex is not planar. Consider the 2-complex $C''/e''$ for some edge $e''$ of $C''$. By Lemma \[rem\_lem\], the link graph at its unique vertex is not planar. Hence $C''/e''$ is not embeddable in any 3-manifold. Before proving Lemma \[rem\_lem\], we do some preparation. \[L 5.2\] Let the graph $G$ be a vertex sum of the two disjoint graphs $G'$ and $G''$ at the vertices $x'$ and $x''$, respectively. Suppose that $G'$ is not planar and $G''$ is $2-$connected. Then, $G$ is not planar. As the graph $G''$ is $2-$connected, the graph $G''\backslash x''$ is connected. Therefore by contracting the graph $G$ onto the edge set of $G'$, we obtain the graph $G'$ (notice that contraction of a loop edge is equivalent to its deletion). As contraction of edges preserves planarity, if $G'$ is not planar, then $G$ is not planar as well. For a $2-$complex $C_1$ and edges $e_1, e_2, \dots, e_k$ in $C_1$ there is a bijection between the edges of $C_1$ different from $e_1, e_2, \dots, e_k$ and the edges of $C_1/\{e_1, e_2, \dots, e_k\}$. In order to increase readability, we suppress this bijection in our notation below; that is, we identify an edge $e$ of $C_1$ different from $e_1, e_2, \dots, e_k$ with its corresponding edge of $C_1/\{e_1, e_2, \dots, e_k\}$. Let $e$ be an edge of a $2-$complex $C_1$.
We aim to see how the link graphs at the vertices of $C_1$ relate to the link graphs at the vertices of $C_1/e$. Clearly link graphs at vertices not incident with the edge $e$ remain unchanged. If $e = uv$ for different vertices $u$ and $v$ of $C_1$, then contracting the edge $e$ leads to a vertex sum of the link graph at $u$ and the link graph at $v$ at the vertices $x$ and $y$ corresponding to the edge $e$. The bijection between their incident edges $(xx_i)_{i\leq k}$ and $(yy_i)_{i\leq k}$ is given as follows. The edge $xx_i$ in the link graph at $u$ corresponds to the edge $yy_i$ in the link graph at $v$ if both $xx_i$ and $yy_i$ are induced by the same face of $C_1$ incident to $e$. If the edge $e$ is a loop with base vertex [^9] $v$ (i.e. $e = vv$), the link graph $L_v$ at $v$ is modified by the contraction of $e$ as follows. Let $x$ and $y$ be the vertices of $L_v$ corresponding to the loop edge $e$. Firstly, delete all edges between $x$ and $y$ in $L_v$. These edges correspond to the faces of $C_1$ having only the edge $e$ on their boundary. Secondly, for every pair $(xx', yy')$ of edges of $L_v$ incident to the same face of $C_1$, add an edge between $x'$ and $y'$ in $L_v$. This edge might be a loop if $x'$ and $y'$ coincide. Finally, delete the vertices $x$ and $y$ from $L_v$. We call the graph obtained by the above sequence of three operations on the link graph $L_v$ *internal vertex sum within the link graph $L_v$ at the vertices $x$ and $y$*. By abuse of language we also use the term *internal vertex sum* for the sequence of operations itself. \[prove 5.1\] Let $o$ be a fundamental cycle of the spanning tree $T'$ of the $1-$skeleton of the $2-$complex $C'$. Contract the cycle $o$ to a vertex $\underline{o}$. Then, the link graph at the vertex $\underline{o}$ in the $2-$complex $C'/o$ is nonplanar. 
Before proving Lemma \[prove 5.1\] we show how Lemma \[L 5.2\], Lemma \[prove 5.1\] and some results from previous sections together imply Lemma \[rem\_lem\]. Let $e''$ be an edge of the $2-$complex $C''$. It originates from an edge $e'$ of $C'$, which is not in $T'$. Thus, $e'$ participates in a fundamental cycle $o$ of $T'$. As contractions of edges of a $2-$complex commute by Lemma \[comm\], we obtain $C''/e''$ by first contracting the edges of $o$ in $C'$ and then the edges of $T'$ not in $o$ in $C'/o$. By Lemma \[prove 5.1\] contracting $o$ to a vertex $\underline{o}$ in $C'/o$ leads to a nonplanar link graph at $\underline{o}$. Moreover, as the $2-$complex $C'$ is locally $2-$connected by Observation \[2-conn\], the link graph at every vertex of $C'/o$ except possibly $\underline{o}$ is $2-$connected. Then, by Lemma \[L 5.2\] contraction of any non-loop edge $e = \underline{o}w$ of $C'/o$ incident to $\underline{o}$ leads to a non-planar link graph at the vertex of $C'/\{o,e\}$ obtained by identifying $\underline{o}$ and $w$. Then, contracting one by one the edges of $E(T')\backslash E(o)$ in $C'/o$ to the vertex $\underline{o}$ and applying consecutively Lemma \[L 5.2\] we deduce that the link graph at the only vertex of $C''/e''$ is not planar. (Here by abuse of notation we denote by $\underline{o}$ the vertex at which the link graph is not planar after each following contraction. In this sense $\underline{o}$ is also the only remaining vertex in $C''/e''$.) The aim of this section from now on will be to prove Lemma \[prove 5.1\]. Let $G_{14}$ be the graph depicted on the left of . 
Formally its vertex set is $$V(G_{14}) = \{X_1, X_2, X_3, Y_1, Y_2, Y_{3,1}, Y_{3,2}, K, L, M, N, Q, R, S\}$$ and its edge set is $$\begin{aligned} E(G_{14}) = &\{X_1Y_1, X_1Y_2, X_2Y_1, X_2Y_2, X_3Y_1, X_3Y_2, X_1Y_{3,1}, X_3K, KY_{3,1}, X_2L, LM, MY_{3,2},\\ & Y_2K, Y_1K, X_1X_2, X_3S, LQ, LN, MQ, MN, RY_{3,2}, RQ, RN, RS, SQ, SN\}.\end{aligned}$$ We construct the graph $G_{13}$ from $G_{14}$ by identifying the vertices $Y_{3,1}$ and $Y_{3,2}$; the resulting identification vertex is denoted by $Y_3$. See the right part of . \[nonpl\] The graph $G_{13}$ is not planar. We contract in the graph $G_{13}$ the paths $X_2LMY_3$ and $X_3KY_3$ each to a single edge. The resulting graph contains all edges between the two vertex sets $\{X_1, X_2, X_3\}$ and $\{Y_1, Y_2, Y_3\}$. So $G_{13}$ has $K_{3,3}$ as a minor. So $G_{13}$ cannot be planar as it has a nonplanar minor. We make two essential reminders. Firstly, consider the canonical embedding of $C'$. The paths $P_2$ and $P_4$ are constructed so that there is a sequence of three consecutive diagonal edges pointing in the same direction. For example in $P_2$ as given in a possible choice of such sequence is the third, the fourth and the fifth edge after the vertex $A$. Secondly, every fundamental cycle obtained by adding an edge in $E'\backslash T'$ to $T'$ contains at least one of the paths $P_2$ and $P_4$ as a subpath by construction. Thus, fixing a fundamental cycle $o$ in $T'$, we find a path $e_1, e_2, e_3$ of three consecutive diagonal edges in the same direction. We denote the four vertices in this path of three edges $e^-_1, e^+_1\equiv e^-_2, e^+_2\equiv e^-_3$ and $e^+_3$. \[Ob 5.3\] The link graph at the vertex $e^+_2$ of $C'/e_2$ (where $e^+_1\equiv e^-_2\equiv e^+_2\equiv e^-_3$ in $C'/e_2$) is equal to $G_{14}$. Recall that the double wheel graph $W^2$ is a graph on six vertices, which is the complement of a perfect matching. Notice that for every edge $e$ of $W^2$ the graph $W^2\backslash e$ is the same. 
We call this graph *modified double wheel graph* and denote it by $W^{2-}$. \[Ob4\] Subdivisions of the double wheel graph $W^2$ and of the modified double wheel graph $W^{2-}$ are $2-$connected. \[second to last\] Let the $2-$complex $C^- = C'\backslash \{e_1, e_3\}$ be obtained from the $2-$complex $C'$ by deleting the edges $e_1$ and $e_3$. Contract the path $p$ between $e^+_3$ and $e^-_1$ in $C^-$ contained in $o$ to a single vertex. The link graph obtained at this vertex after the contraction of $p$ is $2-$connected. Fix a vertex $s$ of $C^-$ in $p$. If $s$ is different from $e^-_1$ and $e^+_3$, the link graph at $s$ in $C^-$ is equal to the link graph at $s$ in $C'$, which is a subdivision of $W^2$. By Observation \[Ob4\] this graph is $2-$connected. If $s$ is equal to $e^-_1$ or $e^+_3$, then the link graph at $s$ in $C^-$ is a subdivision of the modified double wheel graph, which is again $2-$connected by Observation \[Ob4\]. By ([@JC1], Lemma 3.4) vertex sums of $2-$connected graphs are $2-$connected, which proves the lemma. The argument behind the next proof, despite being a bit technical, is quite straightforward. Informally it states that by plugging certain graphs $L_w$ into the graph $G_{14}$ twice via “vertex sums” at the vertices $Y_{3,1}$ and $Y_{3,2}$ of $G_{14}$ we obtain a graph containing $G_{13}$ as a minor. ![The graph $G_{14}$ depicted on the left is obtained as link graph at the vertex $e^+_2$ after contraction of the edge $e_2$ in $C'$. After identification of $Y_{3, 1}$ and $Y_{3, 2}$ in $G_{14}$ we obtain the graph $G_{13}$ shown on the right. The subdivision of $K_{3,3}$ in $G_{13}$ is given in grey.[]{data-label="New"}](New.png) \[minor\] Let $o$ be a fundamental cycle in $T'$. Contract the cycle $o$ to a vertex $\underline{o}$. Then, the link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ in $C'/o$ has $G_{13}$ as a minor. By Lemma \[comm\] contractions of edges of a $2-$complex commute. 
Thus, we contract the edges of the cycle $o$ in the following order: 1. We contract all edges except for $e_1, e_2, e_3$; 2. we contract $e_2$, $e_1$ and $e_3$ in this order. We now follow in detail each of the described contractions. Let $L_w$ and $L_u$ be the link graphs at the vertices $w = e^{-}_1 = e^{+}_3$ and $u = e^{+}_2 = e^{-}_2$ respectively just before the contraction of the edge $e_1$ of $C'$. They are both $2-$connected as vertex sums of $2-$connected graphs. Let $Y'_{3,2}$ and $Y'_{3,1}$ correspond to the edges $e_3$ and $e_1$ respectively in the link graph $L_w$ at the vertex $w$. Analogously $Y_{3,2}$ and $Y_{3,1}$ correspond to the edges $e_3$ and $e_1$ respectively in the link graph $L_u$ at the vertex $u$, which is equal to $G_{14}$ by Observation \[Ob 5.3\]. See . Contractions of $e_1$ and $e_3$ produce the $2-$complex $C'/o$. The link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ in $C'/o$ is obtained from $L_w$ and $L_u$ by performing: - A vertex sum between $L_w$ and $L_u$ at $Y'_{3,1}$ and $Y_{3,1}$ respectively. Call this vertex sum $L$. - An internal vertex sum within $L$ at the vertices $Y'_{3,2}$ and $Y_{3,2}$. The internal vertex sum within $L$ forms the link graph $L_{\underline{o}}$. By Lemma \[second to last\] the graph $L_{w}\backslash \{Y'_{3,1}, Y'_{3,2}\}$ is $2-$connected, so connected in particular. It is also realised as an induced subgraph of $L_{\underline{o}}$ by restricting $L_{\underline{o}}$ to the set of vertices inherited from $L_{w}$ (all except $Y'_{3,1}$ and $Y'_{3,2}$). The contraction of the edges of this induced subgraph within $L_{\underline{o}}$ is equivalent to identifying $Y_{3,1}$ and $Y_{3,2}$ in $L_u = G_{14}$. This proves the lemma. We are ready to prove Lemma \[prove 5.1\]. By Lemma \[nonpl\], $G_{13}$ is not planar. At the same time, $G_{13}$ is a minor of the link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ of $C'/o$ by Lemma \[minor\].
As contraction of edges preserves planarity, $L_{\underline{o}}$ is not planar either.

Conclusion {#sec6}
==========

In this paper we provided an example of a simply connected $2-$complex $C'' = (V'', E'', F'')$ embeddable in $3-$space such that the contraction of any edge $e$ of $C''$ in the abstract sense produces a $2-$complex $C''/e$ which cannot be embedded in $3-$space. This construction opens a number of questions. Some of them are given below.

- Is there a structural characterisation of the (simply connected) $2-$complexes with exactly one vertex embeddable in $3-$space with the above property?
- Is there a structural characterisation of the (simply connected) $2-$complexes with exactly one vertex admitting an embedding in $3-$space without edges forming nontrivial knots?
- Is there a structural characterisation of the (simply connected) $2-$complexes such that each of their edge-contractions admits an embedding in $3-$space?

Acknowledgements
================

The second author would like to thank Nikolay Beluhov for a number of useful discussions.

[^1]: University of Birmingham

[^2]: Ecole Normale Supérieure de Lyon

[^3]: See for example [@Sch11] for details on the history.

[^4]: Formally the path $P_2$ is given by the fact that it is a facial path approximating (i.e. staying at distance at most 1 from) the following piecewise linear path contained in the $1-$skeleton of $C$: $$\begin{aligned} A = & (n+2, 1, 1), (n+6, 1, 1), (n+6, 5, 1), (n+10, 5, 1), (n+10, 5, 13), (n+10, 13, 13), (n+6, 13, 13), (n+6, 13, 5),\\ & (n+6, 1, 5), (n+14, 1, 5), (n+14, 1, 9), (n+14, 9, 9), (n+2, 9, 9) = B.\end{aligned}$$ Although such an approximating facial path is not unique, any choice of such a path suits our purposes. In this proof, one particular choice of $P_2$ is made for concreteness.

[^5]: Like in the case of $P_2$, the facial path $P_4$ is formally given by an approximation of (i.e.
path staying at distance at most 1 from) the following piecewise linear path contained in the $1-$skeleton of $C$: $$\begin{aligned} A' = & (n-1, 9, 6), (n-13, 9, 6), (n-13, 17, 6), (n-13, 17, 10), (n-5, 17, 10), (n-5, 5, 10), (n-5, 5, 2), (n-9, 5, 2),\\ & (n-9, 13, 2), (n-9, 13, 14), (n-5, 13, 14), (n-5, 17, 14), (n-1, 17, 14) = B'.\end{aligned}$$ Again, despite the fact that any such approximating facial path is adapted for our purposes, in this proof we stick to a particular choice of $P_4$. [^6]: For every $k\geq 2$, a simplicial complex is *locally $k-$connected* if each of its link graphs is $k-$connected. [^7]: A consequence of this lemma is that simply connected locally 3-connected 2-complexes have unique embeddings in 3-space. This was observed independently by Georgakopoulos and Kim. [^8]: A *thickening $D$* of an embedding $\iota$ of a $2-$complex in $3-$space is the manifold $\iota + B(0, \varepsilon)$ for $\varepsilon > 0$ such that the number of connected components of $\mathbb S^3 \backslash \iota$ is equal to the number of connected components of $\mathbb S^3 \backslash D$. Here $B(0, \varepsilon)$ is the closed $3-$ball of center 0 and radius $\varepsilon$. [^9]: A *base vertex* of a loop edge is the only vertex this edge is incident with.
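The non-planarity arguments above rest on two standard facts: a graph containing a subdivision of $K_{3,3}$ is not planar (Kuratowski's theorem), and edge contraction preserves planarity, so any graph with a $K_{3,3}$ minor is non-planar. A minimal computational sanity check of both facts, using the third-party `networkx` library (not part of this paper), is:

```python
import networkx as nx

# Kuratowski: K_{3,3} itself is not planar.
k33 = nx.complete_bipartite_graph(3, 3)
assert not nx.check_planarity(k33)[0]

# Subdividing an edge (replacing it by a path) preserves non-planarity,
# as used for the subdivisions of K_{3,3} inside G_{13}.
sub = k33.copy()
sub.remove_edge(0, 3)
sub.add_node("s")
sub.add_edges_from([(0, "s"), ("s", 3)])
assert not nx.check_planarity(sub)[0]

# Contracting the subdivision vertex back recovers K_{3,3}; contraction
# preserves planarity, so a graph with a K_{3,3} minor cannot be planar.
back = nx.contracted_nodes(sub, 0, "s", self_loops=False)
assert not nx.check_planarity(back)[0]
```

This only illustrates the planarity reasoning on $K_{3,3}$; the actual graphs $G_{13}$ and $G_{14}$ are as depicted in Figure \[New\].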
---
author:
- Yves Pomeau
- Martine Le Berre
- 'Pierre-Henri Chavanis'
- Bruno Denet
title: 'Supernovae: an example of complexity in the physics of compressible fluids'
---

Introduction
============

It is a great pleasure to write this contribution in honor of Paul Manneville. We present below work belonging to the general field where he contributed so eminently, nonlinear effects in fluid mechanics. However, our topic is perhaps slightly unusual in this respect because it has to do with fluid mechanics on a grand scale, namely the scale of the Universe. We all know that Astrophysics has to tackle a huge variety of phenomena, mixing widely different scales of space and time. Our contribution below is perhaps the closest one can imagine to a problem of nonlinear and highly nontrivial fluid mechanics in Astrophysics, the explosion of supernovae. In this fascinating field, many basic questions remain to be answered. The most basic one can be formulated as follows: stars evolve on very long time scales, in the billion-year range, so why is it that some stars abruptly collapse (the word collapse is used here in a loose sense, without implying for the moment an inward fall of the star material) in a matter of days or even seconds (the ten-second duration of the neutrino burst observed in SN 1987A, the only case where the neutrino emission of a supernova was recorded)? This huge difference of time scales is described here in the light of catastrophe theory. The basic mechanism for star collapse is the loss of equilibrium between pressure and self-gravity. The theory of this equilibrium with the relevant equations is well-known. We consider the case where the star is in equilibrium during a long period, after which the series of equilibria presents a saddle-node bifurcation. We present in section \[Scaling laws\] the hypothesis that the early stage of the loss of equilibrium at the saddle-node should follow a kind of universal equation of the Painlevé I form.
Using a particular equation of state, we show in section \[EulerPoisson\] that under a slow decrease of a given parameter (here the temperature), the series of equilibria does show a saddle-node bifurcation. In section \[sec\_dynA\] we study the approach towards the saddle-node. We show that the full Euler-Poisson equations can be reduced to a normal form of the Painlevé I type valid during the first stage of the catastrophe; we then compare the numerical solution of the full Euler-Poisson equations with the solution of this universal equation. Section \[sec:singular\] is devoted to the final stage of the collapse, just before the appearance of the singularity (divergence of the density and velocity). We show that the existence of a self-similar collapsing solution which agrees with the numerical simulations imposes that the gravity forces are stronger than the pressure ones, a situation which was not understood before. Usually the self-similar collapse, also called “homologous” collapse, is treated by assuming that pressure and gravity forces are of the same order, which leads to scaling laws such as $\rho\sim r^{-\alpha}$ for the density with the parameter $\alpha$ equal to $2$. This corresponds to the Penston-Larson solution [@Penston; @larson]. Assuming that the gravity forces are larger than the pressure ones inside the core, we first show that a collapsing solution with $\alpha$ larger than $2$ displays the relevant asymptotic behavior in the outer part of the core; we then prove that this requires $\alpha$ to take the value $24/11$, which is larger than $2$. We show that this result is actually in agreement with the numerical works of Penston (see Fig. 1 in [@Penston]) and Larson (see Fig. 1 in [@larson]) and many others[^1] (see Figs. 4, 9, 10 and the first stage of Fig. 8 in [@Brenner]) and that this small discrepancy between $\alpha=2$ and $\alpha=24/11$ leads to non-negligible consequences for the collapse characteristics.
Contrary to the $\alpha=2$ case, for which the velocity remains finite close to the center and tends to a constant supersonic value at large distances, our self-similar solution (in the sense of Zel’dovich) displays a velocity diverging at the center and slowly vanishing as the boundary of the star is approached. The latter property could be important for helping the outflow of material in the post-collapse regime; see the next paragraph. Finally, in section \[sec:beyond\], we describe the post-collapse dynamics without introducing any new ingredient in the physics. We point out that just at the collapse time there is no mass in the center of the star, as in the case of Bose-Einstein condensation [@BoseE; @bosesopik]; the mass begins to accumulate in the inner core just after the singularity. Within the same framework as before (gravity forces dominant with respect to pressure ones), we derive the self-similar equations for the post-collapse regime and compare the solutions with a generalized version of the parametric free-fall solution proposed by Penston [@Penston]. Let us now discuss some ideas concerning the difficulty of interpreting what happens after the collapse. Indeed the understanding of the pre-collapse stage does not help as much as one would like to explain the observations: besides the neutrino burst of SN 1987A, supernovae are sources of intense radiation in the visible range or nearby, this occurring days if not weeks after the more energetic part of the collapse. Although this does not seem to follow from general principles, the collapse is a true collapse because it shows a [*[centripetal]{}*]{} motion of the material in the star, at least in its early stage. What is observed instead is the [*[centrifugal]{}*]{} motion of a dilute glowing gas (with usually a complex nuclear chemistry) called the remnant, something believed to follow a centripetal collapse.
Such a change of sign (from centripetal to centrifugal) occurring in the course of time has to be explained. It has long been a topic of active research, relying on increasingly complex equations of state of nuclear matter with high-resolution numerical simulations of the fluid equations. Without attempting to review the literature on this topic, one can say that no clear-cut conclusion seems to have emerged on this. In particular there remains a sensitivity of the results to a poorly understood production of neutrinos. In short, one has to explain how an inward motion toward the center of the star reverses itself into an outward motion, something requiring a large acceleration. To understand how this reversal is possible, one may think of the classical Saint-Venant analysis of the bouncing of a vertical rod [@saintvenant]: at the end of its free-fall this rod hits the ground and then reverses its motion to lift off the ground. This reversal is possible because the initial kinetic energy is first stored as elastic compression energy while the lower end of the rod is in contact with the ground, and then released to feed an upward motion. Even though Saint-Venant dealt with solid mechanics, his problem is not so different from fluid mechanics. Somehow, the comparison with Saint-Venant brings two things to the fore: what could be the equivalent of the solid ground in a collapsing star? Then how much time will it take to trigger an outward motion out of the compression of the star? In particular, thanks to the well defined initial value problem derived in section \[Scaling laws\], we can have a fair picture of what happens until the elastic wave due to the bouncing reaches the outer edge of the star and starts the emission of matter, as does Saint-Venant’s rod. But, compared to this classical problem, there is something different (among many other things of course) in supernova explosions.
To explain the emission of remnants, one has to do more than reverse the speed from inward to outward: the outward speed must be above the escape velocity to counteract the gravitational attraction of what remains of the star (this excludes cases where the core of the star becomes a black hole). This requires some kind of explosion and, somehow, an explosion requires an explosive, particularly because an additional supply of energy has to be injected into the fluid to increase the outward velocity beyond the escape value. This source of energy was identified long ago by Hoyle and Fowler [@HF] in the nuclear reactions taking place in the compressed star material. This explains type I supernovae. In this model, the pressure increase in the motionless material left behind the outgoing shock should be due to a nuclear reaction triggered by the shock, defining a detonation wave. Such a wave could be triggered by the material infalling on the center, which has a very large (even diverging) speed in the model of the singularity developed here in section \[sec:singular\]. In the other type of supernova, called type II, the infall on the center is believed to yield a neutron core, observed in a few cases as a neutron star emitting radio waves near the center of the cloud of remnants. The increase of pressure in the shocked gas would there be due to the neutrinos. They are emitted, by the reaction making one neutron from one proton $+$ one electron, out of the dense neutron core in formation, which is bombarded by infalling nuclear matter. Such a boosting of the pressure is likely localized in the neighborhood of the interface between the neutron core (at the center of the star) and the collapsing nuclear matter, and can hardly increase the pressure far away from this interface.
As observed in the numerics, it is hard to maintain a shock wave far from the surface of the neutron core, and so it could well be that nuclear reactions behind the propagating shock are necessary to increase the pressure sufficiently to reach the escape velocity when the shock reaches the outer edge of the star. This is also a consequence of our discussion of the initial conditions for the collapse of the star: the singularity at the center of the star occurs at a time when the star has collapsed by a finite amount and keeps a radius of the same order of magnitude as its initial radius, making it orders of magnitude bigger than the radius of its neutron core. Therefore the emission of neutrinos from the boundary of this neutron core cannot increase the pressure far from the core. The observation of neutrinos in supernova explosions could be due to the nuclear reactions taking place in the detonation wave, not to the nuclear reaction due to the growth of the neutron core. Our approach to the phenomenon of supernova explosion is not to try to describe quantitatively this immensely complex phenomenon, something which could well be beyond reach because it depends on so many uncontrolled and poorly known physical phenomena, like equations of state of matter in conditions not realizable in laboratory experiments, the definition of the initial conditions for the star collapse, the distribution of various nuclei in the star, etc. Therefore we try instead to solve a simple model in what we believe is a completely correct way. The interest of our model and analysis is that we fully explain the transition from the slow evolution before the collapse to the fast collapse itself. Continuing the evolution, we observe and explain the occurrence of a finite time singularity at the center, a singularity where the velocity field diverges. This singularity is not the standard homologous Penston-Larson collapse where all terms in the fluid equations are of the same order of magnitude.
Instead this is a singularity of free-fall dynamics, that is, one such that the pressure force becomes (locally) negligible compared to gravitational attraction[^2]. This point is more than a mathematical nicety, because the laws for this collapse, contrary to the ones of the homologous Penston-Larson collapse, are such that the velocity of infall tends to zero far from the center instead of tending to a constant supersonic value. This makes it possible for the shock wave generated by the collapse to escape the center without the additional help of neutrinos, as needed in models where the initial conditions are a homologous Penston-Larson collapse far from the center.

The Painlevé equation and the scaling laws {#Scaling laws}
==========================================

A supernova explosion lasts about ten seconds, when measured by the duration of the neutrino burst in SN 1987A, and this follows a “slow" evolution over billions of years, giving an impressive $10^{13}$ to $10^{14}$ ratio of the slow to fast time scale. Such hugely different time scales make it a priori impossible to use the same numerical method for the slow and the fast dynamics. More generally, it is a challenge to fit within the same mathematical picture dynamics with such widely different time scales. On the other hand, the existence of such huge dimensionless numbers in a problem is an incentive to analyze it by asymptotic methods. Recently it has been shown [@catastrophe] that such a slow-to-fast transition can be described as resulting from a slow sweeping across a saddle-node bifurcation. In such a bifurcation with constant parameters, two fixed points, one stable and the other unstable, merge and disappear as a parameter is varied, but not as a function of time. We have to consider here a dynamical transition, occurring when a parameter changes slowly as a function of time.
It means that the relevant parameter drifts in time until it crosses a critical value at the time of the catastrophe, this critical time being at the onset of the saddle-node bifurcation for the dynamical system. Such a slow-to-fast transition is well known to show up in the van der Pol equation in the relaxation limit [@dor]. Interestingly, the analysis shows that this slow-to-fast transition occurs on a time scale intermediate between the short and long time scales, and that it is described by a universal equation solvable by the Riccati method. This concerns dynamical systems with dissipation, where the “universal equation" is first order in time. Supernovae likely belong to the class of dynamical catastrophes in our sense, because of the huge difference of time scales, but, if one assumes that the early post-bifurcation dynamics is described by inviscid fluid dynamics, one must turn to a model of non-dissipative dynamics. Such a dynamical model of catastrophes without dissipation and with time-dependent sweeping across a saddle-node bifurcation is developed below and applied to supernovae. We deal mostly with the early stage of the collapse, which we assume to be described by compressible fluid mechanics, without viscosity. Indeed the slow evolution of a star before the transition is a highly complex process not modeled in this approach because of the large difference in time scales: it is enough to assume that this slow evolution makes a parameter cross a critical value where a pair of equilibria merge by a saddle-node bifurcation. The universal equation describing the transition is the Painlevé I equation, valid in the dissipationless case. We explain how to derive it from the fluid mechanical equations in the inviscid case, assumed to be valid for the interior of the star.
Although applications of the ideas developed below could be found in more earthly situations, such as the subcritical bifurcation of Euler’s Elastica with broken symmetry or the venerable Archimedes problem of (loss of) stability of floating bodies in an inviscid fluid [@Coullet], we shall refer below explicitly to the supernova case only. Our starting point is the following equation of Newtonian dynamics, $$\frac{{\mathrm{d}}^2 r_0}{{\mathrm{d}} t^2} = - \frac{\partial V}{\partial r_0} \mathrm{,} \label{eq:1}$$ where $r_0$ can be seen as the radius of the star and $V(r_0,t)$ a time-dependent potential. No mass multiplies the acceleration, which is always possible by rescaling the potential $V(.)$. We shall derive this equation later for an inviscid compressible fluid with gravitation and an equation of state changing slowly as a function of time, for a radially symmetric geometry and with a finite mass. Contrary to the case studied in [@catastrophe], this equation is second order in time because one neglects dissipation compared to inertia. The potential $V(.)$ on the right-hand side represents the potential energy of the star, with the contributions of gravity and of internal energy [@LL]. At equilibrium the right-hand side is zero. Given the potential $V(.)$, the equilibrium depends on two parameters (linked to the total mass and energy): $r_0$ and another physical parameter, which may be seen as the temperature. Because of the long-term evolution of the star interior by nuclear reactions and radiation to the outside, its temperature changes slowly. We shall assume that this slow change of parameter makes the equilibrium solution disappear by a saddle-node bifurcation when the temperature $T$ crosses a critical value. A saddle-node bifurcation is sometimes called a turning-point, or tipping-point, instability; the word “saddle-node" (noeud-col in French) was coined by H. Poincaré in his Ph.D. thesis.
Such a bifurcation is a fairly standard problem treated by Emden [@emden] for a self-gravitating gas at finite (and changing, but not as function of time) temperature in a spherical box. It was also discussed by Ebert [@ebert], Bonnor [@bonnor], and McCrea [@crea] by varying the pressure, and by Antonov [@antonov] and Lynden-Bell and Wood [@lbw] by varying the energy. See Chavanis [@aaiso; @grand] for recent studies. A saddle node also occurs in the mass-radius relation of neutron stars determined by Oppenheimer and Volkoff [@ov] when the mass crosses a critical value $M_{OV}$ (see also section 109 of [@LL], figure 52) and in the mass-radius relation of boson stars [@colpi; @prd1; @ch]. A saddle node is also present in the caloric curve of self-gravitating fermions at finite temperature which has the form of a “dinosaur’s neck” [@dino]. As we do not solve the energy equation, the parameter $T$ could be any parameter describing the smooth changes of the star interior prior to the fast transition. Following the ideas of reference [@catastrophe] we look for a finite change in the system on a time scale much shorter than the time scale of the control parameter (here the temperature $T$). Two time scales are involved: the long time scale of evolution of $T$, denoted as $\theta$ below, and the short time scale $\tau$ which is the fundamental period of a pressure oscillation in the star. Our approach will show that the early stage of the collapse is on a time scale intermediate between the fast and slow scale and give a precise definition of the initial conditions for the fast process. ![Potential evolution close to a saddle-node, equation (\[eq:2\]) with $b=c=1$ and two values of $a=-ct$; $t=-2$ for the blue curve, $t=2$ for the red dashed curve. []{data-label="Fig:data1Ra"}](pot.pdf){height="2.0in"} Let us expand the potential $V(.)$ in Poincaré normal form near the saddle-node bifurcation: $$V = -a R + \frac{b}{3} R^3 +... 
\mathrm{,} \label{eq:2}$$ In the expression above, $R$, a relative displacement, can be seen as the difference between $r_c$, the value of the radius of the star at the saddle-node bifurcation and its actual value, $R=(r_0-r_c)/r_c$, a quantity which decreases as time increases, because we describe the collapse of the star. Actually the quantity $R$ will be seen later as the Lagrangian radial coordinate, a function depending on $r$, the radial distance. The saddle-node bifurcation is when the - now time dependent - coefficient $a$ of equation (\[eq:2\]) crosses $0$. Setting to zero the time of this crossing, one writes $a = - ct$, where $c$, a constant, is small because the evolution of $V$ is slow. This linear time dependence is an approximation because $a(t)$ is, in general, a more complex function of $t$ than a simple ramp. However, near the transition, one can limit oneself to this first term in the Taylor expansion of $a(t)$ with respect to $t$, because the transition one is interested in takes place on time scales much shorter than the typical time of change of $a(t)$. Limiting oneself to displacements small compared to $r_c$, one can keep in $V(R)$ terms which are linear and cubic (the coefficient $b$ is assumed positive) with respect to $R$ because the quadratic term vanishes at the saddle-node transition (the formal statement equivalent to this lack of quadratic term in this Taylor expansion of $V(R)$ is the existence of a non trivial solution of the linearized equation at the bifurcation). Moreover higher order terms in the Taylor expansion of $V(.)$ near $R = 0$ are neglected in this analysis because they are negligible with the scaling law to be found for the magnitude of $R$ near the transition. This is true at least until a well defined time where the solution has to be matched with the one of another dynamical problem, valid for finite $R$. 
At $t = 0$, the potential $V(.)$ is a cubic function of $R$, exactly the local shape of a potential in a metastable state. For $a$ and $b$ positive, the potential has two extrema, one corresponding to a stable equilibrium point at $R = \sqrt{a/b}$ and one unstable at $R =-\sqrt{a/b}$. In the time dependent case, the potential evolves as shown in Fig. \[Fig:data1Ra\] and the equations (\[eq:1\])-(\[eq:2\]) become $$\frac{d^2R}{dt^2} = \ddot{R} = - ct - bR^2 \mathrm{,} \label{eq:ab}$$ where the parameter $c$ is supposed to be positive, so that the solution at large negative time is close to equilibrium and positive, crosses zero at a time close to zero and diverges at finite positive time. To show that the time scale for the dynamical saddle-node bifurcation is intermediate between the long time scale of the evolution of the potential $V(.)$ and the short time scale of the pressure wave in the star, let us derive explicitly these two relevant short and long time scales. For large negative time the solution of equation (\[eq:ab\]) is assumed to evolve very slowly such that the left-hand side can be set to zero. It gives $$R(t)\simeq \sqrt{\frac{c}{b}(-t)} \label{eq:depart}$$ which defines the long time scale as $\theta= {b}/{c}$ (recall that $R$, a relative displacement scaled to the star radius $r_c$, has no physical scale). As for the short time scale, it appears close to the time $t=t_*$ where the solution of equation (\[eq:ab\]) tends to minus infinity. In this domain the first term in the right-hand side is negligible with respect to the second one, the equation reduces to $\ddot{R} = - bR^2$, which has the characteristic time $\tau= {1}/{\sqrt{b}}$. Let us scale out the two parameters $b,c$ of equation (\[eq:ab\]). Defining $ \hat{ R}={R}/{r_s} $ and $\hat{ t}={t}/{t_0}$ the original equation takes the scaled form $$\frac{d^2\hat{R}}{d\hat{t}^2}= -\hat{t} - \hat{R}^2 \mathrm{,} \label{eq:eqfin}$$ when setting $c ={r_s}/{t_0^3}$ and $b={1}/({r_st_0^2})$. 
Inversely, $t_0=1/(bc)^{1/5}$ and $r_s=c^{2/5}/b^{3/5}$. The solution of equation (\[eq:eqfin\]) is called the first Painlevé transcendent, and cannot be reduced to elementary functions [@Ince]. The writing of the Painlevé equation in its parameter free form yields the characteristic time scale $t_0$ of equation (\[eq:ab\]) in terms of the short and long times, $$t_0=(\theta \tau ^4 )^{1/5} \mathrm{.} \label{eq:to}$$ This intermediate time is such that $\tau \ll t_0 \ll \theta$; it could be of the order of several hours when taking $\theta \sim$ one billion years, $\tau \sim 10$ sec. The corresponding spatial extension $R$ is of order $$r_s= \left (\frac{\tau}{\theta}\right )^{2/5} \mathrm{,} \label{eq:rs}$$ much smaller than unity. The one-fifth power in equations (\[eq:to\]) and (\[eq:rs\]) is “typical" of the Painlevé I equation, which has a symmetry expressed in terms of the complex fifth root of unity. To solve equation (\[eq:eqfin\]) we have to define the initial conditions. Choosing the initial conditions at large negative time $t_i$, we may assume that the asymptotic relation (\[eq:depart\]) is fulfilled at this time, that gives, $$\left \{ \begin{array}{l} \hat{R}(\hat{t}_i)=\sqrt{-\hat{t}_i}, \\ \dot{\hat{R}}(\hat{t}_i)=-\frac{1}{2\sqrt{-\hat{t}_i}} \mathrm{.} \end{array} \right. \label{eq:ci2}$$ The numerical solution of equation (\[eq:eqfin\]) is drawn in Fig. \[Fig:data1R\] leading to a finite time singularity. With the initial conditions (\[eq:ci2\]) the solution is a non oscillating function (blue curve) diverging at a finite time $\hat{t}_*\simeq 3.4$ (note that the divergence is not yet reached in Fig. \[Fig:data1R\]). ![ Numerical solution of equation (\[eq:eqfin\]), or equation (\[eq:ab\]) with $b=c=1$, for two different initial conditions taken at time $t_i=-20$; (i) relation (\[eq:ci2\]) for the blue curve without any oscillation; (ii) $R(t_i) =\sqrt{-t_i}+0.5$ and $R'(t_i)=-\frac{1}{2\sqrt{-t_i}}$ for the red oscillating curve. 
[]{data-label="Fig:data1R"}](generic-sol.pdf){height="2.0in"} But we may assume that, at very large negative time, the initial conditions slightly differ from the asymptotic quasi-equilibrium value (\[eq:ci2\]). In that case the solution displays oscillations of increasing amplitude and period as time increases, in agreement with a WKB solution of the linearized problem. Let us put $\hat{R}(\hat{t}) \approx \sqrt{-\hat{t}} + \delta \hat{R}$, with $\delta \hat{R}$ small, which satisfies the linear equation $$\delta \ddot{\hat{R}} = -2 \sqrt{-\hat{t}} { \delta \hat{R}} \mathrm{.}$$ A WKB solution, valid for $(-\hat{t})$ very large, is $$\delta \hat{R} = \sum_{\pm} c_{\pm} (-\hat{t})^{-1/4} e^{\pm i \frac{4\sqrt{2}}{5} (-\hat{t})^{5/4}} \mathrm{.}$$ It represents oscillations in the bottom of the potential $V(\hat{R},\hat{t}) = \hat{t} \hat{R} + {\hat{R}^3}/{3}$ near $ \hat{R} = \sqrt{-\hat{t}}$. The two complex conjugate coefficients $c_{\pm}$ defining the amplitudes are arbitrary and depend on two real numbers. Therefore, the cancellation of the oscillations uniquely defines a solution of the Painlevé I equation. This is illustrated in Fig. \[Fig:data1R\], where the blue curve has no oscillation (see above) while the red curve displays oscillations of increasing period and a shift of the divergence time. Near the singularity, namely just before time $\hat{t} = \hat{t}_*$, the dominant term on the right-hand side of equation (\[eq:eqfin\]) is $\hat{R}^2$, so that $\hat{R}$ becomes approximately $\hat{R}(t)\simeq - {6}/{(\hat{t}_* - \hat{t})^2}$, or in terms of the original variables $R$ and $t$, $$R(t)\simeq -6 r_s\left(\frac{t_0}{t_* - t}\right)^2. \label{eq:t-2}$$ This behavior will be compared later to the full Euler-Poisson model (see Fig. \[Fig:rhoc\] and the related discussion). Note that this divergence is completely due to the nonlinearity, and has little to do with a [*[linear]{}*]{} instability.
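As a numerical illustration (a sketch under the stated assumptions, not code from this paper), one can integrate the rescaled Painlevé I equation (\[eq:eqfin\]) with the quasi-equilibrium initial data (\[eq:ci2\]) at $\hat{t}_i=-20$ and observe the finite-time blow-up near $\hat{t}_*\simeq 3.4$; the same script checks that $t_0=(\theta\tau^4)^{1/5}$ is indeed of the order of hours for $\theta\sim$ one billion years and $\tau\sim 10$ s. The stopping threshold $\hat{R}=-500$ is an arbitrary numerical proxy for the divergence.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Intermediate time scale t0 = (theta * tau**4)**(1/5), equation (eq:to):
# theta ~ one billion years, tau ~ 10 s gives t0 of a few hours.
theta = 1.0e9 * 3.156e7        # one billion years, in seconds
tau = 10.0                     # seconds
t0 = (theta * tau**4) ** 0.2
print(f"t0 = {t0 / 3600.0:.1f} hours")

# Rescaled Painlevé I equation (eq:eqfin): R'' = -t - R**2,
# started on the quasi-static branch R ~ sqrt(-t), data (eq:ci2).
def painleve(t, y):
    return [y[1], -t - y[0] ** 2]

def blowup(t, y):
    # Near the divergence R ~ -6/(t* - t)**2, so R = -500 is reached
    # roughly 0.1 time units before the blow-up time t* ~ 3.4.
    return y[0] + 500.0
blowup.terminal = True

ti = -20.0
y0 = [np.sqrt(-ti), -0.5 / np.sqrt(-ti)]
sol = solve_ivp(painleve, (ti, 10.0), y0, events=blowup,
                rtol=1e-10, atol=1e-10)
t_star = sol.t_events[0][0]
print(f"blow-up detected near t = {t_star:.2f}")
```

Perturbing the initial data, as for the red curve of Fig. \[Fig:data1R\], adds the WKB oscillations and shifts the detected divergence time.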
The applicability of this theory requires $R\ll 1$, because it relies on the Taylor expansion of $V(.)$ in equation (\[eq:1\]) near $r_0=r_c$. It is valid if $|t - t_*| \gg \tau$. Therefore the collapse (by collapse we mean the very fast dynamics following the saddle-node bifurcation) can be defined within a time interval of order $\tau$, the center of this interval being the time where the solution of equation (\[eq:eqfin\]) diverges, not the time where the linear term in the same equation changes sign. Moreover the duration of the early stage of the collapse is, physically, of order $(\theta \tau^{4})^{1/5}$, much shorter than the time scale of evolution of the temperature, but much longer than the elastic reaction of the star interior. The blow-up of the solution of equation (\[eq:ab\]) at finite time does [*not*]{} imply a physical singularity at this instant. It only shows that, when $t$ approaches $t_*$ from below, $R(t)$ grows enough to become of the order of the star radius, so that the approximation of $V$ by the first two terms (linear and cubic with respect to $R$) of its Taylor expansion is no longer valid, forcing a switch to a theory valid for finite displacements. In this case, it means that one has to solve, one way or another, the full equations of inviscid hydrodynamics, something considered in section \[EulerPoisson\]. A warning is necessary at this stage: we have to consider more than one type of finite time singularity in this problem. We have first met a singularity of the solution of the Painlevé I equation, a singularity due to the various approximations made to the full equations, which disappears when the full system of Euler-Poisson equations is considered. But, as we shall see, the solution of this Euler-Poisson set of dynamical equations also shows a finite time singularity, which is studied in section \[sec:singular\] and which is related directly to the supernova explosion.
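The validity condition $|t - t_*| \gg \tau$ quoted above follows directly from the scalings (\[eq:to\]) and (\[eq:rs\]) together with the blow-up law (\[eq:t-2\]): $$|R| \sim r_s\left(\frac{t_0}{t_*-t}\right)^2 \ll 1 \quad\Longleftrightarrow\quad t_*-t \gg r_s^{1/2}\, t_0 = \left(\frac{\tau}{\theta}\right)^{1/5}\left(\theta\,\tau^4\right)^{1/5} = \tau \mathrm{.}$$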
Below we assume exact spherical symmetry, although nonspherical stars could behave quite differently. Since a given star is likely not exactly spherically symmetric, the exact time $t_*$ is not well defined at the accuracy of the short time scale $\tau$, because it depends on small oscillations of the star interior prior to the singularity (the amplitude of those oscillations depends on the constants $c_{\pm}$ in the WKB part of the solution, and the time $t_*$ of the singularity depends on this amplitude). One can expect those oscillations to have some randomness in space and so not to be purely radial. The induced loss of sphericity at the time of the collapse could explain the observed expulsion of the central core of supernovae with large velocities, up to $500$ km per second [@coreexpulsion], a very large speed which requires large deviations from sphericity. However there is an argument against too large a loss of sphericity: the time scale $t_0$ for the part of the collapse described by the Painlevé equation is much longer than $\tau$, the typical time scale for the evolution of the inside of the star. Therefore one may expect that during a time of order $t_0$, the azimuthal heterogeneities are averaged out, restoring spherical symmetry on average on the longer time scale $t_0$. However this does not apply if the star is intrinsically non spherically symmetric because of its rotation. Under the assumption of a given slow dependence on a parameter called $T$, we shall derive the dynamical equation (\[eq:ab\]) from the fluid equations with a general pressure-density relation and the gravity included. To streamline equations and explanations, we shall not consider the constraint of conservation of energy (relevant on the fast time scale).
Euler-Poisson system for a barotropic star presenting a saddle-node {#EulerPoisson} =================================================================== Barotropic Euler-Poisson system ------------------------------- We shall assume that the star can be described as a compressible inviscid fluid with a barotropic equation of state $p=p(\rho)$. The relevant set of hydrodynamic equations is the barotropic Euler-Poisson system: dynamical equations for a compressible inviscid fluid with a pressure-density relation, including the gravitational interaction via the Poisson equation. Note that there is no dynamical equation for the transport of energy. They read $$\frac{\partial\rho}{\partial t}+\nabla\cdot (\rho {\bf u})=0, \label{iso1}$$ $$\rho\left\lbrack \frac{\partial {\bf u}}{\partial t}+({\bf u}\cdot \nabla){\bf u}\right\rbrack=-\nabla p-\rho\nabla\Phi, \label{iso2}$$ $$\Delta\Phi=4\pi G\rho \mathrm{,} \label{iso3}$$ where ${\bf u}$ is the fluid velocity vector, $\rho$ the mass density, $\Phi$ the gravitational potential, and $G$ Newton’s constant. Using the equation of continuity (\[iso1\]), the momentum equation (\[iso2\]) may be rewritten as $$\frac{\partial}{\partial t}(\rho {\bf u})+\nabla\cdot (\rho {\bf u}\otimes {\bf u})=-\nabla p-\rho\nabla\Phi. \label{iso4}$$ The potential energy of this self-gravitating fluid is $V=U+W$ where $$\begin{aligned} U=\int\rho\int^{\rho}\frac{p(\rho')}{{\rho'}^2}\, d\rho' d{\bf r}, \label{eos1}\end{aligned}$$ is the internal energy and $$\begin{aligned} W=\frac{1}{2}\int\rho\Phi\, d{\bf r}, \label{eos2}\end{aligned}$$ is the gravitational energy. The internal energy can be written as $U=\int \lbrack \rho h(\rho)-p(\rho)\rbrack\, d{\bf r}=\int H(\rho)\, d{\bf r}$ where we have introduced the enthalpy $h(\rho)$, satisfying $dh(\rho)=dp(\rho)/\rho$, and its primitive $H(\rho) = \int_0^{\rho} h(\rho')\,{\mathrm{d}}\rho'$.
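The identity $U=\int \lbrack \rho h-p\rbrack\, d{\bf r}$ follows from an integration by parts, since $dh=dp/\rho$ gives $H(\rho)=\int_0^{\rho}h\,d\rho'=\rho h-\int_0^{\rho}dp=\rho h-p$. A minimal numerical check, for an illustrative polytrope $p=K\rho^{\gamma}$ (the values of $K$ and $\gamma$ below are arbitrary):

```python
# Check H(rho) = rho*h(rho) - p(rho) for a barotropic fluid, using an
# illustrative polytrope p = K * rho**gamma (K and gamma are arbitrary).
K, gamma = 0.7, 1.8

def p(rho):
    return K * rho ** gamma

def h(rho):
    # enthalpy from dh = dp/rho: h = K*gamma/(gamma - 1) * rho**(gamma - 1)
    return K * gamma / (gamma - 1.0) * rho ** (gamma - 1.0)

def H(rho, n=100000):
    # primitive of h: H(rho) = integral of h from 0 to rho (trapezoid rule)
    dr = rho / n
    s = 0.5 * (h(0.0) + h(rho))
    for i in range(1, n):
        s += h(i * dr)
    return s * dr

rho = 2.3
lhs, rhs = H(rho), rho * h(rho) - p(rho)
print(lhs, rhs)  # the two expressions agree to quadrature accuracy
```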
Hydrostatic equilibrium and neutral mode {#sec_henm} ---------------------------------------- In this section we briefly recall different formulations of the equilibrium state of a self-gravitating gas. From equation (\[iso2\]), the condition of hydrostatic equilibrium reads $$\begin{aligned} \nabla p+\rho\nabla\Phi={\bf 0}. \label{he1}\end{aligned}$$ Dividing this equation by $\rho$, taking the divergence of the resulting expression, using the Poisson equation (\[iso3\]), and recalling that $p=p(\rho)$ for a barotropic gas, we obtain a differential equation for $\rho$, namely $$\begin{aligned} \nabla\cdot\left \lbrack\frac{p'(\rho)}{\rho}\nabla\rho\right \rbrack+4\pi G\rho=0. \label{he2}\end{aligned}$$ For a barotropic equation of state, $p=p(\rho)$ by definition, so the condition of hydrostatic equilibrium (\[he1\]) implies $\rho=\rho(\Phi)$. Substituting this relation into the Poisson equation (\[iso3\]), we obtain a differential equation for $\Phi$, namely $$\begin{aligned} \Delta\Phi=4\pi G\rho(\Phi). \label{he3}\end{aligned}$$ Introducing the enthalpy, satisfying $\nabla h=\nabla p/\rho$, the condition of hydrostatic equilibrium (\[he1\]) can be rewritten as $$\begin{aligned} \nabla h+\nabla\Phi={\bf 0}. \label{he4}\end{aligned}$$ Therefore, at equilibrium, $h({\bf r})=-\Phi({\bf r})+C$ where $C$ is a constant. Since the gas is barotropic, we also have $\rho=\rho(h)$. Taking the divergence of equation (\[he4\]) and using the Poisson equation (\[iso3\]), we obtain a differential equation for $h$, namely $$\begin{aligned} \Delta h+4\pi G\rho(h)=0. \label{he5}\end{aligned}$$ These different formulations are equivalent. In the following, we will solve the differential equation (\[he5\]). To determine the dynamical stability of a steady state of the Euler-Poisson system (\[iso1\])-(\[iso3\]), we consider a small perturbation about that state and write $f({\bf r},t)=f({\bf r})+\delta f({\bf r},t)$ for $f=(\rho,{\bf u},\Phi)$ with $\delta f({\bf r},t)\ll f({\bf r})$.
Linearizing the Euler-Poisson system about that state, and writing the perturbation as $\delta f({\bf r},t)\propto e^{\lambda t}$, we obtain the eigenvalue equation $$\lambda^2\delta\rho=\nabla\cdot \left\lbrack \rho(\nabla\delta h+\nabla\delta\Phi)\right \rbrack. \label{he6}$$ The neutral mode ($\lambda=0$) which usually signals the change of stability is the solution of the differential equation $$\nabla\delta h+\nabla\delta\Phi={\bf 0}. \label{he7}$$ Taking the divergence of this equation and using the Poisson equation (\[iso3\]), it can be rewritten as $$\Delta\delta h+4\pi G\rho'(h)\delta h=0. \label{he8}$$ This equation may also be written in terms of $\delta\rho$ by using $\delta h=p'(\rho)\delta\rho/\rho$. We get $$\Delta\left (\frac{p'(\rho)}{\rho}\delta \rho\right )+4\pi G\delta \rho=0. \label{he9}$$ In the following, we will solve the differential equation (\[he8\]). An isothermal equation of state with a polytropic envelope implying a saddle-node {#sec_eos} --------------------------------------------------------------------------------- The series of equilibria of an isothermal self-gravitating gas with $p=\rho T$ is known to present a saddle-node [@emden; @aaiso]. Therefore a self-gravitating isothermal gas is a good candidate for our investigation. However, it has the undesirable feature of possessing an infinite mass, because its density decreases too slowly (as $r^{-2}$) at large distances. Therefore, to have a finite mass, it must be confined artificially into a “box”. To circumvent this difficulty, we propose to use here an equation of state that is isothermal at high densities and polytropic at low densities, the polytropic equation of state serving as an envelope that confines the system in a finite region of space without an artificial container. Specifically, we consider the equation of state[^3] $$\begin{aligned} p(\rho)=\rho_* T\left(\sqrt{1+\rho/\rho_*}-1\right )^2. 
\label{eos3}\end{aligned}$$ For $\rho\rightarrow +\infty$, it reduces to the isothermal equation of state $p=\rho T$. For $\rho\rightarrow 0$, it reduces to the polytropic equation of state $p=K\rho^2$ with polytropic index $\gamma=2$ and polytropic constant $K=T/(4\rho_*)$. The enthalpy function $h(\rho)$ defined by $dh={dp}/{\rho}$ is explicitly given by $$\begin{aligned} h(\rho)=2 T \ln \left ( 1+\sqrt{1+\rho/\rho_*}\right )-2T \ln (2), \label{eos5}\end{aligned}$$ where the constant of integration has been determined such that $h(\rho=0)=0$. With this choice, the enthalpy vanishes at the edge of the star. The inverse relation reads $${\rho}({h})=4\rho_*\left (e^{{h}/T}-e^{{h}/2T}\right ) \mathrm{.} \label{eq:M4}$$ In the following, it will be convenient to use dimensionless variables. The parameters regarded as fixed are $\rho_*$, $M$, and $G$. From $\rho_*$ and $M$ we can construct a length $L=(M/\rho_*)^{1/3}$. Then, we introduce the dimensionless quantities $${\tilde\rho}=\frac{\rho}{\rho_*},\quad {\tilde r}=\frac{r}{L},\quad {\tilde\Phi}=\frac{\Phi}{G\rho_* L^2},$$ and $${\tilde T}=\frac{T}{G\rho_* L^2},\quad {\tilde p}=\frac{p}{GL^2\rho_*^2},\quad {\tilde t}=t \sqrt{G\rho_*}.$$ Working with the dimensionless variables with tildes amounts to taking $G=\rho_*=M=1$ in the initial equations, a choice that we shall make in the following. Equilibrium solution and temperature-radius relation {#sec_tr} ---------------------------------------------------- The equilibrium solution is obtained by solving equation (\[he5\]) with equation (\[eq:M4\]). Using the dimensionless variables defined in Sec. 
\[sec\_eos\], assuming spherical symmetry, and setting $\hat{r}=r/\sqrt{T}$, $\hat{h}=h/T$, $\hat{\Phi}=\Phi/T$, $\hat{\rho}=\rho$, and $\hat{M}=M/T^{3/2}$, we obtain $$\hat{h}_{,\hat{r}^2} + \frac{2}{\hat{r}} \hat{h}_{,\hat{r}} + 4 \pi \hat{\rho}(\hat{h}) = 0 \mathrm{,} \label{eq:5.2}$$ where $$\hat{\rho}(\hat{h}) =4\left (e^{\hat{h}}-e^{\hat{h}/2}\right ) \mathrm{.} \label{eq:5.2b}$$ Using Gauss’ theorem $\Phi_{,r}={M(r)}/{r^2}$, where $$M(r)=\int_0^r \rho(r') 4\pi {r'}^2\, dr' \mathrm{,} \label{eq:massr}$$ is the mass profile, and the equilibrium relation $\Phi_{,r}=-h_{,r}$, we obtain $\hat{\Phi}_{,\hat{r}}=-\hat{h}_{,\hat{r}}={\hat{M}(\hat{r})}/{\hat{r}^2}$, which allows us to determine the mass profile from the enthalpy profile using[^4] $$\hat{M}(\hat{r})=-\hat{r}^2 \hat{h}_{,\hat{r}}. \label{fr}$$ ![Density $\hat{\rho}(\hat{r})$ versus the radial variable at the saddle-node ($T=T_c$, or $\hat{h}_0=2.296$). The density vanishes at the edge of the star indicated by the arrow ($\hat{r}=\hat{r}_0$). []{data-label="Fig:criticrho"}](rho.pdf){height="2.1in"} The boundary conditions of equation (\[eq:5.2\]) at $\hat{r}=0$ are $\hat{h}(0)= \hat{h}_0$ and $\hat{h}_{,\hat{r}}(0)=0$. For a given value of $\hat{h}_0$, the smallest root of $\hat{h}(\hat{r})$, which is also that of $\hat{\rho}(\hat{r})$, see Figs. \[Fig:criticrho\] and \[Fig:heq\], defines the normalized radius $\hat{r}_{0}$ of the star. The radius $r_0$ of the star is therefore $r_0=\sqrt{T}\hat{r}_0$. On the other hand, Gauss’ theorem applied at the surface of the star where $M=1$ (i.e. $\hat{M}_0=1/T^{3/2}$) leads to $\hat{h}_{,\hat{r}}(\hat{r}_0)=-1/(\sqrt{T}r_0^2)$. From these equations, we obtain[^5] $$r_0=\left (\frac{\hat{r}_0}{-\hat{h}_{,\hat{r}}(\hat{r}_0)}\right )^{1/3},\qquad T=\frac{1}{\left (-\hat{r}_0^2 \hat{h}_{,\hat{r}}(\hat{r}_0)\right )^{2/3}}. 
\label{add1}$$ ![Numerical solution of equations (\[eq:5.2\]) and (\[eq:5.3\]), radial profile of the enthalpy $\hat{h}(\hat{r})$ (solid red curve) and neutral mode $j(\hat{r})$ (dashed blue curve) for $\hat{h}_0=2.296$ corresponding to the saddle-node, point $A$ of Fig. \[Fig:spi\]. []{data-label="Fig:heq"}](saddlehj.pdf){height="2.2in"} The solution of equation (\[eq:5.2\]), drawn in Fig. \[Fig:heq\] solid line, has a single free parameter $\hat{h}_0$ since its Taylor expansion near ${\hat{r}} = 0 $ is of the form $\hat{h} = \hat{h}_0 + h_2 {\hat{r}}^2 +...$ with $\hat{h}_0$ free, $h_2 = - \frac{2\pi}{3} \hat{\rho}(\hat{h}_0)$, and so on for the higher order coefficients. By varying $\hat{h}_0$ from $0$ to $+\infty$ we can obtain the whole series of equilibria $r_0(T)$ giving the radius of the star as a function of the temperature, using $\hat{h}_0$ (or $\hat{r}_{0}$) as a parameter. The result is a spiralling curve shown in Fig. \[Fig:spi\] where only the upper part is stable, the solution losing its stability at the saddle-node (turning point A), as studied in the next subsection[^6]. The saddle-node is found numerically to occur at $\hat{h}_0=2.296...$, or $\hat{\rho}_0= 27.1299...$, which leads to the following critical values for the mass, temperature and radius, respectively: $\hat{M}_c=0.52$, $T_c=1.546...$ and $\hat{r}_c=0.385...$ (hence $r_c=\sqrt{T_c}\hat{r}_c=0.479...$). The center of the spiral is obtained for $\hat{h}_0 \rightarrow \infty$. ![Radius $r_0= \hat{r}_0 \hat{M}_0^{-{1}/{3}}$ versus temperature $T=\hat{M}_0^{-{2}/{3}}$, obtained by solving equations (\[eq:5.2\])-(\[add1\]) (increasing the input parameter $\hat{h}_0$). []{data-label="Fig:spi"}](rcT.pdf){height="2.2in"} There is a saddle-node bifurcation when equation (\[eq:5.2\]) linearized about the profile $\hat{h}(\hat{r})$ determined previously has a nontrivial solution. This corresponds to the neutral mode $\delta h$ defined by the unscaled equation (\[he8\]). 
In terms of the scaled variables this linearized equation reads $$\Omega[j(\hat{r})] = j_{,\hat{r}^2} + \frac{2}{\hat{r}} j_{,\hat{r}} + 4 \pi \frac{{\mathrm{d}}\hat{\rho}}{{\mathrm{d}}\hat{h}} j(\hat{r})=0 \mathrm{,} \label{eq:5.3}$$ where $\Omega$ is a linear operator acting on the function $j(\hat{r})$. Let us specify the boundary conditions: $j(0)$ is arbitrary and $j'(0)=0$. Furthermore, we automatically have $j'(\hat{r}_0)=0$ since $\delta M(r_0)=0$. The neutral mode $j(\hat{r})$, valid at the critical temperature $T_c$, is pictured in Fig. \[Fig:heq\], dashed blue line. We consider below the dynamics of the function $M(r,t)$, which is the mass contained inside the sphere of radius $r$ in the star. Dynamics close to the saddle-node: derivation of Painlevé I equation {#sec_dynA} ==================================================================== In this section we show that the dynamics close to the saddle-node reduces to the Painlevé I equation. This property will be proved first by showing that the normal form of the full Euler-Poisson system (\[iso1\])-(\[iso3\]) is of Painlevé I form, and second by comparing the normal-form solutions to the full Euler-Poisson ones derived by using a numerical package for high-resolution central schemes [@progbalbas]. Simplification of the hydrodynamic equations close to the saddle-node {#sec_simpli} --------------------------------------------------------------------- We now consider the dynamical evolution of the star, in particular its gravitational collapse when the temperature falls below $T_c$. In this section and in the following one we use a simplified model where advection has been neglected, an approximation valid in the first stage of the collapse only. In the following we restrict ourselves to spherically symmetric cases, likely an approximation in all cases, and certainly not a good starting point if rotation is present. 
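Before turning to the dynamics, the equilibrium structure of the two previous subsections can be reproduced numerically by a direct shooting integration of equation (\[eq:5.2\]). The sketch below uses a simple fixed-step midpoint integrator (the step size and the series start near $\hat{r}=0$ are our own choices) and should recover the critical values quoted in Sec. \[sec\_tr\]:

```python
import math

def rho_hat(h):
    # dimensionless density rho(h) = 4*(e^h - e^(h/2)); zero at the edge h = 0
    return 4.0 * (math.exp(h) - math.exp(h / 2.0)) if h > 0.0 else 0.0

def f(r, h, dh):
    # equation (eq:5.2) as a first-order system: h'' = -(2/r) h' - 4*pi*rho(h)
    return dh, -2.0 / r * dh - 4.0 * math.pi * rho_hat(h)

def shoot(h0, dr=1e-5):
    """Integrate from the center out to the first zero of h (the star edge)."""
    # series start near r = 0: h = h0 + h2*r^2 with h2 = -(2*pi/3)*rho(h0)
    h2 = -(2.0 * math.pi / 3.0) * rho_hat(h0)
    r = 1e-4
    h, dh = h0 + h2 * r * r, 2.0 * h2 * r
    while h > 0.0:
        k1h, k1v = f(r, h, dh)
        k2h, k2v = f(r + dr / 2, h + dr / 2 * k1h, dh + dr / 2 * k1v)
        h, dh, r = h + dr * k2h, dh + dr * k2v, r + dr
    return r, dh

r0_hat, slope = shoot(2.296)        # central enthalpy at the saddle-node
M_hat = -r0_hat ** 2 * slope        # dimensionless mass, cf. equation (add1)
T = M_hat ** (-2.0 / 3.0)           # temperature on the series of equilibria
r_c = math.sqrt(T) * r0_hat         # physical radius of the star
print(r0_hat, M_hat, T, r_c)
```

Scanning $\hat{h}_0$ instead of fixing it at its critical value traces the spiralling series of equilibria of Fig. \[Fig:spi\].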
However, the restriction to spherical symmetry allows a rather detailed analysis without, hopefully, omitting anything essential. Defining $u$ as the radial component of the velocity, let us estimate the order of magnitude of the various terms in Euler’s equations during the early stage of the collapse, namely when equation (\[eq:ab\]) is valid (assuming that it can be derived from the fluid equations, as done below). The order of magnitude of $u_{,t}$ is that of $\ddot{R}$, that is $\dot{R}/t_0$, with $t_0$ the characteristic time defined by equation (\[eq:to\]). The order of magnitude of the advection term $u u_{,r}$ is $\dot{R}^2/r_0$ (here $R$ is dimensional), because one assumes (and will show) that the perturbation during this early stage extends all over the star. Therefore $u u_{,r} \sim u_{,t} (R/r_0) $ is smaller than $u_{,t}$ by a factor $R/r_0 $, which is the small dimensionless characteristic length scale defined by the relation (\[eq:rs\]). Neglecting the advection term in equations (\[iso2\]) and (\[iso4\]) gives $$\frac{\partial}{\partial t}(\rho {\bf u})= \rho\frac{\partial}{\partial t}{\bf u}= -\nabla p-\rho\nabla\Phi. \label{iso5}$$ In the spherically symmetric case it becomes $$u_{,t} =-\frac{1}{\rho}p_{,r} - \frac{4\pi G}{r^2} \int_0 ^r {\mathrm{d}}r' r'^2 \rho(r',t) \mathrm{,} \label{eq:8}$$ where we used Gauss’ theorem $$\Phi_{,r} = \frac{4 \pi G}{r^2} \int_0 ^r {\mathrm{d}}r' r'^2 \rho(r',t) \mathrm{,} \label{gauss}$$ derived from the Poisson equation (\[iso3\]). Taking the divergence of the integro-differential dynamical equation (\[eq:8\]), which allows us to get rid of the integral term, we obtain $$\left (\frac{2}{r}u+u_{,r}\right )_{,t} =-\left ( h_{,r^2} + \frac{2}{r} h_{,r} + 4 \pi G \rho(h) \right ) \mathrm{,} \label{eq:8b}$$ which is the dynamical equation for the velocity field. 
This equation has been derived from the Euler-Poisson system (\[iso1\])-(\[iso3\]) where the advection has been neglected, an approximation valid during the time interval of order $t_0$ before the critical time. To derive the Painlevé I equation from the dynamical equation (\[eq:8b\]) we consider its right-hand side as a function of $\rho$ with an equation of state of the form $p(\rho)=\rho_* T f(\rho/\rho_*)$ depending on a slow parameter $T$, and we expand the solution near a saddle-node bifurcation which exists when there is more than one steady solution of equation (\[eq:8b\]) for a given total mass $M=4\pi\int_0 ^{\infty} {\mathrm{d}}r' r'^2 \rho(r')$ and temperature $T$, two solutions merging and disappearing as the temperature crosses a critical value $T_c$. This occurs for the equation of state defined by equation (\[eos3\]), see Fig. \[Fig:spi\] where a saddle-node exists at point $A$. Although this formulation in terms of the velocity field $u(r,t)$ is closely related to the heuristic description developed in Sec. \[Scaling laws\], in the following we find it more convenient to work in terms of the mass profile $M(r,t)$. Obviously the two formulations are equivalent. The equation for the mass profile $M(r,t)$ {#msaM} ------------------------------------------ To study the dynamics of the solution close to the saddle-node, let us assume a slow decrease of the temperature with time, of the form $T=T_c(1-\gamma' t)$ with positive $\gamma'$ in order to start at negative time from an equilibrium state. Taking the time derivative of the equation of continuity (\[iso1\]) and using equation (\[iso5\]), we get the two coupled equations[^7] $$\frac{\partial^2\rho}{\partial t^2}=\nabla\cdot (\nabla p+\rho\nabla\Phi), \label{iso6}$$ $$\Delta\Phi=4\pi G\rho. \label{iso7}$$ According to the arguments given in Sec. \[sec\_simpli\], these equations are valid close to the saddle-node during the early stage of the collapse[^8]. 
By contrast, when we are deep in the collapse regime (see Secs. \[sec:singular\] and \[sec:beyond\]) the advection term is important and we must return to the full Euler-Poisson system (\[iso1\])-(\[iso3\]). In the following, we use the dimensionless variables of Sec. \[sec\_eos\]. In the spherically symmetric case, using Gauss’ theorem (\[gauss\]), the system (\[iso6\])-(\[iso7\]) reads $$\frac{\partial^2\rho}{\partial t^2}=\frac{1}{r^2}\left\lbrack r^2 p_{,r} +\rho \int_0 ^r {\mathrm{d}} r' 4\pi r'^2 \rho(r')\right\rbrack_{,r}. \label{iso6.2}$$ It has to be completed by the boundary conditions imposing zero mass at the center of the star, and a constant total mass $$\int_0 ^{r_0} {\mathrm{d}} r' 4\pi r'^2 \rho(r',t)=1, \label{mass0}$$ where $r_0$ is the star radius (in practice the smallest root of $\rho(r) = 0$). Let us define the variable $$M(r,t)= \int_0 ^r {\mathrm{d}} r'4\pi r'^2 \rho(r', t) \label{mass}$$ which represents the mass of fluid contained inside a sphere of radius $r$ at time $t$. Multiplying both sides of equation (\[iso6.2\]) by $ 4\pi r^2$ and integrating with respect to the radius, we obtain the dynamical equation for the mass profile $M(r,t)$, $$\frac{\partial^2 M(r,t)}{\partial t^2}= 4 \pi r^2 p_{,r} + \frac{1}{r^2}M_{,r} M, \label{eq:d2M}$$ where the term $p_{,r}= p'(\rho) \rho_{,r}$ has to be expressed as a function of $\rho(r,t)=\frac{1}{4 \pi r^2}M_{,r}$ and $\rho_{,r}(r,t)=\frac{1}{4 \pi r^2}(M_{,r^2}-\frac{2}{r}M_{,r})$. Using the relation (\[eos3\]), one has $$p'(\rho)= T\left (1-\frac{1}{\sqrt{1+{\rho}}}\right ). \label{eq:m1}$$ The first term of equation (\[eq:d2M\]) becomes $$4\pi r^2 p_{,r}= T \mathcal{L}(M) g(M_{,r})$$ with $$\left \{ \begin{array}{l} \mathcal{L}(M)= M_{,r^2}-\frac{2}{r}M_{,r}\\ g(M_{,r})=1-\frac{1}{\sqrt{1+\frac{1}{4 \pi r^2}M_{,r}}} \mathrm{.} \end{array} \right. 
\label{eq:m11}$$ Introducing this expression into equation (\[eq:d2M\]), the dynamical equation for $M(r,t)$ reads $$\frac{\partial^2 M(r,t)}{\partial t^2}= T \mathcal{L}(M)g(M_{,r}) + \frac{1}{r^2}M_{,r} M. \label{eq:d2Mf}$$ The boundary conditions to be satisfied are $$\left \{ \begin{array}{l} M(0,t)=0 \\ M(r_0(t),t)= 1 = 4 \pi\int_0 ^{r_0(t)} {\mathrm{d}} r' r'^2 \rho(r',t) \mathrm{.} \end{array} \right. \label{eq:scales}$$ In the latter relation the radius of the star $r_0(t)$ depends on time. However, this dependence will be neglected below, see equation (\[eq:bcMn\]), because we ultimately find that the star collapses, so that its radius decreases, leading to $r_0(t) < r_c$, or $M(r_0(t),t)= M(r_c)$, as time goes on. Equilibrium state and neutral mode {#eqmsa} ---------------------------------- A steady solution of equation (\[eq:d2Mf\]) is determined by $$T \mathcal{L}(M)g(M_{,r}) + \frac{1}{r^2}M_{,r} M=0. \label{mm1}$$ Using Gauss’ theorem $\Phi_{,r}={M(r)}/{r^2}$, and the equilibrium relation $\Phi_{,r}=-h_{,r}$, we can easily check that equation (\[mm1\]) is equivalent to equation (\[eq:5.2\]). We now consider a small perturbation about a steady state and write $M(r,t)=M(r)+\delta M(r,t)$ with $\delta M(r,t)\ll M(r)$. Linearizing equation (\[eq:d2Mf\]) about this steady state and writing the perturbation as $\delta M(r,t)\propto e^{\lambda t}$, we obtain the eigenvalue equation $$\begin{aligned} \lambda^2\delta M=T\left\lbrack {\cal L}(\delta M)g(M_{,r})+{\cal L}(M)g'(M_{,r})\delta M_{,r}\right\rbrack\nonumber\\ +\frac{1}{r^2}(M\delta M)_{,r}. \label{mm3}\end{aligned}$$ The neutral mode, corresponding to $\lambda=0$, is determined by the differential equation $$T\left\lbrack {\cal L}(\delta M)g(M_{,r})+{\cal L}(M)g'(M_{,r})\delta M_{,r}\right\rbrack +\frac{1}{r^2}(M\delta M)_{,r}=0. \label{mm4}$$ Using Gauss’ theorem $\delta\Phi_{,r}={\delta M(r)}/{r^2}$, and the relation $\delta\Phi_{,r}=-\delta h_{,r}$ satisfied at the neutral point (see Sec. 
\[sec\_henm\]), we can check that equation (\[mm4\]) is equivalent to equation (\[eq:5.3\]). This implies that the neutral mass profile is given by $$\delta M(r)=-r^2 j_{,r}. \label{mm5}$$ Normal form of the mass profile $M(r,t)$ ---------------------------------------- The derivation of the normal form close to the saddle-node proceeds mainly along the lines of [@cs04][^9]. The mass profile is expanded as $$M(r,t)= M^{(c)}(r) + \epsilon M^{(1)}(r,t)+\epsilon^2 M^{(2)}(r,t)+... \label{eq:serM}$$ where $M^{(c)}(r)$ is the equilibrium profile at $T=T_c$ (see above) drawn as a solid line in Fig. \[Fig:criticM\], and $\epsilon$ is a small parameter which characterizes a variation of the temperature with respect to its value at the collapse. We set $$T=T_c(1-\epsilon^2 T^{(2)}), \label{eq:epsilon}$$ which amounts to defining $\epsilon^2 T^{(2)}=\gamma' t$, and rescaling the time as $ t=t'/\epsilon ^{{1}/{2}}$ (this implies that $\gamma'\sim \epsilon^{5/2}$ is a small quantity). Substituting the expansion (\[eq:serM\]) into equation (\[eq:d2Mf\]), we get at leading order the equilibrium relation $$T_c \mathcal{L}^{(c)} g^{(c)} + \frac{1}{r^2}M_{,r}^{(c)} M^{(c)}=0, \label{eq:M0}$$ which has to satisfy the boundary conditions $$M^{(c)}(0)= M^{(c)}_{,r}(0)=0;\qquad M^{(c)}(r_c)= 1.$$ ![ Mass ${\hat{M}}$ (solid blue line) inside the star versus the radial variable ${\hat{r}}$ at the saddle-node, solution of equations (\[eq:5.2\])-(\[eq:massr\]) for $T=T_c$, i.e. $\hat{h}_0=2.296$. The dashed red line is for $\zeta(\hat{r})$, solution of equation (\[eq:zetadiff1\]) with appropriate initial conditions for solving the adjoint problem (in this caption, we have restored the “hat” on the variables). 
[]{data-label="Fig:criticM"}](M-zetab.pdf){height="1.7in"} To order $1$ we have $$T_c\left( \mathcal{L}^{(1)}g^{(c)} + \mathcal{L}^{(c)}g'^{(c)} M^{(1)}_{,r} \right)+ \frac{1}{r^2}(M^{(1)} M^{(c)})_{,r}=0, \label{eq:M1}$$ and to order $2$ $$\frac{\partial^2 M^{(1)}(r,t')}{\partial t'^2} = T_c \mathcal{F}^{(2)}+ \frac{1}{r^2}\left\lbrack (M^{(2)} M^{(c)})_{,r}+ M^{(1)} M^{(1)}_{,r} \right\rbrack, \label{eq:M2}$$ where $$\mathcal{F}^{(2)}=\mathcal{F}^{(2)}_1+\mathcal{F}^{(2)}_2+\mathcal{F}^{(2)}_3 \label{eq:F2}$$ with $$\mathcal{F}^{(2)}_1=\left(\mathcal{L}^{(2)}-T^{(2)}\mathcal{L}^{(c)} \right) g^{(c)},$$ $$\mathcal{F}^{(2)}_2=\mathcal{L}^{(1)}g'^{(c)} M^{(1)}_{,r},$$ $$\mathcal{F}^{(2)}_3= \mathcal{L}^{(c)}\left\lbrack g'^{(c)} M_{,r}^{(2)}+\frac{g''^{(c)}}{2}(M_{,r}^{(1)})^2 \right\rbrack,$$ where $\mathcal{L}^{(c)}=\mathcal{L}(M^{(c)})$, $\mathcal{L}^{(n)}=\mathcal{L}(M^{(n)})$, $g^{(c)}=g(M_{,r}^{(c)})$, $g'^{(c)}=(\frac{dg}{dM_{,r}})^{(c)}$ and $g''^{(c)}=(\frac{d^2g}{dM_{,r}^2})^{(c)}$. The $r$-dependent quantities can be written in terms of the equilibrium density function $\rho^{(c)}(r)$ as $$\left \{ \begin{array}{l} \mathcal{L}^{(c)}=4\pi r^2\rho_{,r}^{(c)}, \\ g^{(c)} = 1-\frac{1}{\sqrt{1+\rho^{(c)}}}, \\ g'^{(c)} =\frac{1}{8\pi r^2(1+\rho^{(c)})^{3/2}}, \\ g''^{(c)} = -\frac{3}{4(4\pi r^2)^2(1+\rho^{(c)})^{5/2}} \mathrm{.} \end{array} \right. \label{eq:gg'}$$ The boundary conditions are $$\left \{ \begin{array}{l} M^{(n)}(0,t')=0 ; M^{(n)}_{,r}(0,t')=0; \\ M^{(n)}(r_c,t')=0 \mathrm{.} \end{array} \right. \label{eq:bcMn}$$ Let us rescale the quantities in equations (\[iso6.2\])-(\[eq:bcMn\]) by using the critical value $T_c$ for the temperature in the rescaled variables. We thus define $\hat{T}=T/T_c$, $\hat{r} =r/\sqrt{T_c}$, $\hat{t}=t$, $\hat{M}=M/T_c^{3/2}$, $\hat{h}={h}/{T_c}$, and $\hat{\rho}=\rho$. This rescaling leads to the same expressions as the unscaled ones in equations (\[iso6.2\])-(\[eq:bcMn\]), except that $T_c$ is set to one. 
Furthermore, at the critical point, the rescaled variables coincide with those introduced in Sec. \[sec\_tr\]. In the following, we drop the superscripts to simplify the notation. The foregoing equations have a clear interpretation. At zeroth order, equation (\[eq:M0\]) corresponds to the equilibrium state (\[mm1\]), equivalent to equation (\[eq:5.2\]), at the critical point $T_c$. The critical mass profile is drawn in Fig. \[Fig:criticM\], solid line. At order $1$, equation (\[eq:M1\]) has the same form as the differential equation (\[mm4\]), equivalent to equation (\[eq:5.3\]), determining the neutral mode (corresponding to the critical point). Because equation (\[eq:M1\]) is linear, its solution is $$M^{(1)}(r,t')= A^{(1)}(t')F(r), \label{eq:Maf1}$$ where $$F(r) = \delta M(r)= - r^2 j_{,r}, \label{eq:Maf}$$ according to equation (\[mm5\]). This solution, drawn in Fig. \[Fig:criticMR\]-(a), thick black line, fulfills the boundary conditions (\[eq:bcMn\]). The corresponding density profile $\rho^{(1)}(r,t')= A^{(1)}(t')\delta \rho(r)$ is drawn in Fig. \[Fig:criticMR\]-(b), where $$\delta \rho(r)= \frac{F_{,r}}{4\pi r^2}= j(r)\left (\frac{d\rho}{ dh}\right )_{(c)}. \label{eq:rho1}$$ At order $2$, equation (\[eq:M2\]) becomes $$F(r)\ddot{A}^{(1)}(t') = -T^{(2)}\mathcal{L}^{(c)} g^{(c)} +\mathcal{D}(F) A^{(1)2} + \mathcal{C}(M^{(2)}), \label{eq:AFeq}$$ where $$\mathcal{D}(F)=\frac{1}{r^2}FF_{,r} +\frac{1}{2}\mathcal{L}^{(c)} g''^{(c)} F_{,r}^2 +g'^{(c)}{\cal L}(F)F_{,r}, \label{eq:Deq}$$ and $$\mathcal{C}(M^{(2)})= \mathcal{L}^{(2)} g^{(c)} +\frac{1}{r^2}(M^{(2)}M^{(c)} )_{,r}+\mathcal{L}^{(c)} g'^{(c)} M^{(2)}_{,r}. \label{eq:Ceq0}$$ To write the dynamical equation for $A^{(1)}(t')$ in a normal form, we multiply equation (\[eq:AFeq\]) by a function $\zeta(r)$ and integrate over $r$ for $0 <r <r_c$, where $r_c$ is the radius of the star at $T=T_c$. 
We are going to derive the function $\zeta(r)$ so that the term ${\cal C}(M^{(2)})$ disappears after integration (see Appendix \[sec\_cl\] for details about the boundary conditions). Introducing the slow decrease of the temperature versus time, $T^{(2)}\sim \gamma' t/\epsilon^2$, and making the rescaling $A=\epsilon A^{(1)}$ to eliminate $\epsilon$ (we note that $A(t)$ is the true amplitude of the mass profile $\delta M(r,t)$), the result reads $$\ddot{A}(t)= \tilde{\gamma} t+K A^2, \label{eq:Ceq}$$ where $$\tilde{\gamma}= -\gamma' \frac{\int_0^{r_c}{\mathrm{d}}r {\cal L}^{(c)}(r) g^{(c)}(r)\zeta(r)}{\int_0^{r_c}{\mathrm{d}}rF(r)\zeta(r)} \label{eq:coefft}$$ is found to be $\tilde{\gamma}= 120.2...\gamma'$ and $$K= \frac{\int_0^{r_c}{\mathrm{d}}r \mathcal{G}(r) \zeta(r)}{\int_0^{r_c}{\mathrm{d}}rF(r)\zeta(r)}, \label{eq:coeffA2}$$ with $$\begin{aligned} \mathcal{G}(r)=\frac{1}{2}{\cal L}^{(c)}(r) g''^{(c)}(r)F_{,r}^2 +g'^{(c)}(r) F_{,r}(F_{,r^2}-\frac{2}{r}F_{,r})\nonumber\\ +\frac{1}{r^2} F(r)F_{,r}\qquad\qquad\end{aligned}$$ is found to have the numerical value $K=12.32...$. We have therefore established that the amplitude $A(t)$ of the mass profile $\delta M(r,t)$ satisfies the Painlevé I equation. By definition the function $\zeta$ must satisfy, for any function $M^{(2)}(r)$, the integral relation $$\label{ire} \int_0 ^{r_c} {\mathrm{d}}r\, \mathcal{C}(M^{(2)})(r) \zeta(r) =0.$$ Let us expand $\mathcal{C}$ as $$\mathcal{C}(M^{(2)})= g^{(c)} M^{(2)}_{,r^2} +b M^{(2)}_{,r}+c M^{(2)} \label{eq:integC2}$$ with $b(r) = -{2g^{(c)}}/{r}+{M^{(c)}}/{r^2}+\mathcal{L}^{(c)}g'^{(c)}$ and $c(r) = {M^{(c)}_{,r}}/{r^2}$, or in terms of the equilibrium values of the density and potential functions at the saddle-node $$\left \{ \begin{array}{l} g^{(c)}(r) = 1-\frac{1}{\sqrt{1+\rho^{(c)}}}, \\ b(r) = -\frac{2g^{(c)}}{r} -h_{,r}^{(c)}+\frac{\rho_{,r}^{(c)}}{2(1+\rho^{(c)})^{3/2}}, \\ c(r) = 4\pi \rho^{(c)} \mathrm{.} \end{array} \right. 
\label{eq:abc}$$ \(a) ![Comparison between theory (thick black curves) and the numerics (thin colored curves) for the first order terms: (a) mass $M^{(1)}({r,t})$ (b) density $\rho^{(1)}(r,t)$, in scaled variables. The numerical curves correspond to the times $t=0.2$ to $0.6$ in Fig. \[Fig:Painleve\]. []{data-label="Fig:criticMR"}](bruno1.pdf "fig:"){height="1.7in"} (b)![Comparison between theory (thick black curves) and the numerics (thin colored curves) for the first order terms: (a) mass $M^{(1)}({r,t})$ (b) density $\rho^{(1)}(r,t)$, in scaled variables. The numerical curves correspond to the times $t=0.2$ to $0.6$ in Fig. \[Fig:Painleve\]. []{data-label="Fig:criticMR"}](bruno2new.pdf "fig:"){height="1.7in"} Integrating equation (\[ire\]) by parts, and using $M^{(2)}=0$ on the boundaries $r=0$ and $r=r_c$ (see Appendix \[sec\_cl\]), we find that $\zeta(r)$ must be a solution of the second order differential equation $$(g^{(c)} \zeta)_{,r^2}-(b \zeta)_{,r}+ c\zeta=0, \label{eq:zetadiff}$$ with the initial condition $\zeta(0)=0$ (the radial derivative $\zeta_{,r}(0)$ is a free parameter since the differential equation is of second order). At the edge of the star we do not have $\zeta(r_c) = 0$, see below, but rather $\zeta_{,r}(r_c)=0$: the radial derivative of $\zeta$ vanishes because the second order differential equation (\[eq:zetadiff1\]) becomes a first order one (since $g^{(c)}(r_c) =0$, see equation (\[eq:gg’\])). This does not happen in the case studied in [@cs04] where the pressure-density relation was $p = \rho T$, which leads to relations similar to those obtained here, but with $g^{(c)}(r_c)=1$. The differential equation for the unknown function $\zeta(r)$ reads $$g^{(c)}(r)\zeta_{,r^2}+ a_1(r) \zeta_{,r}+ a_0(r)\zeta=0, \label{eq:zetadiff1}$$ where the coefficients $$\left \{ \begin{array}{l} a_1(r) = 2g^{(c)}_{,r} -b(r), \\ a_0(r) = c(r)+ g^{(c)}_{,r^2}(r) -b_{,r}(r), \end{array} \right. 
\label{eq:a1a0}$$ may be expressed in terms of the radial density using equations (\[eq:gg’\]) and (\[eq:abc\]). It turns out that for $r=r_c$ we have $g^{(c)}= a_0=0$, but $a_1(r_c) \neq0$, which gives the boundary relation $\zeta _{,r}(r_c)=0$. ![Comparison between normal form (solid blue curve) and numerical solution (dots) for the maximum of $M^{(1)}(r,t)$ versus time. In the numerical simulations of the Euler-Poisson system we start from the critical profile $M_c(r)$ at $t=0$ and decrease the temperature as $T(t)=1-\gamma' t$ with $\gamma'=0.1$. []{data-label="Fig:Painleve"}](BrunoPainleve-b.pdf){height="1.7in"} The solution of equation (\[eq:zetadiff1\]) with the condition $\zeta(0)=0$ is shown in Fig. \[Fig:criticM\], red dashed line, where $\zeta_{,r}(r_c)=0$. Figure \[Fig:Painleve\] shows the evolution of the maximum value $M^{(1)}_{max} (t)$ of the profile $M^{(1)}(r,t)$ with time (solid line). This quantity is proportional to the function $A(t)$, which is the solution of the Painlevé equation (\[eq:Ceq\]). It is compared with the numerical solution of the full Euler-Poisson equations (dots). We see that the results agree for small amplitudes but that the agreement ceases to be correct at large amplitudes, where our perturbative approach loses its validity. In particular, the real amplitude increases more rapidly, and the singularity occurs sooner, than predicted by the Painlevé equation. [*Remark:*]{} According to the results of Sec. \[Scaling laws\], and coming back to the original (but still dimensionless) variables, we find that the collapse time in the framework of the Painlevé equation is $t_*=\hat{t}_*/(K\tilde\gamma)^{1/5}$ with $\hat{t}_*\simeq 3.4$, i.e. $$t_*=0.79... \left |\frac{T_c}{\dot T}\right |^{1/5}. \label{collp}$$ On the other hand, close to the collapse time, the amplitude of the mass profile diverges as $A(t)\sim (6/K)(t_*-t)^{-2}$, i.e. $$A(t)\sim 0.487 \frac{1}{(t_*-t)^2}. 
\label{collpf}$$ Discussion {#subsec:disc} ---------- This section was devoted to an explicit derivation of the “universal" Painlevé I equation for the beginning of the collapse following the slow crossing of the saddle-node bifurcation for the equilibrium problem. We have chosen to present this detailed derivation for a simple model of equation of state and without taking into account exchange of energy in the fluid equations. Of course this makes our analysis qualitatively correct (hopefully!) but surely not quantitatively so for real supernovae, an elusive project anyway. We have shown that the Painlevé I equation represents the actual solution of the full Euler-Poisson system until the departure from the saddle-node equilibrium solution becomes too large to maintain the validity of a perturbative approach. Our analysis explains how the collapse of the star can be a very fast process following a very long evolution toward a saddle-node bifurcation. As we shall explain in the next section, after the crossing of the saddle-node bifurcation, the solution of the Euler-Poisson equations has a finite time singularity at the center. We point out that this happens when the radius of the star has the order of magnitude it had at the time of the saddle-node bifurcation. Therefore the size of the core should remain orders of magnitude smaller than the star radius, as found for the Penston-Larson solution which predicts a core containing a very small portion of the total star mass. If the saddle-node bifurcation is the key to the implosion mechanism, this result should not depend on the equation of state. However the question of how massive the self-collapsing core is has received various answers.
For supernovae in massive stars, starting from the hypothesis that pressure and gravity forces are of the same order during the collapse, Yahil [@Yahil] considered equations of state of the form $p =K \rho^\Gamma$ with adiabatic indices in the range $ 6/5 < \Gamma \le 4/3$. He found that the ratio of the mass inside the core to the Chandrasekhar mass is almost constant, between $1.1$ and unity in this range of $\Gamma$. Moreover he found that the core moves at less than the sound speed, which was considered essential for all its parts to move in unison [@Bethe]. In the next section we show that the hypothesis that pressure and gravity forces are of the same order is not relevant to describe the collapse. Our derivation leads to a drastically different velocity field, which is supersonic in the core and subsonic outside, tending to zero at the edge of the star. Finite time singularity of solutions of Euler-Poisson equations: pre-collapse {#sec:singular} ============================================================================= The perturbation analysis presented so far can deal only with perturbations of small amplitude, that is, corresponding to a displacement small compared to the radius of the star. We have seen that, at least up to moderate values of the amplitude of perturbations to the equilibrium solution, the analysis derived from Painlevé equation yields correct results, not only for the exponents, but also for all the numerical prefactors. This essentially defines the starting point of the “explosion of the star". But there is still a long way to go toward the understanding of supernovae. As a next step forward, we shall look at the dynamics of the solution of the Euler-Poisson equations with radial symmetry, starting with a quasi-equilibrium numerical solution of the equations of motion.
We emphasize the importance of the initial conditions for solving the dynamics, a delicate problem which could lead to various solutions as discussed and illustrated in [@Brenner] for instance. The most noticeable feature of our numerical study is the occurrence of a singularity at the center after a finite time. To describe the numerical results, we must invoke a singularity of the second kind, in the sense of Zel’dovich [@Zel]. Contrary to the singularity of the first kind, where the various exponents occurring in the self-similar solution are derived by a simple balance of all terms present in the equations, a singularity of the second kind has to be derived from relevant asymptotic matching, which may require neglecting some terms, as described in the present section. The occurrence of a finite time singularity in the collapse of a self-gravitating sphere has long been a topic of investigations. An early reference is the paper by Mestel [@mestel] who found the exact trajectory of a particle during the free-fall[^10] of a molecular cloud (neglecting the pressure forces), assuming spherical symmetry. The exact Mestel solution displays a self-similar solution of the pressureless Euler-Poisson system, as shown later on by Penston [@Penston], which leads to a finite time singularity with an asymptotic density $\rho(r) \sim r^{-\alpha}$ with $\alpha=12/7$, smaller than $2$ (an important remark, as will be shown in the next subsection). Taking account of the pressure forces, another self-similar solution was found independently by Penston [@Penston] and Larson [@larson], which is usually called the Penston-Larson solution. It is characterized by $\alpha=2$. This solution was proposed to describe the gravitational collapse of an isothermal gas assuming that pressure and gravitational forces scale the same way.
This corresponds to a self-similarity of the first kind (the exponent being defined simply by balancing all the terms in the original equations), in contrast to self-similarity of the second kind, or in the sense of Zel’dovich, which we are considering below. In the Penston-Larson solution, the magnitude of the velocity remains finite, in contradiction with our numerical findings. Moreover this solution has a rather unpleasant feature, noticed by Shu [@Shu]: it implies a finite constant inward supersonic velocity far from the center, whereas one would expect a solution tending to zero far from the center, as observed numerically. We present below another class of singular solution which fits the numerical observations better than the one of Penston [@Penston] and Larson [@larson]. In the numerics we start from a physically relevant situation which consists of slowly approaching the saddle-node bifurcation in a quasi-equilibrium state. As time approaches the collapse, we observe that the numerical velocity tends to infinity in the core of the singularity and decays to zero far from the center, in agreement with the theoretical solution proposed, equations (\[eq:rhoa\])-(\[eq:ua\]) below with $\alpha$ larger than $2$. The equations we start from are the Euler-Poisson equations for the mass density $\rho(r, t)$ and radial speed $u(r, t)$, $$\rho_{,t} + \frac{1}{r^2} \left( r^2 \rho u \right)_{,r} = 0 \mathrm{.} \label{eq:Euler.1}$$ $$\rho\left(u_{,t} + u u_{,r}\right) = - T \rho_{,r} - \frac{G M(r,t) \rho}{r^2} \mathrm{,} \label{eq:Euler.2}$$ with $$M(r,t) = 4 \pi \int_0^r {\mathrm{d}}r' r'{^2} \rho(r',t) \mathrm{.} \label{eq:Euler.2.1}$$ In the equations above, we consider the case of an isothermal equation of state, $p=\rho T$, which amounts to considering the equation of state (\[eos3\]) in the limit of large density, which is the case in the central part of the star.
The temperature $T$ has the physical dimension of a velocity squared, as noticed first by Newton, and $G$ is Newton’s constant. The formal derivation of self-similar solutions for the above set of equations is fairly standard. Below we focus on the matching of the local singularity with the outside and on its behavior at $r = 0$. A solution blowing up locally can do so only if its asymptotic behavior can be matched with a solution behaving smoothly outside of the core. More precisely, one expects that outside of the singular domain (in the outer part of the core) the solution continues its slow and smooth evolution during the blow-up, characterized in particular by the fact that the velocity should decrease to zero at the edge of the star, while the local solution (near $r=0$) evolves infinitely fast to become singular. In summary, contrary to the Penston-Larson derivation, which imposes the value $\alpha=2$ by balancing the terms in the equations and leads to a free parameter value $R(0)$, our derivation starts with an unknown $\alpha$ value (larger than $2$), but leads to a given value of $R(0)$. In our case the unknown $\alpha$ value is found after expanding the solution in the vicinity of the center of the star. This yields a nonlinear eigenvalue problem of the second kind in the sense of Zel’dovich [@Zel], as was found, for instance, in the case of the Bose-Einstein condensation [@BoseE; @bosesopik], while the Penston-Larson singular solution is of the first kind (again because it is obtained by balancing all terms in the equations).
General form of self-similar solutions {#subsec5A} -------------------------------------- The solution we are looking for is of the type, for the density $\rho$, $$\rho(r, t) = (- t)^{\beta} R\left(r (-t)^{\beta/\alpha}\right) \mathrm{,} \label{selfrho}$$ and for the radial velocity $u$, $$u(r, t) = (- t)^{\gamma} U\left(r (-t)^{\beta/\alpha}\right) \mathrm{,} \label{selfu}$$ where $\alpha$, $\beta$ and $\gamma$ are real exponents to be found. The functions $R(.)$ and $U(.)$ are numerical functions with values of order one when their argument is of order one as well ($R(.)$ is different from the function $R(t)$ introduced at the beginning of this paper; we keep this letter as a reminder that it represents the scaled density $\rho$). They have to satisfy coupled differential equations without small or large parameter (this also concerns the boundary conditions). To represent a solution blowing up at time $t = 0$ (this time 0 is not the time zero where the saddle-node bifurcation takes place; we have kept the same notation to make the mathematical expressions lighter), one expects that the density at the core diverges. This implies $\beta$ negative. Moreover this divergence happens in a region of radius tending to zero at $t =0$. Therefore $\alpha$ must be positive. Finally, at large distances from the collapsing core the solution must become independent of time. This implies that $R(.)$ and $U(.)$ must behave with $$\xi= r(-t)^{\beta/\alpha} \mathrm{,} \label{Rasgh}$$ as power laws when $\xi\gg 1$ such that the final result obtained by combining this power law behavior with the pre-factor $(-t)^{\beta}$ for $R$ and $(-t)^{\gamma}$ for $U$ yields functions $\rho$ and $u$ depending on $r$ only, not on time. Therefore one must have $$R(\xi)\sim \xi^{-\alpha}\mathrm{,} \label{Ras}$$ and $$U(\xi)\sim \xi^{-\gamma\alpha/\beta}\mathrm{.} \label{Uas}$$ In that case, $$\left \{ \begin{array}{l} \rho(r,t)\propto r^{-\alpha}, \\ u(r,t)\propto r^{-\gamma\alpha/\beta} \mathrm{,} \end{array} \right.
\label{eq:asrhou}$$ for $r\rightarrow +\infty$ where the proportionality constants are independent of time. Inserting those scaling assumptions in the dynamical equations, one finds that equation (\[eq:Euler.1\]) imposes the relation $$\frac{\beta}{\alpha}+\gamma + 1=0. \label{betaalpha}$$ This relation is also the one that yields the same order of magnitude to the two terms $u_{,t}$ and $u u_{,r}$ on the left-hand side of equation (\[eq:Euler.2\]). If one assumes, as usually done, that all terms on the right-hand side of equation (\[eq:Euler.2\]) are of the same order of magnitude at $t$ tending to zero, this imposes $\alpha = -\beta= 2$ and $\gamma=0$. This scaling corresponds to the Penston-Larson solution. However, let us leave $\alpha$ free (again contrary to what is usually done where $\alpha = 2$ is selected) and consider the relative importance of the two terms in the right-hand side of equation (\[eq:Euler.2\]), one for the pressure and the other for gravity. The ratio of pressure to gravity is of order $t^{2\beta/\alpha-\beta}$. Therefore the pressure becomes dominant for $t$ tending to zero if $\alpha < 2$, of the same order as gravity if $\alpha = 2$ and negligible compared to gravity if $\alpha > 2$ (in all cases for $\beta$ negative). For pressure dominating gravity (a case where very likely there is no collapse because the growth of the density in the core yields a large centrifugal force acting against the collapse toward the center), the balance of left and right-hand sides of equation (\[eq:Euler.2\]) gives $\gamma=0$ and $\beta=-\alpha$, while in the opposite case, i.e. for $\alpha>2$, it gives $$\beta=-2 \mathrm{,} \label{beta}$$ and $$\gamma= 2/\alpha - 1\mathrm{.} \label{gamma}$$ Therefore the velocity in the collapse region where $r \sim (-t)^{-\beta/\alpha}$ diverges only in the case of gravity dominating pressure ($\alpha>2$). Our numerical study shows clearly that the velocity diverges in the collapse region.
We believe that the early numerical work by Larson [@larson] does not contradict our observation that $\alpha$ is larger than 2: looking at his Figure 1, page 276, in log scale, one sees rather clearly that the slope of the density as a function of $r$ in the outer part of the core is close to $-2$, but slightly smaller than $(-2)$. The author himself writes that this curve “approaches the form $r^{-2}$" without stating that its slope is exactly $(-2)$, and the difference is significant, without being very large. The slope $- \alpha = - {24}/{11}$ derived below fits better the asymptotic behavior in Figure 1 of Larson [@larson] than the slope $(-2)$ does (the same remarks apply to Figure 1 of Penston [@Penston]). Therefore we look for a solution with $\alpha >2$ for which the gravitational term dominates the pressure in equation (\[eq:Euler.2\]). As shown below, the existence of a solution of the similarity equations requires that $\alpha$ has a well defined value, one of the roots of a second degree polynomial, and the constraint $\alpha>2$ allows us to have a velocity field decaying to zero far from the singularity region, as observed in our numerics, whereas $\alpha < 2$ yields a velocity field growing to infinity far from the collapse region, which makes it impossible to match the collapse solution with an outer solution remaining smooth far from the collapse. The case $\alpha = 2$ imposes a finite velocity at infinity, also in contradiction with the numerical results. A new self-similar solution where gravity dominates over pressure {#subsec5B} ----------------------------------------------------------------- ### Eigenvalue problem of the second kind In the following, we assume that gravity dominates over pressure forces, i.e. $\alpha>2$.
The set of two integro-differential equations (\[eq:Euler.1\]) and (\[eq:Euler.2\]) becomes a set of coupled equations for the two numerical functions $R(\xi)$ and $U(\xi)$ such that $$\rho(r,t) = (-t)^{-2} R(r(-t)^{-2/\alpha}) \mathrm{,} \label{eq:rhoa}$$ and $$u(r,t) = (-t)^{-1+\frac{2}{\alpha}} U(r(-t)^{-2/\alpha}) \mathrm{,} \label{eq:ua}$$ where $\xi=r(-t)^{-2/\alpha}$ is the scaled radius. As explained previously, we must have $$R(\xi)\sim \xi^{-\alpha}, \qquad {\rm and}\qquad U(\xi)\sim \xi^{-(\alpha/2-1)}, \label{sunm}$$ for $\xi\rightarrow +\infty$ in order to have a steady profile at large distances. The equations of conservation of mass and momentum become in scaled variables $$2 R + \frac{2 \xi}{\alpha} R_{,\xi} + \frac{2}{\xi} R U + (R U)_{,\xi} = 0 \mathrm{,} \label{eq:Euler.4}$$ $$\left (1 - \frac{2}{\alpha}\right ) U + \frac{2}{\alpha} \xi U_{,\xi} + U U_{,\xi} = - \frac{4 \pi G}{\xi^2} \int_0^{\xi} {\mathrm{d}}\xi'\, \xi'^2 R(\xi') \mathrm{.} \label{eq:Euler.5}$$ The integro-differential equation (\[eq:Euler.5\]) can be transformed into a differential equation, resulting in the following second order differential equation for $U(.)$, supposing $R(.)$ known, $$\begin{aligned} U_{,\xi^2}\left (U+\frac{2}{\alpha}\xi\right ) + U_{,\xi}\left[1+ \frac{4}{\alpha}+\frac{2}{\xi}U + U_{,\xi}\right]\nonumber\\ -\frac{2\gamma}{\xi}U +4 \pi G R=0 \mathrm{.} \label{eq:Euler.5bis}\end{aligned}$$ From now on, we use the dimensionless variables defined in Sec. \[sec\_eos\]. Concerning the initial conditions (namely the conditions at $\xi =0$), they are derived from the possible Taylor expansion of $U$ and $R$ near $\xi = 0$, like $$R = R_0 + R_2 \xi^2 + R_4 \xi^4 +...$$ and $$U = U_1 \xi + U_3 \xi^3+ U_5 \xi^5 + ...$$ Putting those expansions in equations (\[eq:Euler.4\]) and (\[eq:Euler.5\]), one finds $U_1 = -{2}/{3}$ and $R_0 = {1}/({6 \pi })$.
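These leading-order values can be checked directly: inserting $R=R_0$ and $U=U_1\xi$ into equations (\[eq:Euler.4\]) and (\[eq:Euler.5\]) gives $2+3U_1=0$ and $U_1+U_1^2=-(4\pi/3)R_0$ (in the dimensionless variables where $G=1$), whence $U_1=-2/3$ and $R_0=1/(6\pi)$ for any $\alpha$. A minimal sketch with exact rationals (standard library only):

```python
# Leading-order (free-fall) fixed point of (eq:Euler.4)-(eq:Euler.5):
# R = R0, U = U1*xi. The pi factor is handled by checking R0 = -3*lhs/(4*pi).
from fractions import Fraction as F
import math

alpha = F(24, 11)          # the alpha-dependence cancels below; any alpha > 2 works
U1 = F(-2, 3)
assert 2 + 3 * U1 == 0     # mass equation at leading order

# momentum equation at leading order (coefficient of xi):
lhs = (1 - 2 / alpha) * U1 + (2 / alpha) * U1 + U1 * U1
assert lhs == F(-2, 9)     # => R0 = -3*lhs/(4*pi) = 1/(6*pi)
assert math.isclose(-3 * float(lhs) / (4 * math.pi), 1 / (6 * math.pi))
```

Note that the $\alpha$-dependent terms cancel, consistent with the free-fall solution being independent of the eigenvalue problem that follows.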
Note that $R = R_0 $ and $ U = \xi U_1$ is an exact solution of the equations (\[eq:Euler.4\]) and (\[eq:Euler.5\]), which is not the usual case for such Taylor expansions. This corresponds to the well-known free-fall solution of a homogeneous sphere [@Penston]. It follows from this peculiarity that, at next order, we obtain a linear homogeneous algebraic relation because the zero value of $R_2$ and $U_3$ must be a solution. Inserting the above values of $R_0$ and $U_1$ at this order, we obtain the homogeneous relations $$\frac{3-\alpha}{3\alpha} R_2 + \frac{5}{24 \pi} U_3 = 0 \mathrm{,} \label{eq:R2-U3}$$ and $$4 \pi R_2+ 5 \frac{12-5\alpha}{3\alpha} U_3 = 0 \mathrm{.} \label{eq:R2-U3b}$$ This has a non trivial solution (defined up to a global multiplying factor - see below for an explanation) if the determinant of the matrix of the coefficients is zero, namely if $\alpha $ is a root of the second degree polynomial $$\frac{7}{3}\alpha^2 -18\alpha +24=0 \label{eq:poly} \mathrm{.}$$ This shows that $\alpha$ cannot be left free and has to have a well defined value. However, it may happen that neither of these two values of $\alpha$ is acceptable for the solution $R(\xi),U(\xi)$ we are looking for, so that we should take $R_2=U_3=0$ and pursue the expansion at next order. This is the case for our problem because one solution of equation (\[eq:poly\]) is $\alpha=12/7$ which does not belong to the domain $\alpha >2$ we are considering (because we assume that the gravity effects are stronger than the pressure effects)[^11], and the other solution $\alpha=6$ is excluded by the argument in section \[subsec:upperbound\] below. Therefore we have to choose $R_2=U_3=0$ and consider the next order terms of the expansion, which also provides a homogeneous linear system for the two unknown coefficients $R_4$ and $U_5$.
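The vanishing-determinant conditions at this order and at the next one can be verified with exact rationals; the $\pi$ factors cancel in the off-diagonal products (e.g. $\frac{5}{24\pi}\cdot 4\pi=\frac{5}{6}$). A minimal sketch, standard library only:

```python
# Determinants of the homogeneous systems for (R2, U3) and (R4, U5);
# each vanishes exactly at the roots quoted in the text.
from fractions import Fraction as F

def det_R2_U3(a):
    # system (eq:R2-U3)-(eq:R2-U3b); off-diagonal product (5/(24 pi))(4 pi) = 5/6
    return (3 - a) / (3 * a) * (5 * (12 - 5 * a)) / (3 * a) - F(5, 6)

def det_R4_U5(a):
    # next-order system; off-diagonal product (7/(12 pi))(4 pi) = 7/3
    return 4 * (3 - a) / (3 * a) * (7 * (8 - 3 * a)) / a - F(7, 3)

assert det_R2_U3(F(6)) == 0 and det_R2_U3(F(12, 7)) == 0     # roots of (eq:poly)
assert det_R4_U5(F(4)) == 0 and det_R4_U5(F(24, 11)) == 0
assert det_R2_U3(F(2)) != 0                                  # generic alpha is not a root

# the exponents for alpha = 24/11 satisfy the scaling relation (betaalpha)
alpha, beta = F(24, 11), F(-2)
gamma = 2 / alpha - 1
assert gamma == F(-1, 12) and beta / alpha + gamma + 1 == 0
```

One can check along the same lines that $\det_{R_2,U_3}(\alpha)=0$ is equivalent to the quadratic (\[eq:poly\]) up to an overall factor $5/(9\alpha^2)$ times $3/2$.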
It is $$4 \frac{3-\alpha}{3\alpha} R_4 + \frac{7}{12 \pi} U_5 = 0 \mathrm{,}$$ and $$4 \pi R_4+ 7 \frac{8-3\alpha}{\alpha} U_5 = 0 \mathrm{,}$$ which has non trivial solutions if $\alpha$ is a root of the secular equation $$\frac{11}{4}\alpha^2 -17\alpha +24=0 \label{eq:polybis} \mathrm{,}$$ whose solutions are $\alpha=4$ or $\alpha={24}/{11}$. The value $\alpha=4$ is excluded by the argument in section \[subsec:upperbound\] whereas the solution $$\alpha=\frac{24}{11} \label{eq:alpha} %\mathrm{.}$$ could be the relevant one for our problem. In that case, we get $\beta=-2$ and $\gamma=-1/12$. The density decreases at large distances as $r^{-24/11}$ and the velocity as $r^{-1/11}$ (while in the Penston-Larson solution, the density decreases at large distances as $r^{-2}$ and the velocity tends to a constant value). Of course, we can carry out this analysis by beginning the expansion with an arbitrary power $k$ bigger than $2$ like $R=R_0+R_k \xi^k+...$ and $U=U_1\xi+U_{k+1}\xi^{k+1}+...$ with arbitrary $k$ (actually, $k$ must be even for reasons of regularity of the solution). In that case, we find the two exponents $$\alpha(k)=\frac{6k}{2k+3} \label{eq:alphak} %\mathrm{.}$$ and $\alpha=3k/(k-1)$. We note that the first exponent varies between $0$ (homogeneous sphere) and $3$, while the second exponent is larger than $3$ for $k>1$, which is unphysical by the argument in section \[subsec:upperbound\]. \(a) ![Density of the self-similar problem obtained by solving equations (\[eq:Euler.6\])-(\[eq:Euler.7\]) with $\alpha=24/11$. (a) $R(\xi)$; (b) $\rho(r,t)$ versus $r$ at times $1,0.1,0.05,0.01,0.001$. The initial conditions are $R(y_i)=R_0+R_4 \exp(4 y_i)$, $V(y_i)=U_1 +U_5 \exp(4 y_i)$, $V_{,y}(y_i)=U_1+4 U_5 \exp(4 y_i)$ at $y_i=-10$, with $R_0=\frac{1}{6\pi}$, $U_1=-\frac{2}{3}$ , $R_4 = -\frac{7(8 - 3\alpha)}{4\pi\alpha}U_5$ and $U_5=10^2$.
[]{data-label="Fig:Rfig"}](R_xi.pdf "fig:"){height="1.5in"} \(b) ![Density of the self-similar problem obtained by solving equations (\[eq:Euler.6\])-(\[eq:Euler.7\]) with $\alpha=24/11$. (a) $R(\xi)$; (b) $\rho(r,t)$ versus $r$ at times $1,0.1,0.05,0.01,0.001$. The initial conditions are $R(y_i)=R_0+R_4 \exp(4 y_i)$, $V(y_i)=U_1 +U_5 \exp(4 y_i)$, $V_{,y}(y_i)=U_1+4 U_5 \exp(4 y_i)$ at $y_i=-10$, with $R_0=\frac{1}{6\pi}$, $U_1=-\frac{2}{3}$ , $R_4 = -\frac{7(8 - 3\alpha)}{4\pi\alpha}U_5$ and $U_5=10^2$. []{data-label="Fig:Rfig"}](rho_r-t.pdf "fig:"){height="1.5in"} In the case considered above, we note that the exponent $\alpha(4)=24/11$ is close to $2$ so that it is not in contradiction with previous numerical simulations analyzed in terms of the Penston-Larson solution (which has $\alpha=2$). Moreover there is obviously a freedom in the solution because, even with $\alpha$ root of the secular equation, $R_4$ and $U_5$ are determined up to a multiplicative constant. This is the consequence of a property of symmetry of the equations (\[eq:Euler.4\]) and (\[eq:Euler.5\]): if $\left(R(\xi), U(\xi)\right)$ is a solution, then $\left(R(\xi/\lambda), \lambda^{-1} U(\xi/\lambda)\right)$ is also a solution with $\lambda$ an arbitrary positive number. This freedom translates into the fact that $U_5$ and $R_4$ are defined up to a multiplication by the same arbitrary (positive) constant. If $U_5$ and $R_4$ are multiplied by $\lambda$, the next order coefficients of the Taylor expansion, like $U_9$ and $R_8$ ($U_7$ and $R_6$ being set to zero) should be multiplied by $\lambda^2$, and more generally the coefficients $U_{4n+1}$ and $R_{4n}$, $n$ integer, by $\lambda^{2n}$, the coefficients $U_{2n}$ and $R_{2n+1}$ being all zero. (a)![Velocity of the self-similar problem, obtained by solving equations (\[eq:Euler.6\])-(\[eq:Euler.7\]) with $\alpha=24/11$. (a) $-U(\xi)$, (b) $-u(r,t)$ versus $r$ at same times and with same initial conditions as in Fig. \[Fig:Rfig\]. 
[]{data-label="Fig:Ufig"}](U_xi.pdf "fig:"){height="1.5in"} (b)![Velocity of the self-similar problem, obtained by solving equations (\[eq:Euler.6\])-(\[eq:Euler.7\]) with $\alpha=24/11$. (a) $-U(\xi)$, (b) $-u(r,t)$ versus $r$ at same times and with same initial conditions as in Fig. \[Fig:Rfig\]. []{data-label="Fig:Ufig"}](u_r-t.pdf "fig:"){height="1.5in"} The behavior of $U(\xi)$ and $R(\xi)$ at $\xi \to \infty$ was derived in equation (\[sunm\]). As one can see, the power law behavior for $R$ at $\xi$ infinity follows from the assumption that terms linear with respect to $R$ in equation (\[eq:Euler.4\]) become dominant at large $\xi$. Keeping the terms linear with respect to $U$ in equation (\[eq:Euler.5\]) and canceling them yields $U(\xi) \sim \xi^{1 - \alpha/2}$. This shows that both the perturbations to $u$ and $\rho$ described by the self-similar solution have first a constant amplitude far from the core (defined as the range of radii $r \sim (-t)^{2/\alpha}$) and then an amplitude tending to zero as the distance to the core increases, which justifies keeping the linear part of the original equation to derive this asymptotic behavior of the similarity solution. As already said, this large distance behavior of the self-similar solution makes possible the matching of this collapsing solution with an outer solution behaving smoothly with respect to time. The numerical solution of equations (\[eq:Euler.4\])-(\[eq:Euler.5\]) was actually obtained by using the system (\[eq:Euler.6\])-(\[eq:Euler.7\]) for the coupled variables $R, V={U}/{\xi}$, then changing the variable $\xi$ into $y=\ln(\xi)$. It reads $$2 R + \frac{2}{\alpha}R_{,y} + 3 R V + (R V)_{,y} = 0 \mathrm{,} \label{eq:Euler.6}$$ and $$\mathcal{A}_{,y}(V)+ 3 \mathcal{A}(V) + 4 \pi R(y) = 0 \mathrm{,} \label{eq:Euler.7}$$ where $\mathcal{A}(V)= V + \frac{2}{\alpha} V_{,y} + V^2 + V V_{,y}$. The self-similar solutions $R(\xi)$ and $-U(\xi)$ are drawn in log scale in Figs.
\[Fig:Rfig\] and \[Fig:Ufig\] respectively together with the corresponding time dependent density and velocity $\rho(r,t)$ and $-u(r,t)$. In Appendix \[sec:freefall\], by proceeding differently, we obtain the self-similar solution of the free-fall analytically, in parametric form. As shown later, the analytical solution is equivalent to the numerical solution of equations (\[eq:Euler.6\])-(\[eq:Euler.7\]), see Fig. \[Fig:compar\]. ### An upper bound for $\alpha$ {#subsec:upperbound} We have seen that $\alpha$ must be larger than $2$. It is interesting to look at a possible upper bound. Such a bound can be derived as follows. At the end of the collapse, the density and radial velocity follow simple power laws near $r =0$, derived from the asymptotics of the self-similar solution. As said below, at the end of the collapse one has precisely $\rho(r) \sim r^{-\alpha}$. Therefore, from elementary estimates, the total mass converges if $\alpha$ is less than 3, which gives an upper bound for $\alpha$. In summary, the exponent $\alpha$ has to be in the range $$2 < \alpha < 3 \label{rangealpha}$$ in order for a physically relevant self-similar solution to fulfill the condition that gravity is dominant over pressure. ### Homologous solution for general polytropic equations of state {#Gammaqcq} The self-similar solution that we have found is independent of the pressure term in the original equation for momentum. Therefore, it is natural to ask the question of its dependence on the equation of state (namely the pressure-density relation). Because the density diverges at $r = 0$ in the similarity solution, it is reasonable to expect that, if the pressure grows too much at large densities, it will become impossible to neglect the pressure term compared to gravity. Let us consider a pressure depending on $\rho$ with a power law of the form $p = K \rho^{\Gamma}$ with $\Gamma\equiv 1+1/n$ a real exponent and $K$ a positive constant.
We know already that, if $\Gamma = 1$, the pressure term can be neglected in the collapsing core, and the collapsing solution is characterized by the exponent $\alpha=24/11$. The same system of equations (\[eq:Euler.4\])-(\[eq:Euler.5\]) for the self-similar solution will be found whenever the pressure can be neglected. Therefore we expect that the above solution is valid, with the same $\alpha$, as long as the power $\Gamma$ in the pressure-density relation leads to negligible pressure effects in the collapsing region. Inserting the power law estimates derived from the pressureless similarity solution into the pressure term, one finds that the marginal exponent $\Gamma$ is $\Gamma_c = 2 - {2}/{\alpha}$ which for $\alpha=24/11$ is equal to $$\Gamma_c = \frac{13}{12}, \qquad (n_c=12) \mathrm{.} \label{Gammac}$$ For $\Gamma > \Gamma_c$, the pressure becomes formally dominant compared to gravity in the collapse domain (still assuming $\alpha=24/11$), whereas if $\Gamma$ is less than $ \Gamma_c$ the pressure is negligible compared to gravity in the same collapse domain. When the pressure is dominant, either there is no collapse because the outward force it generates cannot physically produce an inward collapse, or other scaling laws with a different $\alpha$ yield a collapsing solution different from the one that we have derived (see below). If $ \Gamma$ is less than $\Gamma_c = {13}/{12}$ the collapse is driven by dominant gravity forces and the scaling laws derived above apply and are independent of the value of $\Gamma$. This occurs because the values of the exponents $\alpha={24}/{11}$, $\beta=-2$, and $\gamma= -{1}/{12}$ were deduced from the Euler-Poisson equations after canceling the pressure term in the right-hand side of equation (\[eq:Euler.2\]). Let us be more general and consider other possible values of $\alpha$. If we assume that pressure and gravity forces are of the same order, the exponents are $$\alpha=\frac{2}{2-\Gamma}, \qquad \beta=-2, \qquad \gamma=1-\Gamma.
\label{press-et-grav}$$ The condition $\alpha<3$ (see Section \[subsec:upperbound\]) implies that $\Gamma<4/3$. It is well-known that a polytropic star with index $\Gamma>4/3$ is dynamically stable, so there is no collapse. The critical index $\Gamma=4/3$ corresponds to ultra-relativistic fermion stars such as white dwarfs and neutron stars. In that case, the system collapses and forms a core of mass of the order of the Chandrasekhar mass as studied by Goldreich and Weber [@gw]. The collapse of polytropic spheres with ${6}/{5} \le \Gamma \le {4}/{3}$ described by Euler-Poisson equations has been studied by Yahil [@Yahil]. For $\Gamma<4/3$, the star collapses in a finite time but since $\alpha<3$ the mass at $r=0$ at the collapse time $t=0$ is zero (in other words, the density profile is integrable at $r=0$ and there is no Dirac peak). We can also consider the case where gravity forces overcome pressure forces so that the system experiences a free fall. If we compare the magnitude of the pressure and gravity terms in the Euler-Poisson system when the homologous solutions (\[selfrho\])-(\[selfu\]) are introduced, we find that the pressure is negligible if $\alpha>2/(2-\Gamma)$. Therefore, for a given polytropic index $\Gamma$, the pressureless homologous solutions are characterized by the exponents $$\frac{2}{2-\Gamma} < \alpha \le 3, \label{rangealphaGamma}$$ and $$\beta=-2, \quad \gamma=2/\alpha-1. \label{BetaGamma}$$ The collapse exponent $\alpha$ is selected by considering the behavior of the solution close to the center. Setting $R(\xi)=R_0 + R_k \xi^k$ and $U(\xi)=U_1 \xi + U_{k+1} \xi^{k+1}$, the relation (\[eq:alphak\]) between $\alpha$ and $k$ leads to the following choice: $\alpha$ will be the smallest value of $\alpha(k)$ satisfying both relations (\[rangealphaGamma\]) and (\[eq:alphak\]) for $k$ even.
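This selection rule can be sketched in a few lines (a hedged illustration, standard library only): for a given $\Gamma$, pick the smallest even $k$ with $\alpha(k)=6k/(2k+3)$ larger than $2/(2-\Gamma)$. Since $\alpha(k)<3$ for all $k$, the finite-mass bound of Sec. \[subsec:upperbound\] is automatic, and the family accumulates at $\alpha\to 3$ as $\Gamma\to 4/3$:

```python
# Selection rule: smallest alpha(k) = 6k/(2k+3), k even, with
# alpha(k) > 2/(2-Gamma) (pressure subdominant in the collapsing core).
from fractions import Fraction as F

def collapse_alpha(Gamma):
    threshold = F(2) / (2 - Gamma)
    for k in range(2, 200, 2):
        alpha = F(6 * k, 2 * k + 3)   # always < 3: finite-mass bound holds
        if alpha > threshold:
            return alpha
    return None  # Gamma too close to 4/3 for this truncated search

assert collapse_alpha(F(1, 2)) == F(12, 7)    # Gamma <= 5/6: Penston exponent, k = 2
assert collapse_alpha(F(1)) == F(24, 11)      # isothermal case, k = 4
assert collapse_alpha(F(11, 10)) == F(12, 5)  # 13/12 < Gamma <= 7/6: k = 6
```

The three asserted values reproduce the exponents quoted in this subsection for the successive $\Gamma$ ranges.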
It follows that $$\alpha=\frac{12}{7} \quad {\rm for}\quad \Gamma \le \frac{5}{6}, \label{alphaPenston}$$ which is the exponent derived by Penston [@Penston] for zero pressure or $T=0$ assuming $k=2$. Next, we find $$\alpha=\frac{24}{11}, \quad {\rm for}\quad \frac{5}{6} <\Gamma \le \frac{13}{12}, \label{alphanous}$$ as obtained above assuming $k=4$. Finally, we find that $$\alpha=\frac{6k}{2k+3}, \quad {\rm for} \quad \frac{4k-11}{3k-6} <\Gamma \le \frac{4k-3}{3k},$$ for any $k\ge 4$ even, the lower and upper boundaries being the marginal exponents $2-2/\alpha(k')$ associated with $k'=k-2$ and $k'=k$ respectively (for $k=4$ this formula reproduces equation (\[alphanous\])). We note that there is no solution for $\Gamma\ge 4/3$ since the polytropic stars with such indices are stable as recalled above. Finally, when pressure forces dominate gravity forces, the scaling exponents are obtained by introducing the self-similar form (\[selfrho\])-(\[selfu\]) into the Euler-Poisson system without gravity forces, yielding $$\beta= -\frac{2}{{2}/{\alpha}+\Gamma-1}, \qquad \gamma=-\frac{\Gamma-1}{{2}/{\alpha}+\Gamma-1}. \label{pressuredom}$$ However, this situation is not of physical relevance to our problem since it describes a slow “evaporation” of the system instead of a collapse. Comparison of the self-similar solution with the numerical results ------------------------------------------------------------------ ### Invariant profiles and scaling laws The numerical solutions of the full Euler-Poisson system were obtained using a variant of the centpack program [@progbalbas] by Balbas and Tadmor. Comparing our theoretical predictions of the self-similar solution just before collapse with the numerical solution of the full Euler-Poisson system, we find that both lead to the same result, namely they give a value of the exponent $\alpha$ slightly larger than two. The numerical solutions of $\rho(r,t)$ and $u(r,t)$ versus the radial variable $r$ at different times before the collapse are shown in Figs. \[Fig:RBruno1\] and \[Fig:UBruno1\] respectively.
![Density $\rho(r,t)$ versus the radial variable $r$ in ${\rm log}_{10}$ scale: numerical solutions of the full Euler-Poisson system, equations (\[eq:Euler.1\])-(\[eq:Euler.2\]) at different times before the collapse. The solid line with slope $-24/11$ fits better the asymptotic behavior (large $r$) of the curves than the dotted-dashed line with slope $-2$. []{data-label="Fig:RBruno1"}](bruno5new.pdf){height="2.0in"} ![Velocity $-u(r,t)$ versus the radial variable $r$: numerical solutions of the full Euler-Poisson system, equations (\[eq:Euler.1\])-(\[eq:Euler.2\]) at different times before the collapse. []{data-label="Fig:UBruno1"}](bruno6new.pdf){height="2.0in"} To draw the self-similar curves, we may get around the difficult task of the exact determination of the collapse time by proceeding as follows. We define a core radius $r_0(t)$ such that $ \rho(0,t) r_0(t)^\alpha =1$ (or any constant value), then we draw $\rho(r,t)/\rho(0,t)$ and $u(r,t)/u(r_0,t)$ versus $r/r_0(t)$. The merging of the successive curves should be a signature of the self-similar behavior. The result is shown in Figs. \[Fig:numrhoBruno\] and \[Fig:UBruno\] for the density and velocity respectively. The $\log$ scale of the density curve illustrates the expected asymptotic behavior (large $\xi$ values) $R \sim \xi^{-\alpha}$ or $\rho(r,t)/\rho(0,t) \sim (r/r_0(t))^{-\alpha}$. The asymptotic behavior of the velocity, $U \sim \xi^{1-\alpha/2}$ is less clear on Fig. \[Fig:UBruno\] where the curves display an oscillating behavior below the line with slope $1-\alpha/2$. We attribute the progressive decrease of the curves below the expected asymptote to the shock wave clearly visible in the outer part of the velocity curves (in addition, as discussed by Larson [@larson] p. 294, the velocity profile approaches the self-similar solution much slower than the density). In Figs. \[Fig:numrhoBruno\] and \[Fig:UBruno\] the black curves display the theoretical self-similar solution shown in Figs. 
\[Fig:Rfig\]-(a) and \[Fig:Ufig\]-(a), which has an analytical parametric expression given in Appendix \[App:precoll\]. ![ Self-similar density curves $\rho(r,t)/\rho(0,t)$ versus $r/r_0(t)$ in $\log$ scale with $r_0(t)$ defined in the text and $\alpha=24/11$. []{data-label="Fig:numrhoBruno"}](bruno4.pdf){height="2.0in"} ![ Velocity ratio $-u(r,t)/u_0(r,t)$ versus $r/r_0(t)$ in $\log$ scale deduced from the curves of Fig. \[Fig:UBruno1\] with the definition of $r_0(t)$ given in the text and $\alpha=24/11$. A shock wave is visible at the edge of the star, see the oscillations of the velocity. []{data-label="Fig:UBruno"}](bruno7.pdf){height="2.0in"} In Fig. \[Fig:numrhoBruno\] the merging density curves all have the same ordinate at the origin, since we have plotted $\rho(r,t)/\rho(0,t)$. To complete the comparison between the theory and the simulation for the self-similar stage, we have also drawn the series of self-similar density curves $R(\xi)$ in order to check whether the central behavior of the numerical curves agrees with the expected value $R(0)=1/(6\pi)$. To do this we have first to define the collapse time as precisely as possible, then to plot the quantity $ (t_*-t)^2 \rho(r,t)$ versus $r/(t_*-t)^{{2}/{\alpha}}$. These curves are shown in Fig. \[Fig:RBruno\]. They clearly merge except in a small domain around the center. We observe that the numerical value at $\xi=0$ is noticeably larger than the expected value $R(0)=1/(6\pi)\simeq 0.05$ (it is also substantially larger than the value $0.133...$ corresponding to the Penston-Larson solution). This shows that the system has not yet entered deeply into the self-similar regime. Therefore, our numerical results should be considered with this limitation in mind. However, a precise study displays a clear decrease of the value of $(t_*-t)^2\rho(0,t)$ during the approach to collapse, as illustrated in Fig. \[Fig:rhoc\], which shows that the evolution follows the expected trend (see below). 
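The rescaling by the core radius $r_0(t)$ described above can be illustrated on synthetic data. In the sketch below (the profile $R$ is a model of our own choice, not the actual solution), a self-similar density field is rescaled using $r_0(t)$ defined through $\rho(0,t)\,r_0(t)^\alpha=1$; curves taken at different times then collapse onto a single profile, which is the signature sought in Fig. \[Fig:numrhoBruno\]:

```python
import math

# Sketch: build a synthetic self-similar density field
#   rho(r, t) = (t_star - t)**(-2) * R(r / (t_star - t)**(2/alpha))
# with a model profile R (our choice), and check that the curves
# rho(r, t)/rho(0, t) versus r/r0(t) are time independent.
ALPHA = 24.0 / 11.0
T_STAR = 1.0

def R(xi):
    # model scaling function with the correct large-xi decay ~ xi**(-alpha)
    return (1.0 / (6.0 * math.pi)) * (1.0 + xi * xi) ** (-ALPHA / 2.0)

def rho(r, t):
    dt = T_STAR - t
    return dt ** -2.0 * R(r * dt ** (-2.0 / ALPHA))

def rescaled(r_over_r0, t):
    # core radius from rho(0, t) * r0**alpha = 1
    r0 = rho(0.0, t) ** (-1.0 / ALPHA)
    return rho(r_over_r0 * r0, t) / rho(0.0, t)

grid = [0.1 * i for i in range(1, 30)]
curve_a = [rescaled(x, 0.9) for x in grid]   # two different times
curve_b = [rescaled(x, 0.99) for x in grid]  # before t_star
```

The two curves agree to rounding error, mirroring the merging of the numerical curves before collapse.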
![ Numerical self-similar density curves $\rho(r,t)(-t)^2$ versus $\xi=r(-t)^{-{2}/{\alpha}}$ for $\alpha=24/11$. []{data-label="Fig:RBruno"}](bruno3.pdf){height="2.0in"} ![ Behavior of the density at the center of the star. We plot $\rho_c^{-1/2}$ versus $t$ to show a quasi-linear time dependence of the numerical solution in the Painlevé and in the pre-collapse regimes. The dots are the numerical results, the blue dotted-dashed curve is the Painlevé solution, the red curve the self-similar one which includes an additional second order term, see text. []{data-label="Fig:rhoc"}](rhoc1.pdf){height="2.0in"} In Fig. \[Fig:rhoc\] we compare the numerics with the theory, both in the Painlevé regime described in section \[sec\_dynA\] and in the self-similar regime described here. In these two regimes, the central density is expected to behave as $(t_* - t)^{-2}$, see equations (\[eq:t-2\]) and (\[eq:rhoa\]) for the Painlevé and the homologous regime respectively. Therefore we draw $\rho(0,t)^{-1/2}$, which should decrease linearly with time (with different slopes). The dots result from the numerical integration of the full Euler-Poisson equations at constant temperature (actually similar results are obtained with a temperature decreasing with time), with initial condition at temperature $T=0.9 T_c$ (out of equilibrium). At the beginning of the integration, in the Painlevé regime, the density $\rho_c(t)=\rho(0,t)$ is expected to evolve as $\rho_c(0) + \rho_1 A(t)$, where $A(t)$ is the solution of the modified version of equation (\[eq:Ceq\]), valid for constant temperature, which reads $$\ddot{A }=\left (1-\frac{T}{T_c}\right )\gamma + K A^2,$$ with $\gamma=120.2$, and $K=12.32$, as in section \[sec\_dynA\]. The dotted-dashed blue line displays the function $\rho_p(t)^{-1/2}$ with $\rho_p(t)=36 A(t)+26.85$, where the coefficients are fitted to the numerical Euler-Poisson solution, and the initial conditions for the Painlevé equation are $A(0)=\dot{A}(0)=0$. 
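The constant-temperature equation for $A(t)$ above can be integrated directly. The following sketch uses a simple fixed-step RK4 scheme (our own choice of integrator, not the one used for the full Euler-Poisson system) with the quoted constants $\gamma=120.2$, $K=12.32$ and $T=0.9\,T_c$:

```python
# Sketch: integrate A'' = (1 - T/Tc)*gamma + K*A**2 with a fixed-step
# RK4 scheme, starting from A(0) = A'(0) = 0 as in the text.
GAMMA, K, TRATIO = 120.2, 12.32, 0.9

def rhs(state):
    a, adot = state
    return (adot, (1.0 - TRATIO) * GAMMA + K * a * a)

def integrate(t_end, dt=1e-4):
    a, adot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        k1 = rhs((a, adot))
        k2 = rhs((a + 0.5 * dt * k1[0], adot + 0.5 * dt * k1[1]))
        k3 = rhs((a + 0.5 * dt * k2[0], adot + 0.5 * dt * k2[1]))
        k4 = rhs((a + dt * k3[0], adot + dt * k3[1]))
        a += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        adot += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return a
```

At early times $A(t)\simeq (1-T/T_c)\gamma\,t^2/2$; the quadratic term $KA^2$ then takes over and drives the finite-time blow-up.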
Close to the collapse time ($t_*=0.55$ in the numerics), the numerical solution $\rho(0,t)$ is expected to behave as $\frac{1}{6 \pi}(t_*-t)^{-2}$, up to an additional second order term. A term of order $(t_*-t)^{-4/3}$ was chosen because it is the perturbation associated to the eigenvalue $\lambda=-2/3$ of the linear analysis around the fixed point $\mathcal{C}=[\overline{R}_0=1/(6\pi); \overline{U}_1=-2/3]$[^12] and fits well the numerical results. The red curve displays the function $ \rho_f(t)^{-1/2}$ with $\rho_f(t)=\frac{1}{6\pi} (t_*-t)^{-2} +6.5(t_*-t)^{-4/3}$, which agrees well with the numerical dots, indicating that the Euler-Poisson solution tends to converge towards the self-similar form close to the center, though with some delay. In the following subsection we show that the fixed point $\mathcal{C}$ is a saddle point, with one stable and one unstable direction. It follows that the numerical solution has a priori no reason to reach $\mathcal{C}$. However, we observe that it clearly tends towards this fixed point as the collapse is approached. ### Dynamical behavior close to the center {#sec:fixcenter} Recall that we have derived the theoretical value of the exponent $\alpha =24/11$ by expanding the density as $R(\xi)=R_0 + R_4 \xi^4 +...$ close to $\xi =0$, with $R_0=1/(6\pi)$ and $U=U_1\xi + U_5 \xi^5+...$ with $U_1=-2/3$ ($R_4 $ and $U_5$ being defined up to a multiplicative coefficient). In order to explain the discrepancy between the numerics and the theoretical value $R(0)=1/(6\pi)$, we look at the stability of the self-similar solution close to $\xi=0$. Let us assume here that $R$ and $U$ are functions of $\xi$ and time, with $\xi= r(-t)^{-2/\alpha}$ and define the time dependent variable [@Apple]: $$s=-\ln{(-t)}. 
\label{eq:appell}$$ We set $$\rho(r,t) = (-t)^{-2} R(\xi,s) \mathrm{,} \label{eq:rhoXs}$$ and $$u(r,t) = (-t)^{-1+\frac{2}{\alpha}} U(\xi,s) \mathrm{.} \label{eq:uXs}$$ where the variable $s$ is positive for small $t$, increasing up to infinity as the collapse is approached. Substituting this ansatz in equations (\[eq:Euler.1\])-(\[eq:Euler.2\]), which include the terms due to pressure and gravity, yields the dynamical equations for $R$ and $U$: $$R_{,s} + R_{,\xi}\left (U+\frac{2}{\alpha}\xi\right )+RU_{,\xi}+ \frac{2}{\xi} R (U +\xi) = 0 \mathrm{,} \label{eq:Euler.4s}$$ and $$\begin{aligned} U_{,s}+ U_{,\xi}\left (U+\frac{2}{\alpha}\xi\right )- \gamma U + T\frac{R_{,\xi}}{R}e^{2\gamma s} \nonumber\\ + \frac{4 \pi }{\xi^2} \int_0^{\xi} {\mathrm{d}}\xi' {\xi'}^2 R(\xi',s)=0 \mathrm{,} \label{eq:Euler.5s}\end{aligned}$$ where $\gamma$ is negative, see equation (\[gamma\]). These two coupled equations generalize the self-similar study of Larson [@larson], Penston [@Penston] and Brenner-Witelski [@Brenner] to the case of an exponent $\alpha$ different from $2$. Besides the fact that in equations (\[eq:Euler.4s\])-(\[eq:Euler.5s\]) the $\alpha$-dependent coefficients are slightly different from theirs, the main difference with previous works is that here the prefactor $e^{2\gamma s}$ of the pressure term decreases as $s$ increases (as the collapse is approached), while this factor was unity in their case. The self-similar functions $U$ and $R$ can be expanded as $R(\xi,s)=R_0(s) + R_2(s) \xi^2+ R_4(s) \xi^4 +...$ and $U=U_1(s)\xi + U_3(s)\xi^3+...$ close to $\xi =0$. Writing $R_i(s)=\overline{R}_i +r_i(s)$ and $U_i(s)=\overline{U}_i +u_i(s)$, for $i=0,1,2...$ one gets the asymptotic relations $ \overline{R}_0=1/(6\pi)$ and $\overline{U}_1=-2/3$ at lowest order, which are precisely the steady-state values found above for the equations without pressure, because asymptotically the pressure term vanishes. However, these asymptotic values are not stable, as we now prove. 
Because we are interested in what happens just before the collapse time, we can neglect the pressure term in equation (\[eq:Euler.5s\]). It becomes $$U_{,s}+ U_{,\xi}\left (U+\frac{2}{\alpha}\xi\right )- \gamma U + \frac{4 \pi }{\xi^2} \int_0^{\xi} {\mathrm{d}}\xi' {\xi'}^2 R(\xi',s)=0 \mathrm{.} \label{eq:Euler.5s.wp}$$ The autonomous system (\[eq:Euler.4s\]) and (\[eq:Euler.5s.wp\]) has the useful property of reducing to a closed set of ODEs for $R_0(s)$ and $U_1(s)$. This set reads $$U_{1,s}+ U_{1} \left (U_1 +\frac{2}{\alpha}\right ) +\left (1 - \frac{2}{\alpha}\right ) U_{1} + \frac{4 \pi }{3} R_0=0 \mathrm{,} \label{eq:Euler.5s.wp.or}$$ and $$R_{0,s} + 3 R_{0} U_1 + 2 R_0 = 0 \mathrm{.} \label{eq:Euler.Ros}$$ This system has three fixed points (namely solutions independent of $s$): (i) the point $\mathcal{C}$=$[\overline{ R}_0 = {1}/{(6\pi)}; \overline{U}_1 = -{2}/{3}]$ defined in the previous subsection (the values at $\xi =0$ of $R$ and $U$, solution of the similarity equations already derived); (ii) also $[\overline{R}_0 =\overline{ U}_1 = 0]$; (iii) and finally $[\overline{R}_0 = 0;\overline{ U}_1 = -1]$. Writing $R=\overline{R}_0+\delta r e^{\lambda s}$, and $U=\xi(\overline{U}_1+\delta u e^{\lambda s})$, the linear stability analysis of equations (\[eq:Euler.5s.wp.or\])-(\[eq:Euler.Ros\]) in the vicinity of the fixed point $[\overline{R}_0, \overline{U}_1]$ gives the eigenvalue equation $$\lambda^2+ (5 \overline{U}_1 + 3)\lambda + (2\overline{U}_1+1)(3\overline{U}_1+2)-4 \pi \overline{R}_0=0 \mathrm{.} \label{eq:eigenv}$$ It follows that the fixed point $\mathcal{C}$ has one unstable and one stable direction in the phase plane, with eigenvalues $+1$ and $-2/3$, independently of the $\alpha$ value. The fixed point $\overline{R}_0 = 0$ and $\overline{U}_1 = -1$ has two unstable directions with a degenerate eigenvalue $+1$, while $\overline{R}_0 = \overline{U}_1 = 0$ is stable in all directions, with eigenvalues $-1$ and $-2$. 
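The roots of the eigenvalue equation (\[eq:eigenv\]) at the three fixed points can be checked directly (a sketch; `eigenvalues` is our helper, returning the two roots in increasing order):

```python
import math

# Sketch: roots of lambda**2 + (5*U1 + 3)*lambda
#   + (2*U1 + 1)*(3*U1 + 2) - 4*pi*R0 = 0
# at the three fixed points quoted in the text.
def eigenvalues(R0, U1):
    b = 5.0 * U1 + 3.0
    c = (2.0 * U1 + 1.0) * (3.0 * U1 + 2.0) - 4.0 * math.pi * R0
    root = math.sqrt(b * b - 4.0 * c)  # discriminant >= 0 at these points
    return sorted(((-b - root) / 2.0, (-b + root) / 2.0))

lam_C = eigenvalues(1.0 / (6.0 * math.pi), -2.0 / 3.0)  # saddle: -2/3, +1
lam_B = eigenvalues(0.0, -1.0)                           # degenerate: +1, +1
lam_A = eigenvalues(0.0, 0.0)                            # stable: -2, -1
```

The saddle character of $\mathcal{C}$ (one negative, one positive root) is what makes it hard to reach numerically.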
The consequences for the whole solution are not completely clear. This could explain why in the numerical work it seems so hard to get the right value of $\overline{R}_0$. This could be either because the initial condition for this set of ODEs does not allow the solution to reach the fixed point $\overline{ U}_1 = -{2}/{3}$ and $\overline{R}_0= {1}/({6\pi})$ or because the numerics does not have the accuracy necessary to reach the fixed point in logarithmic time. Moreover, since this fixed point is stable in only one direction and unstable in the other, it is reached only from special initial conditions lying on its stable manifold. Otherwise the solutions are attracted either to infinity or to $\overline{R}_0 = \overline{ U}_1 = 0$, depending on the initial condition. ### Near the stable fixed point Assuming that the solution approaches the stable fixed point $\overline{R}_0 = \overline{ U}_1 = 0$, one may write $R(s,\xi)=\delta r(s,\xi)$ and $ U(s,\xi)=\xi \delta u(s,\xi)$, where $\delta r$ and $\delta u$ are smaller than unity. Setting $x=-\ln(\xi)$, the functions $\delta r(s,x)$ and $\delta u(s,x)$ are solutions of a linear autonomous system derived from equations (\[eq:Euler.4s\]) and (\[eq:Euler.5s.wp\]). We obtain $$\delta r_{,s}(s,x) -\frac{2}{\alpha}\delta r_{,x}(s,x)+ 2 \delta r(s,x)=0 \mathrm{,} \label{eq:deltar}$$ and $$\delta u_{,s}(s,x) +\delta u(s,x)+ \frac{4\pi}{3} \delta r(s,x)=0 \mathrm{,} \label{eq:deltau}$$ where both variables $s$ and $x$ are positive and go to infinity as the collapse is approached. The solution of the linear homogeneous equation (\[eq:deltar\]) is $$\delta r(s,x)=e^{-2s} \tilde{r}\left (\frac{2}{\alpha}s+x\right ) \mathrm{,} \label{eq:soldeltar}$$ where $ \tilde{r}=\delta r(s,0)$ is the profile of the density at the initial time $t_0$ of the collapse regime, with $s=-\ln(t_*-t_0) $ by definition. It follows that the solution of the linear equation (\[eq:deltar\]) decreases exponentially to zero as the collapse is approached. 
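That the expression (\[eq:soldeltar\]) solves equation (\[eq:deltar\]) for an arbitrary smooth profile $\tilde r$ can be verified numerically. The sketch below (the Gaussian profile is our arbitrary choice) evaluates the residual of the transport equation by centered finite differences:

```python
import math

# Sketch: verify that delta_r(s, x) = exp(-2*s) * rt(2*s/alpha + x)
# solves  delta_r_s - (2/alpha)*delta_r_x + 2*delta_r = 0
# for an arbitrary smooth profile rt (a Gaussian, our choice).
ALPHA = 24.0 / 11.0

def rt(z):
    return math.exp(-(z - 3.0) ** 2)

def delta_r(s, x):
    return math.exp(-2.0 * s) * rt(2.0 * s / ALPHA + x)

def residual(s, x, h=1e-5):
    # centered finite differences for the two partial derivatives
    d_s = (delta_r(s + h, x) - delta_r(s - h, x)) / (2.0 * h)
    d_x = (delta_r(s, x + h) - delta_r(s, x - h)) / (2.0 * h)
    return d_s - (2.0 / ALPHA) * d_x + 2.0 * delta_r(s, x)
```

The $s$-derivative of the advected argument cancels the $x$-derivative term exactly, leaving only the $e^{-2s}$ decay.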
Beyond the singularity: post-collapse {#sec:beyond} ===================================== The question of the post-collapse regime was considered by Yahil [@Yahil] in his study of Euler-Poisson equations with a polytropic equation of state $p=K\rho^\Gamma$ with ${6}/{5} \le \Gamma \le {4}/{3}$. For the critical index $\Gamma=4/3$, corresponding to ultra-relativistic neutron stars, during the homologous collapse all the mass in the core contracts towards the center, such that at the singularity time there is a non-zero mass, of the order of the Chandrasekhar mass, at $r=0$ [@gw]. In that case, the post-collapse regime begins with a non-zero mass at $r= 0$, represented in the equations by a Dirac peak at $r=0$. This is not what happens for polytropic equations of state with $\Gamma<4/3$ when pressure and gravity are of the same order [@Penston; @larson; @Yahil], or in our description of the self-similar collapse where gravity overcomes pressure forces (free fall), because, at the singularity time $t=0$, as we have seen, the density is not a Dirac distribution but a power law $\rho(r,0) \propto r^{-\alpha}$, which for $\alpha< 3$ yields a mass converging at $r = 0$ (the large distance behavior is to be matched with an outer solution to make the total mass finite). Because we do not expect a Dirac peak of finite mass at $r = 0$ at the time of the singularity, our post-collapse situation looks (mathematically) like that of the dynamics of Bose-Einstein condensation, where the mass of the condensate begins to grow from zero [*[after]{}*]{} the time of the singularity [@BoseE; @bosesopik][^13]. Let us derive the equations for the self-similar dynamics after the collapse. As in the case of the post-collapse dynamics of self-gravitating Brownian particles [@post] and of Bose-Einstein condensation [@BoseE; @bosesopik], we have to add to the equations of density and momentum conservation an equation for the mass at the center. Let $M_c(t)$ be this mass. 
It is such that $M_c(0) = 0$. We need an equation for its growth. The mass flux across a sphere of radius $r$ is $ J = 4 \pi r^2 \rho(r) u(r)$. Therefore the equation for $M_c(t)$ is $${M}_{c,t} = \left[-4 \pi r^2 \rho(r) u(r)\right]_{r \to 0} \mathrm{.} \label{eq:Mc}$$ Requiring a nonzero limit of $ \left[-4 \pi r^2 \rho(r) u(r)\right]$ as $r$ tends to zero constrains the behavior of $u(r)$ and $\rho(r)$ near $r =0$. The velocity near $r = 0$ is a free-fall velocity. At $r$ very small, it is completely dominated by the attraction of the mass at $r = 0$. Therefore it can be estimated by taking the relation of energy conservation in free fall, with a zero total energy, because at such short distances the initial velocity is negligible compared to the free-fall velocity. This yields $ u \approx - \left({2 M_c}/{r}\right)^{1/2}$, which defines the limiting behavior of $u(r, t)$ near $r = 0$. Because $r^2 \rho(r) u(r)$ must tend to a finite value at $r = 0$, one must have $\rho(r) \sim r^{-3/2}$. Note that this gives an infinite density at $r=0$ for $t>0$ while $\rho(0)$ was finite before the collapse time; but close to $r=0$ the density $\rho(r)$ decreases (versus $r$) less rapidly for positive $t$ than it did for negative $t$. The equations one has to solve now are the same as before, with the attraction by the mass $M_c(t)$ at $r = 0$ included (the pressure being again considered negligible, which is to be checked at the end), $$\rho_{,t} + \frac{1}{r^2} \left( r^2 \rho u \right)_{,r} = 0 \mathrm{,} \label{eq:Euler.1+}$$ $$u_{,t} + u u_{,r} = - \frac{G M(r,t)}{r^2} \mathrm{,} \label{eq:Euler.2+}$$ and $$M(r,t) = 4 \pi \int_0^r {\mathrm{d}}r' r'^2 \rho(r',t) + M_c(t) \mathrm{.} \label{eq:Euler.2.1+}$$ The equation (\[eq:Mc\]) for $M_c(t)$ with the initial condition $M_c (0) = 0$ has to be added to the set of equations of motion. The scaling laws of this system are derived as was done for the self-similar dynamics before the singularity. 
Because the equations after singularity include the whole set of equations leading to the singularity, the scaling laws are the same as before, with a free exponent like the one denoted as $\alpha$ (assuming, as we shall check, that the scaling laws have as much freedom as they had before collapse, which is not necessarily true because one has another equation (\[eq:Mc\]) for another unknown function, $M_c(t)$). But the free exponent has to be the same as before collapse, because the asymptotic behavior of the solution remains the same before and after collapse: at very short times after collapse only the solution very close to $r = 0$ is changed by the occurrence of a finite mass at $r = 0$, a mass which is very small at short positive times. Therefore we look for a self-similar solution of the equations above with the same scaling laws as before collapse for $\rho(r, t)$ and $u(r,t)$ plus another scaling for $M_c(t)$: $$\rho(r,t) = t ^{-2} R_+(r t^{-2/\alpha}) \mathrm{,}$$ $$u(r,t) = t^{-1+\frac{2}{\alpha}} U_+(r t^{-2/\alpha}) \mathrm{,}$$ and $$M_c(t) = K_M t^b \mathrm{,}$$ where $\alpha=24/11$ and $b$ is a positive exponent to be found. Moreover there has been a change of sign from $(-t)$ to $t$ in the scaled functions, because we are now looking at positive times after the singularity, which takes place at $t =0$. To have the two terms on the right-hand side of equation (\[eq:Euler.2.1+\]) of the same order of magnitude with respect to $t$ imposes $$b = \frac{6}{\alpha} - 2,$$ a positive exponent as it should be (recall the condition that $\alpha$ is less than 3). For $\alpha=24/11$, we get $b=3/4$. 
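The exponent bookkeeping leading to $b=6/\alpha-2$ can be checked with exact fractions (a sketch; the helper names are ours): with $\rho\sim t^{-2}R_+(\xi_+)$ and $r=\xi_+ t^{2/\alpha}$, the integrated mass $4\pi\int_0^r r'^2\rho\,{\mathrm{d}}r'$ scales as $t^{-2+3\cdot 2/\alpha}=t^{6/\alpha-2}$, which must match the exponent $b$ of $M_c(t)$:

```python
from fractions import Fraction

# Sketch: exponent bookkeeping for the post-collapse scalings.
def b_exponent(alpha):
    # b = 6/alpha - 2, positive for alpha < 3
    return 6 / alpha - 2

ALPHA = Fraction(24, 11)
b = b_exponent(ALPHA)  # should be 3/4 for alpha = 24/11

# mass integral: t**-2 from rho, times t**(3 * 2/alpha) from r'^2 dr'
mass_exponent = -2 + 3 * (Fraction(2, 1) / ALPHA)
```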
This yields the following set of definitions of the self-similar unknowns after collapse, $$\rho (r, t) = t^{ - 2} R_+ (\xi_+) \mathrm{,} \label{eq:Euler+rho}$$ $$u (r, t) = t^{2/\alpha - 1} U_{+}(\xi_+) \mathrm{,} \label{eq:Euler+u}$$ and $$M_c (t) = K_{M}t^{6/\alpha - 2} \mathrm{.} \label{eq:Euler+M}$$ The stretched radius is $\xi_+ = r t^{-2/\alpha}$. The equations to be satisfied by the scaled functions are $$- 2 R_+ - \frac{2 \xi_+}{\alpha} R_{+,\xi_+} + \frac{2}{\xi_+} R_+ U_+ + (R_+ U_+)_{,\xi_+} = 0 \mathrm{,} \label{eq:Euler.4+}$$ and $$\begin{aligned} \left (1 - \frac{2}{\alpha}\right ) U_+ + \frac{2}{\alpha} \xi_+ U_{+,\xi_+} - U_+ U_{+,\xi_+} \nonumber\\ =\frac{G}{\xi_+^2} \left(4 \pi \int_0^{\xi_+} {\mathrm{d}}\xi'_+ {\xi'}_+^2 R_+(\xi'_+) + K_M \right) \mathrm{.} \label{eq:Euler.5+}\end{aligned}$$ The coefficient $K_M$ in equation (\[eq:Euler.5+\]) is related to the limit values of $R_+$ and $U_+$ near $\xi_+ = 0$. The solutions of the two equations near $\xi_+ = 0$ are $$R_+ \approx K_R \xi_+^{-3/2} \mathrm{,}$$ and $$U_+ \approx K_U \xi_+^{-1/2} \mathrm{.}$$ Equation (\[eq:Euler.4+\]) does not constrain the coefficients $K$’s. Setting to zero the coefficient of the leading order term, of order $\xi_+^{-5/2}$ near $\xi_+ = 0$, in equation (\[eq:Euler.5+\]) yields a relationship between the $K$’s, $$K_U = -(2 G K_M )^{1/2} \mathrm{.}$$ Another relation comes from equation (\[eq:Mc\]). It yields $$K _M = - \frac{2\pi}{3/\alpha - 1} K_U K_R \mathrm{.}$$ Therefore there is only one free parameter among the three coefficients $K$’s. This free parameter is fixed by the matching with the large distance behavior of $R_+$ and $U_+$, which is itself defined by the matching with the outside of the collapse domain. ![ Self-similar density $R_{+}(\xi_+)$ and velocity $U_{+}(\xi_+)$ in the post-collapse regime, solution of equations (\[eq:numRV\]), in $\ln$ scale, to be compared with solutions in the pre-collapse regime drawn in Figs. 
\[Fig:Rfig\]-\[Fig:Ufig\] also in $\ln$ scale. []{data-label="Fig:comparpost"}](postcollRUb.pdf){height="1.6in"} The system (\[eq:Euler.4+\])-(\[eq:Euler.5+\]) was solved numerically by using the coupled variables $R,V={U}/{\xi}$ and $y=\ln(\xi)$ (dropping the $+$ indices) as in subsection \[subsec5B\], which gives coupled equations analogous to equations (\[eq:Euler.6\])-(\[eq:Euler.7\]), $$\left \{ \begin{array}{l} -2 R - \frac{2}{\alpha}R_{,y} + 3 R V + (R V)_{,y} = 0 \\ \mathcal{A}_{,y}(V)+ 3 \mathcal{A}(V) - 4 \pi R(y) = 0 \end{array} \right. \label{eq:numRV}$$ with $\mathcal{A}(V) = V+\frac{2}{\alpha}V_{,y}- V^2 - V V_{,y} $, which are free of the inner core mass term. In Appendix \[sec:freefall\], by proceeding differently, we obtain an analytical solution of the post-collapse dynamics which agrees with the numerical solution of equations (\[eq:numRV\]), see Fig. \[Fig:comparpost\] for comparison. Conclusion and perspectives {#sec:conc} =========================== This contribution introduced a theory of the early stage of supernova explosion which assumes that it belongs to the wide class of saddle-node bifurcations with a slow sweeping of the parameters across the bifurcation range. This explains well the suddenness of the explosion occurring after aeons of slow evolution. The hugely different time scales combine into a single intermediate time scale for the slow-to-fast transition, which could be of the order of several hours. This transition is described by a “universal" dynamical equation, the Painlevé I equation. Comparing this prediction with a stellar model presenting a saddle-node bifurcation shows a quantitative agreement with the predictions based on general arguments of bifurcation theory. 
This shows at least one thing, namely that the collapse of the star by the loss of equilibrium between pressure and gravitational forces is a global phenomenon depending on the full structure of the star and cannot be ascribed, for instance, to an instability of the core reaching the Landau-Chandrasekhar limit mass, as often assumed. We also looked at the evolution of the star following the onset of instability, namely when the amplitude of the perturbations grows to finite values and cannot be described by the Painlevé I equation anymore. In our equation of state model, the pressure becomes proportional to the density in the large density limit. The pressure increase is likely less steep than what is expected for the inner core of stars, even though there are large uncertainties about the interior of stars, particularly the ones yielding supernovae: since they show no early warning of the incoming explosion, they are not scrutinized spectroscopically. Nevertheless, an analysis of this situation teaches us a few interesting lessons. First, we do not consider self-similar (or homologous) collapse in the usual sense (where pressure and gravity scale the same way) because our numerical results and our analysis lead us to claim that the pressure becomes negligible in the core. Secondly, we find a new self-similar free-fall solution toward the center. Our numerical results together with physical considerations about the velocity field make us argue that, besides the mathematically correct Penston-Larson solution, our new self-similar (free-fall) solution is relevant to describe the collapse. In other words, writing self-similar equations is not enough to guarantee their relevance for a given problem because there can be more than one such kind of solution, as in the present case, where a Zel’dovich type 2 solution corresponds to the numerical results, although a type 1 solution also exists, but is not relevant. 
The numerical results presented here were obtained by starting from the equilibrium state of the star at the saddle-node, then decreasing slowly the temperature. However we notice that the same conclusions are obtained when starting slightly away from the saddle-node point and performing the numerical integration at constant temperature. We point out that the previous numerical studies of gravitational collapse by Penston [@Penston], Larson [@larson] and later by others [@Brenner] were performed starting from a uniform density initial state (and finite radius), which represents initial conditions very far from ours and from any physical situation; nevertheless these authors did find a density behaving asymptotically (at large distance) as $r^{-\alpha}$, with $\alpha$ larger than $2$, as we find here. The free-fall solution we found is *not* the free-fall solution studied for many years, because the exponents of our self-similar solution are not the ones usually found. This conclusion is based upon a detailed comparison between the direct numerical solution of the evolution equations and the solution of the simpler equations for the self-similar problem. As far as we are aware, although the self-similar paradigm is often invoked in this field, such a detailed comparison between dynamical solutions of the full Euler-Poisson system and the full self-similar solution has not yet been done (the merging of the curves before the collapse time was not shown). We show that it is a relatively nontrivial endeavor to perform such a comparison. Moreover we point out that our self-similar pressure-free solution is trickier to derive than the standard Penston-Larson homologous solution including the pressure, for which standard scaling laws (Zel’dovich first kind) can be derived formally without any difficulty. Finally we have mentioned that the center is a saddle point for our self-similar solution. 
Numerically this property is manifested in the behavior towards $r=0$ of the density profile $\rho(r,t)-\rho(0,t)$, which should pass from $r^2$ to $r^4$ in the self-similar regime (for generic initial conditions). The mechanism of this change of exponent, if it really occurs, has not been clearly identified and requires a deeper study. This work leaves open many questions. One central issue is how the scenario we outlined, namely a slow start in the universality class of Painlevé I, followed by a finite time collapse toward the central core, depends on the pressure/density relation. We suspect that, if the pressure increases more rapidly with the density than linearly at large densities, there will be no finite time singularity. Likely, because shock waves will form, irreversible transformations will take place in those shock waves and another equation of state will become relevant for the star. We gratefully acknowledge the “Fondation des Treilles" where this work was initiated, and Paul Clavin for many very stimulating discussions. Boundary conditions to derive the normal form {#sec_cl} ============================================= Let us derive the boundary conditions to solve the integral equation (\[ire\]) by transforming it into the differential equation (\[eq:zetadiff\]). We have to cancel the terms $$\left[g^{(c)} \zeta M_{,r}^{(2)}\right]_{0}^{r_c}, \quad \left[(g^{(c)} \zeta)_{,r} M^{(2)}\right]_{0}^{r_c},\quad {\rm and} \quad \left[b \zeta M^{(2)}\right]_{0}^{r_c}.$$ $(\textit{i})$ At $r_c$ we have $g^{(c)}(r_c)=0$ and $M^{(2)}(r_c)=0$, which ensure the cancellation of the terms $g^{(c)} \zeta M_{,r}^{(2)}$, $g^{(c)} \zeta_{,r} M ^{(2)}$, $b \zeta M ^{(2)}$, and $g^{(c)}_{,r} \zeta M ^{(2)}$ at $r=r_c$ (while $g^{(c)}_{,r}$ and $\zeta$ are both non zero at $r=r_c$, see Fig. \[Fig:criticM\]). This suppresses all the terms taken at $r=r_c$. 
$(\textit{ii})$ At $r=0$ we impose $\zeta=0$, which cancels the terms $g^{(c)} \zeta M_{,r}^{(2)}$, $g^{(c)}_{,r}\zeta M ^{(2)}$, and $ b \zeta M^{(2)}$. The last term $g^{(c)} \zeta_{,r} M ^{(2)}$ vanishes under the condition $M^{(2)}(0)=0$ (because $g^{(c)}$ and $\zeta_{,r}$ are both non zero at $r=0$). This suppresses all the terms taken at $r=0$. Analytical self-similar solutions for the free-fall {#sec:freefall} =================================================== Penston [@Penston] has given an exact solution of the free-fall problem without thermodynamic pressure ($p=0$). It could seem that, because of the absence of thermodynamic pressure, this is irrelevant for the problem of singularity in the evolution of the collapsing core of models of stars. However, this is not quite true because we have shown that during the collapse this thermodynamic pressure becomes negligible, and so the evolution of the system is essentially like a free fall. By analyzing the equations for this pressureless collapse we have shown that, actually, a discrete set of solutions exists, with different singularity exponents. The free-fall solution found by Penston corresponds to the exponent $\alpha =12/7$. Since this exponent is smaller than $2$, pressure effects become important at a certain point of the evolution (Penston obtains the estimate $\delta t/t_f\sim 10^{-4}$) and this is why he considers in a second step the case where pressure and gravity forces are of the same order, leading to another self-similar solution (the Penston-Larson solution) with $\alpha=2$. Actually, we propose another possibility which is in agreement with our numerical results (and actually with many others). We show below that other exponents than $12/7$ are possible for the free fall, some of them being larger than $2$ and therefore providing a possible solution of the initial problem in which gravity always dominates over pressure forces[^14]. 
Our solutions are based on the choice of initial conditions for the radial dependence of the density $\overline{\rho}(a)=\rho_0(1-a^k/A^k)$ where $a$ is the radial variable (same notations as in Penston [@Penston]). The exponent $k$ is left free, although Penston takes $k=2$ with the comment: “we are ’almost always’ correct in taking the form $\overline{\rho}(a)=\rho_0(1-a^2/A^2)$”. We consider a sphere of gas initially at rest and call $M(a,0)$ the mass of gas contained within the sphere of radius $a$ and $\overline{\rho}(a)=3M(a,0)/4\pi a^3$ the average density of that sphere. Using the Gauss theorem, the Euler equation (\[iso2\]) with the pressure neglected is equivalent to $$\begin{aligned} \frac{d^2 r}{dt^2}=\frac{du}{dt}=-\frac{GM(a,0)}{r^2}, \label{ff1}\end{aligned}$$ where $r$ and $u$ are the position and the velocity at time $t$ of a fluid particle located at $r=a$ at $t=0$. This equation can be solved analytically [@mestel] and the solution can be expressed in parametric form as $$\begin{aligned} r=a\cos^2\theta, \label{ff2}\end{aligned}$$ $$\begin{aligned} t=\left (\frac{3}{8\pi G\overline{\rho}(a)}\right )^{1/2}\left (\theta+\frac{1}{2}\sin(2\theta)\right ), \label{ff3}\end{aligned}$$ where $\theta$ runs between $0$ and $\pi/2$. Taking $\theta=\pi/2$, we find that a particle initially at $r=a$ arrives at $r=0$ at a time $t(a)=(3\pi/32G\overline{\rho}(a))^{1/2}$. Setting $a=0^+$ in the foregoing expression, we find that the first particle reaches the center at the time $$\begin{aligned} t_{f}=\left (\frac{3\pi}{32 G \rho_0}\right )^{1/2}, \label{ff4}\end{aligned}$$ where $\rho_0=\overline{\rho}(0)$. This is called the free-fall time. At $t=t_f$, the central density becomes infinite ($\rho_c=+\infty$). 
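The parametric solution (\[ff2\])-(\[ff3\]) and the free-fall time (\[ff4\]) can be checked against a direct integration of the equation of motion (\[ff1\]). The sketch below sets $G=1$, $a=1$ and $\overline{\rho}=3/(4\pi)$ (so that $M(a,0)=1$; these normalizations are ours) and compares the numerically integrated infall time with the parametric formula:

```python
import math

# Sketch: pressureless free fall with G = 1, a = 1, rho_bar = 3/(4*pi),
# so M(a,0) = (4*pi/3)*rho_bar*a**3 = 1 and the EOM is r'' = -1/r**2.
RHO_BAR = 3.0 / (4.0 * math.pi)
T_UNIT = math.sqrt(3.0 / (8.0 * math.pi * RHO_BAR))  # prefactor of (ff3)

def t_parametric(r):
    # invert r = cos(theta)**2, then apply (ff3)
    theta = math.acos(math.sqrt(r))
    return T_UNIT * (theta + 0.5 * math.sin(2.0 * theta))

def t_numeric(r_stop, dt=1e-5):
    # fixed-step RK4 for r'' = -1/r**2, started at rest from r = 1
    def acc(rr):
        return -1.0 / (rr * rr)
    r, u, t = 1.0, 0.0, 0.0
    while r > r_stop:
        k1 = (u, acc(r))
        k2 = (u + 0.5 * dt * k1[1], acc(r + 0.5 * dt * k1[0]))
        k3 = (u + 0.5 * dt * k2[1], acc(r + 0.5 * dt * k2[0]))
        k4 = (u + dt * k3[1], acc(r + dt * k3[0]))
        r += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        u += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        t += dt
    return t

# free-fall time (ff4); equal to pi/(2*sqrt(2)) with these normalizations
t_ff = math.sqrt(3.0 * math.pi / (32.0 * RHO_BAR))
```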
Using the equation of motion (\[ff2\])-(\[ff3\]) giving $r=r(a)$ and the relation $\rho(r,t)r^2\, dr=\rho(a,0) a^2\, da$, which is equivalent to the equation of continuity (\[iso1\]), we can determine the evolution of the density profile $\rho(r,t)$ and of the velocity profile $u(r,t)$ in the pre- and post-collapse regimes. For $t\rightarrow t_f$ and $r$ not too large, they have a self-similar form. The derivation of this self-similar solution follows rather closely the one by Penston with the only difference that his assumption $\overline{\rho}(a)=\rho_0(1-a^2/A^2)$ is replaced by $\overline{\rho}(a)=\rho_0(1-a^k/A^k)$. Therefore, we skip the details of the derivation and directly give the final results. The pre-collapse regime {#App:precoll} ----------------------- In the pre-collapse regime ($t<t_{f}$), the self-similar density and velocity profiles are given in parametric form by $$\begin{aligned} \frac{\rho(r,t)}{\rho_c(t)}=\frac{3}{3+2(3+k)y+(3+2k)y^2}, \label{ff5}\end{aligned}$$ $$\begin{aligned} \frac{r}{r_0(t)}=y^{1/k}(1+y)^{2/3}, \label{ff6}\end{aligned}$$ $$\begin{aligned} \frac{u(r,t)}{u_0(t)}=-\frac{y^{1/k}}{(1+y)^{1/3}}, \label{ff7}\end{aligned}$$ where $y=\frac{1}{2}(\frac{a}{A})^k\frac{t_f}{\delta t}$ goes from $0$ to $+\infty$ (here $\delta t=t_f-t$). For $k=4$, the curves ${\rho(r,t)}/{\rho_c(t)}$ and $-{u(r,t)}/{u_0(t)}$ drawn in Fig. \[Fig:compar\], solid lines, coincide with the self-similar numerical solution (dashed line) of equations (\[eq:Euler.4\])-(\[eq:Euler.5\]) derived in section \[subsec5B\]. (a)![Parametric solutions (\[ff5\])-(\[ff7\]) compared with the self-similar solutions of section \[subsec5B\] for $\alpha=24/11$. (a) Density $\rho(r,t)/\rho_c(t)$ versus $r/r_0(t)$ for $k=4$ in solid line, $R(\xi)/R(0)$ versus $\xi=r/r_0(t)$ for $R_4= - 4$ in dashed line. (b) Velocity $u(r,t)/u_0(t)$ versus $r/r_0(t)$ in solid line; $-1.6 U(\xi)$ in dashed line. 
[]{data-label="Fig:compar"}](R-X-compare-c.pdf "fig:"){height="1.25in"} ![Parametric solutions (\[ff5\])-(\[ff7\]) compared with the self-similar solutions of section \[subsec5B\] for $\alpha=24/11$. (a) Density $\rho(r,t)/\rho_c(t)$ versus $r/r_0(t)$ for $k=4$ in solid line, $R(\xi)/R(0)$ versus $\xi=r/r_0(t)$ for $R_4= - 4$ in dashed line. (b) Velocity $u(r,t)/u_0(t)$ versus $r/r_0(t)$ in solid line; $-1.6 U(\xi)$ in dashed line. []{data-label="Fig:compar"}](u-X-compare-c.pdf){height="1.25in"} (b) In the above parametric representation, the central density is given by the relation $$\begin{aligned} \rho_c(t)=\left (\frac{4}{3\pi}\right )^2\rho_0\left (\frac{t_{f}}{t_{f}-t}\right )^2. \label{ff8}\end{aligned}$$ Using equation (\[ff4\]) it can be rewritten as $$\begin{aligned} \rho_c(t)=\frac{1}{6\pi G}\frac{1}{(t_f-t)^2}, \label{ff8b}\end{aligned}$$ which agrees with the result of Sec. \[subsec5B\]. Moreover, we have $$\begin{aligned} r_0(t)=\left (\frac{3\pi}{4}\right )^{2/3}2^{1/k}A\left |\frac{t_{f}-t}{t_{f}}\right |^{(2k+3)/3k}, \label{ff9}\end{aligned}$$ $$\begin{aligned} u_0(t)=\frac{\pi}{2^{(k-1)/k}}\left (\frac{4}{3\pi}\right )^{1/3}\frac{A}{t_{f}}\left |\frac{t_{f}-t}{t_{f}}\right |^{(3-k)/3k}. \label{ff10}\end{aligned}$$ For $r\rightarrow 0$, we get $$\begin{aligned} \rho(r,t)\sim \rho_c(t) \left\lbrack 1-\frac{2}{3}(3+k)\left (\frac{r}{r_0(t)}\right )^{k}\right\rbrack, \label{ff11}\end{aligned}$$ $$\begin{aligned} u(r,t)\sim -u_0(t) \frac{r}{r_0(t)}. \label{ff12}\end{aligned}$$ For $r\rightarrow +\infty$, we get $$\begin{aligned} \rho(r)\sim \rho_0 \frac{3}{2k+3}\left (\frac{8}{3\pi}A^k\right )^{6/(2k+3)}\frac{1}{r^{6k/(3+2k)}}, \label{ff13}\end{aligned}$$ $$\begin{aligned} u(r)\sim -\left (\frac{8\pi\rho_0 G}{3}\right )^{1/2} \left (\frac{8}{3\pi}A^k\right )^{3/(2k+3)}r^{(3-k)/(3+2k)},\nonumber\\ \label{ff14}\end{aligned}$$ which are independent of time, as they should be.
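The large-distance decay exponent in (\[ff13\]) can also be read off numerically from the parametric representation (\[ff5\])-(\[ff6\]). The following Python fragment (an illustrative check of ours, not part of the derivation) measures the logarithmic slope of the density profile at large $r$ for $k=4$:

```python
import numpy as np

def profile(y, k):
    """Self-similar pre-collapse profile, Eqs. (ff5)-(ff6), in units of rho_c and r_0."""
    rho = 3.0/(3.0 + 2.0*(3 + k)*y + (3 + 2*k)*y**2)
    r = y**(1.0/k)*(1.0 + y)**(2.0/3.0)
    return r, rho

k = 4
# Two points deep in the large-y (large-r) regime
(r1, p1), (r2, p2) = profile(1e6, k), profile(1e7, k)
slope = np.log(p2/p1)/np.log(r2/r1)   # logarithmic slope d(log rho)/d(log r)
alpha_k = 6.0*k/(2.0*k + 3.0)         # predicted exponent 6k/(2k+3); here 24/11
```

The measured slope reproduces $-6k/(2k+3)$ to high accuracy, in agreement with (\[ff13\]).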
We have $\rho\sim r^{-\alpha_k}$ and $u\sim r^{\nu_k}$ with $$\begin{aligned} \alpha_k=\frac{6k}{2k+3},\qquad \nu_k=\frac{3-k}{2k+3}. \label{ff15}\end{aligned}$$ The expressions (\[ff13\]) and (\[ff14\]) also give the density and velocity profiles for all $r$ at $t=t_{f}$. For $k=2$, we get $\alpha_2=12/7$ and $\nu_2=1/7$; for $k\rightarrow +\infty$, we get $\alpha_{\infty}=3$ and $\nu_{\infty}=-1/2$; for $k=4$, we get $\alpha_4=24/11$ and $\nu_4=-1/11$. The exponent $\alpha$ achieves the critical value $2$ for $k=3$. For $k<3$, i.e. $\alpha<2$, the pressure wins over gravity as we approach the collapse time $t_{f}$, and the free-fall solution is not valid anymore. For $k>3$, i.e. $\alpha>2$, the gravity always wins over pressure so the free-fall solution may be valid for all times. Let us discuss the form of the density and velocity profiles depending on $k$. For any $k$, the density profile $\rho(r,t)$ starts from a finite value (for $t<t_f$) and decreases with the distance $r$. The central density $\rho_c(t)$ increases with time and diverges at the collapse time $t_f$. At $t=t_f$, the density profile is singular at the origin. For $k<3$, i.e. $\alpha<2$, the velocity profile $-u(r,t)$ starts from zero at $r=0$ and increases with the distance $r$. The magnitude of the velocity $u_0(t)$ decreases with time and tends to zero at the collapse time $t_f$. At $t=t_f$, the velocity is still zero at the origin. For $k=3$, i.e. $\alpha=2$, the velocity profile $-u(r,t)$ starts from zero at $r=0$ (for $t<t_f$), increases with the distance $r$, and reaches an asymptotic value $u_0$ (the prefactor $u_0(t)$ is constant). At $t=t_f$, the velocity profile has a constant non-zero value $u_0$. For $k>3$, i.e. $\alpha>2$, the velocity profile $-u(r,t)$ starts from zero at $r=0$, increases with the distance $r$, reaches a maximum, and decreases towards zero at large distances. The prefactor $u_0(t)$ increases with time and diverges at the collapse time $t_f$. 
At $t=t_f$, the velocity profile is singular at the origin. The post-collapse regime ------------------------ In the post-collapse regime ($t>t_{f}$), the self-similar density and velocity profiles are given in parametric form by $$\begin{aligned} \frac{\rho(r,t)}{\rho_c(t)}=\frac{3}{3+2(3+k)y+(3+2k)y^2}, \label{ff16}\end{aligned}$$ $$\begin{aligned} \frac{r}{r_0(t)}=|y|^{1/k}|1+y|^{2/3}, \label{ff17}\end{aligned}$$ $$\begin{aligned} \frac{u(r,t)}{u_0(t)}=-\frac{|y|^{1/k}}{|1+y|^{1/3}}, \label{ff18}\end{aligned}$$ where $y$ goes from $-\infty$ to $-1$, and $\rho_c(t)$, $r_0(t)$ and $u_0(t)$ are defined by equations (\[ff8\])-(\[ff10\]) as in the pre-collapse regime. For $r\rightarrow +\infty$, the behavior is the same as in the pre-collapse regime, but for $t>t_f$ and $r\rightarrow 0$, we get $$\begin{aligned} \rho(r,t)\sim \rho_c(t) \frac{3}{2k}\left (\frac{r_0(t)}{r}\right )^{3/2}, \label{ff19}\end{aligned}$$ $$\begin{aligned} u(r,t)\sim -u_0(t) \left (\frac{r_0(t)}{r}\right )^{1/2}. \label{ff20}\end{aligned}$$ We note that the density and the velocity are always singular at $r=0$. For any $k$, the density profile $\rho(r,t)$ is decreasing, as illustrated in Fig. \[Fig:postcoll\]-(a) . For $k<3$, the velocity profile $-u(r,t)$ decreases, reaches a minimum value, and increases. For $k=3$ it decreases towards an asymptotic value $u_0$ and for $k>3$ it decreases towards zero, see Fig. \[Fig:postcoll\]-(b). \(a) ![Parametric solutions of equations (\[ff16\])-(\[ff18\]) and self-similar solutions of equations (\[eq:numRV\]) in the post-collapse regime. (a) density $\rho(r,t)/\rho_c(t)$ versus $r/r_0(t)$ for $k=4$, or $\alpha=24/11$, in solid line, $15 R(\xi)$ versus $\xi=r/r_0(t)$ for $K_U=-1$ in dashed line, (b) velocity $-u(r,t)/u_0(t)$ versus $r/r_0(t)$ in solid line, $-U(\xi)$ in dashed line. 
[]{data-label="Fig:postcoll"}](R-postcoll.pdf "fig:"){height="1.75in"} \(b) ![Parametric solutions of equations (\[ff16\])-(\[ff18\]) and self-similar solutions of equations (\[eq:numRV\]) in the post-collapse regime. (a) density $\rho(r,t)/\rho_c(t)$ versus $r/r_0(t)$ for $k=4$, or $\alpha=24/11$, in solid line, $15 R(\xi)$ versus $\xi=r/r_0(t)$ for $K_U=-1$ in dashed line, (b) velocity $-u(r,t)/u_0(t)$ versus $r/r_0(t)$ in solid line, $-U(\xi)$ in dashed line. []{data-label="Fig:postcoll"}](u-postcoll.pdf "fig:"){height="1.75in"} Finally, the mass contained in the Dirac peak $\rho_{D}({\bf r},t)=M_{D}(t)\delta({\bf r})$ at time $t>t_{f}$ is $$\begin{aligned} M_{D}(t)=\frac{8\pi}{3}2^{(3-k)/k}\rho_0 A^{3}\left (\frac{t-t_{f}}{t_{f}}\right )^{3/k}. \label{ff21}\end{aligned}$$ The mass in the core grows algebraically with an exponent $b_k=3/k$. For $k=2$, we get $b_2=3/2$; for $k\rightarrow +\infty$, we get $b_{\infty}=0$; for $k=3$, we get $b_3=1$; for $k=4$, we get $b_4=3/4$. The homogeneous sphere ---------------------- Finally, for completeness, we recall the solution corresponding to the collapse of a homogeneous sphere with mass $M$, initial density $\rho_0$ and initial radius $R_0$. Since $\overline{\rho}(a)=\rho_0$, we find from equations (\[ff2\])-(\[ff4\]) that all the particles collapse at $r=0$ at the same time $t_f$. Therefore, a Dirac peak $\rho_{D}({\bf r})=M\delta({\bf r})$ is formed at $t=t_f$. The evolution of the radius $R(t)$ of the homogeneous sphere is given by $$\begin{aligned} R(t)=R_0\cos^2\theta,\qquad \frac{t}{t_f}=\frac{2}{\pi}\left (\theta+\frac{1}{2}\sin(2\theta)\right ), \label{ff22}\end{aligned}$$ where $\theta$ runs between $0$ and $\pi/2$. For $t\rightarrow t_f$, we get $$\begin{aligned} R(t)=R_0\left (\frac{3\pi}{4}\right )^{2/3}\left (1-\frac{t}{t_f}\right )^{2/3}. 
\label{ff23}\end{aligned}$$ The density $\rho_c(t)=3M/4\pi R(t)^3$ increases as $$\begin{aligned} \rho_c(t)=\rho_0 \left (\frac{4}{3\pi}\right )^{2}\left (1-\frac{t}{t_f}\right )^{-2}. \label{ff24}\end{aligned}$$ The velocity field is $u(r,t)=-H(t)r$ with $$\begin{aligned} H=-\frac{\dot R}{R}=\frac{2}{3}(t_f-t)^{-1}. \label{ff25}\end{aligned}$$ [99]{} M. V. Penston, Mon. Not. R. astr. Soc. [**[144]{}**]{}, (1969) 425. R.B. Larson, Mon. Not. R. astr. Soc. [**[145]{}**]{}, (1969) 271. M. P. Brenner, T. P. Witelski, J. Stat. Phys. [**93**]{}, (1998) 863. C. Josserand, Y. Pomeau, S. Rica, Journal of Low Temperature Physics [**[145]{}**]{}, (2006) 231. J. Sopik, C. Sire, P.H. Chavanis, Phys. Rev. E [**74**]{}, (2006) 011112. S. Timoshenko, *History of the strength of materials* (Dover, New-York 1983); W. G. B. Britton, J. J. Fendley, M. E. Michael, Am. J. of Phys. [**46**]{}, (1978) 1124. F. Hoyle, W.A. Fowler, Astrophys. J. [**[132]{}**]{}, (1960) 165. R. D. Peters, M. Le Berre, Y. Pomeau, Phys. Rev. E [**86**]{}, (2012) 026207 \[arXiv:1204.1551\]; Y. Pomeau, M. Le Berre, *Chaos, CNN, Memristors and Beyond*, Eds. A. Adamatzky and G. Chen, World Scientific, (2012), chap. 28; Y. Pomeau, M. Le Berre, \[arXiv:1107.3331\] A.A. Dorodnicyn, Am. Math. Soc. Transl. Series One, [**[4]{}**]{}, (1953) 1 \[translated from Priklad Mat. i Mek. [**[11]{}**]{}, (1947) 313\]. P. Coullet, C.R. Mécanique [**340**]{}, (2012) 777. L.D. Landau and E.M. Lifshitz, [*[Statistical physics]{}*]{}, Pergamon, Oxford, (1987), Chapter IX , p. 317 [*[et sq]{}*]{} in second edition. R. Emden, *Gaskugeln anwendungen der mechanischen wärmetheorie auf kosmologische und meteorologie probleme* (Teubner, Leipzig, 1907). R. Ebert, Z. Astrophys. [**37**]{}, (1955) 217. W.B. Bonnor, Mon. Not. R. astr. Soc. [**116**]{}, (1956) 351. W.H. McCrea, Mon. Not. R. astr. Soc. [**117**]{}, (1957) 562. V.A. Antonov, Vest. Leningr. Gos. Univ. [**7**]{}, (1962) 135. D. Lynden-Bell, R. Wood, Mon. Not. R. astr. Soc. 
[**138**]{}, (1968) 495. P.H. Chavanis, Astron. Astrophys. [**381**]{}, (2002) 340. P.H. Chavanis, Astron. Astrophys. [**401**]{}, (2003) 15. J.R. Oppenheimer, G.M. Volkoff, Phys. Rev. [**55**]{}, (1939) 374. M. Colpi, S.L. Shapiro, I. Wasserman, Phys. Rev. Lett. [**57**]{}, (1986) 2485. P.H. Chavanis, Phys. Rev. D [**[84]{}**]{}, (2011) 043531. P.H. Chavanis, T. Harko, Phys. Rev. D [**86**]{}, (2012) 064011. P.H. Chavanis, Phys. Rev. E [**65**]{}, 056123 (2002); P.H. Chavanis, Astron. Astrophys. [**432**]{}, (2005) 117; P.H. Chavanis, Int. J. Mod. Phys. B [**20**]{}, (2006) 3113. P. Painlevé, Bull. Soc. Math. Phys. France [**[28]{}**]{}, (1900), 201; I. L. Edward, [*[Ordinary Differential Equations]{}*]{}, Dover, New-York (1956). P. Hoflich, P. Kumar and J. C. Wheeler, *Cosmic Explosions in Three Dimensions: Asymmetries in Supernovae and Gamma Ray Bursts*, Cambridge University Press, Cambridge (2004), p.276. P.H. Chavanis, Astron. Astrophys. [**451**]{}, (2006) 109. H. Nessyahu, E. Tadmor, Journal of Computational Physics [**87**]{}, (1990) 408; J. Balbas, E. Tadmor, CentPack, http://www.cscamm.umd.edu/centpack. P.H. Chavanis, C. Sire, Phys. Rev. E [**70**]{}, (2004) 026115. A. Yahil, Astrophys. J. [**[265]{}**]{}, (1983) 1047. H. A. Bethe, Rev. Mod. Phys. [**[62]{}**]{}, (1990) 801. G.I. Barrenblatt, Y.B. Zel’dovich, Annual Review of Fluid Mechanics [**4**]{}, (1972) 285. L. Mestel, Q. Jl R. Astr. Soc. [**[6]{}**]{}, (1965) 161. F. H. Shu, Astrophys. J. [**[214]{}**]{}, (1977) 488497. P. Goldreich, S.V. Weber, Astrophys. J. [**238**]{}, (1980) 991. J. Bricmont, A. Kupiainen and G. Lin, Comm. Pure Appl. Math. [**47**]{}, (1994) 285; G. L. Eyink and J. Xin, J. Stat. Phys. [**100**]{}, (2000) 679. C. Sire, P.H. Chavanis, Phys. Rev. E [**69**]{}, (2004) 066109. [^1]: Our initial condition (a star undergoing a loss of equilibrium at a saddle node) differs drastically from the initial conditions taken in [@Penston; @larson; @Brenner]. 
These studies assume an initial constant density over the whole star, $\rho(r)=\rho_0$, which seems very far from any physical situation. Note that, in this context, Brenner and Witelski [@Brenner] point out the existence of solutions which do *not* behave as the theoretical Penston-Larson self-similar solution with $\alpha=2$. The numerical study presented here corresponds to a parameter value $N=50$ in the notation of [@Brenner]. Note that, despite the very different initial conditions, their Figs. 9 and 10, which are for $N=50$, show an asymptotic behavior with $\alpha$ larger than $2$ and a velocity diverging in the core, in agreement with our results (see below). [^2]: Of course, the free-fall solution of a self-gravitating gas is well-known [@Penston]. However, it has been studied assuming either a purely homogeneous distribution of matter or an inhomogeneous distribution of matter behaving as $\rho(r,t)-\rho(0,t)\sim r^2$ for $r\rightarrow 0$, leading to a large distance decay $\rho\sim r^{-\alpha}$ with an exponent $\alpha=12/7$. We show that these assumptions are not relevant to our problem, and we consider for the first time a behavior $\rho(r,t)-\rho(0,t)\sim r^4$ for $r\rightarrow 0$, leading to the large distance decay with the exponent $\alpha=24/11$.
[^4]: Equation (\[fr\]) may also be obtained by multiplying equation (\[eq:5.2\]) by $\hat{r}^2$ and integrating between $0$ and $\hat{r}$. [^5]: We can come back to the original (dimensional) variables by making the substitution $R\rightarrow R/L=R\rho_*^{1/3}/M^{1/3}$ and $T\rightarrow T/(G\rho_{*}L^{2})= T/(G\rho_{*}^{1/3}M^{2/3})$. [^6]: This temperature-radius relation $T(R)$ is the counterpart of the mass-radius relation $M(R)$ of boson stars in general relativity, that also presents a spiralling behavior [@ch]. The dynamical stability of the configurations may be determined from the theory of Poincaré on the linear series of equilibria as explained in [@aaa]. If we plot the temperature $T$ as a function of the parameter $\hat{h}_0$, a change of stability can occur only at a turning point of temperature. Since the system is stable at high temperatures (or low $\hat{h}_0$) because it is equivalent to a polytrope $n=1$ that is known to be stable, we conclude that the upper branch in Fig. \[Fig:spi\] is stable up to the turning point $A$. Then, the series of equilibria loses a mode of stability at each turning point of temperature $T$ and becomes more and more unstable. [^7]: These equations are similar to the Smoluchowski-Poisson system (describing self-gravitating Brownian particles in the strong friction limit) studied in [@cs04] except that it is second order in time instead of first order in time. [^8]: These equations are also valid for small perturbations about an equilibrium state since we can neglect the advection term ${\bf u}\cdot \nabla {\bf u}$ at linear order. [^9]: The authors of [@cs04] study the dynamics of Smoluchowski-Poisson equations close to a saddle-node but for a [*fixed*]{} value of the temperature $T\rightarrow T_c^{-}$. [^10]: By free-fall, we mean a situation where the collapse is due only to the gravitational attraction, i.e. in which pressure forces are neglected. 
This corresponds to the Euler-Poisson system (\[iso1\])-(\[iso3\]) with $p=0$. [^11]: We note that the exponent $\alpha=12/7$ was previously found by Penston [@Penston] for the free-fall of a pressureless gas ($T=0$) by assuming a regular Taylor expansion $\rho=\rho_0+\rho_2 r^2+...$ close to the origin. This solution is valid if $T$ is exactly zero but, when $T>0$, as it is in reality, this solution cannot describe a situation where gravity dominates over pressure (the situation that we are considering) since $\alpha=12/7<2$. This is why Penston [@Penston] and Larson [@larson] considered a self-similar solution of the isothermal Euler-Poisson system (\[eq:Euler.1\])-(\[eq:Euler.2.1\]) where both pressure and gravity terms scale the same way. Alternatively, by assuming a more general expansion $\rho=\rho_0+\rho_k r^k+...$ with $k>2$ close to the origin, we find a new self-similar solution where gravity dominates over pressure. [^12]: See the next subsection where the change of variable in equation (\[eq:appell\]) is a trick converting a problem with algebraic decay into exponential decay permitting spectral analysis. [^13]: Some analogies between the post-collapse dynamics of self-gravitating Brownian particles [@post] and the Bose-Einstein condensation have been discussed in [@bosesopik]. [^14]: It does not mean that the Penston-Larson solution is incorrect. It represents a mathematically exact (type I) self-similar solution of the isothermal Euler-Poisson equations. However, we argue that [*other*]{} (type II) self-similar solutions exist in which gravity overcomes pressure. They are characterized by $\alpha>2$ and by a density behaving as $\rho_0+\rho_k r^{k}$ with $k>3$ close to the origin (see below), while the Penston-Larson solution has $\alpha=2$ and the density behaves as $\rho_0+\rho_2 r^{2}$ close to the origin. 
Our numerical work (despite its limitations, because we follow the collapse only over a few decades in density), together with important physical considerations (e.g. the fact that the velocity profile in our solution decreases to zero instead of tending to a constant value), suggests that these new solutions are relevant for describing the collapse.
--- abstract: 'Some significant quantities in mathematics and physics are most naturally expressed as the Fredholm determinant of an integral operator, most notably many of the distribution functions in random matrix theory. Though their numerical values are of interest, there is no systematic numerical treatment of Fredholm determinants to be found in the literature. Instead, the few numerical evaluations that are available rely on eigenfunction expansions of the operator, if expressible in terms of special functions, or on alternative, numerically more straightforwardly accessible analytic expressions, e.g., in terms of Painlevé transcendents, that have masterfully been derived in some cases. In this paper we close the gap in the literature by studying projection methods and, above all, a simple, easily implementable, general method for the numerical evaluation of Fredholm determinants that is derived from the classical Nyström method for the solution of Fredholm equations of the second kind. Using Gauss–Legendre or Clenshaw–Curtis as the underlying quadrature rule, we prove that the approximation error essentially behaves like the quadrature error for the sections of the kernel. In particular, we get exponential convergence for analytic kernels, which are typical in random matrix theory. The application of the method to the distribution functions of the Gaussian unitary ensemble (GUE), in the bulk and the edge scaling limit, is discussed in detail. After extending the method to systems of integral operators, we evaluate the two-point correlation functions of the more recently studied Airy and $\text{Airy}_1$ processes.' 
author: - 'Folkmar Bornemann[^1]' bibliography: - 'article.bib' title: On the Numerical Evaluation of Fredholm Determinants --- Fredholm determinant, Nyström’s method, projection method, trace class operators, random matrix theory, Tracy–Widom distribution, Airy and $\text{Airy}_1$ processes 65R20, 65F40, 47G10, 15A52 Introduction {#sect:intro} ============ Fredholm’s landmark paper[^2] on linear integral equations is generally considered to be the forefather of those mathematical concepts that finally led to modern functional analysis and operator theory—see the historical accounts in the literature. Fredholm was interested in the solvability of what is now called a Fredholm equation of the second kind, $$\label{eq:fred1} u(x) + z \int_a^b K(x,y) u(y)\,dy = f(x) \qquad (x \in(a,b)),$$ and explicit formulas for the solution thereof, for a right-hand side $f$ and a kernel $K$, both assumed to be continuous functions. He introduced his now famous determinant $$\label{eq:det1} d(z) = \sum_{n=0}^\infty \frac{z^n}{n!} \int_a^b \cdots \int_a^b \det\left(K(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n,$$ which is an entire function of $z\in\C$, and succeeded in showing that the integral equation is uniquely solvable if and only if $d(z)\neq 0$. Realizing the tremendous potential of Fredholm’s theory, Hilbert started working on integral equations in a flurry and, in a series of six papers from 1904 to 1910,[^3] transformed the determinantal framing to the beginnings of what later, in the hands of Schmidt, Carleman, Riesz, and others, would become the theory of compact operators in Hilbert spaces. Consequently, over the years Fredholm determinants have faded from the core of general accounts on integral equations to the historical remarks section—if they are mentioned at all.[^4] So, given this state of affairs, why then study the numerical evaluation of the Fredholm determinant $d(z)$?
The reason is, simply enough, that the Fredholm determinant and the more general notions, generalizing (\[eq:fred1\]) and (\[eq:det1\]) to $$u + z A u = f,\qquad d(z) = \det(I+z A),$$ for certain classes of compact operators $A$ on Hilbert spaces, have always remained important tools in operator theory and mathematical physics [@MR1744872; @MR2154153]. In turn, they have found many significant applications: e.g., in atomic collision theory [@MR0044404; @Moi77], inverse scattering [@MR0406201], in Floquet theory of periodic differential equations [@0287.34016], in the infinite-dimensional method of stationary phase and Feynman path integrals [@MR0474436; @MR1284645], as the two-point correlation function of the two-dimensional Ising model [@Wilk78], in renormalization in quantum field theory [@MR2154153], as distribution functions in random matrix theory [@MR2129906; @MR1677884; @MR1659828] and combinatorial growth processes [@Johan00; @MR1933446; @Sasa05; @MR2363389]. As Lax puts it most aptly upon including Fredholm’s theory as a chapter of its own in his recent textbook on functional analysis: “Since this determinant appears in some modern theories, it is time to resurrect it.” In view of this renewed interest in operator determinants, what numerical methods are available for their evaluation? Interestingly, this question has apparently never—at least to our knowledge—been systematically addressed in the numerical analysis literature.[^5] Even experts in the applications of Fredholm determinants commonly seem to have been thinking [@Spohn1] that an evaluation is only possible if either the eigenvalues of the integral operator are, more or less, explicitly known or if an alternative analytic expression has been found that is numerically more accessible—in each specific case anew, lacking a general procedure.
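As a point of reference, the defining series (\[eq:det1\]) is itself directly computable for simple kernels. The following Python fragment (a toy example of ours: the rank-one kernel $K(x,y)=xy$ on $(0,1)$, for which all determinants with $n\geq 2$ vanish and hence $d(z)=1+z/3$ exactly) truncates the series after three terms, approximating each $n$-fold integral by a tensorized Gauss–Legendre rule:

```python
import math
from itertools import product
import numpy as np

K = lambda x, y: x*y   # rank-one kernel on (0,1) (illustrative choice); d(z) = 1 + z/3
z = 2.0
m = 20

# Gauss-Legendre rule mapped from [-1,1] to [0,1], used for the n-fold integrals
x, w = np.polynomial.legendre.leggauss(m)
x, w = 0.5*(x + 1.0), 0.5*w

d = 1.0  # n = 0 term of the series (det1)
for n in (1, 2, 3):
    s = 0.0
    for idx in product(range(m), repeat=n):
        t = x[list(idx)]
        s += np.prod(w[list(idx)])*np.linalg.det(K(t[:, None], t[None, :]))
    d += z**n/math.factorial(n)*s
```

The $n=1$ term contributes $z\int_0^1 t^2\,dt=z/3$, while the $n=2,3$ determinants of the rank-one matrices $(t_pt_q)$ vanish up to rounding, so the truncated series already agrees with $1+z/3$ to machine precision. For generic kernels, of course, the cost of the multiple integrals grows quickly with $n$, which is precisely why a different approach is needed.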
#### The Nyström-type method advocated in this paper In contrast, we study a simple general numerical method for Fredholm determinants which is exceptionally efficient for smooth kernels, yielding small *absolute* errors (i.e., errors that are small with respect to the scale $\det(I)=1$ inherently given by the operator determinant). To this end we follow the line of thought of the classical quadrature method for the numerical solution of the Fredholm equation (\[eq:fred1\]). Namely, given a quadrature rule $$Q(f) = \sum_{j=1}^m w_j\, f(x_j) \approx \int_a^b f(x)\,dx,$$ Nyström discretized (\[eq:fred1\]) as the linear system $$\label{eq:ny1} u_i + z \sum_{j=1}^m w_j K(x_i,x_j) u_j = f(x_i)\qquad (i=1,\ldots,m),$$ which has to be solved for $u_i \approx u(x_i)$ ($i=1,\ldots,m)$. Nyström’s method is extremely simple and, yet, extremely effective for *smooth* kernels. So much so that Delves and Mohamed, in a chapter comparing different numerical methods for Fredholm equations of the second kind, write: > Despite the theoretical and practical care lavished on the more complicated algorithms, the clear winner of this contest has been the Nyström routine with the $m$-point Gauss–Legendre rule. This routine is extremely simple; it includes a call to a routine which provides the points and weights for the quadrature rule, about twelve lines of code to set up the Nyström equations and a call to the routine which solves these equations. Such results are enough to make a numerical analyst weep.
By keeping this conceptual and algorithmic simplicity, the method studied in this paper approximates the Fredholm determinant $d(z)$ simply by the determinant of the $m\times m$-matrix that is applied to the vector $(u_i)$ in the Nyström equation (\[eq:ny1\]): $$\label{eq:ny} d_Q(z) = \det \left(\delta_{ij} + z\,w_i K(x_i,x_j)\right)_{i,j=1}^m.$$ If the weights $w_j$ of the quadrature rule are positive (which is always the better choice), we will use the *equivalent* symmetric variant $$\label{eq:ny2} d_Q(z) = \det\left(\delta_{ij} + z\, w_i^{1/2} K(x_i,x_j) w_j^{1/2}\right)_{i,j=1}^m.$$ Using Gauss–Legendre or Clenshaw–Curtis quadrature rules, the computational cost[^6] of the method is of order $O(m^3)$. The implementation in Matlab, or Mathematica, is straightforward and takes just a few lines of code.[^7] In Matlab: > function d = DetNystrom(K,z,a,b,m) > [w,x] = QuadratureRule(a,b,m); > w = sqrt(w); > [xj,xi] = meshgrid(x,x); > d = det(eye(m)+z*(w'*w).*K(xi,xj)); In Mathematica: \[prog:math\] ![image](FredDetMathematica.pdf){width="83.50000%"} Strictly speaking we are not the first to suggest this simple method, though. In fact, it was Fredholm himself [-@35.0378.02 pp. 52–60] who, in his very first paper on integral equations, motivated[^8] the expression (\[eq:det1\]) of the Fredholm determinant by essentially using this method with the *rectangular rule* for quadrature, proving locally uniform convergence; the same motivational argument has since been repeated in many influential expositions, mostly just heuristically, without a proof of convergence (one author speaks of a “poetic license” to be applied “without too many scruples”). Quite astonishingly, despite all its presence as a motivational tool in the expositions of the classical theory, we have found just one example of the use of this method (with Gauss–Legendre quadrature) in an actual numerical calculation: a paper by physicists on low-energy elastic scattering of electrons from hydrogen atoms.
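A transliteration of the routine into Python is equally short. The sketch below (ours; `numpy.sinc` conveniently encodes $\sin(\pi z)/(\pi z)$) implements the symmetric variant (\[eq:ny2\]) with Gauss–Legendre quadrature and reproduces, already with $m=5$, the sine-kernel value $E_2(0;0.1)$ discussed in the Examples paragraph below:

```python
import numpy as np

def det_nystrom(kernel, z, a, b, m):
    """Nystrom-type approximation (eq:ny2) of det(I + z A) for a kernel on (a, b)."""
    x, w = np.polynomial.legendre.leggauss(m)
    x = 0.5*(b - a)*x + 0.5*(b + a)        # Gauss-Legendre nodes mapped to (a, b)
    w = 0.5*(b - a)*w                      # corresponding weights
    sw = np.sqrt(w)                        # w_i^{1/2}, weights assumed positive
    M = np.eye(m) + z*(sw[:, None]*kernel(x[:, None], x[None, :])*sw[None, :])
    return np.linalg.det(M)

# Sine kernel sin(pi(x-y))/(pi(x-y)); E_2(0;s) = det(I - A_s) on (0, s) with s = 0.1
E2 = det_nystrom(lambda x, y: np.sinc(x - y), -1.0, 0.0, 0.1, 5)
```

The determinant of this $5\times5$ matrix already matches $E_2(0;0.1)=0.90002\,72717\,98259\cdots$ to essentially machine precision, illustrating the exponential convergence for analytic kernels proved later in the paper.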
However, the error estimates (Theorem \[thm:nyerr\]) that we will give in this paper seem to be new at least; we will prove that the approximation error essentially behaves like the quadrature error for the sections $x \mapsto K(x,y)$ and $y \mapsto K(x,y)$ of the kernel. In particular, we will get exponential convergence rates for analytic kernels. #### Examples {#ex:intro} Perhaps the generality and efficiency offered by our direct numerical approach to Fredholm determinants, as compared to analytic methods if they are available at all, is best appreciated by an example. The probability $E_2(0;s)$ that an interval of length $s$ does not contain, in the bulk scaling limit of level spacing $1$, an eigenvalue of the Gaussian unitary ensemble (GUE) is given by the Fredholm determinant of the sine kernel, $$E_2(0;s) = \det\left(I - A_s\projected{L^2(0,s)}\right),\qquad A_s u(x) = \int_0^s \frac{\sin(\pi(x-y))}{\pi(x-y)} u(y)\,dy\,.$$ Gaudin has further shown that the eigenfunctions of this selfadjoint integral operator are exactly given by a particular family of special functions, namely the radial prolate spheroidal wave functions with certain parameters. Using tables [@MR0074130] of these functions he was finally able to evaluate $E_2(0;s)$ numerically. On the other hand, in an admirably intricate analytic [tour de force]{}, Jimbo, Miwa, Môri, and Sato expressed the Fredholm determinant of the sine kernel as $$\label{eq:jimbo} E_2(0;s) = \exp\left(\int_0^{\pi s} \frac{\sigma(x)}{x}\,dx\right)$$ in terms of the sigma, or Hirota, representation of the fifth Painlevé equation, namely $$(x \sigma'')^2 + 4(x \sigma' - \sigma)(x \sigma' - \sigma + \sigma'^2) = 0,\qquad \sigma(x) \sim -\frac{x}{\pi} - \frac{x^2}{\pi^2} \quad(x \to 0);$$ see also the account of Tracy and Widom.
With respect to the numerical evaluation, Tracy and Widom conclude in a footnote, most probably by comparing to Gaudin’s method: “Without the Painlevé representations, the numerical evaluation of the Fredholm determinants is quite involved.” However, one does not need to know more than the smooth kernel $\sin(\pi(x-y))/(\pi(x-y))$ itself to approximate $E_2(0;s)$ with the method of this paper. For instance, the Gauss–Legendre rule with just $m=5$ quadrature points already gives, in 0.2 ms computing time, 15 accurate digits of the value $$E_2(0,0.1) = 0.90002\,72717\,98259\,\cdots,$$ that is, by calculating the determinant of a $5\times 5$-matrix easily built from the kernel. Even though it is satisfying to have an alternative and simpler way of calculating already known quantities, it is far more exciting to be able to calculate quantities that otherwise have defeated numerical evaluations so far. For instance, the joint distribution functions of the Airy and the $\text{Airy}_1$ processes are given as determinants of systems of integral operators. Even though a nonlinear partial differential equation of third order in three variables has been found for the logarithm of the joint distribution function of the Airy process at two different times, this masterful analytic result is probably of next to no numerical use. And in any case, no such analytic results are yet known for the $\text{Airy}_1$ process. However, the Nyström-type method studied in this paper can easily be extended to systems of integral operators. In this way, we have succeeded in evaluating the two-point correlation functions of both stochastic processes; see Section \[sect:matrixkernels\]. #### Outline of this paper For the proper functional analytic setting, in Section \[sect:trace\] we review some basic facts about trace class and Hilbert–Schmidt operators.
In Section \[sect:det\] we review the concept of the determinant $\det(I+A)$ for trace class operators $A$ and its relation to the Fredholm determinant. In Section \[sect:cond\] we study perturbation bounds implying that numerical calculations of determinants can only be expected to be accurate with respect to [*absolute*]{} errors in general. In Section \[sect:proj\] we use the functional analytic formulation of the problem to obtain convergence estimates for projection methods of Galerkin and Ritz–Galerkin type. The convergence rate is shown to depend on a proper balance between the decay of the singular values of the operator and the growth of bounds on the derivatives of the corresponding singular functions. This is in sharp contrast with Section \[sect:quad\], where we study the convergence of the Nyström-type method (\[eq:ny2\]) by directly addressing the original definition of the Fredholm determinant. Here, only the smoothness properties of the kernel enter the convergence estimates. It turns out that, for kernels of low regularity, the order of convergence of the Nyström-type method can be even higher than that of a Ritz–Galerkin method. In Section \[sect:random\] we give examples for the exponential convergence rates enjoyed by analytic kernels. To this end we discuss the details of the numerical evaluation of the determinants of the sine and Airy kernels, which express the probability distributions $E_2(0;s)$ and $F_2(s)$ (the Tracy–Widom distribution) of random matrix theory. Finally, after extending the Nyström-type method to systems of integral operators we report in Section \[sect:matrixkernels\] on the numerical evaluation of the two-point correlation functions of the Airy and $\text{Airy}_1$ processes. Trace Class and Hilbert–Schmidt Operators {#sect:trace} ========================================= We begin by recalling some basic material about the spectral theory of nonselfadjoint compact operators, which can be found, e.g., in the standard monographs.
We consider a complex, separable Hilbert space $\mathcal{H}$ with an inner product $\langle\cdot,\cdot\,\rangle$ that is linear in the [*second*]{} factor and conjugate linear in the first. The set of bounded linear operators will be denoted by $\mathcal{B}(\mathcal{H})$, the compact operators by $\mathcal{J}_\infty(\mathcal{H})$. The spectrum of a compact operator $A \in \mathcal{J}_\infty(\mathcal{H})$ has no non-zero limit point; its non-zero points are eigenvalues of finite algebraic multiplicity. We list these eigenvalues as $(\lambda_n(A))_{n=1}^{N(A)}$, counting multiplicity, where $N(A)$ is either a finite non-negative integer or infinity, and order them by $$|\lambda_1(A)| \geq |\lambda_2(A)| \geq \cdots.$$ The positive eigenvalues $$s_1(A) \geq s_2(A) \geq \cdots > 0$$ of the associated positive-semidefinite, selfadjoint operator $$|A|=(A^* A)^{1/2}$$ are called the singular values of $A$. Correspondingly, there is the Schmidt or singular-value representation of $A$, that is, the norm convergent expansion $$\label{eq:schmidt} A = \sum_{n=1}^{N(|A|)} s_n(A) \langle u_n,\cdot\,\rangle v_n,$$ where the $u_n$ and $v_n$ are certain (not necessarily complete) orthonormal sets in $\mathcal{H}$. Note that $s_n(A)=|\lambda_n(A)|$ if $A$ is selfadjoint. 
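In the finite dimensional case these notions are directly computable. The following numpy sketch (using random test matrices, purely for illustration) checks the Schmidt representation against the singular value decomposition and the identity $s_n(A)=|\lambda_n(A)|$ for a selfadjoint $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Schmidt representation A = sum_n s_n <u_n, .> v_n: the v_n are the
# columns of V, the u_n the rows of Uh, i.e. A = V diag(s) Uh
V, s, Uh = np.linalg.svd(A)
assert np.allclose(A, V @ np.diag(s) @ Uh)

# for a selfadjoint A the singular values are the moduli of the eigenvalues
H = A + A.T
s_H = np.linalg.svd(H, compute_uv=False)         # descending order
lam = np.linalg.eigvalsh(H)                      # real eigenvalues
assert np.allclose(s_H, np.sort(np.abs(lam))[::-1])
```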
In general we have Weyl’s inequality $$\label{eq:weyl} \sum_{n=1}^N |\lambda_n(A)|^p \;\leq\; \sum_{n=1}^N s_n(A)^p\qquad (N \leq N(A),\;1\leq p< \infty).$$ The Schatten–von Neumann classes of compact operators are defined as $$\mathcal{J}_p(\mathcal{H}) = \{A \in \mathcal{J}_\infty(\mathcal{H}) \;:\; \sum_{n=1}^{N(|A|)} s_n(A)^p < \infty \}\qquad (1\leq p < \infty)$$ with the corresponding norm[^9] $$\|A\|_{\scriptscriptstyle\mathcal{J}_p} = \left(\sum_{n=1}^{N(|A|)} s_n(A)^p\right)^{1/p}.$$ The operator norm on $\mathcal{J}_\infty(\mathcal{H})$ perfectly fits into this setting if we realize that $$\|A\| = s_1(A) = \max_{n=1,\ldots,N(|A|)} s_n(A) = \|A\|_{\scriptscriptstyle\mathcal{J}_\infty}.$$ There are the continuous embeddings $\mathcal{J}_p(\mathcal{H}) \subset \mathcal{J}_q(\mathcal{H})$ for $1\leq p \leq q \leq \infty$ with $$\|A\|_{\scriptscriptstyle\mathcal{J}_q} \leq \|A\|_{\scriptscriptstyle\mathcal{J}_p}.$$ The classes $\mathcal{J}_p(\mathcal{H})$ are two-sided operator ideals in $\mathcal{B}(\mathcal{H})$, that is, for $A \in \mathcal{J}_p(\mathcal{H})$ ($1\leq p\leq\infty$) and $B \in \mathcal{B}(\mathcal{H})$ we have $AB,BA \in \mathcal{J}_p(\mathcal{H})$ with $$\label{eq:ideal} \|A B\|_{\scriptscriptstyle\mathcal{J}_p} \leq \|A\|_{\scriptscriptstyle\mathcal{J}_p} \|B\|,\qquad \|B A\|_{\scriptscriptstyle\mathcal{J}_p} \leq \|B\|\, \|A\|_{\scriptscriptstyle\mathcal{J}_p}.$$ Of special interest to us are the [*trace class operators*]{} $\mathcal{J}_1(\mathcal{H})$ and the [*Hilbert–Schmidt operators*]{} $\mathcal{J}_2(\mathcal{H})$. The product of two Hilbert–Schmidt operators is of trace class: $$\|AB\|_{\scriptscriptstyle\mathcal{J}_1} \leq \|A\|_{\scriptscriptstyle\mathcal{J}_2} \|B\|_{\scriptscriptstyle\mathcal{J}_2}\qquad (A,B\in \mathcal{J}_2(\mathcal{H})).$$ The *trace* of a trace class operator $A$ is defined by $$\tr(A) = \sum_{n=1}^\infty \langle u_n,Au_n\rangle$$ for any orthonormal basis $(u_n)_n$. 
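For matrices, both Weyl’s inequality and the trace class bound for products of Hilbert–Schmidt operators can be verified numerically; a small numpy sketch (random test matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

def schatten(M, p):
    """Schatten norm ||M||_Jp via the singular values."""
    return np.sum(np.linalg.svd(M, compute_uv=False)**p)**(1 / p)

# Weyl's inequality for p = 1: sum |lambda_n| <= sum s_n
lam = np.linalg.eigvals(A)
assert np.sum(np.abs(lam)) <= schatten(A, 1) + 1e-12

# product of two Hilbert-Schmidt operators is trace class:
# ||AB||_J1 <= ||A||_J2 ||B||_J2
assert schatten(A @ B, 1) <= schatten(A, 2) * schatten(B, 2) + 1e-12
```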
A deep theorem of Lidskii’s [@MR2154153 Chap. 3] tells us that $$\label{eq:tracedef} \tr(A) = \sum_{n=1}^{N(A)} \lambda_n(A),$$ which implies by Weyl’s inequality (\[eq:weyl\]) that $$\label{eq:sum1lambda} |\tr(A)| \leq \sum_{n=1}^{N(A)} |\lambda_n(A)| \leq \tr(|A|) = \|A\|_{\scriptscriptstyle\mathcal{J}_1}.$$ Likewise, for a Hilbert–Schmidt operator $A \in \mathcal{J}_2(\mathcal{H})$ we have $$\label{eq:sum2lambda} \tr(A^2) = \sum_{n=1}^{N(A)} \lambda_n(A)^2,\qquad |\tr(A^2)| \leq \sum_{n=1}^{N(A)} |\lambda_n(A)|^2 \leq \|A\|_{\scriptscriptstyle\mathcal{J}_2}^2.$$ #### Integral operators with $L^2$-kernel In the Hilbert space $\mathcal{H}=L^2(a,b)$ of square-integrable functions on a finite interval $(a,b)$ the Hilbert–Schmidt operators are exactly given by the integral operators with $L^2$-kernel. That is, there is a one-to-one correspondence [@MR2154153 Thm. 2.11] between $A \in \mathcal{J}_2(\mathcal{H})$ and $K \in L^2((a,b)^2)$ mediated through $$\label{eq:intop} A u(x) = \int_a^b K(x,y) u(y)\,dy$$ with equality of norms $\|A\|_{\scriptscriptstyle\mathcal{J}_2} = \|K\|_{L^2}$: the spaces $\mathcal{J}_2(\mathcal{H})$ and $L^2((a,b)^2)$ are thus isometrically isomorphic. In particular, by (\[eq:sum2lambda\]) and a well-known basic result on infinite products [@MR0183997 p. 232], we get for such operators that $$\prod_{n=1}^{N(A)} (1+\lambda_n(A)) \text{ converges (absolutely) } \;\Leftrightarrow\; \sum_{n=1}^{N(A)} \lambda_n(A) \text{ converges (absolutely)}.$$ Since the product is a natural candidate for the definition of $\det(I+A)$, it makes sense to require $A$ to be of trace class; for then, by (\[eq:sum1lambda\]), the absolute convergence of the sum can be guaranteed. #### Integral operators with a continuous kernel A continuous kernel $K \in C([a,b]^2)$ is certainly square-integrable. Therefore, the induced integral operator (\[eq:intop\]) defines a Hilbert–Schmidt operator $A$ on the Hilbert space $\mathcal{H}=L^2(a,b)$.
Moreover, unlike for general $L^2$-kernels, the integral $$\int_a^b K(x,x)\,dx$$ over the diagonal of $(a,b)^2$ is now well defined and constitutes, in analogy to the matrix case, a “natural” candidate for the trace of the integral operator. Indeed, if an integral operator $A$ with continuous kernel is of trace class, one can prove [@MR1744872 Thm. 8.1] $$\label{eq:tracediag} \tr(A) = \int_a^b K(x,x)\,dx.$$ Unfortunately, however, just the continuity of the kernel $K$ does not guarantee the induced integral operator $A$ to be of trace class.[^10] Yet, there is some encouraging positive experience stated by : > However, the counter-examples which prevent nice theorems from holding are generally rather contrived so that I have found the following to be true: If an integral operator with kernel $K$ occurs in some ‘natural’ way and $\int |K(x,x)|\,dx < \infty$, then the operator can (almost always) be proven to be trace class (although sometimes only after some considerable effort). Nevertheless, we will state some simple criteria that often work well: 1. If the continuous kernel $K$ can be represented in the form $$K(x,y) = \int_c^d K_1(x,z) K_2(z,y)\, dz\qquad (x,y \in [a,b])$$ with $K_1 \in L^2((a,b)\times(c,d))$, $K_2 \in L^2((c,d)\times(a,b))$, then the induced integral operator $A$ is trace class on $L^2(a,b)$. This is because $A$ can then be written as the product of two Hilbert–Schmidt operators. 2. If $K(x,y)$ and $\partial_y K(x,y)$ are continuous on $[a,b]^2$, then the induced integral operator $A$ is trace class on $L^2(a,b)$. This is because we can write $A$ by partial integration in the form $$Au(x) = K(x,b) \int_a^b u(y)\,dy - \int_a^b \left( \int_y^b \partial_z K(x,z)\,dz \right) u(y)\,dy$$ as a sum of a rank one operator and an integral operator that is trace class by the first criterion. In particular, integral operators with smooth kernels are trace class [@MR1892228 p. 345]. 3.
A continuous Hermitian[^11] kernel $K(x,y)$ on $[a,b]$ that satisfies a Hölder condition in the second argument with exponent $\alpha>1/2$, namely $$|K(x,y_1)-K(x,y_2)| \leq C |y_1 - y_2|^\alpha\qquad (x,y_1,y_2 \in [a,b]),$$ induces an integral operator $A$ that is trace class on $L^2(a,b)$; see . 4. If the continuous kernel $K$ induces a selfadjoint, positive-semidefinite integral operator $A$, then $A$ is trace class [@MR1744872 Thm. IV.8.3]. The hypothesis on $A$ is fulfilled for positive-semidefinite kernels $K$, that is, if $$\label{eq:semidef} \sum_{j,k=1}^n \overline{z_j}z_k K(x_j,x_k)\geq 0$$ for any $x_1,\ldots,x_n \in (a,b)$, $z \in \C^n$ and any $n \in \N$ [@MR2154153 p. 24]. Definition and Properties of Fredholm and Operator Determinants {#sect:det} =============================================================== In this section we give a general operator theoretical definition of infinite dimensional determinants and study their relation to the Fredholm determinant. For a trace class operator $A \in \mathcal{J}_1(\mathcal{H})$ there are several equivalent constructions that all define one and the same *entire* function $$d(z) = \det(I+zA)\qquad(z \in \C);$$ in fact, each construction has been chosen at least once, in different places of the literature, as the basic definition of the operator determinant: 1. Define the determinant by the locally uniformly convergent (infinite) product $$\label{eq:detlidskii} \det(I+z A) = \prod_{n=1}^{N(A)} (1+z\lambda_n(A)),$$ which possesses zeros exactly at $z_n=-1/\lambda_n(A)$, counting multiplicity. 2. Define the determinant as follows. Given any sequence of finite rank operators $A_n$ with $A_n \to A$ converging in trace class norm, the sequence of finite dimensional determinants[^12] $$\label{eq:findef} \det\left(I + z A_n\projected{\range(A_n)}\right)$$ (which are polynomials in $z$) converges locally uniformly to $\det(I+z A)$, independently of the choice of the sequence $A_n$.
The existence of at least one such sequence follows from the singular value representation (\[eq:schmidt\]). 3. Define the determinant by what is often called Plemelj’s formula[^13] $$\label{eq:detdunford} \det(I+z A) = \exp(\tr\log(I+zA)) = \exp\left(-\sum_{n=1}^\infty \frac{(-z)^n}{n} \tr A^n \right),$$ which converges for $|z| < 1/|\lambda_1(A)|$ and can be analytically continued as an entire function to all $z \in \C$. 4. Define the determinant, most elegantly, with a little exterior algebra [@MR0224623]. With $\bigwedge^n(A) \in \mathcal{J}_1(\bigwedge^n(\mathcal{H}))$ being the $n^{\text{th}}$ exterior product of $A$, the power series $$\label{eq:detgrothen} \det(I+z A) = \sum_{n=0}^\infty z^n \tr \bigwedge\nolimits^n(A)$$ converges for all $z \in \C$. Note that $\tr \bigwedge\nolimits^n(A) = \sum_{i_1<\cdots < i_n} \lambda_{i_1}(A)\cdots\lambda_{i_n}(A)$ is just the $n^{\text{th}}$ symmetric function of the eigenvalues of $A$. Proofs of the equivalence can be found in [@MR1744872 Chap. 2] and [@MR2154153 Chap. 3]. We will make use of all of them in the course of this paper. We state two important properties [@MR2154153 Thm. 3.5] of the operator determinant: First its multiplication formula, $$\label{eq:mult} \det(I + A + B + A B) = \det(I+B)\det(I+A)\qquad (A,B \in \mathcal{J}_1(\mathcal{H})),$$ then the characterization of invertibility: $\det(I+A) \neq 0$ if and only if the inverse operator $(I+A)^{-1}$ exists. #### The matrix case In Section \[sect:quad\] we will study the convergence of finite dimensional determinants to operator determinants in terms of the power series (\[eq:detgrothen\]). Therefore, we give this series a more common look and feel for the case of a matrix $A \in \C^{m\times m}$.
By evaluating the traces with respect to a Schur basis of $A$ one gets $$\tr \bigwedge\nolimits^n(A) = \sum_{i_1<\cdots <i_n} \det(A_{i_p, i_q})_{p,q=1}^n = \frac{1}{n!} \sum_{i_1,\ldots,i_n=1}^m \det(A_{i_p, i_q})_{p,q=1}^n,$$ that is, the sum of all $n \times n$ principal minors of $A$. This yields the von Koch form of the matrix determinant $$\label{eq:vonKoch} \det(I+z A) = \sum_{n=0}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^m \det(A_{i_p, i_q})_{p,q=1}^n\qquad (A \in \C^{m\times m}).$$ (In fact, the series must terminate at $n=m$ since $\det(I+zA)$ is a polynomial of degree $m$ in $z$.) A more elementary proof of this classical formula, by a Taylor expansion of the polynomial $\det(I+z A)$, can be found, e.g., in . #### The Fredholm determinant for integral operators with continuous kernel Suppose that the continuous kernel $K \in C([a,b]^2)$ induces an integral operator $A$ that is trace class on the Hilbert space $\mathcal{H}=L^2(a,b)$. Then, the traces of $\bigwedge^n(A)$ evaluate to [@MR2154153 Thm. 3.10] $$\tr \bigwedge\nolimits^n(A) = \frac{1}{n!} \int_{(a,b)^n} \det(K(t_p,t_q))_{p,q=1}^n\,dt_1\cdots\,dt_n \qquad (n=0,1,2,\ldots).$$ The power series representation (\[eq:detgrothen\]) of the operator determinant is therefore exactly Fredholm’s expression (\[eq:det1\]), that is, $$\label{eq:detfred} \det(I+zA) = \sum_{n=0}^\infty \frac{z^n}{n!} \int_{(a,b)^n} \det(K(t_p,t_q))_{p,q=1}^n\,dt_1\cdots\,dt_n.$$ The similarity with von Koch’s formula (\[eq:vonKoch\]) is striking and, in fact, it was just an analogy in form that had led Fredholm to conceive his expression for the determinant. It is important to note, however, that the right hand side of (\[eq:detfred\]) makes perfect sense for any continuous kernel, independently of whether the corresponding integral operator is trace class or not.
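Von Koch’s formula (\[eq:vonKoch\]) is easily checked in the matrix case; the following numpy sketch (a random $4\times 4$ matrix, purely illustrative) sums all principal minors and compares with the ordinary determinant:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
m = 4
A = rng.standard_normal((m, m))
z = 0.7

# det(I + zA) = sum_n z^n * (sum of all n x n principal minors of A);
# summing over ordered index sets absorbs the factor 1/n!
total = 1.0
for n in range(1, m + 1):
    total += z**n * sum(np.linalg.det(A[np.ix_(idx, idx)])
                        for idx in combinations(range(m), n))

assert np.isclose(total, np.linalg.det(np.eye(m) + z * A))
```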
#### The regularized determinant for Hilbert–Schmidt operators For a general Hilbert–Schmidt operator we only know the convergence of $\sum_n \lambda_n(A)^2$ but not of $\sum_n \lambda_n(A)$. Therefore, the product (\[eq:detlidskii\]), which is meant to define $\det(I+zA)$, is not known to converge in general. Instead, one introduces the entire function[^14] $$\det\nolimits_2(I+z A) = \prod_{n=1}^{N(A)} (1 + z \lambda_n(A)) e^{-z\lambda_n(A)}\qquad (A \in \mathcal{J}_2(\mathcal{H})),$$ which likewise possesses zeros exactly at $z_n=-1/\lambda_n(A)$, counting multiplicity. Plemelj’s formula (\[eq:detdunford\]) becomes [@MR1744872 p. 167] $$\det\nolimits_2(I+z A) = \exp\left(-\sum_{n=2}^\infty \frac{(-z)^n}{n} \tr A^n \right)$$ for $|z|<1/|\lambda_1(A)|$, which makes perfect sense since $A^2$, $A^3$, … are trace class when $A$ is Hilbert–Schmidt.[^15] Note that for trace class operators we have $$\det(I+zA) = \det\nolimits_2(I+z A) \exp(z\cdot\tr A)\qquad (A \in \mathcal{J}_1(\mathcal{H})).$$ For integral operators $A$ of the form (\[eq:intop\]) with a continuous kernel $K$ on $\mathcal{H}=L^2(a,b)$ the Fredholm determinant (\[eq:det1\]) is related to the Hilbert–Carleman determinant by the equation [@MR0390680 p. 250] $$d(z) = \det\nolimits_2(I+z A) \exp(z \int_a^b K(x,x)\,dx)$$ in general, even if $A$ is not of trace class. It is important to note, though, that if $A \not\in \mathcal{J}_1(\mathcal{H})$ with such a kernel, we have $\int_a^b K(x,x)\,dx \neq \tr(A)$ simply because $\tr(A)$ is then no longer well defined by (\[eq:tracedef\]).
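For a matrix, the relation between the two determinants reduces to an identity for the eigenvalues, which the following numpy sketch (random test matrix, purely illustrative) verifies:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)) / 6
z = 1.5
lam = np.linalg.eigvals(A)

# regularized (Hilbert-Carleman) determinant: product of (1+z*lam)*exp(-z*lam)
det2 = np.prod((1 + z * lam) * np.exp(-z * lam))
# ordinary determinant det(I + zA) as a product over the eigenvalues
det1 = np.prod(1 + z * lam)

assert np.isclose(det1, np.linalg.det(np.eye(6) + z * A))
# det(I+zA) = det_2(I+zA) * exp(z * tr A)
assert np.isclose(det1, det2 * np.exp(z * np.trace(A)))
```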
Perturbation Bounds {#sect:cond} =================== In studying the conditioning of operator and matrix determinants we rely on the fundamental perturbation estimate $$\label{eq:perturb0} |\det(I+A) - \det(I+B)| \leq \|A-B\|_{\scriptscriptstyle\mathcal{J}_1} \exp\left(1+\max(\|A\|_{\scriptscriptstyle\mathcal{J}_1},\|B\|_{\scriptscriptstyle\mathcal{J}_1})\right)$$ for trace class operators, which can be beautifully proven by means of complex analysis [@MR2154153 p. 45]. This estimate can be put into the form $$\label{eq:perturb1} |\det(I+(A+E)) - \det(I+A)| \leq e^{1+\|A\|_{\scriptscriptstyle\mathcal{J}_1}} \cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1} + O(\|E\|_{\scriptscriptstyle\mathcal{J}_1}^2),$$ showing that the condition number $\kappa_{\text{abs}}$ of the operator determinant $\det(I+A)$, with respect to absolute errors measured in trace class norm, is bounded by $$\kappa_{\text{abs}} \leq e^{1+\|A\|_{\scriptscriptstyle\mathcal{J}_1}}.$$ This bound can be considerably improved for certain selfadjoint operators that will play an important role in Section \[sect:random\]. \[lem:perturb\] Let $A\in \mathcal{J}_1(\mathcal{H})$ be selfadjoint, positive-semidefinite with $\lambda_1(A) < 1$. If $\|E\|_{\scriptscriptstyle\mathcal{J}_1} < \|(I-A)^{-1}\|^{-1}$ then $$\label{eq:perturb2} |\det(I-(A+E)) - \det(I-A)| \leq \|E\|_{\scriptscriptstyle\mathcal{J}_1}.$$ That is, the condition number $\kappa_{\text{abs}}$ of the determinant $\det(I-A)$, with respect to absolute errors measured in trace class norm, is bounded by $\kappa_{\text{abs}} \leq 1$. Because of $1>\lambda_1(A)\geq \lambda_2(A) \geq \cdots \geq 0$ there exists the inverse operator $(I-A)^{-1}$.
The product formula (\[eq:detlidskii\]) implies $\det(I-A) > 0$, the multiplicativity (\[eq:mult\]) of the determinant gives $$\det(I-(A+E)) = \det(I-A) \det(I-(I-A)^{-1}E).$$ Upon applying Plemelj’s formula (\[eq:detdunford\]) and the estimates (\[eq:ideal\]) and (\[eq:sum1lambda\]) we get $$\begin{gathered} |\log\det(I-(I-A)^{-1}E)| = \left| \tr\left( \sum_{n=1}^\infty \frac{((I-A)^{-1} E)^n}{n}\right) \right|\\*[2mm] \leq \sum_{n=1}^\infty \frac{\|(I-A)^{-1}\|^n \cdot\|E\|_{\scriptscriptstyle\mathcal{J}_1}^n}{n} = \log\left(\frac{1}{1-\|(I-A)^{-1}\|\cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1}}\right)\end{gathered}$$ if $\|(I-A)^{-1}\|\cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1} < 1$. Hence, exponentiation yields $$\begin{gathered} 1-\|(I-A)^{-1}\|\cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1} \leq \det(I-(I-A)^{-1}E)\\*[2mm] \leq \frac{1}{1-\|(I-A)^{-1}\|\cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1}} \leq 1+\|(I-A)^{-1}\|\cdot \|E\|_{\scriptscriptstyle\mathcal{J}_1},\end{gathered}$$ that is $$|\det(I-(A+E)) - \det(I-A)| \leq \det(I-A)\cdot\|(I-A)^{-1}\|\cdot\|E\|_{\scriptscriptstyle\mathcal{J}_1}.$$ Now, by the spectral theorem for bounded selfadjoint operators we have $$\|(I-A)^{-1}\| = \frac{1}{1-\lambda_1(A)} \leq \prod_{n=1}^{N(A)} \frac{1}{1-\lambda_n(A)} = \frac{1}{\det(I-A)}$$ and therefore $\det(I-A)\cdot\|(I-A)^{-1}\| \leq 1$, which finally proves the assertion. Thus, for the operators that satisfy the assumptions of this lemma the determinant is a really *well* conditioned quantity—with respect to absolute errors, like the eigenvalues of a Hermitian matrix [@MR1417720 p. 396]. #### Implications on the accuracy of numerical methods The Nyström-type method of Section \[sect:quad\] requires the calculation of the determinant $\det(I+ A)$ of some matrix $A \in \C^{m\times m}$. In the presence of roundoff errors, a backward stable method like Gaussian elimination with partial pivoting [@MR1927606 Sect. 
14.6] gives a result that is *exact* for some matrix $\tilde{A} = A + E$ with $$\label{eq:roundoff} |E_{j,k}| \leq \epsilon |A_{j,k}|\qquad (j,k=1,\ldots,m)$$ where $\epsilon$ is a small multiple of the unit roundoff error of the floating-point arithmetic used. We now use the perturbation bounds of this section to estimate the resulting error in the value of the determinant. Since the trace class norm is not a monotone matrix norm [@MR1927606 Def. 6.1], we cannot make direct use of the componentwise estimate (\[eq:roundoff\]). Instead, we majorize the trace class norm of $m\times m$ matrices $A$ by the Hilbert–Schmidt (Frobenius) norm, which is monotone, using $$\|A\|_{\scriptscriptstyle\mathcal{J}_1} \leq \sqrt{m} \|A\|_{\scriptscriptstyle\mathcal{J}_2},\qquad \|A\|_{\scriptscriptstyle\mathcal{J}_2} = \left(\sum_{j,k=1}^m |A_{j,k}|^2\right)^{1/2}.$$ Thus, the general perturbation bound (\[eq:perturb1\]) yields the following a priori estimate of the roundoff error affecting the value of the determinant: $$|\det(I+\tilde{A}) - \det(I+A)| \leq \sqrt{m}\|A\|_{\scriptscriptstyle\mathcal{J}_2} \exp\left(1+\|A\|_{\scriptscriptstyle\mathcal{J}_1}\right) \cdot \epsilon + O(\epsilon^2).$$ If the matrix $A$ satisfies the assumptions of Lemma \[lem:perturb\], the perturbation bound (\[eq:perturb2\]) gives the even sharper estimate $$\label{eq:roundoff2} |\det(I-\tilde{A}) - \det(I-A)| \leq \sqrt{m}\|A\|_{\scriptscriptstyle\mathcal{J}_2}\cdot \epsilon.$$ Therefore, if $\det(I-A) \ll \|A\|_{\scriptscriptstyle\mathcal{J}_2}$, we have to be prepared that we probably cannot compute the determinant to the full precision given by the computer arithmetic used. Some digits will be lost. A conservative estimate stemming from (\[eq:roundoff2\]) predicts the loss of at most $$\log_{10}\left( \frac{\sqrt{m}\cdot \|A\|_{\scriptscriptstyle\mathcal{J}_2}}{\det(I-A)} \right)$$ decimal places.
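To make the estimate concrete, the following numpy sketch builds the $5$-point Gauss–Legendre discretization of the sine kernel quoted in the introduction (a Nyström-type determinant of the kind studied in Section \[sect:quad\]) and evaluates the predicted digit loss; for this well-conditioned value essentially no digits are lost:

```python
import numpy as np

s, m = 0.1, 5
t, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1, 1]
x, w = s * (t + 1) / 2, s * w / 2           # transformed to [0, s]

K = np.sinc(x[:, None] - x[None, :])        # sine kernel sin(pi u)/(pi u)
A = w[:, None] * K                          # matrix A_ij = w_i K(x_i, x_j)
d = np.linalg.det(np.eye(m) - A)            # approximates E_2(0; s)
assert abs(d - 0.900027271798259) < 1e-12   # value quoted in the paper

# predicted loss of decimal places: log10(sqrt(m) ||A||_J2 / det(I - A))
loss = np.log10(np.sqrt(m) * np.linalg.norm(A, 'fro') / d)
assert loss < 1                             # here: essentially no loss
```

For values deep in the tails of the distributions, where $\det(I-A)$ is tiny, the same formula predicts a substantial loss.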
For instance, this will affect the *tails* of the probability distributions to be calculated in Section \[sect:random\]. Projection Methods {#sect:proj} ================== The general idea (\[eq:findef\]) of defining the infinite dimensional determinant $\det(I+A)$ for a trace class operator $A$ by a continuous extension from the finite dimensional case immediately leads to the concept of a projection method of Galerkin type. We consider a sequence of $m$-dimensional subspaces $V_m \subset \mathcal{H}$ together with their corresponding orthonormal projections $$P_m : \mathcal{H} \to V_m.$$ The Galerkin projection $P_m AP_m$ of the trace class operator $A$ is of finite rank. Given an orthonormal basis $\phi_1,\ldots,\phi_m$ of $V_m$, its determinant can be effectively calculated as the finite dimensional expression $$\label{eq:galerkin} \det(I+z\,P_mAP_m) = \det\left(I + z\,P_mAP_m\projected{V_m}\right) = \det\left(\delta_{ij} + z\,\langle \phi_i, A \phi_j \rangle\right)_{i,j=1}^m$$ if the matrix elements $\langle \phi_i, A \phi_j \rangle$ are numerically accessible. Because of $\|P_m\|\leq 1$, and thus $\|P_m AP_m \|_{\scriptscriptstyle\mathcal{J}_1} \leq \| A\|_{\scriptscriptstyle\mathcal{J}_1}$, the perturbation bound (\[eq:perturb0\]) gives the simple error estimate $$\label{eq:galerkinerr} |\det(I+z\,P_mAP_m) - \det(I+z \,A)| \leq \| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} \cdot |z|\, e^{1+ |z|\cdot\| A\|_{\scriptscriptstyle\mathcal{J}_1}}.$$ For the method to be convergent we therefore have to show that $P_m AP_m \to A$ in [*trace class norm*]{}. By a general result about the approximation of trace class operators [@MR1130394 Thm. 4.3] all we need to know is that $P_m$ converges *pointwise*[^16] to the identity operator $I$.
This pointwise convergence is obviously equivalent to the *consistency* of the family of subspaces $V_m$, that is, $$\label{eq:consist} \bigcup_{m=1}^\infty V_m \text{ is a dense subspace of $\mathcal{H}$}.$$ In summary, we have proven the following theorem. \[thm:galerkin\] Let $A$ be a trace class operator. If the sequence of subspaces satisfies the consistency condition (\[eq:consist\]), the corresponding Galerkin approximation (\[eq:galerkin\]) of the operator determinant converges, $$\det(I+z\,P_mAP_m) \to \det(I+z\,A) \qquad (m\to\infty),$$ uniformly for bounded $z$. A quantitative estimate of the error, that is, in view of (\[eq:galerkinerr\]), of the projection error $\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1}$ in trace class norm, can be based on the singular value representation (\[eq:schmidt\]) of $A$ and its finite-rank truncation $A_N$: (We assume that $A$ is non-degenerate, that is, $N(|A|)=\infty$, since otherwise we could simplify the following by putting $A_N=A$.)
$$A = \sum_{n=1}^{\infty} s_n(A) \langle u_n,\cdot\,\rangle v_n, \qquad A_N = \sum_{n=1}^{N} s_n(A) \langle u_n,\cdot\,\rangle v_n.$$ We obtain, by using $\|P_m\|\leq 1$ once more, $$\begin{aligned} \label{eq:projest} &\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} \leq \| P_mAP_m - P_mA_NP_m\|_{\scriptscriptstyle\mathcal{J}_1} + \| P_mA_NP_m - A_N\|_{\scriptscriptstyle\mathcal{J}_1} + \| A_N- A\|_{\scriptscriptstyle\mathcal{J}_1}\notag\\*[3mm] &\qquad \qquad\leq 2\| A_N- A\|_{\scriptscriptstyle\mathcal{J}_1} + \| P_mA_NP_m - A_N\|_{\scriptscriptstyle\mathcal{J}_1}\notag\\*[1mm] &\qquad \qquad\leq 2 \sum_{n=N+1}^{\infty} s_n(A) \;+\; \sum_{n=1}^N s_n(A) \| \langle P_m u_n,\cdot\,\rangle P_m v_n - \langle u_n,\cdot\,\rangle v_n\|_{\scriptscriptstyle\mathcal{J}_1}\notag\\*[1mm] &\qquad \qquad\leq 2 \sum_{n=N+1}^{\infty} s_n(A) \;+\; \sum_{n=1}^N s_n(A) \left(\|u_n - P_m u_n\| + \|v_n - P_m v_n\|\right).\end{aligned}$$ There are two competing effects that contribute to making this error bound small: First, there is the convergence of the truncated series of singular values, $$\sum_{n=N+1}^{\infty} s_n(A) \to 0 \qquad (N\to \infty),$$ which is *independent* of $m$. Second, there is, for *fixed* $N$, the collective approximation $$P_m u_n \to u_n,\qquad P_m v_n \to v_n\qquad (m \to \infty)$$ of the first $N$ singular functions $u_n, v_n$ $(n=1,\ldots,N)$. For instance, given $\epsilon > 0$, we can first choose $N$ large enough to push the first error term in (\[eq:projest\]) below $\epsilon/2$. After fixing such an $N$, the second error term can be pushed below $\epsilon/2$ for $m$ large enough. This way we have proven Theorem \[thm:galerkin\] once more. However, in general the convergence of the second term might considerably slow down for growing $N$. 
Therefore, a good quantitative bound requires a carefully balanced choice of $N$ (depending on $m$), which in turn requires some detailed knowledge about the decay of the singular values on the one hand and of the growth of the derivatives of the singular functions on the other hand (see the example at the end of this section). While some general results are available in the literature for the singular values—e.g., for integral operators $A$ induced by a kernel $K \in C^{k-1,1}([a,b]^2)$ the bound $$\label{eq:snAsmithies} s_n(A) = O(n^{-k-\frac{1}{2}}) \qquad (n \to \infty)$$ obtained by Smithies—the results are sparse for the singular functions [@Fenyo §8.10]. Since the quadrature method in Section \[sect:quad\] does not require any such knowledge, we refrain from stating a general result and content ourselves with the case that there is no projection error in the singular functions; that is, we consider projection methods of Ritz–Galerkin type for selfadjoint operators. \[thm:ritzerr\] Let $A$ be a selfadjoint integral operator that is induced by a continuous Hermitian kernel $K$ and that is trace class on the Hilbert space $\mathcal{H} = L^2(a,b)$. Assume that $A$ is not of finite rank and let $(u_n)$ be an orthonormal basis of eigenfunctions of $A$.
We consider the associated Ritz projection $P_m$, that is, the orthonormal projection $$P_m : \mathcal{H} \to V_m = \spn\{u_1,\ldots,u_m\}.$$ Note that in this case $$\det(I+z\,P_mAP_m) = \prod_{n=1}^m (1+z\lambda_n(A)).$$ If $K \in C^{k-1,1}([a,b]^2)$, then the error estimate (\[eq:galerkinerr\]) holds with $$\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} = o(m^{\frac{1}{2}-k})\qquad (m\to\infty).$$ If $K$ is bounded analytic on $\mathcal{E}_\rho \times \mathcal{E}_\rho$ (with the ellipse $\mathcal{E}_\rho$ defined in Theorem \[thm:quaderr\]), then the error estimate improves to $$\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} = O(\rho^{-m(1-\epsilon)/4})\qquad (m\to\infty),$$ for any fixed choice of $\epsilon>0$. With the spectral decompositions $$A = \sum_{n=1}^{\infty} \lambda_n(A) \langle u_n,\cdot\,\rangle u_n, \qquad P_m A P_m = A_m = \sum_{n=1}^{m} \lambda_n(A) \langle u_n,\cdot\,\rangle u_n,$$ at hand the bound (\[eq:projest\]) simplifies, for $N=m$, to $$\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} = \sum_{n=m+1}^\infty |\lambda_n(A)|.$$ Now, by some results of Hille and Tamarkin, we have, for $K \in C^{k-1,1}([a,b]^2)$, the eigenvalue decay $$\label{eq:hille} \lambda_n(A) = o(n^{-k-\frac{1}{2}})\qquad (n\to \infty)$$ (which is just slightly stronger than Smithies’ singular value bound (\[eq:snAsmithies\])) and, for $K$ bounded analytic on $\mathcal{E}_\rho \times \mathcal{E}_\rho$, $$\lambda_n(A) = O(\rho^{-n(1-\epsilon)/4}) \qquad (n\to \infty);$$ which proves both assertions. However, for kernels of low regularity, by taking into account the specifics of a particular example one often gets better results than stated in this theorem. (An example with an analytic kernel, enjoying the excellent convergence rates of the second part of this theorem, can be found in Section \[sect:random\].)
#### An example: Poisson’s equation For a given $f \in L^2(0,1)$ the Poisson equation $$-u''(x) = f(x),\qquad u(0)=u(1)=0,$$ with Dirichlet boundary conditions is solved [@MR0390680 p. 5] by the application of the integral operator $A$, $$\label{eq:greenA} u(x) = A f(x) = \int_0^1 K(x,y) f(y)\,dy,$$ which is induced by the Green’s kernel $$\label{eq:greenK} K(x,y) = \begin{cases} x(1-y) & x \leq y, \\*[1mm] y(1-x) & \text{otherwise}. \end{cases}$$ Since $K$ is Lipschitz continuous, Hermitian, and positive definite, we know from the results of Section \[sect:trace\] that $A$ is a selfadjoint trace class operator on $\mathcal{H}=L^2(0,1)$. The eigenvalues and normalized eigenfunctions of $A$ are those of the Poisson equation which are known to be $$\lambda_n(A) = \frac{1}{\pi^2 n^2},\qquad u_n(x) = \sqrt{2}\sin(n \pi x)\qquad (n=1,2,\ldots).$$ Note that the actual decay of the eigenvalues is better than the general Hille–Tamarkin bound (\[eq:hille\]) which would, because of $K \in C^{0,1}([0,1]^2)$, just give $\lambda_n(A)=o(n^{-3/2})$. The trace formulas (\[eq:tracedef\]) and (\[eq:tracediag\]) reproduce a classical formula of Euler’s, namely $$\sum_{n=1}^\infty \frac{1}{\pi^2 n^2} = \tr(A) = \int_0^1 K(x,x)\,dx = \int_0^1 x(1-x)\,dx = \frac{1}{6};$$ whereas, by (\[eq:detlidskii\]) and the product representation of the sine function, the Fredholm determinant explicitly evaluates to the entire function $$\label{eq:greendet} \det(I-z A) = \prod_{n=1}^\infty \left(1-\frac{z}{\pi^2 n^2}\right) = \frac{\sin(\sqrt z)}{\sqrt z}.$$ The sharper perturbation bound of Lemma \[lem:perturb\] applies and we get, for each finite dimensional subspace $V_m \subset \mathcal{H}$ and the corresponding orthonormal projection $P_m: \mathcal{H} \to V_m$, the error estimate $$\label{eq:greenerr} |\det(I-P_m A P_m) - \det(I-A)| \leq \| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1}.$$ Now, we study two particular families of subspaces. 
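Before turning to the two families of subspaces, the closed form (\[eq:greendet\]) provides a convenient numerical test case: the following numpy sketch compares the truncated eigenvalue product with $\sin(\sqrt z)/\sqrt z$, for $z=1$, and confirms that the error stays below $1/(\pi^2 m)$, the Ritz–Galerkin bound derived next:

```python
import numpy as np

z = 1.0
exact = np.sin(np.sqrt(z)) / np.sqrt(z)          # det(I - zA) in closed form

for m in (10, 100, 1000):
    n = np.arange(1, m + 1)
    # truncated eigenvalue product: prod_{n <= m} (1 - z/(pi^2 n^2)),
    # which is exactly the Ritz-Galerkin value det(I - z P_m A P_m)
    approx = np.prod(1 - z / (np.pi**2 * n**2))
    # dropping factors < 1 makes the truncation an overestimate,
    # and the error is bounded by the tail sum 1/(pi^2 m)
    assert 0 < approx - exact <= 1 / (np.pi**2 * m)
```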
#### Trigonometric polynomials Here, we consider the subspaces $$V_m = \spn \{ \sin(n\pi\, \cdot) : n=1,\ldots,m\} = \spn\{ u_n : n=1,\ldots, m\},$$ which are exactly those spanned by the eigenfunctions of $A$. In this case, the projection method is of Ritz–Galerkin type; the estimates become particularly simple since we have the explicit spectral decomposition $$P_m A P_m - A = \sum_{n=m+1}^\infty \lambda_n(A) \langle u_n,\cdot\,\rangle u_n$$ of the error operator. Hence, the error bound (\[eq:greenerr\]) directly evaluates to $$\begin{gathered} \label{eq:ritzbound} |\det(I-P_m A P_m) - \det(I-A)| \\*[2mm] \leq \| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} = \sum_{n=m+1}^\infty \lambda_n(A) = \frac{1}{\pi^2} \sum_{n=m+1}^\infty \frac{1}{n^2} \leq \frac{1}{\pi^2 m}.\end{gathered}$$ Figure \[fig:ritz\] shows that this upper bound overestimates the error in the Fredholm determinant by just about 20%. [![Convergence of Ritz–Galerkin (circles) and Galerkin (dots) for approximating the Fredholm determinant of the integral operator induced by Green’s kernel of Poisson’s equation. The solid line shows the upper bound $1/\pi^2 m$ of Ritz–Galerkin as given in (\[eq:ritzbound\]).[]{data-label="fig:ritz"}](RitzGalerkin.pdf "fig:"){width="\textwidth"}]{} #### Algebraic polynomials Here, we consider the subspaces of algebraic polynomials of order $m$, that is, $$V_m = \{u \in L^2(0,1) : \text{$u$ is a polynomial of degree at most $m-1$}\}.$$ We apply the bound given in (\[eq:projest\]) and obtain (keeping in mind that $A$ is selfadjoint) $$\| P_mAP_m - A\|_{\scriptscriptstyle\mathcal{J}_1} \leq 2 \sum_{n=N+1}^{\infty} \lambda_n(A) \;+\; 2\sum_{n=1}^N \lambda_n(A)\,\|u_n - P_m u_n\|$$ with a truncation index $N$ yet to be skilfully chosen. As before in (\[eq:ritzbound\]), the first term of this error bound can be estimated by $2/\pi^2N$. 
For the second term we recall that the projection error $\|u_n - P_m u_n\|$ is, in fact, just the error of polynomial best approximation of the eigenfunction $u_n$ with respect to the $L^2$-norm. A standard Jackson-type inequality [@MR1261635 p. 219] from approximation theory teaches us that $$\|u_n - P_m u_n\| \leq \frac{c_k}{m^k}\|u^{(k)}_n\| = c_k \frac{(\pi n)^k}{m^k},$$ where $c_k$ denotes a constant that depends on the smoothness level $k$. A *fixed* eigenfunction $u_n$ (being an entire function in fact) is therefore approximated beyond every algebraic order in the dimension $m$; but with increasingly larger constants for higher “wave numbers” $n$. We thus get, with some further constant $\tilde{c}_k$ depending on $k \geq 2$, $$\| P_m A P_m - A\|_{\scriptscriptstyle\mathcal{J}_1} \leq \tilde{c}_k \left( \frac{1}{N} + \frac{N^{k-1}}{(k-1)m^k} \right).$$ We now balance the two error terms by minimizing this bound: the optimal truncation index $N$ turns out to be exactly $N=m$, which finally yields the estimate $$|\det(I-P_m A P_m) - \det(I-A)| \leq \| P_m A P_m - A\|_{\scriptscriptstyle\mathcal{J}_1} \leq \frac{\tilde{c}_k}{1-k^{-1}}\, m^{-1}.$$ Thus, in contrast to the approximation of a [*single*]{} eigenfunction, for the Fredholm determinant the *order* of the convergence estimate ultimately does not depend on the choice of $k$; we obtain the same $O(m^{-1})$ behavior as for the Ritz–Galerkin method. In fact, a concrete numerical calculation[^17] shows that this error estimate really reflects the actual order of the error decay of the Galerkin method, see Figure \[fig:ritz\]. #### Remark The analysis of this section has shown that the error decay of the projection methods is essentially determined by the decay $$\sum_{k=m+1}^\infty s_k(A) \to 0$$ of the singular values of $A$, which in turn is related to the smoothness of the kernel $K$ of the integral operator $A$.
In the next section, the error analysis of Nyström-type quadrature methods will relate in a much more direct fashion to the smoothness of the kernel $K$, giving even much better error bounds, a priori and in actual numerical computations. For instance, the Green’s kernel (\[eq:greenA\]) of low regularity can be treated by an $m$-dimensional approximation of the determinant with an actual convergence rate of $O(m^{-2})$ instead of $O(m^{-1})$ as for the projection methods. Moreover, these methods are much simpler and straightforwardly implemented. Quadrature Methods {#sect:quad} ================== In this section we directly approximate the Fredholm determinant (\[eq:det1\]) using the Nyström-type method (\[eq:ny\]) that we have motivated at length in Section \[sect:intro\]. We assume throughout that the kernel $K$ is a continuous function on $[a,b]^2$. The notation simplifies considerably by introducing the $n$-dimensional functions $K_n$ defined by $$K_n(t_1,\ldots,t_n) = \det\left(K(t_p,t_q)\right)_{p,q=1}^n.$$ Their properties are given in Lemma \[lem:Kn\] of the appendix. We then write the Fredholm determinant shortly as $$d(z) = 1 + \sum_{n=1}^\infty \frac{z^n}{n!} \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n.$$ For a given quadrature formula $$Q(f) = \sum_{j=1}^m w_j f(x_j)\,\approx\,\int_a^b f(x)\,dx$$ we define the associated Nyström-type approximation of $d(z)$ by the expression $$\label{eq:detny} d_Q(z) = \det \left(\delta_{ij} + z\,w_i K(x_i,x_j)\right)_{i,j=1}^m.$$ The key to error estimates and a convergence proof is the observation that we can rewrite $d_Q(z)$ in a form that closely resembles the Fredholm determinant. 
Namely, by using the von Koch form (\[eq:vonKoch\]) of matrix determinants, the multi-linearity of minors, and by introducing the $n$-dimensional product rule (\[eq:productrule\]) associated with $Q$ (see the appendix), we get $$\begin{aligned} d_Q(z) &= 1+\sum_{n=1}^\infty \frac{z^n}{n!} \sum_{j_1,\ldots,j_n=1}^m \det\left( w_{j_p} K(x_{j_p},x_{j_q}) \right)_{p,q=1}^n \\*[1mm] &= 1+\sum_{n=1}^\infty \frac{z^n}{n!} \sum_{j_1,\ldots,j_n=1}^m w_{j_1}\cdots w_{j_n}\,\det\left( K(x_{j_p},x_{j_q}) \right)_{p,q=1}^n\\*[1mm] &= 1+\sum_{n=1}^\infty \frac{z^n}{n!} \sum_{j_1,\ldots,j_n=1}^m w_{j_1}\cdots w_{j_n}\,K_n(x_{j_1},\ldots,x_{j_n})\\*[1mm] &= 1+\sum_{n=1}^\infty \frac{z^n}{n!}\, Q^n(K_n).\end{aligned}$$ Thus, alternatively to the motivation given in the introductory Section \[sect:intro\], we could have introduced the Nyström-type method by approximating each of the multi-dimensional integrals in the power series defining the Fredholm determinant with a product quadrature rule. Using this form, we observe that the error is given by $$\label{eq:nyerr} d_Q(z) - d(z) = \sum_{n=1}^\infty \frac{z^n}{n!} \left( Q^n(K_n) - \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n \right),$$ that is, by the exponential generating function of the quadrature errors for the functions $K_n$. The following theorem generalizes a classical result, proven originally for a specific class of quadrature formulae, namely, the rectangular rule. \[thm:nyconv\] If a family $Q_m$ of quadrature rules converges for continuous functions, then the corresponding Nyström-type approximation of the Fredholm determinant converges, $$d_{Q_m}(z) \to d(z)\qquad (m\to\infty),$$ uniformly for bounded $z$. Let $z$ be bounded by $M$ and choose any $\epsilon>0$.
We split the series (\[eq:nyerr\]) at an index $N$ yet to be chosen, getting $$\begin{gathered} |d_{Q_m}(z) - d(z)| \leq \sum_{n=1}^N \frac{M^n}{n!} \left| Q^n_m(K_n) - \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n \right| \\*[2mm] + \sum_{n=N+1}^\infty \frac{M^n}{n!} \left| Q^n_m(K_n) - \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n \right|\end{gathered}$$ Let $\Lambda$ be the stability bound of the convergent family $Q_m$ of quadrature rules (see Theorem \[thm:polya\] of the appendix) and put $\Lambda_1 = \max(\Lambda,b-a)$. Then, by Lemma \[lem:Kn\], the second part of the splitting can be bounded by $$\begin{gathered} \sum_{n=N+1}^\infty \frac{M^n}{n!} \left| Q^n_m(K_n) - \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n \right| \\*[2mm] \leq \sum_{n=N+1}^\infty \frac{M^n}{n!} \left( |Q^n_m(K_n)| + |\int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n | \right) \\*[2mm] \leq \sum_{n=N+1}^\infty \frac{M^n}{n!} \left(\Lambda^n + (b-a)^n\right) \|K_n\|_{\scriptscriptstyle L^\infty} \leq 2 \sum_{n=N+1}^\infty \frac{n^{n/2}}{n!} (M \Lambda_1 \|K\|_{\scriptscriptstyle L^\infty})^n.\end{gathered}$$ The last series converges by Lemma \[lem:phi\] and the bound can, therefore, be pushed below $\epsilon/2$ by choosing $N$ large enough. After fixing such an $N$, we can certainly also push the first part of the splitting, that is, $$\sum_{n=1}^N \frac{M^n}{n!} \left| Q^n_m(K_n) - \int_{[a,b]^n} K_n(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n \right|\,,$$ below $\epsilon/2$, now for $m$ large enough, say $m\geq m_0$, using the convergence of the product rules $Q_m^n$ induced by $Q_m$ (see Theorem \[thm:quaderrn\]). In summary we get $$|d_{Q_m}(z) - d(z)| \leq \epsilon$$ for all $|z|\leq M$ and $m \geq m_0$, which proves the asserted convergence of the Nyström-type method. 
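To make the Nyström-type approximation (\[eq:detny\]) concrete, here is a minimal sketch in Python with NumPy (an assumption on our part; the code shown in Section \[sect:intro\] is Matlab and Mathematica). As a sanity check we use the rank-one kernel $K(x,y) \equiv 1$ on $[0,1]$, for which $A$ has the single nonzero eigenvalue $1$ (eigenfunction $u \equiv 1$), so that $\det(I + zA) = 1 + z$ exactly.

```python
import numpy as np

def nystrom_det(kernel, z, a, b, m):
    """Nystrom-type approximation (eq:detny) of det(I + z A) using
    m-point Gauss-Legendre quadrature on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * (x + 1.0) + a           # transformed to [a, b]
    w = 0.5 * (b - a) * w
    K = kernel(x[:, None], x[None, :])          # matrix of values K(x_i, x_j)
    return np.linalg.det(np.eye(m) + z * w[:, None] * K)

# Rank-one kernel K = 1: det(I + z A) = 1 + z; Gauss-Legendre is exact
# for constants, so the approximation is exact up to roundoff.
d = nystrom_det(lambda x, y: np.ones_like(x + y), 2.0, 0.0, 1.0, 20)
assert abs(d - 3.0) < 1e-12
```

The determinant here is that of an $m\times m$ matrix, so the whole method is a few lines of linear algebra once the quadrature rule is chosen.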
If the kernel $K$ enjoys, additionally, some smoothness, we can prove error estimates that exhibit, essentially, the same rates of convergence as for the quadrature of the sections $x \mapsto K(x,y)$ and $y \mapsto K(x,y)$. \[thm:nyerr\] If $K \in C^{k-1,1}([a,b]^2)$, then for each quadrature rule $Q$ of order $\nu \geq k$ with positive weights there holds the error estimate $$|d_Q(z)-d(z)| \leq c_k\, 2^k(b-a)^k \nu^{-k} \,\Phi\!\left(|z|(b-a)\|K\|_k\right),$$ where $c_k$ is the constant (depending only on $k$) from Theorem \[thm:quaderr\], and $\|K\|_k$ and $\Phi$ are the norm and function defined in (\[eq:Knormk\]) and (\[eq:phi\]), respectively. If $K$ is bounded analytic on $\mathcal{E}_\rho \times \mathcal{E}_\rho$ (with the ellipse $\mathcal{E}_\rho$ defined in Theorem \[thm:quaderr\]), then for each quadrature rule $Q$ of order $\nu$ with positive weights there holds the error estimate $$|d_Q(z)-d(z)| \leq \frac{4\,\rho^{-\nu}}{1-\rho^{-1}} \,\Phi\!\left(|z|(b-a)\|K\|_{\scriptscriptstyle L^\infty(\mathcal{E}_\rho \times \mathcal{E}_\rho)}\right).$$ By Theorem \[thm:quaderrn\] and Lemma \[lem:Kn\] we can estimate the error (\[eq:nyerr\]) in both cases in the form $$|d_Q(z)-d(z)| \leq \alpha \sum_{n=1}^\infty \frac{n^{(n+2)/2}}{n!} \,(|z|\beta)^n = \alpha\, \Phi(|z|\beta)\,;$$ with the particular values $\alpha = c_k\, 2^k(b-a)^k \nu^{-k}$ and $\beta= (b-a)\, \|K\|_k$ in the first case, and $\alpha = 4\,\rho^{-\nu}/(1-\rho^{-1})$ and $\beta = (b-a)\,\|K\|_{\scriptscriptstyle L^\infty(\mathcal{E}_\rho \times \mathcal{E}_\rho)}$ in the second case. This proves both assertions. An example with an analytic kernel, enjoying the excellent convergence rates of the second part of this theorem, can be found in Section \[sect:random\]. Note that Theorem \[thm:nyerr\] is based on a general result (Theorem \[thm:quaderr\]) about quadrature errors that stems from the convergence rates of polynomial best approximation.
There are cases (typically of low regularity), however, for which certain quadrature formulae enjoy convergence rates that are actually *better* than best approximation. The Nyström-type method inherits this behavior; one would just have to repeat the proof of Theorem \[thm:nyerr\] then. We refrain from stating a general theorem, since this would involve bounds on the highest derivatives involving weights[^18] that take into account the boundary of the interval $[a,b]$. Instead, we content ourselves with the detailed discussion of a particular example. #### An example: Poisson’s equation We revisit the example of Section \[sect:proj\], that is, the integral operator (\[eq:greenA\]) belonging to the Green’s kernel $K$ (defined in (\[eq:greenK\])) of Poisson’s equation. Recall from (\[eq:greendet\]) that $$d(-1) = \det(I-A) = \sin(1).$$ The kernel $K$ is just Lipschitz continuous, that is, $K \in C^{0,1}([0,1]^2)$. If we apply the Nyström-type method with the $m$-point Gauss–Legendre (order $\nu=2m$) or the Clenshaw–Curtis (order $\nu=m$) formulae as the underlying quadrature rule $Q_m$, Theorem \[thm:nyerr\] proves an error bound of the form $$d_{Q_m}(-1) - d(-1) = O(m^{-1}),$$ which superficially indicates the same convergence rate as for the $m$-dimensional Galerkin methods of Section \[sect:proj\]. However, the actual numerical computation shown in Figure \[fig:ny1\] exhibits the far better convergence rate of $O(m^{-2})$. [![Convergence of the Nyström-type method for approximating the Fredholm determinant of the integral operator induced by Green’s kernel (\[eq:greenK\]) of Poisson’s equation; the underlying quadrature rules $Q_m$ are the $m$-point Gauss–Legendre (dots) and Clenshaw–Curtis (circles) rules. Note that both behave essentially the same. The solid line shows the function $1/25 m^2$, just to indicate the rate of convergence.
For comparison we have included the results of the Ritz–Galerkin method (stars) from Figure \[fig:ritz\].[]{data-label="fig:ny1"}](Nystrom1.pdf "fig:"){width="\textwidth"}]{} This deviation can be understood in detail as follows: On the one hand, by inverse theorems of approximation theory [@MR1261635 p. 220], valid for *proper* subintervals of $[a,b]$, the polynomial best approximation (of degree $m$) of sections of the Green’s kernel $K$ cannot give a better rate than $O(m^{-1})$, since otherwise those sections could not show jumps in the first derivative. Given the line of arguments leading from polynomial best approximation to Theorem \[thm:nyerr\], the error estimate of $O(m^{-1})$ was therefore the *best* that could be established *this* way. On the other hand, the sections of the Green’s kernel look like piecewise linear hat functions. Therefore, the coefficients $a_m$ of their Chebyshev expansions decay as $O(m^{-2})$ [@MR760629 Eq. (4.8.1.3)]. Given this decay rate, one can then prove, both for Gauss–Legendre and for Clenshaw–Curtis, that the quadrature error is of rate $O(m^{-2})$, too. Now, one can lift this estimate to the Nyström-type method essentially as in Theorem \[thm:nyerr\]; thus *proving* in fact that $$d_{Q_m}(-1) - d(-1) = O(m^{-2}),$$ as numerically observed. #### Remark This “superconvergence” property of certain quadrature rules, as opposed to best approximation, for kernels with jumps in a higher derivative is therefore also the deeper reason that the Nyström-type method then outperforms the projection methods of Section \[sect:proj\] (see Figure \[fig:ny1\]): Best approximation, by direct (Jackson) and inverse (Bernstein) theorems of approximation theory, is strongly tied with the regularity of $K$. And this, in turn, is tied to the decay of the singular values of the induced integral operator $A$, which governs the convergence rates of projection methods.
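The comparison is easy to reproduce numerically. The following Python sketch (assuming NumPy; the explicit formula $K(x,y) = \min(x,y)(1-\max(x,y))$ is the standard Green's function of Poisson's equation on $(0,1)$ with Dirichlet boundary conditions, consistent with the eigenvalues $1/(\pi^2 n^2)$ used in Section \[sect:proj\]) pits the $m$-dimensional Ritz–Galerkin value, a truncated eigenvalue product, against the Nyström-type value at the same dimension:

```python
import math
import numpy as np

def ritz_det(m):
    # truncated eigenvalue product for det(I - A), cf. Section sect:proj
    return math.prod(1.0 - 1.0 / (math.pi ** 2 * n ** 2)
                     for n in range(1, m + 1))

def nystrom_det(z, m):
    # Nystrom-type method (eq:detny) with m-point Gauss-Legendre on [0, 1]
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    K = np.minimum(x[:, None], x[None, :]) \
        * (1.0 - np.maximum(x[:, None], x[None, :]))
    return np.linalg.det(np.eye(m) + z * w[:, None] * K)

exact = math.sin(1.0)                        # det(I - A) = sin(1)
ritz_err = abs(ritz_det(40) - exact)         # rate O(1/m)
ny_err = abs(nystrom_det(-1.0, 40) - exact)  # observed rate O(1/m^2)
assert ny_err < ritz_err and ny_err < 1e-3
```

Already at $m=40$ the Nyström-type error is well below the Ritz–Galerkin error, in line with Figure \[fig:ny1\].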
#### A note on implementation If the quadrature weights are positive (which in view of Theorem \[thm:polya\] is anyway the better choice), as is the case for Gauss–Legendre and Clenshaw–Curtis, we recommend implementing the Nyström-type method (\[eq:detny\]) in the equivalent, symmetric form $$\label{eq:detnysym} d_Q(z) = \det(I+z A_Q), \qquad A_Q = \left(w_i^{1/2} K(x_i,x_j) w_j^{1/2}\right)_{i,j=1}^m.$$ (Accordingly short Matlab and Mathematica code is given in the introductory Section \[sect:intro\].) The reason is that the $m\times m$-matrix $A_Q$ inherits some important structural properties from the integral operator $A$: - If $A$ is selfadjoint, then $A_Q$ is Hermitian (see Footnote \[ft:hermitian\]). - If $A$ is positive semidefinite, then, by (\[eq:semidef\]), $A_Q$ is positive semidefinite, too. This way, for instance, the computational cost for calculating the finite-dimensional determinant is cut in half, if by structural inheritance $I+zA_Q$ is Hermitian positive definite; the Cholesky decomposition can then be employed instead of Gaussian elimination with partial pivoting. Application to Some Entire Kernels of Random Matrix Theory {#sect:random} ========================================================== In this section we study two important examples, stemming from random matrix theory, with entire kernels. By Theorem \[thm:nyerr\], the Nyström-type method based on Gauss–Legendre or Clenshaw–Curtis quadrature must exhibit exponential convergence. [![The probability $E_2(0;s)$ that an interval of length $s$ does not contain, in the bulk scaling limit of level spacing $1$, an eigenvalue of the Gaussian unitary ensemble (GUE).
The result shown was calculated with the Nyström-like method based on Gauss–Legendre with $m=30$; and cross-checked against the asymptotic expansion $\log E_2(0;s) = -\pi^2 s^2/8 - \log(s)/4 + \log(2)/3 - \log(\pi)/4 + 3\zeta'(-1) + O(s^{-1})$ for $s\to\infty$ [@MR1469319].[]{data-label="fig:E0s"}](E0s.pdf "fig:"){width="\textwidth"}]{} The sine kernel {#sect:sine} --------------- The probability $E_2(0;s)$ (shown in Figure \[fig:E0s\]) that an interval of length $s$ does not contain, in the bulk scaling limit of level spacing $1$, an eigenvalue of the Gaussian unitary ensemble (GUE) is given [@MR2129906 Sect. 6.3] by the Fredholm determinant $$E_2(0;s) = \det\left(I - A_s\right)$$ of the integral operator $A_s$ on $L^2(0,s)$ that is induced by the sine kernel $K$: $$A_s u(x) = \int_0^s K(x,y)u(y)\,dy,\qquad K(x,y) = \frac{\sin(\pi(x-y))}{\pi(x-y)}.$$ Note that $K(x,y)$ is Hermitian and entire on $\C\times\C$; thus $A_s$ is a selfadjoint operator of trace class on $L^2(0,s)$. (This is already much more than we would need to know for successfully applying and understanding the Nyström-type method. However, to facilitate a comparison with the Ritz–Galerkin method, we analyze the operator $A_s$ in some more detail.) 
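Indeed, the continuity of $K$ alone already suffices to just compute $E_2(0;s)$. A minimal Python sketch of the Nyström-type method (assuming NumPy; note that `np.sinc(t)` evaluates $\sin(\pi t)/(\pi t)$, which conveniently covers the diagonal $x=y$) reads:

```python
import numpy as np

def E2(s, m=30):
    """Nystrom-type approximation of E_2(0; s) = det(I - A_s) for the
    sine kernel; np.sinc(t) = sin(pi t)/(pi t), so the diagonal is 1."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = 0.5 * s * (x + 1.0), 0.5 * s * w     # nodes/weights on [0, s]
    K = np.sinc(x[:, None] - x[None, :])
    return np.linalg.det(np.eye(m) - w[:, None] * K)

assert 0.0 < E2(2.0) < E2(1.0) < 1.0   # a probability, decreasing in s
assert abs(E2(0.01) - 0.99) < 1e-4     # E_2(0;s) = 1 - s + O(s^4) for small s
```

The small-$s$ check uses $\operatorname{tr} A_s = \int_0^s K(x,x)\,dx = s$, so that $\det(I-A_s) = 1 - s + O(s^4)$; since the kernel is entire, the convergence in $m$ is exponentially fast.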
The factorization $$\label{eq:sinkernesqrt} K(x,y) = \frac{1}{2\pi}\int_{-\pi}^\pi e^{i (x-y)\xi}\, d\xi = \frac{1}{2\pi}\int_{-\pi}^\pi e^{i x\xi}e^{-i y\xi}\, d\xi$$ of the kernel implies that $A_s$ is positive *definite* with maximal eigenvalue $\lambda_1(A_s)<1$; since, for $0 \neq u \in L^2(0,s)$, we obtain $$\begin{gathered} 0 < \langle u ,A_s u\rangle = \int_{-\pi}^\pi \left| \frac{1}{\sqrt{2\pi}}\int_0^s e^{-i x \xi} u(x) \,dx \right|^2 \,d\xi \\*[2mm] = \int_{-\pi}^\pi |\hat u(\xi)|^2\,d\xi < \int_{-\infty}^\infty |\hat u(\xi)|^2\,d\xi = \|\hat u\|_{\scriptscriptstyle L^2}^2 = \|u\|_{\scriptscriptstyle L^2}^2.\end{gathered}$$ Here, in stating that the inequalities are *strict*, we have used the fact that the Fourier transform $\hat u$ of the function $u \in L^2(0,s)$, which has compact support, is an *entire* function. Therefore, the perturbation bound of Lemma \[lem:perturb\] applies and we obtain, for Ritz–Galerkin as for any Galerkin method, like in the example of Section \[sect:proj\], the basic error estimate $$|\det(I-P_m A_s P_m) - \det(I-A_s)| \leq \| P_mA_sP_m - A_s\|_{\scriptscriptstyle\mathcal{J}_1}.$$ Now, Theorems \[thm:ritzerr\] and \[thm:nyerr\] predict a rapid, exponentially fast convergence of the Ritz–Galerkin and the Nyström-type methods: In fact, an $m$-dimensional approximation will give an error that decays like $O(e^{-c m})$, for any fixed $c>0$, since, for entire kernels, the parameter $\rho>1$ can be chosen arbitrarily in these theorems. #### Details of the implementation of the Ritz–Galerkin method There is certainly no general recipe on how to actually construct the Ritz–Galerkin method for a specific example, since one would have to know, more or less exactly, the eigenvalues of $A$. In the case of the sine kernel, however, Gaudin had succeeded in doing so.
He had observed that the integral operator $\tilde A_t$ on $L^2(-1,1)$, defined by $$\tilde A_t u(x) = \int_{-1}^1 e^{i\pi t x y} u(y)\,dy$$ (which is, by (\[eq:sinkernesqrt\]), basically a rescaled “square-root” of $A_{2t}$), commutes with the selfadjoint, second-order differential operator $$L u(x) = \frac{d}{dx}\left((x^2-1) u'(x) \right) + \pi^2t^2 x^2 u(x)$$ with boundary conditions $$(1-x^2) u(x) |_{x=\pm 1} = (1-x^2) u'(x) |_{x = \pm 1} = 0.$$ Thus, both operators share the same set of eigenfunctions $u_n$, namely the radial prolate spheroidal wave functions (using the notation of Mathematica 6.0) $$u_n(x) = S^{(1)}_{n,0}(\pi t,x) \qquad (n=0,1,2,\ldots).$$ These special functions are even for $n$ even, and odd for $n$ odd. By plugging them into the integral operator $\tilde A_t$ he had obtained, after evaluating at $x=0$, the eigenvalues $$\lambda_{2k}(\tilde A_t) = \frac{1}{u_{2k}(0)}\int_{-1}^1 u_{2k}(y)\,dy,\qquad \lambda_{2k+1}(\tilde A_t) = \frac{i\pi t}{u_{2k+1}'(0)}\int_{-1}^1 u_{2k+1}(y) y \,dy.$$ Finally, we have (starting with the index $n=0$ here) $$\lambda_n(A_s) = \frac{s}{4} |\lambda_n(\tilde A_{s/2})|^2\qquad (n=0,1,2,\ldots).$$ Hence, the $m$-dimensional Ritz–Galerkin approximation of $\det(I-A_s)$ is given by $$\det(I-P_m A_s P_m) = \prod_{n=0}^{m-1} (1-\lambda_n(A_s)).$$ While he himself had to rely on tables of the spheroidal wave functions [@MR0074130], we can use the fairly recent implementation of these special functions that now comes with Mathematica 6.0. This way, we get the following implementation: \[prog:mathspher\] ![image](Spheroidal.pdf){width="75.00000%"} [![Convergence of various $m$-dimensional approximations of the Fredholm determinants $E_2(0;1)$ (left) and $E_2(0;2)$ (right): the Nyström-type quadrature methods based on Gauss–Legendre (dots) and Clenshaw–Curtis (circles), as well as the Ritz–Galerkin method based on spheroidal wave functions (stars).
The dashed line shows the amount, according to (\[eq:roundoff2\]), of roundoff error due to the numerical evaluation of the finite-dimensional determinants; all calculations were done in IEEE double arithmetic with machine precision $\epsilon = 2^{-53}$.[]{data-label="fig:E0err"}](E01.pdf "fig:"){width="\textwidth"}]{} [![Convergence of various $m$-dimensional approximations of the Fredholm determinants $E_2(0;1)$ (left) and $E_2(0;2)$ (right): the Nyström-type quadrature methods based on Gauss–Legendre (dots) and Clenshaw–Curtis (circles), as well as the Ritz–Galerkin method based on spheroidal wave functions (stars). The dashed line shows the amount, according to (\[eq:roundoff2\]), of roundoff error due to the numerical evaluation of the finite-dimensional determinants; all calculations were done in IEEE double arithmetic with machine precision $\epsilon = 2^{-53}$.[]{data-label="fig:E0err"}](E02.pdf "fig:"){width="\textwidth"}]{} Given all this, one can understand that the beautiful discovery of expressing $E_2(0;s)$ by the formula (\[eq:jimbo\]) in terms of the fifth Painlevé transcendent was generally considered to be a major breakthrough even for its numerical evaluation [@MR1791893 Footnote 10]. However, note how much less knowledge suffices for the application of the far more general Nyström-type method: continuity of $K$ makes it applicable, and $K$ being entire guarantees rapid, exponentially fast convergence. That is all. #### An actual numerical experiment Figure \[fig:E0err\] shows the convergence (in IEEE machine arithmetic) of an actual calculation of the numerical values $E_2(0;1)$ and $E_2(0;2)$. We observe that the Nyström-type method based on Gauss–Legendre has an exponentially fast convergence rate comparable to the Ritz–Galerkin method. Clenshaw–Curtis needs a dimension $m$ that is about twice as large as for Gauss–Legendre to achieve the same accuracy.
This matches the fact that Clenshaw–Curtis has the order $\nu=m$, which is half the order $\nu=2m$ of Gauss–Legendre, and shows that the bounds of Theorem \[thm:nyerr\] are rather sharp with respect to $\nu$ (there is no “kink” phenomenon here). The dashed line shows the amount, as estimated in (\[eq:roundoff2\]), of roundoff error that stems from the numerical evaluation of the finite-dimensional $m\times m$-determinant itself. Note that this bound is essentially the same for all the three methods and can easily be calculated in course of the numerical evaluation. We observe that this bound explains the actual onset of numerical “noise” in all the three methods reasonably well. #### Remark Note that the Nyström-type method *outperforms* the Ritz–Galerkin method by far. First, the Nyström-type method is general, simple, and straightforwardly implemented (see the code given in Section \[sect:intro\]); in contrast, the Ritz–Galerkin method depends on some detailed knowledge about the eigenvalues and requires numerical access to the spheroidal wave functions. Second, there is no substantial gain, as compared to the Gauss–Legendre based method, in the convergence rate from knowing the eigenvalues exactly. Third, and most importantly, the computing time for the Ritz–Galerkin method runs well into several minutes, whereas both versions of the Nyström-type method require just a few *milliseconds*. The Airy kernel {#sect:airy} --------------- [![The Tracy–Widom distribution $F_2(s)$; that is, in the edge scaling limit, the probability of the maximal eigenvalue of the Gaussian unitary ensemble (GUE) being not larger than $s$.
The result shown was calculated with the Nyström-like method based on Gauss–Legendre with $m=50$; and cross-checked against the asymptotic expansion $\log F_2(-s) = -s^3/12 - \log(s)/8 + \log(2)/24 + \zeta'(-1) + O(s^{-3/2})$ for $s\to\infty$ [@MR2373439].[]{data-label="fig:F2"}](F2.pdf "fig:"){width="\textwidth"}]{} The Tracy–Widom distribution $F_2(s)$ (shown in Figure \[fig:F2\]), that is, in the edge scaling limit, the probability of the maximal eigenvalue of the Gaussian unitary ensemble (GUE) being not larger than $s$, is given [@MR2129906 §24.2] by the Fredholm determinant $$\label{eq:F2} F_2(s) = \det(I-A_s)$$ of the integral operator $A_s$ on $L^2(s,\infty)$ that is induced by the Airy kernel $K$: $$\label{eq:airykernel} A_s u(x) = \int_s^\infty K(x,y) u(y)\,dy,\qquad K(x,y) = \frac{\Ai(x)\Ai'(y) - \Ai(y)\Ai'(x)}{x-y}.$$ Note that $K$ is Hermitian and entire on $\C \times \C$; thus $A_s$ is selfadjoint. (Again, this is already about all we would need to know for successfully applying and understanding the Nyström-type method. However, we would like to show that, as for the sine kernel, the strong perturbation bound of Lemma \[lem:perturb\] applies to the Airy kernel, too.) There is the factorization [@MR1257246 Eq. (4.5)] $$K(x,y) = \int_0^\infty \Ai(x+\xi)\Ai(y+\xi)\,d\xi,$$ which relates the Airy kernel with the Airy transform [@MR2114198 §4.2] in a similar way as the sine kernel is related by (\[eq:sinkernesqrt\]) with the Fourier transform. This proves, because of the super-exponential decay of $\Ai(x) \to 0$ as $x\to\infty$, that $A_s$ is the product of two Hilbert–Schmidt operators on $L^2(s,\infty)$ and therefore of trace class. Moreover, $A_s$ is positive semi-definite with maximal eigenvalue $\lambda_1(A_s) \leq 1$; since by the Parseval–Plancherel equality [@MR2114198 Eq.
(4.27)] of the Airy transform we obtain, for $u \in L^2(s,\infty)$, $$\begin{gathered} 0 \leq \langle u,A_s u\rangle = \int_0^\infty \left| \int_s^\infty \Ai(x+\xi)u(x)\,dx \right|^2 \,d\xi \\*[2mm] \leq \int_{-\infty}^\infty \left| \int_s^\infty \Ai(x+\xi)u(x)\,dx \right|^2 \,d\xi = \|u\|_{L^2}^2.\end{gathered}$$ More refined analytic arguments, or a carefully controlled numerical approximation, show the strict inequality $\lambda_1(A_s) < 1$; the perturbation bound of Lemma \[lem:perturb\] applies. #### Modification of the Nyström-type method for infinite intervals The quadrature methods of Section \[sect:quad\] are not directly applicable here, since the integral operator $A_s$ is defined by an integral over the infinite interval $(s,\infty)$. We have the following three options: [![Values of the expression $\|P_T A_s P_T - A_s \|_{\scriptscriptstyle\mathcal{J}_2}$, which bounds, by (\[eq:truncerr\]), the error in $F_2(s)$ committed by truncating the integral in (\[eq:airykernel\]) at a point $T>s$.[]{data-label="fig:truncerr"}](AiryTruncation.pdf "fig:"){width="\textwidth"}]{} 1. Using a Gauss-type quadrature formula on $(s,\infty)$ that is tailor-made for the super-exponential decay of the Airy function. Such formulae have recently been constructed. 2. Truncating the integral in (\[eq:airykernel\]) at some point $T>s$. That is, before using the Nyström-type method with a quadrature formula on the finite interval $[s,T]$ (for which the second part of Theorem \[thm:nyerr\] is then applicable, showing exponential convergence), we approximate the Fredholm determinant (\[eq:F2\]) by $$\det(I-P_T A_s P_T) = \det\left(I - A_s\projected{L^2(s,T)}\right),$$ where the orthogonal projection $P_T : L^2(s,\infty) \to L^2(s,T)$, $P_T u = u \cdot \chi_{[s,T]}$, is given by multiplication with the characteristic function of $[s,T]$.
This way we commit an additional truncation error, which has, by passing through the perturbation bound of Lemma \[lem:perturb\], the computable bound $$\begin{gathered} \label{eq:truncerr} |\det(I-P_T A_s P_T) - \det(I-A_s) | \leq \|P_T A_s P_T - A_s \|_{\scriptscriptstyle\mathcal{J}_1} \leq \\*[2mm] \|P_T A_s P_T - A_s \|_{\scriptscriptstyle\mathcal{J}_2} = \left(\int_T^\infty\int_T^\infty |K(x,y)|^2\,dxdy\right)^{1/2}.\end{gathered}$$ Figure \[fig:truncerr\] shows this bound as a function of the truncation point $T$. We observe that, for the purpose of calculating (within IEEE machine arithmetic) $F_2(s)$ for $s \in [-8,2]$—as shown in Figure \[fig:F2\]—, a truncation point at $T=16$ would be more than sufficiently safe. 3. Transforming the infinite intervals to finite ones. By using a monotone and smooth transformation $\phi_s:(0,1) \to (s,\infty)$, defining the transformed integral operator $\tilde A_s$ on $L^2(0,1)$ by $$\tilde A_s u(\xi) = \int_0^1 \tilde K_s(\xi,\eta) u(\eta)\,d\eta,\quad \tilde K_s(\xi,\eta) = \sqrt{\phi'_s(\xi)\phi'_s(\eta)}\, K(\phi_s(\xi),\phi_s(\eta)),$$ gives the identity $$F_2(s) = \det\left(I-A_s\projected{L^2(s,\infty)}\right) = \det\left(I-\tilde A_s\projected{L^2(0,1)}\right).$$ For the super-exponentially decaying Airy kernel $K$ we suggest the transformation $$\label{eq:transform} \phi_s(\xi) = s + 10 \tan(\pi \xi/2)\qquad (\xi\in(0,1)).$$ Note that though $\tilde K_s(\xi,\eta)$ is a smooth function on $[0,1]^2$ it possesses, as a function on $\C\times \C$, essential singularities on the lines $\xi=1$ or $\eta=1$. Hence, we can only apply the first part of Theorem \[thm:nyerr\] here, which then shows, for Gauss–Legendre and Clenshaw–Curtis, a super-algebraic convergence rate, that is, $O(m^{-k})$ for arbitrarily high algebraic order $k$. The actual numerical experiments reported in Figure \[fig:F2err\] show, in fact, even exponential convergence.
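Putting the third option to work requires hardly more code than before. The following Python sketch is an assumption-laden illustration on our part: it presumes NumPy and SciPy (`scipy.special.airy`, which returns $\Ai$, $\Ai'$, $\Bi$, $\Bi'$); the diagonal of the kernel matrix is filled with the limit $K(x,x) = \Ai'(x)^2 - x\,\Ai(x)^2$, which follows from $\Ai''(x) = x\,\Ai(x)$.

```python
import numpy as np
from scipy.special import airy  # returns (Ai, Ai', Bi, Bi')

def F2(s, m=50):
    """Tracy-Widom F_2(s) = det(I - A_s) by the Nystrom-type method,
    using the transformation phi_s(xi) = s + 10 tan(pi xi/2) of (eq:transform)."""
    xi, w = np.polynomial.legendre.leggauss(m)
    xi, w = 0.5 * (xi + 1.0), 0.5 * w           # Gauss-Legendre on (0, 1)
    x = s + 10.0 * np.tan(0.5 * np.pi * xi)     # nodes phi_s(xi) on (s, oo)
    wx = w * 5.0 * np.pi / np.cos(0.5 * np.pi * xi) ** 2  # w * phi_s'(xi)
    ai, aip = airy(x)[:2]
    num = ai[:, None] * aip[None, :] - ai[None, :] * aip[:, None]
    den = x[:, None] - x[None, :]
    np.fill_diagonal(den, 1.0)                  # placeholder, fixed below
    K = num / den
    np.fill_diagonal(K, aip ** 2 - x * ai ** 2) # limit value K(x, x)
    return np.linalg.det(np.eye(m) - wx[:, None] * K)

assert F2(-2.0) < F2(0.0) < F2(2.0)     # a distribution function in s
assert F2(2.0) > 0.99 and F2(-6.0) < 1e-3
```

Nodes mapped far into the region of super-exponential decay simply contribute negligible kernel values, so no special handling of the unbounded interval is needed.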
[![Convergence of the $m$-dimensional Nyström-type approximation—using the transformation (\[eq:transform\])—of the Fredholm determinants $F_2(-2)$ (left) and $F_2(-4)$ (right), based on Gauss–Legendre (dots) and Clenshaw–Curtis (circles). The dashed line shows the amount, according to (\[eq:roundoff2\]), of roundoff error due to the numerical evaluation of the finite-dimensional determinants; all calculations were done in IEEE double arithmetic ($\epsilon = 2^{-53}$).[]{data-label="fig:F2err"}](F2_2.pdf "fig:"){width="\textwidth"}]{} [![Convergence of the $m$-dimensional Nyström-type approximation—using the transformation (\[eq:transform\])—of the Fredholm determinants $F_2(-2)$ (left) and $F_2(-4)$ (right), based on Gauss–Legendre (dots) and Clenshaw–Curtis (circles). The dashed line shows the amount, according to (\[eq:roundoff2\]), of roundoff error due to the numerical evaluation of the finite-dimensional determinants; all calculations were done in IEEE double arithmetic ($\epsilon = 2^{-53}$).[]{data-label="fig:F2err"}](F2_4.pdf "fig:"){width="\textwidth"}]{} From the general-purpose point of view, we recommend the third option. It is straightforward and does not require any specific knowledge, or construction, as the first and second options would. #### Remarks on other numerical methods to evaluate $F_2(s)$ As for the sine kernel, there is a selfadjoint second-order ordinary differential operator commuting with $A_s$ [@MR1257246 p. 166]. Though this has been used to derive some asymptotic formulas, nothing is known in terms of special functions that would enable us to base a Ritz–Galerkin method on it.
As it has been put in the literature: “In the case of the Airy kernel … the differential equation did not receive much attention and its solutions are not known.” Prior to our work of calculating $F_2(s)$ directly from its determinantal expression, all the published numerical calculations started with Tracy and Widom’s remarkable discovery of expressing $F_2(s)$ in terms of the second Painlevé transcendent; namely $$F_2(s) = \exp\left(-\int_s^\infty (z-s) q(z)^2 \,dz\right)$$ with $q(z)$ being the Hastings–McLeod [-@MR555581] solution of Painlevé II, $$\label{eq:hastings} q''(z) = 2 q(z)^3 + z\,q(z),\qquad q(z)\sim\Ai(z)\text{ \;as\; $z \to \infty$}.$$ Initial value methods for the numerical integration of (\[eq:hastings\]) suffer from severe stability problems [@MR2070096]. Instead, the numerically stable way of solving (\[eq:hastings\]) goes by considering $q(z)$ as a connecting orbit, the other asymptotic state being $$q(z) \sim \sqrt\frac{-z}{2} \text{ \;as\; $z \to -\infty$},$$ and using numerical two-point boundary value solvers [@Dieng05]. Extension to Systems of Integral Operators {#sect:matrixkernels} ========================================== We now consider an $N\times N$ system of integral operators that is induced by continuous kernels $K_{ij} \in C(I_i\times I_j)$ ($i,j=1,\ldots,N$), where the $I_i \subset \R$ denote some finite intervals. The corresponding system of integral equations $$\label{eq:system} u_i(x) + z\sum_{j=1}^N \int_{I_j} K_{ij}(x,y) u_j(y)\,dy = f_i(x)\qquad (x \in I_i,\; i,j=1,\ldots,N)$$ defines, with $u=(u_1,\ldots,u_N)$ and $f=(f_1,\ldots,f_N)$, an operator equation $$u + z A u = f$$ on the Hilbert space $\mathcal{H} = L^2(I_1) \oplus \cdots \oplus L^2(I_N)$. The Fredholm determinant for systems {#sect:sysdet} ------------------------------------ Assuming $A$ to be trace class, let us express $\det(I+z A)$ in terms of the system $(K_{ij})$ of kernels.
To this end we show that the system (\[eq:system\]) is equivalent to a [*single*]{} integral equation; an idea that, essentially, can already be found in the early work on the subject. To simplify notation, we assume that the $I_k$ are *disjoint* (a simple transformation of the system of integral equations by a set of translations will arrange for this). We then have[^19] $$\mathcal{H} = \bigoplus_{k=1}^N L^2(I_k) \cong L^2(I), \qquad I = I_1 \cup \ldots \cup I_N,$$ by means of the natural isometric isomorphism $$(u_1,\ldots,u_N) \mapsto u = \sum_{k=1}^N \chi_k u_k$$ where $\chi_k$ denotes the characteristic function of the interval $I_k$. Given this picture, the operator $A$ can be viewed as being the integral operator on $L^2(I)$ that is induced by the kernel $$K(x,y) = \sum_{i,j=1}^N \chi_i(x) K_{ij}(x,y) \chi_j(y).$$ By (\[eq:detfred\]) we finally get $$\begin{aligned} \det(I + z A) &= \sum_{n=0}^\infty \frac{z^n}{n!} \int_{I^n} \det\left(K(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n \\*[2mm] &= \sum_{n=0}^\infty \frac{z^n}{n!} \int_{I^n} \underbrace{\left(\sum_{i_1,\ldots,i_n=1}^N \chi_{i_1}(t_1) \cdots \chi_{i_n}(t_n)\right)}_{=1} \det\left(K(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n \\*[2mm] &= \sum_{n=0}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \int_{I_{i_1}\times\cdots\times I_{i_n}} \det\left(K(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n \\*[2mm] &= \sum_{n=0}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \int_{I_{i_1}\times\cdots\times I_{i_n}} \det\left(K_{i_p i_q}(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n.\end{aligned}$$ By eventually transforming back to the originally given, non-disjoint intervals $I_k$, the last expression is the general formula that we have sought: $\det(I+z A) = d(z)$ with $$\label{eq:systemdet} d(z) = \sum_{n=0}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \int_{I_{i_1}\times\cdots\times I_{i_n}} \det\left(K_{i_p i_q}(t_p,t_q)\right)_{p,q=1}^n \,dt_1\cdots\,dt_n.$$ This is a perfectly well-defined entire function
for *any* system $K_{ij}$ of continuous kernels, independently of whether $A$ is a trace class operator or not. We call it the Fredholm determinant of the system. #### The determinant of block matrices In preparation for our discussion of Nyström-type methods for approximating (\[eq:systemdet\]) we briefly discuss the determinant of $N\times N$-block matrices $$A = \begin{pmatrix} A_{11} & \cdots & A_{1N} \\*[1mm] \vdots & & \vdots \\*[1mm] A_{N1} & \cdots & A_{NN} \end{pmatrix} \in \C^{M\times M},\qquad A_{ij} \in \C^{m_i \times m_j},\quad M = m_1 + \cdots + m_N.$$ Starting with von Koch’s formula (\[eq:vonKoch\]), an argument[^20] that is similar to the one that has led us to (\[eq:systemdet\]) yields $$\label{eq:blockmatrixdet} \det(I+z A) = \sum_{n=0}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \sum_{k_1=1}^{m_{i_1}} \cdots \sum_{k_n=1}^{m_{i_n}} \det\left((A_{i_p,i_q})_{k_p,k_q}\right)_{p,q=1}^n.$$ Quadrature methods for systems ------------------------------ Given a quadrature formula for each of the intervals $I_i$, namely $$\label{eq:quadsys} Q_i(f) = \sum_{j=1}^{m_i} w_{ij} f(x_{ij}) \;\approx\; \int_{I_i} f(x)\,dx,$$ we aim at generalizing the Nyström-type method of Section \[sect:quad\]. We restrict ourselves to the case of positive weights, $w_{ij}>0$, and generalize the method from the single operator case as given in (\[eq:detnysym\]) to the system case in the following form: $$\label{eq:nysysdef} d_Q(z) = \det(I+z A_Q), \qquad A_Q = \begin{pmatrix} A_{11} & \cdots & A_{1N} \\*[1mm] \vdots & & \vdots \\*[1mm] A_{N1} & \cdots & A_{NN} \end{pmatrix}$$ with the sub-matrices $A_{ij}$ defined by the entries $$(A_{ij})_{p,q} = w_{ip}^{1/2} K_{ij}(x_{ip},x_{jq}) w_{jq}^{1/2} \qquad (p=1,\ldots,m_i,\; q=1,\ldots,m_j).$$ This can be implemented just as straightforwardly as in the case of a single operator. Now, a convergence theory can be built on a representation of the error $d_Q(z)-d(z)$ that is analogous to (\[eq:nyerr\]). 
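Since the method (\[eq:nysysdef\]) is completely explicit, a short sketch may be helpful. The following Python/NumPy code (ours, for illustration only; the code in this paper is otherwise Matlab) assembles $A_Q$ from Gauss–Legendre rules on the individual intervals and evaluates $d_Q(z)$. As a test we take the system $K_{ij}(x,y) = e^{x-y}$, which merges to the rank-one kernel $e^{x-y}$ on the union $I$, so that $\det(I+zA) = 1 + z\,(|I_1| + \cdots + |I_N|)$ exactly.

```python
import numpy as np

def gauss_legendre(a, b, m):
    # m-point Gauss-Legendre rule transplanted from [-1,1] to [a,b]
    x, w = np.polynomial.legendre.leggauss(m)
    return 0.5*(b - a)*(x + 1.0) + a, 0.5*(b - a)*w

def nystrom_system_det(z, intervals, K, m=20):
    # d_Q(z) = det(I + z A_Q) with sub-matrices
    # (A_ij)_pq = w_ip^(1/2) K_ij(x_ip, x_jq) w_jq^(1/2)
    rules = [gauss_legendre(a, b, m) for (a, b) in intervals]
    blocks = [[np.sqrt(wi)[:, None] * Kij(xi[:, None], xj[None, :]) * np.sqrt(wj)[None, :]
               for Kij, (xj, wj) in zip(Krow, rules)]
              for Krow, (xi, wi) in zip(K, rules)]
    A = np.block(blocks)
    return np.linalg.det(np.eye(A.shape[0]) + z*A)

# rank-one test system: the merged kernel is e^(x-y) on I_1 (union) I_2,
# hence det(I + z A) = 1 + z*(|I_1| + |I_2|) = 1 + 2z
kernel = lambda x, y: np.exp(x - y)
K = [[kernel, kernel], [kernel, kernel]]
d = nystrom_system_det(1.0, [(0.0, 1.0), (2.0, 3.0)], K)
```

For this test system the quadrature preserves the rank-one structure, so $d$ agrees with the exact value $3$ up to rounding.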
To this end we simplify the notation by introducing the following functions on $I_{i_1}\times\cdots\times I_{i_n}$, $$K_{i_1,\ldots,i_n}(t_1,\ldots,t_n) = \det\left(K_{i_p i_q}(t_p,t_q)\right)_{p,q=1}^n\,,$$ and by defining, for functions $f$ on $I_{i_1}\times\cdots\times I_{i_n}$, the product quadrature formula $$\begin{gathered} \left(\prod_{k=1}^n Q_{i_k} \right)(f) = \sum_{j_1=1}^{m_{i_1}} \cdots \sum_{j_n=1}^{m_{i_n}} w_{i_1j_1} \cdots w_{i_nj_n} f(x_{i_1j_1},\ldots,x_{i_nj_n}) \\*[2mm] \approx \int_{I_{i_1}\times\cdots\times I_{i_n}} f(t_1,\ldots,t_n)\,dt_1\cdots\,dt_n.\end{gathered}$$ Thus, we can rewrite the Fredholm determinant (\[eq:systemdet\]) in the form $$d(z) = 1 + \sum_{n=1}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \int_{I_{i_1}\times\cdots\times I_{i_n}}K_{i_1,\ldots,i_n}(t_1,\ldots,t_n) \,dt_1\cdots\,dt_n.$$ Likewise, by observing the generalized von Koch formula (\[eq:blockmatrixdet\]), we put the definition (\[eq:nysysdef\]) of $d_Q(z)$ into the form $$d_Q(z) = 1+ \sum_{n=1}^\infty \frac{z^n}{n!} \sum_{i_1,\ldots,i_n=1}^N \left(\prod_{k=1}^n Q_{i_k} \right)(K_{i_1,\ldots,i_n}).$$ Thus, once again, the Nyström-type method amounts to approximating each multidimensional integral of the power series of the Fredholm determinant by using a product quadrature rule. Given this representation, Theorem \[thm:nyerr\] can straightforwardly be generalized to the system case: If $K_{ij} \in C^{k-1,1}(I_i \times I_j)$, then for each set (\[eq:quadsys\]) of quadrature formulae of a common order $\nu \geq k$ with positive weights there holds the error estimate $$d_Q(z) - d(z) = O(\nu^{-k}) \qquad (\nu\to\infty),$$ uniformly for bounded $z$. 
If the $K_{ij}$ are bounded analytic on $\mathcal{E}_\rho(I_i) \times \mathcal{E}_\rho(I_j)$ (with the ellipse $\mathcal{E}_\rho(I_i)$ defined, with respect to $I_i$, as in Theorem \[thm:quaderr\]), then for each set (\[eq:quadsys\]) of quadrature formulae of a common order $\nu$ with positive weights there holds the error estimate $$d_Q(z)-d(z) = O(\rho^{-\nu}) \qquad (\nu\to\infty),$$ uniformly for bounded $z$. Examples from random matrix theory ---------------------------------- Here, we apply the Nyström-type method (\[eq:nysysdef\]) to two $2\times 2$-systems of integral operators that have recently been studied in random matrix theory. [![Values of the two-point correlation function $\cov(\mathcal{A}_2(t),\mathcal{A}_2(0))$ of the Airy process $\mathcal{A}_2(t)$ (solid line). The dashed line shows the first term of the asymptotic expansion $\cov(\mathcal{A}_2(t),\mathcal{A}_2(0)) \sim t^{-2}$ as $t \to \infty$.[]{data-label="fig:Airy2"}](Airy2.pdf "fig:"){width="\textwidth"}]{} #### Two-point correlation of the Airy process The Airy process $\mathcal{A}_2(t)$ describes, in a properly rescaled limit of infinite dimension, the maximum eigenvalue of a Hermitian matrix ensemble whose entries evolve according to the Ornstein–Uhlenbeck process. This stationary stochastic process was introduced by and further studied by . These authors have shown that the joint probability distribution is given by a Fredholm determinant; namely $$\label{eq:jointprobairy2} \mathbb{P}(\mathcal{A}_2(t) \leq s_1, \mathcal{A}_2(0) \leq s_2) = \det\left(I - \begin{pmatrix} A_0 & A_t \\*[1mm] A_{-t} & A_0 \end{pmatrix}{\projected{L^2(s_1,\infty)\oplus L^2(s_2,\infty)}}\right)$$ with integral operators $A_t$ that are induced by the kernel functions $$\label{eq:airy2kernel} K_t(x,y) = \begin{cases} \phantom{-}\displaystyle\int_0^\infty e^{-\xi t} \Ai(x+\xi)\Ai(y+\xi)\,d\xi, &\qquad t\geq 0, \\*[4mm] -\displaystyle\int_{-\infty}^0 e^{-\xi t} \Ai(x+\xi)\Ai(y+\xi)\,d\xi, &\qquad \text{otherwise}. 
\end{cases}$$ Of particular interest is the two-point correlation function $$\begin{aligned} \label{eq:twopintairy2} \cov(\mathcal{A}_2(t),\mathcal{A}_2(0)) &= \mathbb{E}(\mathcal{A}_2(t)\mathcal{A}_2(0))- \mathbb{E}(\mathcal{A}_2(t)) \mathbb{E}(\mathcal{A}_2(0))\\*[2mm] &= \int_{\R^2} s_1 s_2 \frac{\partial^2\mathbb{P}(\mathcal{A}_2(t) \leq s_1, \mathcal{A}_2(0) \leq s_2)}{\partial s_1 \partial s_2} \,ds_1ds_2 -c_1^2,\notag\end{aligned}$$ where $c_1$ denotes the expectation value of the Tracy–Widom distribution (\[eq:F2\]). We have calculated this correlation function for $0 \leq t \leq 100$ in steps of $0.1$ to an absolute error of $\pm 10^{-10}$, see Figure \[fig:Airy2\].[^21] Here are some details about the numerical procedure: - Infinite intervals of integration, such as in the definition (\[eq:airy2kernel\]) of the kernels or for the domain of the integral operators (\[eq:jointprobairy2\]) themselves, are handled by a transformation to the finite interval $[0,1]$ as in Section \[sect:airy\]. - The kernels (\[eq:airy2kernel\]) are evaluated, after transformation, by a Gauss–Legendre quadrature. - The joint probability distribution (\[eq:jointprobairy2\]) is then evaluated, after transformation, by the Nyström-type method of this section, based on Gauss–Legendre quadrature. - To avoid numerical differentiation, the expectation values defining the two-point correlation (\[eq:twopintairy2\]) are evaluated by truncation of the integrals, integration by parts, and using a Gauss–Legendre quadrature once more. Because of analyticity, the convergence is always exponential. With parameters carefully (i.e., adaptively) adjusted to deliver an absolute error of $\pm 10^{-10}$, the evaluation of the two-point correlation takes, for a single time $t$ and using a 2 GHz PC, about 20 minutes on average. 
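To illustrate the first two of these items concretely, here is a small Python sketch (ours; the particular substitution $\xi = u/(1-u)$ is just one possible choice of transformation to $[0,1]$) that evaluates the first branch of the kernel (\[eq:airy2kernel\]) by Gauss–Legendre quadrature. At $t=0$ this reproduces the classical Airy kernel, which provides a convenient check against its closed form $(\Ai(x)\Ai'(y)-\Ai'(x)\Ai(y))/(x-y)$.

```python
import numpy as np
from scipy.special import airy

def K_t(t, x, y, m=200):
    # evaluate K_t(x,y) = int_0^oo exp(-xi*t) Ai(x+xi) Ai(y+xi) dxi
    # via the substitution xi = u/(1-u), which maps [0,1) to [0,oo)
    u, w = np.polynomial.legendre.leggauss(m)
    u = 0.5*(u + 1.0)                  # transplant nodes to [0,1]
    w = 0.5*w
    xi = u/(1.0 - u)
    jac = 1.0/(1.0 - u)**2             # dxi = du/(1-u)^2
    return np.sum(w * jac * np.exp(-xi*t) * airy(x + xi)[0] * airy(y + xi)[0])

# check at t = 0 against the closed form of the Airy kernel
x, y = 0.3, 0.7
Ax, Axp, _, _ = airy(x)
Ay, Ayp, _, _ = airy(y)
exact = (Ax*Ayp - Axp*Ay)/(x - y)
approx = K_t(0.0, x, y)
```

Since the transformed integrand is smooth and decays rapidly towards $u=1$, a couple of hundred quadrature points already give an accuracy well beyond the $\pm 10^{-10}$ targeted above.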
The results were cross-checked, for small $t$, with the asymptotic expansion [@MR1933446; @Hagg07] $$\begin{gathered} \cov(\mathcal{A}_2(t),\mathcal{A}_2(0)) = \var(\mathcal{A}_2(0)) - \tfrac{1}{2}\var(\mathcal{A}_2(t)-\mathcal{A}_2(0))\\*[1mm] = \var(\mathcal{A}_2(0))\, -\,t \,+ \, O(t^2)\qquad (t\to0),\end{gathered}$$ and, for large $t$, with the asymptotic expansion[^22] [@MR2054175; @MR2150191] $$\cov(\mathcal{A}_2(t),\mathcal{A}_2(0)) = t^{-2} + c t^{-4} + O(t^{-6})\qquad (t\to\infty),$$ where the constant $c=-3.542\cdots$ can explicitly be expressed in terms of the Hastings–McLeod solution (\[eq:hastings\]) of Painlevé II. [![Values of the two-point correlation function $\cov(\mathcal{A}_1(t),\mathcal{A}_1(0))$ of the $\text{Airy}_1$ process $\mathcal{A}_1(t)$.[]{data-label="fig:Airy1"}](Airy1.pdf "fig:"){width="\textwidth"}]{} #### Two-point correlation of the $\text{Airy}_1$ process and have introduced the $\text{Airy}_1$ process $\mathcal{A}_1(t)$ for which, once again, the joint probability distribution can be given in terms of a Fredholm determinant; namely $$\mathbb{P}(\mathcal{A}_1(t) \leq s_1, \mathcal{A}_1(0) \leq s_2) = \det\left(I - \begin{pmatrix} A_0 & A_t \\*[1mm] A_{-t} & A_0 \end{pmatrix}{\projected{L^2(s_1,\infty)\oplus L^2(s_2,\infty)}}\right)$$ with integral operators $A_t$ that are now induced by the kernel functions $$K_t(x,y) = \begin{cases} \Ai(x+y+t^2) e^{t(x+y)+2t^3/3} - \displaystyle\frac{\exp(-(x-y)^2/(4t))}{\sqrt{4\pi t}}, & \;t> 0, \\*[4mm] \Ai(x+y+t^2) e^{t(x+y)+2t^3/3}, &\;\text{otherwise}. \end{cases}$$ By basically employing the same numerical procedure as for the Airy process, we have succeeded in calculating the two-point correlation function $\cov(\mathcal{A}_1(t),\mathcal{A}_1(0))$ for $0 \leq t \leq 2.5$ in steps of $0.025$ to an absolute error of $\pm 10^{-10}$, see Figure \[fig:Airy1\].[^23] For a single time $t$ the evaluation takes about 5 minutes on average (using a 2 GHz PC). 
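The closed-form kernel above is straightforward to transcribe into code; the following Python sketch (ours) implements it and checks the symmetry $K_t(x,y) = K_t(y,x)$, which holds in both branches since they depend on $(x,y)$ only through $x+y$ and $(x-y)^2$.

```python
import numpy as np
from scipy.special import airy

def K1_t(t, x, y):
    # Airy_1 kernel as given in the text; for t > 0 a Gaussian
    # heat-kernel term is subtracted
    val = airy(x + y + t**2)[0] * np.exp(t*(x + y) + 2.0*t**3/3.0)
    if t > 0:
        val -= np.exp(-(x - y)**2/(4.0*t))/np.sqrt(4.0*np.pi*t)
    return val
```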
This numerical result has been used by as strong evidence that the $\text{Airy}_1$ process is, contrary to what was previously conjectured, *not* the limit of the largest eigenvalue in GOE matrix diffusion. Appendices ========== Quadrature Rules {#app:quad} ---------------- For ease of reference, we collect in this appendix some classical facts about quadrature rules in one and more dimensions. #### Quadrature rules in one dimension We consider quadrature rules of the form $$\label{eq:quad1} Q(f) = \sum_{j=1}^m w_j f(x_j)$$ which are meant to approximate $\int_a^b f(x)\,dx$ for continuous functions $f$ on some finite interval $[a,b]\subset \R$. We define the norm of a quadrature rule by $$\|Q\| = \sum_{j=1}^m |w_j|.$$ Convergence of a sequence of quadrature rules is characterized by the following theorem of Pólya [@MR760629 p. 130]. \[thm:polya\] A sequence $Q_n$ of quadrature rules converges for continuous functions, $$\lim_{n\to\infty} Q_n(f) = \int_a^b f(x)\,dx \qquad (f \in C[a,b]),$$ if and only if the sequence $\|Q_n\|$ of norms is bounded by some stability constant $\Lambda$ and if $$\label{eq:monomial} \lim_{n\to\infty} Q_n(x^k) = \int_a^b x^k\,dx \qquad (k =0,1,2,\ldots).$$ If the weights are all positive, then (\[eq:monomial\]) already implies the boundedness of $\|Q_n\|=Q_n(1)$. A quadrature rule $Q$ is of order $\nu\geq 1$ if it is exact for all polynomials of degree at most $\nu-1$. Using results from the theory of polynomial best approximation one can prove quite strong error estimates [@MR760629 §4.8]. \[thm:quaderr\] If $f \in C^{k-1,1}[a,b]$, then for each quadrature rule $Q$ of order $\nu\geq k$ with positive weights there holds the error estimate $$\left| Q(f) - \int_a^b f(x)\,dx \right| \leq c_k\, (b-a)^{k+1} \nu^{-k} \|f^{(k)}\|_{\scriptscriptstyle L^\infty(a,b)}\,,$$ with a constant[^24] $c_k$ depending only on $k$. 
If $f$ is bounded analytic in the ellipse $\mathcal{E}_\rho$ with foci at $a$, $b$ and semiaxes of lengths $s > \sigma$ such that $$\rho = \sqrt\frac{s+\sigma}{s-\sigma}\,,$$ then for each quadrature rule $Q$ of order $\nu$ with positive weights there holds the error estimate $$\left| Q(f) - \int_a^b f(x)\,dx \right| \leq \frac{4(b-a)\rho^{-\nu} }{1-\rho^{-1}} \|f\|_{\scriptscriptstyle L^\infty(\mathcal{E}_\rho)}.$$ #### Quadrature rules in two and more dimensions For the $n$-dimensional integral $$\int_{[a,b]^n} f(t_1,\ldots,t_n) \,dt_1 \cdots\,dt_n$$ we consider the product quadrature rule $Q^n$ that is induced by a one-dimensional quadrature rule $Q$ of the form (\[eq:quad1\]), namely $$\label{eq:productrule} Q^n(f) = \sum_{j_1,\ldots,j_n=1}^m w_{j_1}\cdots w_{j_n} \,f(x_{j_1},\ldots,x_{j_n}).$$ We introduce some further notation for two classes of functions $f$. First, for $f \in C^{k-1,1}([a,b]^n)$, we define the seminorm $$\label{eq:seminormk} |f|_k = \sum_{i=1}^n \|\partial_i^k f\|_{\scriptscriptstyle L^\infty((a,b)^n)}.$$ Second, if $f \in C([a,b]^n)$ is sectional analytic—that is, analytic with respect to each variable $t_i$ while the other variables are fixed in $[a,b]$—in the ellipse $\mathcal{E}_\rho$ (defined in Theorem \[thm:quaderr\]), and if $f$ is uniformly bounded there, we say that $f$ is of class $\mathcal{C}_\rho$ with norm $$\label{eq:normcrho} \|f\|_{\scriptscriptstyle \mathcal{C}_\rho} = \sum_{i=1}^n \;\max_{(t_1,\ldots,t_{i-1},t_{i+1},\ldots,t_n)\in[a,b]^{n-1}} \|f(t_1,\ldots,t_{i-1},\cdot\,,t_{i+1},\ldots,t_n)\|_{\scriptscriptstyle L^\infty(\mathcal{E}_\rho)}.$$ By a straightforward reduction argument [@MR760629 p. 361] to the quadrature errors of the one-dimensional coordinate sections of $f$, Theorems \[thm:polya\] and \[thm:quaderr\] can now be generalized to $n$ dimensions. \[thm:quaderrn\] If a sequence of quadrature rules converges for continuous functions, then the same holds for the induced $n$-dimensional product rules. 
If $f \in C^{k-1,1}([a,b]^n)$, then for each one-dimensional quadrature rule $Q$ of order $\nu\geq k$ with positive weights there holds the error estimate $$\left| Q^n(f) - \int_{[a,b]^n} f(t_1,\ldots,t_n) \,dt_1 \cdots\,dt_n \right| \leq c_k\, (b-a)^{n+k}\, \nu^{-k} |f|_k\,,$$ with the same constant $c_k$ depending only on $k$ as in Theorem \[thm:quaderr\]. If $f\in C([a,b]^n)$ is of class $\mathcal{C}_\rho$, then for each one-dimensional quadrature rule $Q$ of order $\nu$ with positive weights there holds the error estimate $$\left| Q^n(f) - \int_{[a,b]^n} f(t_1,\ldots,t_n) \,dt_1 \cdots\,dt_n \right| \leq \frac{4(b-a)^{n} \rho^{-\nu}}{1-\rho^{-1}} \|f\|_{\scriptscriptstyle \mathcal{C}_\rho}.$$ #### Notes on Gauss–Legendre and Clenshaw–Curtis quadrature Arguably, the most interesting families of quadrature rules, with *positive* weights, are the Clenshaw–Curtis and Gauss–Legendre rules. With $m$ points, the first is of order $\nu=m$, the second of order $\nu=2m$. Thus, Theorems \[thm:polya\] and \[thm:quaderr\] apply. The cost of computing the weights and points of Clenshaw–Curtis is $O(m\log m)$ using FFT, that of Gauss–Legendre is $O(m^2)$ using the Golub–Welsch algorithm; for details see [@MR2214855] and [@Tref08]. The latter paper studies in depth the reasons why the Clenshaw–Curtis rule, despite having only half the order, performs essentially as well as Gauss–Legendre for most integrands. 
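The notion of order is easy to check numerically. For instance, the following Python snippet (ours, using NumPy's built-in Gauss–Legendre rule) verifies that the $5$-point Gauss rule has order $\nu = 10$ on $[-1,1]$: it integrates monomials up to degree $9$ exactly, but fails for $x^{10}$.

```python
import numpy as np

m = 5
x, w = np.polynomial.legendre.leggauss(m)   # 5-point rule on [-1,1]

err_deg8 = w @ x**8 - 2/9       # exact: integral of x^8 is 2/9
err_deg9 = w @ x**9 - 0.0       # exact: odd integrand integrates to 0
err_deg10 = w @ x**10 - 2/11    # degree 2m = 10: exactness is lost
```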
To facilitate reproducibility we offer the Matlab code (which is just a minor variation of the code given in the papers mentioned above) that has been used in our numerical experiments: > function [w,c] = ClenshawCurtis(a,b,m) > m = m-1; > c = cos((0:m)*pi/m); > M = [1:2:m-1]'; l = length(M); n = m-l; > v0 = [2./M./(M-2); 1/M(end); zeros(n,1)]; > v2 = -v0(1:end-1)-v0(end:-1:2); > g0 = -ones(m,1); g0(1+l)=g0(1+l)+m; g0(1+n)=g0(1+n)+m; > g = g0/(m^2+mod(m,2)); > w = ifft(v2+g); w(m+1) = w(1); > c = ((1-c)/2*a+(1+c)/2*b)'; > w = ((b-a)*w/2)'; for Clenshaw–Curtis; and > function [w,c] = GaussLegendre(a,b,m) > k = 1:m-1; beta = k./sqrt((2*k-1).*(2*k+1)); > T = diag(beta,-1) + diag(beta,1); > [V,L] = eig(T); > c = (diag(L)+1)/2; c = (1-c)*a+c*b; > w = (b-a)*V(1,:).^2; for Gauss–Legendre, respectively. Note, however, that the code for Gauss–Legendre is, unfortunately, suboptimal in requiring $O(m^3)$ rather than $O(m^2)$ operations, since it establishes the full matrix $V$ of eigenvectors of the Jacobi matrix $T$ instead of directly calculating just their first components $V(1,:)$ as in the fully fledged Golub–Welsch algorithm. Even then, there may well be more accurate, and more efficient, alternatives for computing the points and weights of Gauss–Legendre quadrature, see the discussions in and and the literature cited therein. 
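For completeness, here is a Python/NumPy transcription (ours) of the Gauss–Legendre code above, with the same $O(m^3)$ caveat since the full eigenvector matrix of the Jacobi matrix is formed; NumPy's built-in `leggauss` serves as a cross-check.

```python
import numpy as np

def gauss_legendre(a, b, m):
    # points and weights on [a,b] via the symmetric Jacobi matrix;
    # forming all eigenvectors costs O(m^3), as in the Matlab code
    k = np.arange(1, m)
    beta = k/np.sqrt((2*k - 1.0)*(2*k + 1.0))
    T = np.diag(beta, -1) + np.diag(beta, 1)
    L, V = np.linalg.eigh(T)
    c = (L + 1.0)/2.0
    return (1.0 - c)*a + c*b, (b - a)*V[0, :]**2

x, w = gauss_legendre(-1.0, 1.0, 12)
xr, wr = np.polynomial.legendre.leggauss(12)   # cross-check
```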
Determinantal bounds -------------------- In Section \[sect:quad\], for a continuous kernel $K \in C([a,b]^2)$ of an integral operator, we need some bounds on the derivatives of the induced $n$-dimensional function $$K_n(t_1,\ldots t_n) = \det\left(K(t_p,t_q)\right)_{p,q=1}^n.$$ To this end, if $K \in C^{k-1,1}([a,b]^2)$ we define the norm $$\label{eq:Knormk} \|K\|_k = \max_{i+j\leq k} \|\partial_1^i\partial_2^j K\|_{\scriptscriptstyle L^\infty}.$$ \[lem:Kn\] If $K \in C([a,b]^2)$, then $K_n \in C([a,b]^n)$ with $$\label{eq:Knbound} \|K_n\|_{\scriptscriptstyle L^\infty} \leq n^{n/2} \|K\|_{\scriptscriptstyle L^\infty}^n.$$ If $K \in C^{k-1,1}([a,b]^2)$, then $K_n \in C^{k-1,1}([a,b]^n)$ with the seminorm (defined in (\[eq:seminormk\])) $$\label{eq:Knboundk} |K_n|_k \leq 2^k n^{(n+2)/2} \|K\|_k^n.$$ If $K$ is bounded analytic on $\mathcal{E}_\rho \times \mathcal{E}_\rho$ (with the ellipse $\mathcal{E}_\rho$ defined in Theorem \[thm:quaderr\]), then $K_n$ is of class $\mathcal{C}_\rho$ (defined in (\[eq:normcrho\])) and satisfies $$\label{eq:KnboundCrho} \|K_n\|_{\scriptscriptstyle \mathcal{C}_\rho} \leq n^{(n+2)/2} \|K\|_{\scriptscriptstyle L^\infty(\mathcal{E}_\rho \times \mathcal{E}_\rho)}^n.$$ Using the multilinearity of the determinant we have $$\begin{gathered} \frac{\partial^k}{\partial t_i^k} \frac{\partial^l}{\partial s_j^l} \begin{vmatrix} K(t_1,s_1) & \cdots & K(t_1,s_j) & \cdots & K(t_1,s_n) \\*[1mm] \vdots & & \vdots & & \vdots \\*[1mm] K(t_i,s_1) & \cdots & K(t_i,s_j) & \cdots & K(t_i,s_n) \\*[1mm] \vdots & & \vdots & & \vdots \\*[1mm] K(t_n,s_1) & \cdots & K(t_n,s_j) & \cdots & K(t_n,s_n) \end{vmatrix} \\*[2mm] = \begin{vmatrix} K(t_1,s_1) & \cdots & \partial_2^l K(t_1,s_j) & \cdots & K(t_1,s_n) \\*[1mm] \vdots & & \vdots & & \vdots \\*[1mm] \partial_1^k K(t_i,s_1) & \cdots & \partial_1^k\partial_2^lK(t_i,s_j) & \cdots & \partial_1^kK(t_i,s_n) \\*[1mm] \vdots & & \vdots & & \vdots \\*[1mm] K(t_n,s_1) & \cdots & \partial_2^lK(t_n,s_j) & \cdots & 
K(t_n,s_n) \end{vmatrix},\end{gathered}$$ which is, by Hadamard’s inequality [@MR17382 p. 469],[^25] bounded by the expression (see also ) $$n^{n/2}\left( \max_{i+j\leq k+l} \|\partial_1^i\partial_2^j K\|_{\scriptscriptstyle L^\infty}\right)^n.$$ Now, with $$\partial_j^k K_n(t_1,\ldots,t_n) = \sum_{l=0}^k \binom{k}{l} \frac{\partial^{k-l}}{\partial t_j^{k-l}} \frac{\partial^l}{\partial s_j^l} \begin{vmatrix} K(t_1,s_1) & \cdots & K(t_1,s_n) \\*[1mm] \vdots & & \vdots \\*[1mm] K(t_n,s_1) & \cdots & K(t_n,s_n) \end{vmatrix}_{s_1=t_1,\ldots,s_n=t_n}$$ we thus get $$\|\partial_j^k K_n\|_{\scriptscriptstyle L^\infty} \leq \sum_{l=0}^k \binom{k}{l} n^{n/2} \left( \max_{i+j\leq k} \|\partial_1^i\partial_2^j K\|_{\scriptscriptstyle L^\infty}\right)^n = 2^k n^{n/2} \left( \max_{i+j\leq k} \|\partial_1^i\partial_2^j K\|_{\scriptscriptstyle L^\infty}\right)^n.$$ This proves the asserted bounds (\[eq:Knbound\]) and (\[eq:Knboundk\]) with $k=0$ and $k\geq 1$, respectively. The class $\mathcal{C}_\rho$ bound (\[eq:KnboundCrho\]) follows analogously to the case $k=0$. Properties of a certain function used in Theorem \[thm:nyerr\] -------------------------------------------------------------- The power series $$\label{eq:phi} \Phi(z) = \sum_{n=1}^\infty \frac{n^{(n+2)/2}}{n!}\, z^n$$ defines an entire function on $\C$ (as the following lemma readily implies). \[lem:phi\] Let $\Psi$ be the entire function given by the expression $$\Psi(z) = 1 +\frac{\sqrt{\pi}}{2} z \,e^{z^2/4} \left(1+\erf\left(\frac z2\right)\right).$$ If $x > 0$, then the series $\Phi(x)$ is enclosed by:[^26] $$\sqrt{\frac{e}{\pi}}\, x \,\Psi(x\sqrt{2e}) \leq \Phi(x) \leq x\, \Psi(x\sqrt{2e}).$$ For $x>0$ we have $$\Phi(x) = x \sum_{n=1}^\infty \frac{n^{n/2}}{\Gamma(n)}x^{n-1}.$$ By Stirling’s formula and monotonicity we get for $n\geq 1$ $$\sqrt{\frac{e}{\pi}} \leq \frac{n^{n/2}}{\Gamma((n+1)/2)\,(\sqrt{2e}\,)^{n-1}} \leq 1;$$ in fact, the upper bound is obtained for $n=1$ and the lower bound for $n\to \infty$. 
Thus, by observing $$\sum_{n=1}^\infty \frac{\Gamma((n+1)/2)}{\Gamma(n)} z^{n-1} = 1 +\frac{\sqrt{\pi}}{2} z \,e^{z^2/4} \left(1+\erf\left(\frac z2\right)\right) = \Psi(z)$$ we get the asserted enclosure. Acknowledgements {#acknowledgements .unnumbered} ================ It is a pleasure to acknowledge that this work started while I attended the programme on “Highly Oscillatory Problems” at the Isaac Newton Institute in Cambridge. It was a great opportunity to meet Percy Deift there, who introduced me to numerical problems related to random matrix theory (though he was envisioning a general numerical treatment of Painlevé transcendents rather than of Fredholm determinants). I am grateful for his advice as well as for the communication with Herbert Spohn and Michael Prähofer, who directed my search for an “open” numerical problem to the two-point correlation functions of the Airy and $\text{Airy}_1$ processes. I thank Patrik Ferrari who pointed me to the [-@Sasa05] paper of . Given the discovery that Theorem \[thm:nyconv\], which was pivotal to my study, is essentially a long-forgotten (see my discussion on p. ) result of Hilbert’s 1904 work, I experienced much reconciliation—please allow me this very personal statement—from reading the poem “East Coker” (1940), in which T. S. Eliot, that “radical traditionalist”, described the nature of the human struggle for progress in life, art, or science: > … And so each venture\ > Is a new beginning …\ > … And what there is to conquer\ > By strength and submission, has already been discovered\ > Once or twice, or several times, by men whom one cannot hope\ > To emulate—but there is no competition—\ > There is only the fight to recover what has been lost\ > And found and lost again and again … [^1]: Zentrum Mathematik, Technische Universität München, Boltzmannstr. 3, 85747 Garching, Germany ([bornemann@ma.tum.de]{}). Manuscript as of . 
[^2]: writes: “This deep paper is extremely readable and I recommend it to those wishing a pleasurable afternoon.” An English translation of the paper can be found in . [^3]: Later reproduced as one of the first books on linear integral equations [@43.0423.01]. [^4]: For example, , , and start with the Fredholm determinant right from the beginning; yet already , the translation of the German edition from 1931, give it just a short mention (“since we shall not make any use of the Fredholm formulas later on”); while and , acknowledging the fact that “classical” Fredholm theory yields a number of results that functional analytic techniques do not, postpone Fredholm’s theory to a later chapter; whereas , , , , and ignore the Fredholm determinant altogether. Among the newer books on linear integral equations, the monumental four volume work of is one of the few we know of that give Fredholm determinants a balanced treatment. [^5]: Though we can only speculate about the reasons, there is something like a disapproving attitude towards determinants in general that seems to be quite common among people working in “continuous” applied mathematics. Here are a few scattered examples: writes at the beginning of the chapter on determinants (p. 460): “Today matrix and linear algebra are in the main stream of applied mathematics, while the role of determinants has been relegated to a minor backwater position.” has a paper with the provocative title “Down with Determinants!” and a correspondingly worked out textbook on linear algebra [-@Axler97]. The quintessential book of on “Matrix Computations” does not explicitly address the computation of determinants at all, it is only implicitly stated as part of Theorem 3.2.1. 
writes at the beginning of Section 14.6: “Like the matrix inverse, the determinant is a quantity that rarely needs to be computed.” He then continues with the argument, well known to every numerical analyst, that the determinant cannot be used as a measure of ill conditioning since it scales as $\det(\alpha A) = \alpha^m \det(A)$ for an $m\times m$-matrix $A$, $\alpha \in \R$. Certainly there is much truth in all of their theses, and Cramer’s rule and the characteristic polynomial, which were the most common reasons for a call to the numerical evaluation of determinants [@MR1653546 p. 176], have most rightfully been banned from the toolbox of a numerical analyst for reasons of efficiency. However, with respect to the infinite dimensional case, the elimination of determinants from the thinking of numerical analysts as a subject of computations might have been all too successful. For instance, the scaling argument does not apply in the infinite dimensional case: operator determinants $\det(I+A)$ are defined for compact perturbations of the identity, which perfectly determines the scaling since, for $\alpha \neq 1$, $\alpha (I+A)$ cannot be written in the form $I+\tilde A$ with another compact operator $\tilde A$. (This is because the identity operator is *not* compact then.) [^6]: The computational cost of $O(m^3)$ for the matrix determinant using either Gaussian elimination with partial pivoting [@MR1653546 p. 176] or Hyman’s method [@MR1927606 Sect. 14.6] clearly dominates the cost of $O(m\log m)$ for the weights and points of Clenshaw–Curtis quadrature using the FFT, as well as the cost of $O(m^2)$ for Gauss–Legendre quadrature using the Golub–Welsch algorithm; for implementation details of these quadrature rules see , , and the appendix of this paper. The latter paper carefully compares Clenshaw–Curtis with Gauss–Legendre and concludes (p. 84): “Gauss quadrature is a beautiful and powerful idea. 
Yet the Clenshaw–Curtis formula has essentially the same performance for most integrands.” [^7]: The command [\[w,x\] = QuadratureRule(a,b,m)]{} is supposed to supply the weights and points of an $m$-point quadrature rule on the interval $[a,b]$ as a $1\times m$ vector [w]{} and an $m\times 1$-vector [x]{}, respectively. For Gauss–Legendre and Clenshaw–Curtis, such a Matlab code can be found in the appendix. [^8]: Fredholm himself does not give the slightest hint of a motivation in his early papers [-@Fred00; -@34.0422.02]. He apparently conceived his determinant in [*analogy*]{} to similar expressions that his fellow countryman had obtained for infinite matrices a few years earlier; see , , or . [^9]: In matrix theory these norms are not commonly used—with the exception of $p=2$: $\|A\|_{\scriptscriptstyle\mathcal{J}_2}$ is then the Schur or Frobenius norm of the matrix $A$. [^10]: A counter-example was discovered by , see also . [^11]: An\[ft:hermitian\] $L^2$-kernel $K$ is Hermitian if $K(x,y) = \overline{K(y,x)}$. This property is equivalent to the fact that the induced Hilbert–Schmidt integral operator $A$ is selfadjoint, $A^*=A$. [^12]: have later extended this idea to generally define traces and determinants on embedded algebras of compact operators by a continuous extension from the finite dimensional case. Even within this general theory the trace class operators enjoy a most unique position: it is only for them that the values of trace and determinant are *independent* of the algebra chosen for their definition. On the contrary, if $A$ is Hilbert–Schmidt but not trace class, by varying the embedded algebra, the values of the trace $\tr(A)$ can be given *any* complex number and the values of the determinant $\det(I+A)$ are either always zero or can be made to take *any* value in the set $\C\setminus\{0\}$ [@MR1744872 Chap. VII]. [^13]: had given a corresponding form of the Fredholm determinant for integral operators. However, it can already be found in . 
[^14]: In fact, using such exponential “convergence factors” is a classical technique in complex analysis to construct, by means of infinite products, entire functions from their intended sets of zeros, see . A famous example is $$\frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(1+\frac{z}{n}\right)e^{-z/n},$$ which corresponds to the eigenvalues $\lambda_n(A) = 1/n$ of a Hilbert–Schmidt operator that is not trace class. [^15]: Equivalently one can define [@MR2154153 p. 50] the regularized determinant by $$\det\nolimits_2(I+z A) = \det(I + ((I+zA)e^{-z A} - I)) \qquad (A \in \mathcal{J}_2(\mathcal{H})).$$ This is because one can then show $(I+zA)e^{-z A} - I \in \mathcal{J}_1(\mathcal{H})$. For integral operators $A$ on $L^2(a,b)$ with an $L^2$ kernel $K$, had found the equivalent expression $$\det\nolimits_2(I+z A) = \sum_{n=0}^\infty \frac{z^n}{n!} \int_{(a,b)^n} \begin{vmatrix} 0 & K(t_1,t_2) & \cdots & K(t_1,t_n) \\*[1mm] K(t_2,t_1) & 0 & \cdots & K(t_2,t_n)\\*[1mm] \vdots & \vdots & \ddots & \vdots\\*[1mm] K(t_n,t_1) & K(t_n,t_2) & \cdots & 0 \end{vmatrix}\,dt_1\cdots\,dt_n,$$ simply replacing the problematic “diagonal” entries $K(t_j,t_j)$ in Fredholm’s determinant (\[eq:detfred\]) by zero. gives an elegant proof of Hilbert’s formula. [^16]: In infinite dimensions $P_m$ cannot converge [*in norm*]{} since the identity operator is not compact. [^17]: By (\[eq:galerkin\]) all we need to know for implementing the Galerkin method are the matrix elements $\langle \phi_i,A\phi_j\rangle$ for the normalized orthogonal polynomials $\phi_n$ (that is, properly rescaled Legendre polynomials) on the interval $[0,1]$. 
A somewhat lengthy but straightforward calculation shows that these elements are given by $$(\langle \phi_i,A \phi_j\rangle)_{i,j} = \begin{pmatrix} \frac{1}{12} & 0 & b_0 \\*[2mm] 0 & \frac{1}{60} & 0 & b_1 \\ b_0 & 0 & a_1 & \ddots & \ddots \\ & b_1 & \ddots & a_2 \\ & & \ddots & & \ddots \end{pmatrix}$$ with the coefficients $$a_n = \frac{1}{2(2n+1)(2n+5)},\qquad b_n = -\frac{1}{4(2n+3)\sqrt{(2n+1)(2n+5)}}.$$ [^18]: For the interval $[-1,1]$ this weight would be $(1-x^2)^{1/2}$, see . [^19]: The general case could be dealt with by the topological sum, or coproduct, of the intervals $I_k$, $$\coprod_{k=1}^N I_k = \bigcup_{k=1}^N I_k \times \{k\}.$$ One would then use [@MR2018275] the natural isometric isomorphism $$\mathcal{H} = \bigoplus_{k=1}^N L^2(I_k) \cong L^2\left(\coprod_{k=1}^N I_k \right).$$ [^20]: Alternatively, we can use (\[eq:detgrothen\]) and, recursively, the “binomial” formula [@MR0224623 p. 121] $$\bigwedge\nolimits^k(V_0 \oplus V_1) = \bigoplus_{j=0}^k \left(\bigwedge\nolimits^jV_0\right) \otimes \left(\bigwedge\nolimits^{k-j}V_1 \right)$$ of exterior algebra, which is valid for general vector spaces $V_0$ and $V_1$. [^21]: A table can be obtained from the author upon request. shows a plot (which differs by a scaling factor of two in both the function value and the time $t$) of the closely related function $$g_2(t) = \sqrt{\var(\mathcal{A}_2(t)-\mathcal{A}_2(0))/2} = \sqrt{\var(\mathcal{A}_2(0)) - \cov(\mathcal{A}_2(t),\mathcal{A}_2(0))}$$ —without, however, commenting on either the numerical procedure used or on the accuracy obtained. 
[^22]: have derived this asymptotic expansion from the masterfully obtained result that $G(t,x,y) = \log\mathbb{P}(\mathcal{A}_2(t)\leq x, \mathcal{A}_2(0) \leq y)$ satisfies the following nonlinear 3rd order PDE with certain (asymptotic) boundary conditions: $$\begin{gathered} t \frac{\partial}{\partial t}\left(\frac{\partial^2}{\partial x^2}- \frac{\partial^2}{\partial y^2}\right) G = \frac{\partial^3 G}{\partial x^2\partial y}\left(2\frac{\partial^2 G}{\partial y^2}+ \frac{\partial^2 G}{\partial x\partial y}-\frac{\partial^2 G}{\partial x^2}+x-y-t^2\right)\\*[2mm] -\frac{\partial^3 G}{\partial y^2\partial x}\left(2\frac{\partial^2 G}{\partial x^2}+ \frac{\partial^2 G}{\partial x\partial y}-\frac{\partial^2 G}{\partial y^2}-x+y-t^2\right) +\left(\frac{\partial^3 G}{\partial x^3}\frac{\partial}{\partial y}-\frac{\partial^3 G}{\partial y^3}\frac{\partial}{\partial x}\right) \left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right) G.\end{gathered}$$ The reader should contemplate a numerical calculation of the two-point correlation based on this PDE, rather than directly treating the Fredholm determinant as suggested by us. [^23]: A table can be obtained from the author upon request. shows a plot (which differs by a scaling factor of two in both the function value and the time $t$) of the closely related function $$g_1(t) = \sqrt{\var(\mathcal{A}_1(t)-\mathcal{A}_1(0))/2} = \sqrt{\var(\mathcal{A}_1(0)) - \cov(\mathcal{A}_1(t),\mathcal{A}_1(0))}$$ —without, however, commenting on either the numerical procedure used or on the accuracy obtained. [^24]: Taking Jackson’s inequality as given in , $c_k = 2 (\pi e/4)^k/\sqrt{2\pi k}$ will do the job. [^25]: This inequality, discovered by in 1893, was already of fundamental importance to Fredholm’s original theory [@Fred00 p. 41]. [^26]: Note the sharpness of this enclosure: $\sqrt{e/\pi}=0.93019\cdots$.
---
abstract: 'Past work in computational sarcasm deals primarily with sarcasm detection. In this paper, we introduce a novel, related problem: sarcasm target identification (*i.e.*, extracting the target of ridicule in a sarcastic sentence). We present an introductory approach for sarcasm target identification. Our approach employs two types of extractors: one based on rules, and another consisting of a statistical classifier. To compare our approach, we use two baselines: a naïve baseline and another baseline based on work in sentiment target identification. We perform our experiments on book snippets and tweets, and show that our hybrid approach performs better than the two baselines, and also better than either extractor used individually. Our introductory approach establishes the viability of sarcasm target identification, and will serve as a baseline for future work.'
bibliography:
- 'main.bib'
title: 'Automatic Identification of Sarcasm Target: An Introductory Approach'
---

*This paper was uploaded to arXiv on 20 October, 2016; but was submitted to EACL 2017 at an earlier date. The paper was not on arXiv at the time of EACL submission.*

Introduction
============

Sarcasm is a form of verbal irony that is intended to express contempt or ridicule[^1]. Past work in computational sarcasm deals primarily with sarcasm detection, *i.e.*, predicting whether or not a given piece of text is sarcastic [@joshi2016automatic]. Thus, the sentence ‘*A woman needs a man like fish needs bicycle[^2]*’ will be predicted as sarcastic. While several approaches have been reported for sarcasm detection [@3; @4; @6; @22], no past work, to the best of our knowledge, attempts to identify a crucial component of sarcasm: the target of ridicule [@campbell2012there]. In the case of the example above, this target of ridicule is the word ‘*man*’. In this paper, we introduce a new avenue in computational sarcasm research.
We explore a novel problem called ‘**sarcasm target identification**’: the task of extracting the target of ridicule (*i.e.*, the **sarcasm target**) of a sarcastic text. This sarcasm target is either a subset of words in the sentence or a fallback label ‘Outside’[^3]. We present an introductory approach **that takes as input a sarcastic text and returns its sarcasm target**. Our hybrid approach employs two extractors: a rule-based extractor (that implements a set of rules) and a statistical extractor (that uses a word-level classifier for every word in the sentence, to predict whether the word is part of the sarcasm target). We evaluate our approach using two manually labeled datasets consisting of book snippets and tweets. We consider two versions of our approach: Hybrid OR (where the predictions of the two extractors are OR-ed) and Hybrid AND (where the predictions of the two extractors are AND-ed). Since this is the first work in sarcasm target detection, no past work exists to be used as a baseline. Hence, we devise two baselines to validate the strength of our work. The first is a simple, intuitive baseline to show whether our approach (which is computationally more intensive than the simple baseline) adds any value[^4]. As the second baseline, we use a technique reported for sentiment/opinion target identification. For both our datasets, we observe that the hybrid approach outperforms both baselines. In addition, the hybrid OR approach also works better than using either the rule-based or the statistical extractor individually. Sarcasm target identification will be useful for aspect-based sentiment analysis so that the negative sentiment expressed in the sarcastic text can be attributed to the correct aspect. To the best of our knowledge, this is the first work that attempts identification of sarcasm targets. Our results will serve as a baseline for future work. Our manually labeled datasets are available for download at: [Anonymous](Anonymous).
Each unit consists of a piece of text (either a book snippet or a tweet) annotated with its sarcasm target. The rest of the paper is organized as follows. Section \[sec:motiv\] formulates the problem, while Section \[sec:archi\] describes our architecture in detail. The experiment setup is in Section \[sec:expsetup\]. The results are presented in Section \[sec:results\] while an error analysis is in Section \[sec:discuss\]. We present related work in Section \[sec:relwork\] and conclude the paper in Section \[sec:concl\].

![Architecture of our Sarcasm Target Identification Approach[]{data-label="fig:archi"}](Fig1_Again.png){width="53.00000%"}

Formulation {#sec:motiv}
===========

Sarcasm is a well-known challenge to sentiment analysis [@pang2008opinion]. Consider the sarcastic sentence ‘*My cell phone has an awesome battery that lasts 20 minutes*’. This sentence ridicules the battery of the cell phone. Aspect-based sentiment analysis needs to identify that the sentence ridicules the battery of the phone and hence, expresses negative sentiment towards the aspect ‘*battery*’. Our proposed problem ‘sarcasm target identification’ thus enables aspect-based sentiment analysis to attribute the negative sentiment to the correct target aspect. We define the **sarcasm target** as the entity or situation being ridiculed in a sarcastic text. In the case of ‘*Can’t wait to go to class today*’, the word ‘*class*’ is the sarcasm target. Every sarcastic text has at least one sarcasm target (by the definition of sarcasm), and the notion of a sarcasm target is applicable only to sarcastic text (*i.e.*, non-sarcastic text does not have a sarcasm target). Thus, we define **sarcasm target identification** as the task of extracting the subset of words that indicate the target of ridicule, given a sarcastic text. In case the target of ridicule is not present among these words, a fallback label ‘Outside’ is expected. Examples of some sarcasm targets are given in Table \[tab:examples\].
Some challenges of sarcasm target identification are:

-   **Presence of multiple candidate phrases**: Consider the sentence ‘*This phone heats up so much that I strongly recommend chefs around the world to use it as a cook-top*’. In this sentence, the words ‘*chefs*’, ‘*cook-top*’ and ‘*phone*’ are candidate phrases. However, only the ‘*phone*’ is being ridiculed in this sentence.

-   **Multiple sarcasm targets**: A sentence like ‘*You are as good at coding as he is at cooking*’ ridicules both ‘*you*’ and ‘*he*’, and hence, both are sarcasm targets.

-   **Absence of a sarcasm target word (the ‘Outside’ case)**: Consider the situation where a student is caught copying in a test, and the teacher says, ‘*Your parents must be so proud today!*’. No specific word in the sentence is the sarcasm target. The target here is the student. We refer to such cases as the ‘outside’ cases.

| **Example**                                                                               | **Target**      |
|-------------------------------------------------------------------------------------------|-----------------|
| Love when you don’t have two minutes to send me a quick text.                              | you             |
| Don’t you just love it when Microsoft tells you that you’re spelling your own name wrong.  | Microsoft       |
| I love being ignored.                                                                      | being ignored   |
| He is as good at coding as Tiger Woods is at avoiding controversy.                         | He, Tiger Woods |
| Yeah, right! I hate catching the bus on time anyway!                                       | Outside         |

Architecture {#sec:archi}
============

Our hybrid approach for sarcasm target identification is depicted in Figure \[fig:archi\]. The input is a sarcastic sentence while the output is the sarcasm target. The approach consists of two kinds of extractors: (a) a **rule-based extractor** that implements nine rules to identify different kinds of sarcasm targets, and (b) a **statistical extractor** that uses statistical classification techniques. The two extractors individually generate lists of candidate sarcasm targets.
The third component is the **integrator** that makes an overall prediction of the sarcasm target by choosing among the sarcasm targets returned by the individual extractors. The overall output is a subset of words in the sentence. In case no word is found to be a sarcasm target, a fallback label ‘Outside’ is returned. In the forthcoming subsections, we describe the three modules in detail.

Rule-based Extractor
--------------------

Our rule-based extractor consists of **nine rules** that take as input the sarcastic sentence, and return a set of candidate sarcasm targets. The rules are summarized in Table \[tab:rules\]. We now describe each rule, citing past work that motivated the rule, wherever applicable:

1.  **R1** (**Pronouns and Pronominal Adjectives**): R1 returns pronouns such as ‘*you, she, they*’ and pronominal adjectives followed by their object (as in the case of ‘*your shoes*’). Thus, for the sentence ‘*I am so in love with my job*’, the phrases ‘*I*’ (pronoun) and ‘*my job*’ (based on the pronominal adjective ‘my’) are returned as candidate sarcasm targets. This is based on observations by  .

2.  **R2** (**Named Entities**): Named entities in a sentence may be sarcasm targets. This rule returns all *named entities* in the sentence. In case of ‘*Olly Riley is so original with his tweets*’, R2 predicts the phrase ‘*Olly Riley*’ as a candidate sarcasm target.

3.  **R3** (**Sentiment-bearing verb as the pivot**): This rule is based on the idea by that sarcasm may be expressed as a contrast between a positive sentiment verb and a negative situation. In case of ‘*I love being ignored*’, the sentiment-bearing verb ‘*love*’ is positive. The object of ‘*love*’ is ‘*being ignored*’. Therefore, R3 returns ‘*being ignored*’ as the candidate sarcasm target. If the sentiment-bearing verb is negative, the rule returns ‘*Outside*’ as a candidate sarcasm target.
    This is applicable in case of situations like humble bragging[^5] as in ‘*I hate being so popular*’, where the speaker is either ridiculing the listener or just bragging about themselves.

4.  **R4** (**Non-sentiment-bearing verb as the pivot**): This rule applies in case of sentences where the verb does not bear sentiment. The rule identifies which of the subject and the object has the lower sentiment score, and returns the corresponding portion as the candidate sarcasm target. For example, rule R4 returns ‘*to have a test on my birthday*’ as the candidate sarcasm target in case of ‘*Excited that the teacher has decided to have a test on my birthday!*’, where ‘decided’ is the non-sentiment-bearing verb. This is also based on  .

5.  **R5** (**Gerundial verb phrases and Infinitives**): R5 returns the gerundial phrase ‘*being covered in rashes*’ as the candidate sarcasm target in case of ‘*Being covered in rashes is fun.*’ Similarly, in case of ‘*Can’t wait to wake up early to babysit!*’, the infinitive ‘*to wake up early to babysit*’ is returned.

6.  **R6** (**Noun phrases containing positive adjective**): R6 extracts noun phrases of the form ‘JJ NN’ where JJ is a positive adjective, and returns the noun indicated by NN. Specifically, 1-3 words preceding the nouns in the sentence are checked for positive sentiment. In case of ‘*Look at the most realistic walls in a video game*’, the noun ‘*walls*’ is returned as the sarcasm target.

7.  **R7** (**Interrogative sentences**): R7 returns the subject of an interrogative sentence as the sarcasm target. Thus, for ‘*A murderer is stalking me. Could life be more fun?*’, the rule returns ‘*life*’ as the target.

8.  **R8** (**Sarcasm in Similes**): This rule captures the *subjects/noun phrases* involved in similes and ‘as if’ comparisons. The rule returns the subject on both sides, as in ‘*He is as good at coding as Tiger Woods is at avoiding controversy.*’ Both ‘*He*’ and ‘*Tiger Woods*’ are returned as targets.
    This is derived from work on sarcastic similes by  .

9.  **R9** (**Demonstrative adjectives**): This rule captures nouns associated with demonstrative adjectives - *this/that/these/those*. For example, for the sentence ‘*Oh, I love this jacket!*’, R9 returns ‘*this jacket*’ as the sarcasm target.

| **Rule** | **Definition**                                                      | **Example**                                                                                                                                                |
|----------|---------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| R1       | Return pronouns (including possessive) and pronoun-based adjectives | Love when **you** don’t have two minutes to send me a quick text .. ; I am so in love with **my job**.                                                     |
| R2       | Return named entities as target                                     | Don’t you just love it when **Microsoft** tells you that you’re spelling your own name wrong.                                                              |
| R3       | Return direct object of a positive sentiment verb                   | I *love* **being ignored**.                                                                                                                                |
| R4       | Return phrase on lower sentiment side of primary verb               | So happy to just find out it has been *decided* **to reschedule all my lectures and tutorials for me to night classes at the exact same times**!            |
| R5       | Return gerund and infinitive verb phrases                           | **Being covered in hives** is so much fun!; Can’t wait **to wake up early to babysit**...                                                                  |
| R6       | Return nouns preceded by a positive sentiment adjective             | Yep, this is indeed an *amazing* **donut** ..                                                                                                              |
| R7       | Return subject of interrogative sentences                           | A murderer is stalking me. Could **life** be more fun?                                                                                                     |
| R8       | Return subjects of comparisons (similes)                            | **He** is as good at coding as **Tiger Woods** is at avoiding controversy.                                                                                 |
| R9       | Return demonstrative adjective-noun pairs                           | Oh, I love **this jacket**!                                                                                                                                |

**Combining the outputs of individual rules to generate candidate sarcasm targets of the rule-based extractor**: To generate the set of candidate sarcasm targets returned by the rule-based extractor, a weighted majority approach is used as follows.
Every rule above is applied to the input sarcastic sentence. Then, every word is assigned a score that sums the accuracies of the rules that predicted this word to be a part of the sarcasm target. This accuracy is the overall accuracy of the rule as determined using the rule-based extractor alone[^6]. Thus, each word is weighted by how accurate the rules predicting it as a target are. Words corresponding to the maximum value of this score are returned as candidate sarcasm targets.

Statistical Extractor
---------------------

The statistical extractor uses a classifier that takes as input a word (along with its features) and returns whether the word is a sarcasm target. To do this, we decompose the task into $n$ classification tasks, where $n$ is the total number of words in the sentence. This means that every word in the input text is considered as an instance, whose label is 1 or 0 depending on whether or not the given word is a part of the sarcasm target. For example, ‘*Tooth-ache is fun*’ with sarcasm target ‘*tooth-ache*’ is broken down into three instances: ‘*tooth-ache*’ with label 1, ‘*is*’ with label 0 and ‘*fun*’ with label 0. In case the target lies outside the sentence, all words have the label 0. We then represent the instance (*i.e.*, the word) as a set of the following features: (A) **Lexical**: *Unigrams*; (B) **Part of Speech (POS)**-based features: *Current POS*, *Previous POS*, *Next POS*; (C) **Polarity**-based features: *Word Polarity* (the sentiment score of the word) and *Phrase Polarity* (the sentiment score of the trigram formed by the previous word, the current word and the next word, in that order). These polarities lie in the range \[-1,+1\]. These features are based on our analysis that the target phrase or word tends to be more neutral than the rest of the sentence; and (D) **Pragmatic features**: *Capitalization* (the number of capital letters in the word). Capitalization features are chosen based on features from  .
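The per-word instance construction described above can be sketched as follows. The tiny sentiment lexicon, the averaged trigram polarity, and the helper names are illustrative assumptions, not the authors' exact implementation (the paper uses NLTK tooling and a sentiment word-list):

```python
# Illustrative sketch of building one classification instance per word.
# TOY_SENTIMENT is a stand-in for a real sentiment word-list (assumption).
TOY_SENTIMENT = {"love": 1.0, "fun": 0.8, "awesome": 0.9, "hate": -1.0}

def word_polarity(word):
    return TOY_SENTIMENT.get(word.lower(), 0.0)

def make_instances(words, pos_tags, target_words):
    instances = []
    for i, (word, pos) in enumerate(zip(words, pos_tags)):
        features = {
            "unigram": word.lower(),                          # (A) lexical
            "pos": pos,                                       # (B) POS-based
            "prev_pos": pos_tags[i - 1] if i > 0 else "BOS",
            "next_pos": pos_tags[i + 1] if i + 1 < len(words) else "EOS",
            "word_polarity": word_polarity(word),             # (C) polarity
            "phrase_polarity": sum(
                word_polarity(w) for w in words[max(0, i - 1):i + 2]
            ) / 3.0,
            "capitals": sum(c.isupper() for c in word),       # (D) pragmatic
        }
        label = 1 if word.lower() in target_words else 0      # 1 = target word
        instances.append((features, label))
    return instances
```

For ‘Tooth-ache is fun’ with target ‘tooth-ache’, this yields three instances labeled 1, 0 and 0, matching the example above.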
The classifiers are trained with words as instances while the sarcasm target is to be computed at the sentence level. Hence, the candidate sarcasm target returned by the statistical extractor consists of the words for which the classifier returned 1. For example, the sentence ‘This is fun’ is broken up into three instances: ‘this’, ‘is’ and ‘fun’. If the classifier returns 1, 0, 0 for the three instances respectively, the statistical extractor returns ‘this’ as the candidate sarcasm target. Similarly, if the classifier returns 0, 0, 0 for the three instances, the extractor returns the fallback label ‘Outside’.

Integrator
----------

The integrator determines the sarcasm target based on the outputs of the two extractors. We consider two configurations of the integrator:

1.  Hybrid **OR**: In this configuration, the integrator predicts the set of words that occur in the output of either of the two extractors as the sarcasm target. If both lists are empty, the output returned is ‘Outside’.

2.  Hybrid **AND**: In this configuration, the integrator predicts the set of words that occur in the output of both extractors as the sarcasm target. If the intersection of the lists is empty, the output returned is ‘Outside’.

The idea of using the two configurations OR and AND is based on a rule-based sarcasm detector by [@precedes]. While AND is intuitive, the OR configuration is necessary because the extractors individually may not capture all forms of sarcasm target; in particular, *our rules may not cover all forms of sarcasm targets*.
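The two integrator configurations amount to a set union and a set intersection with an ‘Outside’ fallback. A minimal sketch (the function and argument names are ours):

```python
def integrate(rule_candidates, statistical_candidates, mode="OR"):
    """Combine the candidate sarcasm targets of the two extractors.

    mode="OR"  -> words predicted by either extractor (union)
    mode="AND" -> words predicted by both extractors (intersection)
    Falls back to the label 'Outside' when the combined set is empty.
    """
    rule_set = set(rule_candidates)
    stat_set = set(statistical_candidates)
    combined = rule_set | stat_set if mode == "OR" else rule_set & stat_set
    return sorted(combined) if combined else ["Outside"]
```

Note that AND can return ‘Outside’ even when both extractors made predictions, which is why it is the more restrictive configuration.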
|                                                                 | **Snippets** | **Tweets** |
|-----------------------------------------------------------------|--------------|------------|
| Count                                                           | 224          | 506        |
| Average \#words                                                 | 28.47        | 13.06      |
| Vocabulary                                                      | 1710         | 1458       |
| Total words                                                     | 6377         | 6610       |
| Average length of sarcasm target                                | 1.6          | 2.08       |
| Average polarity strength of sarcasm target                     | 0.0087       | 0.035      |
| Average polarity strength of portion apart from sarcasm target  | 0.027        | 0.53       |

: Statistics of our datasets; ‘Snippets’: Book Snippets[]{data-label="tab:stats"}

Experiment setup {#sec:expsetup}
================

We evaluate our approach using two datasets: one consisting of book snippets and another of tweets. The dataset of book snippets is a sarcasm-labeled dataset by  . 224 book snippets marked as sarcastic are used. On the other hand, for our dataset of tweets, we use the sarcasm-labeled dataset by  . 506 sarcastic tweets from this dataset are used. The statistics of the two datasets are shown in Table \[tab:stats\]. The average length of a sarcasm target is 1.6 words in case of book snippets and 2.08 words in case of tweets. The last two rows in the table point to an interesting observation. In both datasets, the average polarity strength[^7] of the sarcasm target is lower than that of the rest of the sentence. This shows that the sarcasm target is likely to be neutral rather than sentiment-bearing. Note that all textual units (tweets as well as book snippets) in both datasets are sarcastic. We use **SVM Perf** [@joachims2006training] to train the classifiers, optimized for F-score with epsilon e=0.5 and an RBF kernel[^8]. We set C=1000 for tweets and C=1500 for snippets. We report our results on **four-fold cross validation** for both datasets. Note that we convert individual sentences into words. Therefore, the dataset of book snippets has 6377 instances, while that of tweets has 6610 instances. The four folds for cross-validation are created over these instances.
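Since the folds are created over word-level instances rather than sentences, the split can be sketched as below. The round-robin assignment is our assumption; the paper does not specify how instances are distributed across folds:

```python
def make_folds(instances, k=4):
    # Round-robin assignment of word-level instances to k folds.
    # Each fold later serves once as the held-out set in k-fold
    # cross-validation over the 6377 (snippets) / 6610 (tweets) instances.
    return [instances[i::k] for i in range(k)]
```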
With a word as an instance, the task is binary classification: 1 indicating that the word is a sarcasm target and 0 indicating that it is not. For the rules in the rule-based extractor, we use tools in NLTK [@nltk], wherever necessary. We consider two baselines with which our hybrid approach is compared:

1.  **Baseline 1: All Objective Words**: As the first baseline, we design a naïve approach for our task: include all words of the sentence which are not stop words and have neutral sentiment polarity as the predicted sarcasm target. We reiterate that for problems with no past work, simplistic baselines have been commonly reported in sentiment analysis. However, to validate that our hybrid approach is valuable, we compare it against other possible versions of the system as well.

2.  **Baseline 2**: Baseline 2 is derived from past work in opinion target identification, because sarcasm target identification may be considered a form of opinion target identification. Sequence labeling has been reported for opinion target identification [@jin2009opinionminer]. Therefore, we use SVM-HMM [@svmhmm] with default parameters as the second baseline.

We report performance using two metrics: Exact Match Accuracy and Dice Score. These metrics have been used in past work in information extraction [@michelson2007unsupervised]. As per their conventional use, these metrics are computed at the sentence level. The metrics that we use are:

-   **Exact Match (EM) Accuracy**: An exact match occurs if the list of predicted target(s) is exactly the same as the list of actual target(s). The accuracy is computed as the number of instances with an exact match divided by the total number of instances.

-   **Dice Score**: The Dice score [@sorensen1948method] is used to compare the similarity between two samples. This is considered to be a better metric than exact match accuracy because it accounts for missing words and extra words in the target.
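The two metrics can be made concrete as follows; treating targets as word sets and scoring the ‘Outside’ label like any other prediction is our reading of the conventions above:

```python
def exact_match(predicted, actual):
    # 1.0 iff the predicted target list equals the actual target list.
    return 1.0 if sorted(predicted) == sorted(actual) else 0.0

def dice_score(predicted, actual):
    # Dice coefficient 2|P ∩ A| / (|P| + |A|): rewards partial overlap,
    # penalizing both missing words and extra words in the prediction.
    p, a = set(predicted), set(actual)
    if not p and not a:
        return 1.0
    return 2.0 * len(p & a) / (len(p) + len(a))
```

For the simile example above, predicting only ‘He’ against the gold target ‘He, Tiger Woods’ gives a Dice score of 2/3 but an exact match of 0.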
| **Rule** | **EM** (Overall) | **DS** (Overall) | **EM** (Conditional) | **DS** (Conditional) |
|----------|------------------|------------------|----------------------|----------------------|
| **R1**   | 7.14             | **32.8**         | 7.65                 | 35.23                |
| **R2**   | **8.48**         | 16.7             | 19.19                | 37.81                |
| **R3**   | 4.91             | 6.27             | 16.92                | 21.62                |
| **R4**   | 2.67             | 11.89            | 4.38                 | 19.45                |
| **R5**   | 1.34             | 6.39             | 2.32                 | 11.11                |
| **R6**   | 4.01             | 6.77             | 8.91                 | 15.02                |
| **R7**   | 3.12             | 10.76            | 9.46                 | 32.6                 |
| **R8**   | 4.91             | 6.78             | **35.02**            | 45.17                |
| **R9**   | 4.46             | 6.94             | 34.48                | **53.67**            |

: Results for individual rules for book snippets[]{data-label="rule_results_snippets"}

| **Rule** | **EM** (Overall) | **DS** (Overall) | **EM** (Conditional) | **DS** (Conditional) |
|----------|------------------|------------------|----------------------|----------------------|
| **R1**   | 6.32             | 19.19            | 8.69                 | 26.39                |
| **R2**   | 11.26            | 16.18            | 30.32                | 43.56                |
| **R3**   | **12.45**        | 20.28            | 34.24                | **55.77**            |
| **R4**   | 6.91             | 13.51            | 18.42                | 36.0                 |
| **R5**   | 9.28             | **23.87**        | 15.36                | 39.47                |
| **R6**   | 10.08            | 16.91            | 19.31                | 32.42                |
| **R7**   | 9.88             | 15.21            | 32.25                | 49.65                |
| **R8**   | 11.26            | 11.26            | **50**               | 50                   |
| **R9**   | 11.46            | 13.28            | 43.59                | 50.51                |

: Results for individual rules for tweets[]{data-label="rules_results_tweets"}

**Note**: If the actual target is the fallback label ‘Outside’, then the expected predicted target is either ‘Outside’ or an empty prediction list. In such a case, the instance will contribute to exact match accuracy.

Results {#sec:results}
=======

This section presents our results in two steps: the performance of the individual rules that are a part of the rule-based extractor, and the performance of the overall approach.

Performance of rules in the rule-based extractor
------------------------------------------------

Tables \[rule\_results\_snippets\] and \[rules\_results\_tweets\] present the performance of the rules in our rule-based extractor, for snippets and tweets respectively. The two metrics (exact match accuracy and dice score) are reported for two cases: Overall and Conditional. ‘Overall’ spans all text units in the dataset whereas ‘Conditional’ is limited to text units which match a given rule (*i.e.*, where the given linguistic phenomenon of, say, gerunds, etc. is observed).
Considering the ‘Conditional’ case is crucial because a rule may be applicable only to a specific form of sarcasm target, yet work accurately in those cases. Such a rule will have a low ‘overall exact match/dice score’ but a high ‘conditional exact match/dice score.’ Values in bold indicate the best performing rule for a given performance metric. As seen in the tables, the values for ‘Conditional’ are higher than those for ‘Overall’. For example, consider rule **R7** in Table \[rule\_results\_snippets\]: its exact match accuracy is 3.12 overall, as against 9.46 conditional. This situation is typical of rule-based systems, where rules may not cover all cases but are accurate for the situations that they do cover. For tweets, R3 has a very high conditional dice score (55.77). This rule validates the benefit of utilizing the structure of sarcastic tweets as explored by [@riloff]: a ‘contrast of positive sentiment with negative situation’ is a strong indicator of the sarcasm target.

Overall Performance
-------------------

We now compare the performance of the approach with the baselines (as described in Section \[sec:expsetup\]). In order to understand the benefit of the individual extractors, we also show their performance when they are used individually. Thus, we compare five approaches: (A) Baseline, (B) Rule-based (when only the rule-based extractor is used), (C) Statistical (when only the statistical extractor is used), and (D) & (E) Hybrid (two configurations: OR and AND). It must be noted that **since no prior sarcasm target identification approach exists, we verify whether our approach does any better than a simpler, obvious baseline**. Such baselines have been used in early work in sentiment analysis: for example,   compare against a ‘random-choice’ baseline, while   use a simple majority-voting baseline, in the absence of past work.
We also use a second baseline from a related area: sentiment/opinion target identification. Tables \[results\_snippets\_1\] and \[results\_tweets\_1\] compare the five approaches for snippets and tweets respectively. All our approaches outperform the baseline in terms of exact match and dice score. In case of tweets, Table \[results\_tweets\_1\] shows that the rule-based extractor achieves a dice score of 29.13 while that of the statistical extractor is 31.8. Combining the two (owing to our hybrid architecture) improves the dice score to 39.63. This improvement also holds for book snippets. This **justifies the ‘hybrid’ nature of our approach**. Hybrid OR performs the best in terms of Dice Score. However, for exact match accuracy, Hybrid AND achieves the best performance (16.51 for snippets and 13.45 for tweets). This is likely because Hybrid AND is restrictive with respect to the predictions it makes for individual words. The statistical extractor performs better than the rule-based extractor for all three metrics. For example, in case of tweets, the dice score of the statistical extractor is 31.8 while that of the rule-based extractor is 29.13. Also, nearly all results (across approaches and metrics) are higher for tweets than for snippets. Since tweets are shorter than snippets (as shown in Table \[tab:stats\]), it is likely that they are more direct in their ridicule.

| **Approach**                        | **EM**    | **DS**    |
|-------------------------------------|-----------|-----------|
| **Baseline 1: All Objective Words** | 0.0       | 16.14     |
| **Baseline 2: Seq. Labeling**       | 12.05     | 31.44     |
| **Only Rule-Based**                 | 9.82      | 26.02     |
| **Only Statistical**                | 12.05     | 31.2      |
| **Hybrid OR**                       | 7.01      | **32.68** |
| **Hybrid AND**                      | **16.51** | 21.28     |

: Performance of sarcasm target identification for snippets[]{data-label="results_snippets_1"}

| **Approach**                        | **EM**    | **DS**    |
|-------------------------------------|-----------|-----------|
| **Baseline 1: All Objective Words** | 1.38      | 27.16     |
| **Baseline 2: Seq. Labeling**       | 12.26     | 33.41     |
| **Only Rule-Based**                 | 9.48      | 29.13     |
| **Only Statistical**                | 10.48     | 31.8      |
| **Hybrid OR**                       | 9.09      | **39.63** |
| **Hybrid AND**                      | **13.45** | 20.82     |

: Performance of sarcasm target identification for tweets[]{data-label="results_tweets_1"}

|                  | **EM** (Snippets) | **DS** (Snippets) | **EM** (Tweets) | **DS** (Tweets) |
|------------------|-------------------|-------------------|-----------------|-----------------|
| Overall          | 7.01              | 32.68             | 9.09            | 39.63           |
| ‘Outside’ cases  | 6.81              | 6.81              | 4.71            | 4.71            |

: Comparison of performance of our approach in case of examples with target outside the text (indicated by ‘Outside’ cases), with the complete dataset (indicated by ‘Overall’); EM: Exact Match, DS: Dice Score[]{data-label="tab:discuss"}

Error Analysis {#sec:discuss}
==============

A key source of error is cases where the target lies outside the text. In this section, we describe such examples and compare the impact of these errors with the overall performance. In our dataset of book snippets, there are 11 texts (approximately 5%) with the sarcasm target outside the text. In case of tweets, such cases are more frequent: 53 tweets (approximately 10%). Table \[tab:discuss\] compares the results of our hybrid (OR) approach for the specific case of the target being ‘outside’ the text (indicated by ‘Outside’ cases in the table), with the results on the complete dataset (indicated by ‘Overall’ in the table). The Dice Score (DS) for book snippets is 6.81 for ‘outside’ cases as compared to 32.68 for the complete dataset. In general, the performance for the ‘outside’ cases is lower than the overall performance. This demonstrates the difficulty that the ‘Outside’ cases present. The EM and DS values for ‘Outside’ cases are the same by definition: when the target is ‘Outside’, a partial match and an exact match are the same. Our approach correctly predicts the label ‘Outside’ for sentences like ‘*Yeah, just ignore me. That is TOTALLY the right way to handle this!*’ However, our approach gives the incorrect output for some examples.
For example, for ‘*Oh, and I suppose the apples ate the cheese*’, the predicted target is not ‘Outside’ (the expected label) but ‘*I*’. Similarly, for ‘*Please keep ignoring me for all of senior year. It’s not like we’re friends with the exact same people*’, the incorrectly predicted target is ‘*me*’ instead of the expected label ‘Outside’.

Related Work {#sec:relwork}
============

Computational sarcasm primarily focuses on sarcasm detection: the classification of a text as sarcastic or non-sarcastic. present a survey of sarcasm detection approaches. They observe three trends in sarcasm detection: semi-supervised extraction of sarcastic patterns, use of hashtag-based supervision, and use of contextual information for sarcasm detection [@3; @4; @22]. However, to the best of our knowledge, no past work aims to identify phrases in a sarcastic sentence that indicate the target of ridicule in the sentence. Related to sarcasm target identification is sentiment target identification, which deals with identifying the entity towards which sentiment is expressed in a sentence. present an approach to extract opinion words and targets collectively from a dataset. Aspect identification for sentiment has also been studied. This deals with extracting aspects of an entity (for example, color, weight, or battery in case of a cell phone). Probabilistic topic models have been commonly used for the same. present a probabilistic topic model that jointly estimates sentiment and aspect in order to achieve sentiment summarization. perform multi-aspect sentiment analysis using a topic model. Several other topic model-based approaches to aspect extraction have been reported [@mukherjee2012aspect]. To the best of our knowledge, ours is the first work that deals with sarcasm target identification.

Conclusion & Future Work {#sec:concl}
========================

In this paper, we introduced a novel problem: sarcasm target identification.
This problem aims to identify the target of ridicule in a sarcastic text. This target may be a subset of words in the text or a fallback label ‘Outside’. The task poses challenges such as multiple sarcasm targets, or sarcasm targets that may not even be present as words in the sentence. We present an introductory approach for sarcasm target identification that consists of two kinds of extractors: a rule-based and a statistical extractor. Our rule-based extractor implements nine rules that capture forms of sarcasm target. The statistical extractor splits a sentence of length $n$ into $n$ instances, where each instance is represented by a word and a label that indicates whether this word is a sarcasm target. A statistical classifier that uses features based on POS and sentiment predicts whether a given word is likely to be a target or not. Finally, an integrator combines the outputs of the two extractors in two configurations: OR and AND. We evaluate our approach on two datasets: one consisting of snippets from books, and another of tweets. In general, our hybrid OR system performs the best, with a Dice score of 39.63. This is higher than the two baselines: a naïve baseline designed for the task, and a baseline based on sentiment target identification. Our hybrid approach also scores higher than either extractor used individually. This shows that the two extractors collectively form a good sarcasm target identification approach. Finally, we present an analysis of errors for examples where the target is outside the sentence. In such cases, our approach performs close to the overall system in terms of exact match, but there is a severe degradation in the Dice score. Our work forms a foundation for future approaches to identify sarcasm targets. As future work, additional rules in the rule-based extractor and novel sets of features in the statistical extractor may be used.
Use of syntactic dependencies has been found to be useful in the case of opinion target extraction [@qiu2011opinion]. Applying these techniques to sarcasm target identification can be useful. A special focus on the ‘outside’ cases (*i.e.*, cases where the target of ridicule in a sarcastic text is beyond the words present in the sentence) is likely to be helpful for sarcasm target identification. [^1]: Source: The Free Dictionary [^2]: This quote is attributed to Irina Dunn, an Australian writer and social activist. [^3]: This label is necessary because the sarcasm target may not be present as a word in the sentence. Section 2 discusses this in detail. [^4]: In the absence of past work, simple and obvious techniques to solve a problem have been considered as baselines in sentiment analysis [@tan2011user; @pang2005seeing]. [^5]: <http://www.urbandictionary.com/define.php?term=humblebrag> [^6]: These values are shown in Tables 4 and 5. [^7]: Polarity strength is the sum of polarities of words. We use a sentiment word-list to get the strength values. [^8]: The RBF kernel performed better than the linear kernel.
--- abstract: 'The structures of three negatively charged forms (anionic keto-1 and enol-1, dianionic enol-2) of oxyluciferin (OxyLuc), which are the most probable emitters responsible for the firefly bioluminescence, have been fully relaxed at the variational Monte Carlo (VMC) level. Absorption energies of the S$_1 \leftarrow$ S$_0$ vertical transition have been computed using different levels of theory, such as TDDFT, CC2 and many-body Green’s function theory (MBGFT). The use of MBGFT, by means of the Bethe-Salpeter (BS) formalism, on VMC structures provides, for the keto-1 form, a result (2.32 eV) in excellent agreement with the value (2.26(8) eV) obtained by action spectroscopy experiments. To unravel the role of the quality of the optimized ground-state geometry, BS excitation energies have also been computed on CASSCF geometries, inducing a non-negligible blue shift (0.08 and 0.07 eV for the keto-1 and enol-1 forms, respectively) with respect to the VMC ones. Structural effects have been analyzed in terms of over- or under-correlation along the conjugated bonds of OxyLuc by using different methods for the ground-state optimization. The relative stability of the S$_1$ state for the keto-1 and enol-1 forms depends on the method chosen for the excited-state calculation, thus representing a fundamental caveat for any theoretical study on these systems. Finally, Kohn-Sham HOMO and LUMO orbitals of enol-2 are (nearly) bound only when the dianion is embedded into a solvent (water and toluene in the present work); excited-state calculations are therefore meaningful only in the presence of a dielectric medium which localizes the electronic density. The combination of VMC for the ground-state geometry and the BS formalism for the absorption spectra clearly outperforms standard TDDFT and quantum chemistry approaches.' 
author: - Emanuele Coccia - Daniele Varsano - Leonardo Guidoni bibliography: - 'bib.bib' title: 'Theoretical S$_1 \leftarrow$ S$_0$ absorption energies of the anionic forms of oxyluciferin by Variational Monte Carlo and Many Body Green’s Function Theory' ---
--- abstract: 'In this note we consider the $k$th level of the uniform random recursive tree after $n$ steps, and prove that the proportion of nodes with degree greater than $t\log n$ converges to $(1-t)^k$ almost surely, as $n\to\infty$, for every $t\in(0,1)$. In addition, we show that the number of degree $d$ nodes in the first level is asymptotically Poisson distributed with mean $1$; moreover, they are asymptotically independent for $d=1,2,\dots$.' address: - | Department of Probability Theory and Statistics\ Faculty of Science\ Eötvös Loránd University\ Pázmány P. s. 1/C, H-1117 Budapest, Hungary - | Department of Probability Theory and Statistics\ Faculty of Science\ Eötvös Loránd University\ Pázmány P. s. 1/C, H-1117 Budapest, Hungary author: - Ágnes Backhausz - 'Tamás F. Móri' title: Degree distribution in the lower levels of the uniform recursive tree --- Introduction ============ Let us consider the following random graph model. We start from a single node labelled with $0$. At the $n$th step we choose a vertex at random, with equal probability, and independently of the past. Then a new node, vertex $n$, is added to the graph, and it is connected to the chosen vertex. In this way a random tree, the so-called uniform recursive tree, is built. This model has a long and rich history. Apparently, the first publication where the uniform recursive tree appeared was [@Tapia]. Since then a huge number of papers have explored the properties of this simple combinatorial structure. Recursive trees serve as probabilistic models for system generation, spread of contamination of organisms, pyramid schemes, stemma construction in philology, Internet interface maps, stochastic growth of networks, and many other areas of application; see [@FHN] for references. For a survey of probabilistic properties of uniform recursive trees see [@Dr] or [@SM]. 
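The growth rule just described is straightforward to simulate. The following short Python sketch (our illustration; all names are of our own choosing) grows a uniform recursive tree and tabulates the empirical degree distribution, which for large $n$ is close to the known almost sure limit $2^{-d}$ recalled below:

```python
import random
from collections import Counter


def uniform_recursive_tree(n, seed=0):
    """Grow a uniform recursive tree on nodes 0..n-1; return the parent of each node."""
    rng = random.Random(seed)
    parent = {0: None}                   # node 0 is the root
    for i in range(1, n):
        parent[i] = rng.randrange(i)     # uniform choice among nodes 0..i-1
    return parent


def degree_proportions(parent):
    """Empirical proportion of nodes of each degree."""
    degree = Counter()
    for child, p in parent.items():
        if p is not None:                # each edge contributes to both endpoints
            degree[child] += 1
            degree[p] += 1
    n = len(parent)
    return {d: c / n for d, c in Counter(degree.values()).items()}
```

For $n=20000$ the observed proportions of degree $1$, $2$, $3$ nodes are within a few percent of $1/2$, $1/4$, $1/8$.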
Among others, it is known that this random tree has an asymptotic degree distribution, namely, the proportion of nodes with degree $d$ converges, as $n\to\infty$, to $2^{-d}$ almost surely. Another important quantity is the maximal degree, which is known to be asymptotically equal to $\log_2 n$ [@DL]. Considering our graph a rooted tree, we can define the levels of the tree in the usual way: level $k$ is the set $L_n(k)$ of the vertices that are of distance $k$ from vertex $0$, the root. It is not hard to find the a.s. asymptotics of the size of level $k$ after step $n$; it is $$|L_n(k)|\sim {\mathbb E}|L_n(k)|\sim\frac{(\log n)^k}{k!},\quad k=1,2,\dots\,.$$ Recursive trees on nodes $0,\,1,\,\dots,\,n-1$ can be transformed into permutations $\sigma=(\sigma_1,\sigma_2,\dots,\sigma_n)$ in the following recursive way. Start from the identity permutation $\sigma=(1,2,\dots,n)$. Then, taking the nodes $1,\,2,\,\dots,\,n-1$ one after another, update the permutation by swapping $\sigma_{i+1}$ and $\sigma_{i+1-j}$ if node $i$ was connected to node $j<i$ at the time it was added to the tree. It is easy to see that in this way a one-to-one correspondence is set between trees and permutations, and the uniform recursive tree is transformed into a uniform random permutation. Another popular recursive tree model is the so-called plane-oriented recursive tree. It was originally proposed by Szymański [@Szym], but it came into the focus of research after the seminal paper of Barabási and Albert [@BA]. A non-oriented version of it starts from a single edge, and at each step a new vertex is added to the graph. The new vertex is then connected to one of the old nodes at random; the other endpoint of the new edge is chosen from the existing vertices with probability proportional to the instantaneous degree of the node (preferential attachment). This can also be done in such a way that we select an edge at random with equal probability, then choose one of its endpoints. 
In this tree the proportion of degree $d$ nodes converges to $\frac{4}{d(d+1)(d+2)}$ with probability $1$. Katona has shown [@Kat] that the same degree distribution can be observed if one is confined to any of the largest levels. On the other hand, if we only consider a fixed level, the asymptotic degree distribution still exists, but it becomes different [@M]. This phenomenon has been observed in other random graphs, too. A general result of that kind has been published recently [@BM]. In the present note we will investigate the lower levels of the uniform recursive tree. We will show that, unlike in many scale-free recursive tree models, no asymptotic degree distribution emerges. Instead, for almost all nodes in the lower levels the degree sequence grows to infinity at the same rate as the overall maximum of degrees does. We also investigate the number of degree $d$ vertices in the first level for $d=1,\,2,\,\dots$, and show that they are asymptotically i.i.d. Poisson with mean $1$. Nodes of high degree in the lower levels ======================================== Let $\deg_n(i)$ denote the degree of node $i$ after step $n$ $(i\le n)$. Further, let $Z_{n,k}(t)$ denote the proportion of nodes in level $k$ with degree greater than $t\log n$. Formally, $$Z_{n,k}(t)=\frac{1}{|L_n(k)|}\,\bigl|\left\{i\le n: i\in L_n(k),\ \deg_n(i)> t\log n\right\}\bigr|.$$ The main result of this section is the following theorem. \[T2.1\] For $k=1,2,\dots$ and $0<t<1$ $$\lim_{n\to\infty}Z_{n,k}(t)=(1-t)^k\ \ a.s.$$ For the proof we need some auxiliary lemmas, interesting in their own right. Let the number $n$ of steps be fixed, and $1<i<n$. Firstly, we are interested in $X=\deg_n(i)-1$. \[L2.1\] Let $0<\varepsilon<t<1$. Then for every $i>n^{1-t+\varepsilon}$ we have $${\mathbb P}(X>t\log n)\le \exp\left(-\frac{\varepsilon^2}{2t}\,\log n\right).$$ $X=I_{i+1}+I_{i+2}+\dots+I_n$, where $I_j=1$, if vertex $i$ gets a new edge at step $j$, and $0$ otherwise. 
These indicators are clearly independent and ${\mathbb E}I_j=1/j$, hence $${\mathbb E}X=\frac{1}{i+1}+\dots+\frac{1}{n}\,.$$ Let us abbreviate it by $s$. Clearly, $$\log\frac{n}{i+1}\le s\le \log\frac{n}{i}\,.$$ Let $a>s$, then by [@AS Theorem A.1.12] we have $${\mathbb P}(X\ge a)\le\left(e^{\beta-1}\beta^{-\beta}\right)^s,$$ where $\beta=a/s$. Hence $$\begin{gathered} {\mathbb P}(X\ge a)\le e^{a-s}\Bigl(\frac{s}{a}\Bigr)^a =e^{a-s}\Bigl(1-\frac{a-s}{a}\Bigr)^a\\ =\exp\left(a-s-a\Bigl(\frac{a-s}{a}+\frac 12\Bigl(\frac{a-s}{a} \Bigr)^2+\dots \Bigr)\right)\le\exp\left(-\frac{(a-s)^2}{2a}\right).\end{gathered}$$ Now, set $a=t\log n$. Then $s\le(t-\varepsilon)\log n$, and $${\mathbb P}(X\ge t\log n)\le\exp\left(-\frac{(t\log n-s)^2}{2t\log n}\right) \le\exp\left(-\frac{\varepsilon^2}{2t}\,\log n\right).$$ \[L2.2\] Let $0<t<1$, and $0<\varepsilon<1-t$. Then for every $i\le n^{1-t-\varepsilon}-1$ we have $${\mathbb P}(X\le t\log n)\le \exp\left(-\frac{\varepsilon^2}{2(t+\varepsilon)}\, \log n\right).$$ This time $s>\log\frac{n}{i+1}\ge(t+\varepsilon)\log n$, thus [@AS Theorem A.1.13] implies that $${\mathbb P}(X\le t\log n)\le\exp\left(-\frac{(s-t\log n)^2}{2s}\right).$$ Notice that the exponent on the right-hand side, as a function of $s$, is decreasing for $s>t\log n$. Therefore $s$ can be replaced by $(t+\varepsilon)\log n$, and the proof is complete. **Proof of Theorem \[T2.1\]**. Since $\deg_n(i)$ is approximately equal to $\log\frac{n}{i}$, it follows that $\deg_n(i)\ge t\log n$ is approximately equivalent to $i\le n^{1-t}$. Based on Lemmas \[L2.1\] and \[L2.2\] we can quantify this heuristic reasoning. Let $0<\varepsilon<\min\{t,\,1-t\}$, and $a=a(n)=\left\lfloor n^{1-t-\varepsilon}\right\rfloor-1$, $b=b(n)=\left\lceil n^{1-t+\varepsilon}\right\rceil$. 
Then by Lemma \[L2.2\] $$\begin{aligned} &{\mathbb P}\bigl(\exists i\in L_n(k)\text{ such that }i\le a,\ \deg_n(i)\le 1+t\log n\bigr)\\ &\qquad\le \sum_{i=1}^a {\mathbb P}\bigl(i\in L_n(k),\ \deg_n(i)\le 1+t\log n\bigr)\\ &\qquad =\sum_{i=1}^a {\mathbb P}\bigl(i\in L_n(k)\bigr){\mathbb P}\bigl(\deg_n(i)\le 1+t\log n\bigr)\\ &\qquad\le {\mathbb E}L_n(k)\cdot\exp\left(-\frac{\varepsilon^2}{2(t+\varepsilon)}\, \log n\right).\end{aligned}$$ Similarly, by Lemma \[L2.1\], $$\begin{aligned} &{\mathbb P}\bigl(\exists i\in L_n(k)\text{ such that }i>b,\ \deg_n(i)> 1+t\log n\bigr)\\ &\qquad\le \sum_{i=b+1}^n {\mathbb P}\bigl(i\in L_n(k),\ \deg_n(i)> 1+t\log n\bigr)\\ &\qquad =\sum_{i=b+1}^n {\mathbb P}\bigl(i\in L_n(k)\bigr){\mathbb P}\bigl(\deg_n(i)> 1+t\log n\bigr)\\ &\qquad\le {\mathbb E}L_n(k)\cdot\exp\left(-\frac{\varepsilon^2}{2t}\, \log n\right).\end{aligned}$$ Introduce the events $$A(n)=\bigl\{L_a(k)\subset\{i\in L_n(k):\deg_n(i)>1+t\log n\} \subset L_b(k)\bigr\},$$ then the probability of their complements can be estimated as follows. $${\mathbb P}\left(\overline{A(n)}\right)\le 2{\mathbb E}|L_n(k)|\, \exp\left(-\frac{\varepsilon^2}{2t}\,\log n\right).$$ Note that $|L_a(k)|\sim(1-t-\varepsilon)^k|L_n(k)|$, and $|L_b(k)|\sim(1-t+\varepsilon)^k|L_n(k)|$, a.s. Let $c>2(t+\varepsilon)\varepsilon^{-2}$, then $\sum_{n=1}^\infty {\mathbb P}\left(\overline{A(n^c)}\right)<\infty$, hence by the Borel–Cantelli lemma it follows almost surely that $A(n^c)$ occurs for every $n$ large enough. Consequently, $$\begin{gathered} (1-t-\varepsilon)^k\left|L_{n^c}(k)\right|\bigl(1+o(1)\bigr)\\ \le \left|\left\{i\in L_{n^c}(k):\deg_{n^c}(i)>1+t\log(n^c)\right\} \right|\\ \le(1-t+\varepsilon)^k\left|L_{n^c}(k)\right|\bigl(1+o(1)\bigr).\end{gathered}$$ This implies $$\liminf_{n\to\infty} Z_{n^c,k}(t)\ge(1-t-\varepsilon)^k\text{ and } \limsup_{n\to\infty} Z_{n^c,k}(t)\le(1-t+\varepsilon)^k$$ for every positive $\varepsilon$, hence Theorem \[T2.1\] is proven along the subsequence $(n^c)$. 
To the indices in between we can apply the following estimation. For $n^c\le N \le(n+1)^c$ with sufficiently large $n$ we have $$\begin{aligned} Z_{N,k}(t)&\le\frac{1}{\left|L_{n^c}(k)\right|}\,\left|\left\{i\in L_{(n+1)^c}(k):\deg_{(n+1)^c}(i)\ge t\log(n^c)\right\}\right|\\ &=\frac{\left|L_{(n+1)^c}(k)\right|}{\left|L_{n^c}(k)\right|}\, Z_{(n+1)^c,k}\left(t\,\frac{\log n}{\log(n+1)}\right).\end{aligned}$$ Here the first term tends to $1$, while the second term’s asymptotic behaviour is just the same as that of $Z_{(n+1)^c,k}(t)$. Hence $Z_{N,k}(t)\le\bigl(1+o(1)\bigr)(1-t)^k$. Similarly, $$\begin{aligned} Z_{N,k}(t) &\ge\frac{\left|L_{n^c}(k)\right|}{\left|L_{(n+1)^c}(k)\right|}\, Z_{n^c,k}\left(t\,\frac{\log(n+1)}{\log n}\right)\\ &=\bigl(1+o(1)\bigr)Z_{n^c,k}(t)\\ &=\bigl(1+o(1)\bigr)(1-t)^k.\end{aligned}$$ This completes the proof. Nodes of small degree in the first level ======================================== Looking at the picture that Theorem \[T2.1\] shows us of the degree distribution, one can naturally ask how many nodes of fixed degree remain in the lower levels at all. In this respect the first level and the other ones behave differently. It is easy to see that degree $1$ nodes in level $1$ correspond to the fixed points of the random permutation described in the Introduction. Hence their number has a Poisson limit distribution with parameter $1$ without any normalization. More generally, let $$X[n,d]=\bigl|\{i\in L_n(1): \deg_n(i)=d\}\bigr|;$$ this is the number of nodes with degree $d$ in the first level after $n$ steps. The main result of this section is the following limit theorem. \[T3.1\] $X[n,1],\,X[n,2],\,\dots$ are asymptotically i.i.d. Poisson with mean $1$, as $n\to\infty$. We will apply the method of moments in the following form. For any real number $a$ and nonnegative integer $k$ let us define $(a)_0=1$, and $(a)_k=a(a-1)\cdots(a-k+1)$, $k=1,2,\dots$. 
In order to verify the limiting joint distribution in Theorem \[T3.1\] it suffices to show that $$\label{3.1} \lim_{n\to\infty}{\mathbb E}\left(\prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\right)=1$$ holds for every $d=1,2,\dots$, and nonnegative integers $k_1,\,\dots,\,k_d$. This can easily be seen from the following expansion of the joint probability generating function of the random variables $X[n,1],\,\dots,\,X[n,d]$. $${\mathbb E}\left(\prod_{i=1}^d z_i^{X[n,i]}\right)= \sum_{k_1=0}^\infty\dots\sum_{k_d=0}^\infty{\mathbb E}\left(\prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\right)\prod_{i=1}^d \frac{(z_i-1)^{k_i}}{k_i!}.$$ In the proof we shall rely on the following obvious identities. $$\begin{gathered} (a+1)_k-(a)_k=k\,(a)_{k-1},\label{3.2}\\ a\bigl[(a-1)_k(b+1)_\ell-(a)_k(b)_\ell\bigr]=\ell\,(a)_{k+1} (b)_{\ell-1}-k\,(a)_k(b)_\ell,\label{3.3}\\ \sum_{a=k}^n(a)_k=\frac{1}{k+1}\,(n+1)_{k+1}.\label{3.4}\end{gathered}$$ Let us start from the conditional expectation of the quantity under consideration with respect to the sigma-field generated by the past of the process. $$\label{3.5} {\mathbb E}\Biggl(\prod_{i=1}^d\bigl(X[n+1,i]\bigr)_{k_i}\Biggm|\mathcal F_n \Biggr)=\prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}+\sum_{j=0}^d S_j,$$ where in the rightmost sum $j$ equals $0,\,1,\,\dots,\,d$, according as the new vertex at step $n+1$ is connected to the root $(j=0)$ or to a degree $j$ node in level $1$. This happens with (conditional) probability $\dfrac 1n\,,\,\dfrac{X[n,1]}{n}\,,\ \dots,\,\dfrac{X[n,d]}{n}\,,$ respectively. 
That is, $$S_0=\frac 1n\,\prod_{i=2}^d\bigl(X[n,i]\bigr)_{k_i}\left[ \bigl(X[n,1]+1\bigr)_{k_1}-\bigl(X[n,1]\bigr)_{k_1}\right],$$ and for $1\le j\le d-1$ $$\begin{gathered} S_j=\frac{X[n,j]}{n}\prod_{i\ne\{j,j+1\}}\bigl(X[n,i]\bigr)_{k_i} \times\\ \times\Bigl[\bigl(X[n,j]-1\bigr)_{k_j}\bigl(X[n,j+1]+1\bigr)_{k_{j+1}} -\bigl(X[n,j]\bigr)_{k_j}\bigl(X[n,j+1]\bigr)_{k_{j+1}}\Bigr].\end{gathered}$$ Finally, $$S_d=\frac{X[n,d]}{n}\ \prod_{i=1}^{d-1}\bigl(X[n,i]\bigr)_{k_i} \left[\bigl(X[n,d]-1\bigr)_{k_d}-\bigl(X[n,d]\bigr)_{k_d}\right].$$ Let us apply (\[3.2\]) to $S_0$ with $k=k_1$, (\[3.3\]) to $S_j$ with $k=k_j$, $\ell=k_{j+1}$ $(1\le j\le d-1)$, and (\[3.3\]) to $S_d$ with $k=k_d$, $\ell=0$, to obtain $$\begin{gathered} S_0=\frac{k_1}{n}\ \prod_{i=2}^d\bigl(X[n,i]\bigr)_{k_i} \,\bigl(X[n,1]\bigr)_{k_1-1}\,,\label{3.9}\\ \begin{aligned}\label{3.10} S_j=\frac{k_{j+1}}{n}\ \prod_{i\ne\{j,j+1\}}\bigl(X[n,i]\bigr)_{k_i} \,\bigl(X[n,j]\bigr)_{k_j+1}\,\bigl(X[n,j+1]\bigr)_{k_{j+1}-1}\\ -\frac{k_j}{n}\ \prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\,, \end{aligned}\\ S_d=-\frac{k_d}{n}\ \prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\,. \label{3.11}\end{gathered}$$ In (\[3.9\])–(\[3.11\]) it can happen that some of the $k_j$ are zero, and, though $(a)_{-1}$ has not been defined, it always gets a zero multiplier, thus the expressions do make sense. Let us plug (\[3.9\])–(\[3.11\]) into (\[3.5\]). $$\begin{gathered} {\mathbb E}\Biggl(\prod_{i=1}^d\bigl(X[n+1,i]\bigr)_{k_i}\Biggm|\mathcal F_n \Biggr)\\ =\prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\Biggl(1-\frac 1n\, \sum_{j=1}^d k_j\Biggr) +\frac{k_1}{n}\,\prod_{i=2}^d\bigl(X[n,i]\bigr)_{k_i} \,\bigl(X[n,1]\bigr)_{k_1-1}\\ +\sum_{j=1}^{d-1}\,\frac{k_{j+1}}{n}\,\prod_{i\ne\{j,j+1\}} \bigl(X[n,i]\bigr)_{k_i}\,\bigl(X[n,j]\bigr)_{k_j+1}\, \bigl(X[n,j+1]\bigr)_{k_{j+1}-1}.\end{gathered}$$ Introducing $$E(n,k_1,\dots,k_d)={\mathbb E}\left(\prod_{i=1}^d\bigl(X[n,i]\bigr)_{k_i}\right), \quad K=k_1+\dots+k_d,$$ we have the following recursion. 
$$\begin{gathered} E(n+1,k_1,\dots,k_d)=\left(1-\frac{K}{n}\right)E(n,k_1,\dots,k_d)+ \frac{k_1}{n}\,E(n,k_1-1,k_2,\dots,k_d)\\ +\sum_{j=1}^{d-1}\,\frac{k_{j+1}}{n}\,E(n,k_1,\dots,k_j+1,k_{j+1}-1,\dots,k_d),\end{gathered}$$ or equivalently, $$\begin{gathered} \label{3.12} (n)_K E(n+1,k_1,\dots,k_d)=(n-1)_K E(n,k_1,\dots,k_d)\\ +(n-1)_{K-1}\sum_{j=1}^d k_j E(n,k_1,\dots,k_{j-1}+1,k_j-1,\dots,k_d).\end{gathered}$$ Based on (\[3.12\]), the proof can be completed by induction on the exponent vectors $(k_1,\dots,k_d)$. We say that $\underline k=(k_1,k_2,\dots,k_d)$ is majorized by $\underline{\ell}=(\ell_1,\ell_2,\dots,\ell_d)$, if $k_d\le\ell_d$, $k_{d-1}+k_d\le\ell_{d-1}+\ell_d$, …, $k_1+\dots+k_{d}\le\ell_1+\dots+\ell_{d}$. This is a total order on $\mathbb N^d$. Now, (\[3.1\]) clearly holds for $\underline k=(1,0,\dots,0)$, since ${\mathbb E}X[n,1]=1$ for every $n=1,2,\dots\,$, which is obvious considering the fixed points of a random permutation. In every term of the sum on the right-hand side of (\[3.12\]) the argument of $E(\,\cdot\,)$ is majorized by $\underline k=(k_1,\dots,k_d)$, hence the induction hypothesis can be applied to them. We get that $$(n)_K E(n+1,\underline k)=(n-1)_K E(n,\underline k) +(n-1)_{K-1}K\bigl(1+o(1)\bigr),$$ from which (\[3.4\]) gives that $(n-1)_K E(n,\underline k) \sim(n-1)_K$, that is, $$\lim_{n\to\infty}E(n,k_1,\dots,k_d)=1,$$ as needed. Turning to higher levels one finds the situation changed. Fixing a degree $d$ we find, roughly speaking, that each node in level $k-1$ has a Poisson number of degree $d$ children in level $k$ (a freshly added node is considered the child of the old node it is connected to). Now, strong-law-of-large-numbers-type heuristics imply that the number of nodes with degree $d$ in level $k\ge 2$ is approximately equal to $|L_n(k-1)|$, that is, their proportion is $$\approx\frac{|L_n(k-1)|}{|L_n(k)|}\sim\frac{1}{k\log n}\,.$$ Another interesting problem worth dealing with is the number of nodes with unusually high degree. 
In every fixed level Theorem \[T2.1\] implies that the proportion of nodes with degree higher than $\log n$ is asymptotically negligible, but they must exist, since the maximal degree is approximately $\log_2 n=\log_2e\,\log n$. How many of them are there? We are planning to return to this issue in a separate paper. [99]{} **Alon, N. and J.H. Spencer,** *The Probabilistic Method, 2nd ed.*, Wiley, New York, 2000. **Backhausz, Á. and T.F. Móri,** Local degree distribution in scale free random graphs, *Electron. J. Probab.*, **16** (2011), 1465–1488.\ `http://www.math.washington.edu/~ejpecp/include/getdoc.php?id=6160&article=2250&mode=pdf` **Barabási, A.-L. and R. Albert,** Emergence of scaling in random networks, *Science*, **286** (1999), 509–512. **Devroye, L. and J. Lu,** The strong convergence of maximal degrees in uniform random recursive trees and dags, *Random Structures Algorithms*, **7** (1995), 1–14. **Drmota, M.,** *Random Trees*, Springer, Wien, 2009. **Fuchs, M., H.-K. Hwang and R. Neininger,** Profiles of random trees: Limit theorems for random recursive trees and binary search trees, *Algorithmica* **46** (2006), 367–407. **Katona, Zs.,** Levels of a scale-free tree, *Random Structures Algorithms*, **29** (2006), 194–207. **Móri, T.F.,** A surprising property of the Barabási-Albert random tree, *Studia Sci. Math. Hungar.*, **43** (2006), 263–271. **Smythe, R.T. and H.M. Mahmoud,** A survey of recursive trees, *Theory Probab. Math. Statist.*, **51** (1995), 1–27. **Szymański, J.,** On a nonuniform random recursive tree, *Ann. Discrete Math.*, **33** (1987), 297–306. **Tapia, M.A. and B.R. Myers,** Generation of concave node-weighted trees, *IEEE Trans. Circuit Theory*, **CT-14** (1967), 229–230.
--- author: - Henryk Fukś bibliography: - 'kochanski.bib' title: 'Adam Adamandy Kochański’s approximations of $\pi$: reconstruction of the algorithm' --- Introduction ============ Adam Adamandy Kochański SJ (1631–1700) was a Polish Jesuit mathematician, inventor, and polymath. His interests were very diverse, including problems of geometry, mechanics, and astronomy, design and construction of mechanical clocks, *perpetuum mobile* and mechanical computers, as well as many other topics. He published relatively little, and most of his mathematical works appeared in *Acta Eruditorum* between 1682 and 1696. He left a rich correspondence, however, which currently consists of 163 surviving letters [@LisiakGrzebien05]. These letters include correspondence with Gottfried Leibniz, Athanasius Kircher SJ, Johannes Hevelius, Gottfried Kirch, and many other luminaries of the 17th century, giving a rich record of Kochański’s activities and a vivid description of the intellectual life of the period. The recently published comprehensive monograph [@Lisiak05] gives a detailed account of his life and work, and includes an extensive bibliography of the relevant literature. Among his mathematical works, the most interesting and well-known is his paper on the rectification of the circle and approximations of $\pi$, published in 1685 in *Acta Eruditorum* under the title *Observationes cyclometricae ad facilitandam praxin accomodatae* [@Kochanski1685]. An annotated English translation of *Observationes* with a parallel Latin version has been made available online by the author [@FuksObservationes2011]. The paper has three distinct parts, the first one giving a sequence of rational approximations of $\pi$. This will be the main subject of this note, so more about this will follow in the next section. The second part of *Observationes* is the one which is most often commented and quoted. 
There the author proposes an approximate solution of the problem of the rectification of the circle, giving an elegant and simple construction of a linear segment whose length approximates $\pi$. Figure 1 shows this construction exactly as Kochański had it in the original paper. ![Kochański’s construction of approximation of $\pi$.](construction-restored.pdf) We start by drawing a semi-circle of radius $AB$ centered at $A$, inscribed in a rectangle $BGHD$. Then we draw a line $AI$ such that $\measuredangle IAC = 60^\circ$. When the lower side of the rectangle is extended so that $HL$ is equal to the diameter of the circle, and a line is drawn from $I$ to $L$, one can easily show that $$|IL|=\frac{1}{3}\sqrt{120-18\sqrt{3}}=3.1415333\ldots,$$ which agrees with $\pi$ in the first four digits after the decimal point. This compass and ruler construction is often referred to as *Kochański’s construction*. The third part of [@Kochanski1685] gives yet another approximation of $\pi$, this time expressing it as a sum of multiples and fractional parts of $1/32$, $$\frac{96}{32}+\frac{4}{32}+\frac{1}{2} \cdot \frac{1}{32} + \frac{1}{32 \cdot 32}=\frac{3217}{1024}=3.1416015625.$$ It is fair to say that while the rectification of the circle reported in *Observationes* received a lot of attention from both contemporaries of Kochański and historians of mathematics [@Montucla1755; @Cantor1880; @Gunther1921], the first part of his paper has mostly been forgotten. In what follows we will show that this is perhaps unjustly so, as it includes some intriguing sequences of fractions approximating $\pi$, the origin of which has not been explained by commentators of Kochański’s work. Sequence of rational approximations of $\pi$ ============================================ In the table on p. 395 of [@Kochanski1685] (also p. 
2 of [@FuksObservationes2011]), Kochański gives the following sequence of pairs of lower and upper rational approximants of $\pi$: $$\begin{aligned} \label{kochanskisequence} \nonumber \left\{\frac{25}{8},\frac{22}{7}\right\}, \left\{\frac{333}{106}, \frac{355}{113}\right\}, &\left\{\frac{1667438}{530762}, \frac{1667793}{530875}\right\}, \left\{ \frac{9252915567}{2945294501},\frac{9254583360}{2945825376} \right\}, \\ &\left\{\frac{136727214560643}{43521624105025}, \frac{136736469144003}{43524569930401}\right\}.\end{aligned}$$ As mentioned in the original paper, two of these fractions can be further reduced, $\frac{1667438}{530762}=\frac{833719}{265381}$, and $\frac{9254583360}{2945825376}=\frac{96401910}{30685681}$, while all the others are already written in their lowest terms. He then partially describes the algorithm generating these approximants, which could be explained using modern terminology and notation as follows. Let us denote the first element of the $n$-th pair (lower approximant) by $P_n/Q_n$, and the second element (upper approximant) by $R_n/S_n$. The approximants are then generated by recurrence equations $$\begin{aligned} \label{kochconv} Q_{n+1}&=S_{n} x_{n}+1,\\ P_{n+1}&=R_{n} x_{n}+3, \\ S_{n+1}&=S_{n} (x_{n}+1)+1,\\ R_{n+1}&=R_{n} (x_{n}+1)+3,\end{aligned}$$ where $R_0=22$, $S_0=7$. In these formulae $x_n$ is a sequence of numbers which Kochański calls *genitores*, giving the first four values of $x_n$: $$15,4697,5548,14774.$$ Unfortunately, he does not explain how he obtained these numbers. He only makes the following remark regarding them: > Methodicam prædictorum Numerorum Synthesin in *Cogitatis, & Inventis Polymathematicis*, quæ, si DEUS vitam prorogaverit, utilitati publicæ destinavi, plenius exponam;[^1] In spite of this declaration, *Cogitata & Inventa Polymathematica* have never appeared in print. 
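Taking the four published *genitores* as given, the recurrences above indeed reproduce the whole table of eq. (\[kochanskisequence\]); a quick check in Python (our sketch, using integer arithmetic only):

```python
def kochanski_pairs(genitores, R=22, S=7):
    """Run the recurrences starting from the Archimedean bound 22/7."""
    pairs = []
    for x in genitores:
        lower = (R * x + 3, S * x + 1)               # P_{n+1}, Q_{n+1}
        upper = (R * (x + 1) + 3, S * (x + 1) + 1)   # R_{n+1}, S_{n+1}
        pairs.append((lower, upper))
        R, S = upper                                 # next step starts from the upper bound
    return pairs
```

Feeding in the genitores $15$, $4697$, $5548$, $14774$ yields exactly the five-digit-to-fifteen-digit fractions quoted above, unreduced, as in the original table.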
It is possible that some explanation could have been found in unpublished manuscripts of Kochański, but unfortunately all his personal papers gathered by the National Library in Warsaw perished during the Warsaw Uprising in 1944. To the knowledge of the author, nobody has ever attempted to find the algorithm for generating the sequence of *genitores*. In the most comprehensive analysis of Kochański’s mathematical works published to date [@Pawlikowska69], Z. Pawlikowska did not offer any explanation either. In the subsequent section, we attempt to reproduce the most likely method by which Kochański could have obtained the sequence of *genitores*, and, consequently, a sequence of approximants converging to $\pi$. We will also explain why he gave only the first four terms of the sequence. Construction of genitores ========================= It seems plausible that the starting point for Kochański’s considerations was Archimedes’ approximation of $\pi$ by $22/7$ and the result of Metius $$\begin{aligned} \frac{333}{106} < \pi < \frac{355}{113}.\end{aligned}$$ It is also likely that Kochański then noticed that Metius’ result can be obtained from Archimedes’ approximation by writing $$\begin{aligned} \frac{333}{106}&=\frac{22 \cdot 15 + 3}{7 \cdot 15 +1}.\end{aligned}$$ Where is the factor 15 coming from? The key observation here is that 15 is the “optimal” factor, in the sense that it is the largest integer value of $x$ for which $\frac{22 \cdot x + 3}{7 \cdot x +1}$ remains smaller than $\pi$. 
The next most likely step in Kochański’s reasoning was the observation that the upper approximant can be obtained by incrementing 15 to 16, $$\begin{aligned} \frac{355}{113} &=\frac{22 \cdot 16 + 3}{7 \cdot 16 + 1}.\end{aligned}$$ Repeating this procedure for $\frac{355}{113}$ produces another pair of approximants, $$\begin{aligned} \frac{355\cdot4697+3}{113\cdot4697+1}&=\frac{1667438}{530762}, \\ \frac{355\cdot4698+3}{113\cdot4698+1}&=\frac{1667793}{530875},\end{aligned}$$ where $4697$ is again the largest integer $x$ for which $\frac{355 \cdot x + 3}{113 \cdot x +1}<\pi$. Recursive application of the above process produces the desired sequence of pairs given in eq. (\[kochanskisequence\]), and the values of $x$ thus obtained are precisely what Kochański calls *genitores*. What remains to be done is proving that the above algorithm indeed produces a sequence of lower and upper approximants of $\pi$, and that these converge to $\pi$ in the limit of $n \to \infty$. Kochański approximants ====================== We will present the problem in a general setting. In what follows, $\alpha$ will denote a positive irrational number which we want to approximate by rational fractions. Suppose that we have a pair of positive integers $R$ and $S$ such that their ratio is close to $\alpha$ but exceeds $\alpha$, $R/S > \alpha$. Together with $\lfloor\alpha\rfloor$, we then have two rational bounds on $\alpha$, $$\label{zerobounds} \frac{\lfloor\alpha\rfloor}{1} < \alpha < \frac{R}{S}.$$ Suppose now that we want to improve these bounds. As we will shortly see, this can be achieved by considering fractions which have the form $$\frac{Rx+\lfloor\alpha\rfloor}{Sx+1},$$ where $x$ is some positive integer. 
Before we go on, let us first note that the function $f(x)=\frac{Rx+\lfloor\alpha\rfloor}{Sx+1}$, treated as a function of a real variable $x$, has a positive derivative everywhere except at $x=-1/S$, where it is undefined, and that there exists $x$ where $f(x)=\alpha$, given by $x=(\alpha-\lfloor\alpha\rfloor)/(R-\alpha S)$. Let $\alpha$ be a positive irrational number, and let $R$ and $S$ be positive integers such that $\frac{R}{S}> \alpha$. The *genitor* of $R, S$ with respect to $\alpha$ will be defined as $$\label{defg} g_{\alpha}(R,S)=\left\lfloor \frac{\alpha-\lfloor\alpha\rfloor}{R-\alpha S} \right\rfloor.$$ Let us note that if $g_{\alpha}(R,S)$ is positive, then it is the largest positive integer $x$ such that $\frac{Rx+\lfloor\alpha\rfloor}{Sx+1} < \alpha$, i.e., $$\label{altdef} g_{\alpha}(R,S)=\max \{ x \in {{\mathbbm{N}}}: \frac{Rx+\lfloor\alpha\rfloor}{Sx+1} < \alpha\}.$$ In this notation, the four *genitores* given in the paper can thus be written as $g_{\pi}(22,7)=15$, $g_{\pi}(355, 113)=4697$, $g_{\pi}(1667793,530875)=5548$, and $g_{\pi}(9254583360,2945825376)=14774$. Using the concept of *genitores*, we can now tighten the bounds given in eq. (\[zerobounds\]). For any $\alpha \in \mathbbm{IQ}^{+}$ and $R,S \in \mathbbm{Q}^{+}$, if $\frac{R}{S}>\alpha$ and if the genitor $g_{\alpha}(R,S)$ is positive, then $$\label{propqinequalities} \lfloor\alpha\rfloor < \frac{R g_\alpha(R,S)+\lfloor\alpha\rfloor}{S g_\alpha(R,S)+1} < \alpha < \frac{R (g_\alpha(R,S)+1)+\lfloor\alpha\rfloor}{S (g_\alpha(R,S)+1)+1}<\frac{R}{S}.$$ The second and third inequalities are simple consequences of the definition of $g_\alpha(R,S)$ and eq. (\[altdef\]). The first one can be demonstrated as follows. Since $R/S>\alpha$, then $R/S>\lfloor\alpha\rfloor$, and therefore $R g_\alpha(R,S) > \lfloor\alpha\rfloor S g_\alpha(R,S)$. 
Now $$R g_\alpha(R,S)+\lfloor\alpha\rfloor > \lfloor\alpha\rfloor S g_\alpha(R,S) +\lfloor\alpha\rfloor,$$ and $$\frac{Rg_\alpha(R,S)+\lfloor\alpha\rfloor}{Sg_\alpha(R,S)+1}>\lfloor\alpha\rfloor,$$ as required. To show the last inequality let us note that $$\frac{R (g_\alpha(R,S)+1)+\lfloor\alpha\rfloor}{S (g_\alpha(R,S)+1)+1}-\frac{R}{S} =\frac{\lfloor\alpha\rfloor S-R}{S(S (g_\alpha(R,S)+1)+1)}.$$ Since $R/S>\lfloor\alpha\rfloor$, the numerator is negative, and the last inequality of (\[propqinequalities\]) follows. $\Box$ The above proposition gives us a method to tighten the bounds of (\[zerobounds\]), and the next logical step is to apply this proposition recursively. \[kochapproximants\] Let $\alpha \in \mathbbm{IQ}^{+}$ and let $R_0,S_0$ be positive integers such that $R_0/S_0>\alpha$ and $g_\alpha(R_0,S_0)>0$. *Kochański approximants* of $\alpha$ starting from $R_0,S_0$ are sequences of rational numbers $\{P_n/Q_n\}_{n=1}^\infty$ and $\{R_n/S_n\}_{n=0}^\infty$ defined recursively for $n\in {{\mathbbm{N}}}\cup \{0\}$ by $$\begin{aligned} \label{mainrec} P_{n+1}&=R_{n} x_{n}+\lfloor\alpha\rfloor, \\ \nonumber Q_{n+1}&=S_{n} x_{n}+1,\\ \nonumber R_{n+1}&=R_{n} (x_{n}+1)+\lfloor\alpha\rfloor,\\ \nonumber S_{n+1}&=S_{n} (x_{n}+1)+1, \nonumber\end{aligned}$$ where $x_n= g_\alpha(R_{n},S_{n})$. Elements of the sequence $\{P_n/Q_n\}_{n=1}^\infty$ will be called *lower approximants*, and element of the sequence $\{R_n/S_n\}_{n=0}^\infty$ – *upper approximants*. Note that $$\begin{aligned} P_n&=R_n-R_{n-1},\\ Q_n&=S_n-S_{n-1},\end{aligned}$$ therefore it is sufficient to consider sequences of $R_n$ and $S_n$ only, as these two sequences uniquely define both upper and lower approximants. Kochański approximants have the following properties: 1. $x_n$ is non-decreasing sequence of positive numbers, 2. $\displaystyle \lfloor\alpha\rfloor < \frac{P_n}{Q_n} < \alpha < \frac{R_n}{S_n}<\frac{R_0}{S_0}$ for all $n\geq1$, 3. $\displaystyle \frac{R_n}{S_n}$ is decreasing, 4. 
$\displaystyle \frac{P_n}{Q_n}$ is increasing, 5. $\displaystyle \lim_{n \to \infty} \frac{R_n}{S_n}= \lim_{n \to \infty} \frac{P_n}{Q_n} = \alpha$. For (i), because of the definition of $x_n=g_\alpha(R_n,S_n)$ shown in eq. (\[defg\]), we need to demonstrate that $R_n-\alpha S_n$ is non-increasing. To do this, let us check the sign of $$\begin{aligned} R_n - \alpha S_n -(R_{n+1}-\alpha S_{n+1})&= R_n - \alpha S_n - (R_{n}(x_n+1) + \lfloor\alpha\rfloor)\\ + \alpha (S_{n}(x_n+1)+1) &=\alpha - \lfloor\alpha\rfloor - (R_n-\alpha S_n)x_n\\ &=\alpha - \lfloor\alpha\rfloor - (R_n-\alpha S_n)\left\lfloor \frac{\alpha-\lfloor\alpha\rfloor}{R_n-\alpha S_n} \right\rfloor.\end{aligned}$$ The last expression, by the definition of the floor operator, must be non-negative, thus $R_n-\alpha S_n$ is non-increasing, and $x_n$ is non-decreasing as a result. Now, since the definition of Kochański approximants requires that $x_0$ is positive, all other $x_n$ must be positive too. Property (ii) is just a consequence of Proposition 1, which becomes clear once we note that $$\frac{R_n}{S_n}=\frac{R_{n-1}(x_{n-1}+1) + \lfloor\alpha\rfloor}{S_{n-1}(x_{n-1}+1)+1},$$ and $$\frac{P_n}{Q_n}=\frac{R_{n-1}x_{n-1} + \lfloor\alpha\rfloor}{S_{n-1}x_{n-1}+1},$$ where $x_{n-1}=g_\alpha(R_{n-1}, S_{n-1})$. To show (iii), let us compute the difference between two consecutive terms of the sequence $R_n/S_n$, $$\frac{R_n}{S_n}-\frac{R_{n-1}}{S_{n-1}}=\frac{R_{n-1}y_{n-1}+\lfloor\alpha\rfloor}{S_{n-1}y_{n-1}+1}-\frac{R_{n-1}}{S_{n-1}},$$ where we defined $y_{n-1}=g_\alpha(R_{n-1}, S_{n-1})+1$. This yields $$\begin{aligned} \frac{R_n}{S_n}-\frac{R_{n-1}}{S_{n-1}}&= \frac{(R_{n-1} y_{n-1} +\lfloor\alpha\rfloor) S_{n-1} - R_{n-1} (S_{n-1} y_{n-1} +1)}{S_{n-1} (S_{n-1} y_{n-1} + 1)}\\ &=\frac{\lfloor\alpha\rfloor S_{n-1}-R_{n-1}}{S_{n-1} (S_{n-1} y_{n-1} + 1)}< 0,\end{aligned}$$ because, by (ii), $R_{n-1}/S_{n-1}>\lfloor\alpha\rfloor$. The sequence $R_n/S_n$ is thus decreasing. The proof of (iv) is similar and will not be presented here.
Let us now note that $R_n/S_n$ is bounded from below by $\alpha$ and decreasing, thus it must have a limit. Similarly, $\frac{P_n}{Q_n}=\frac{R_n-R_{n-1}}{S_n-S_{n-1}}$ is bounded from above by $\alpha$ and increasing, so again it must have a limit. To demonstrate (v), it is therefore sufficient to show that the limits of $\frac{R_n}{S_n}$ and $\frac{R_n-R_{n-1}}{S_n-S_{n-1}}$ are the same, or, what is equivalent, that $$\lim_{n \to \infty} \left( \frac{R_n}{S_n}- \frac{R_n-R_{n-1}}{S_n-S_{n-1}}\right)=0.$$ We start by defining $\gamma_n=R_n/S_n - P_n/Q_n$ and observing that $$\gamma_n=\frac{R_n}{S_n}- \frac{R_n-R_{n-1}}{S_n-S_{n-1}}=\frac{R_{n-1}S_n-R_n S_{n-1}}{S_n (S_n - S_{n-1})}.$$ By substituting $R_n=R_{n-1}(x_{n-1}+1) + \lfloor\alpha\rfloor$ and $S_n=S_{n-1}(x_{n-1}+1)+1$, one obtains after simplification $$\gamma_n=\frac{R_{n-1}-\lfloor\alpha\rfloor S_{n-1}}{S_n (S_n - S_{n-1})}=\frac{ \frac{R_{n-1}}{S_{n-1}}-\lfloor\alpha\rfloor} {S_n (\frac{S_n}{S_{n-1}} - 1)}.$$ Since $R_n/S_n$ is decreasing, and starts from $R_0/S_0$, we can write $$\gamma_n< \frac{ \frac{R_0}{S_0}-\lfloor\alpha\rfloor} {S_n (\frac{S_n}{S_{n-1}} - 1)} =\frac{\frac{R_0}{S_0}-\lfloor\alpha\rfloor}{S_n(x_{n-1}+\frac{1}{S_{n-1}})},$$ where we used the fact that $S_{n}=S_{n-1}(x_{n-1}+1)+1$ and where $x_{n-1}=g_\alpha(R_{n-1}, S_{n-1})$. Since $x_n$ is non-decreasing, and $S_n$ increases with $n$, we conclude that $\gamma_n \to 0 $ as $n\to \infty$, as required. $\Box$ Let us remark here that $x_n$ is indeed only non-decreasing, and it is possible for two consecutive values of $x_n$ to be the same. For example, for $\alpha=\sqrt{2}$ and $R_0/S_0=3/2$, we obtain $$\{x_n\}_{n=0}^\infty=2, 4, 4, 15, 17, 77, 101, 119, \ldots,$$ where $x_1=x_2$. Initial values ============== One last thing to explain is the choice of the starting values $R_0$, $S_0$. Definition \[kochapproximants\] requires that the *genitor* of these initial values is positive, so how can we choose $R_0,S_0$ to ensure this?
We start by noticing that the second pair of Kochański’s approximants (Metius’ fractions $\frac{ 333}{106}$, $\frac{ 355}{113}$) are known to appear in the sequence of convergents of the continued fraction representation of $\pi$. As we shall see, this is not just a coincidence. Let us first recall two basic properties of the continued fraction expansion of a positive irrational number $\alpha$, $$\alpha= a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4 + \ddots}}}}.$$ By convergent $p_n/q_n$ we will mean a fraction (written in its lowest terms) obtained by truncation of the above infinite continued fraction after $a_n$. The first property we need is the recursive algorithm for generating convergents and values of $a_n$. \[defconvergents\] Consecutive convergents $p_n/q_n$ of $\alpha$ can be obtained by applying the recursive formula $$\begin{aligned} a_{n+1}&=\left\lfloor \frac{\alpha q_{n-1} - p_{n-1}}{p_n-\alpha q_n} \right\rfloor, \label{acf}\\ p_{n+1}&=p_n a_{n+1} + p_{n-1},\\ q_{n+1}&=q_n a_{n+1} + q_{n-1},\end{aligned}$$ with initial conditions $a_0=\lfloor \alpha \rfloor$, $a_1=\lfloor \frac{1}{\alpha - a_0} \rfloor$, $p_0=a_0$, $q_0=1$, $p_1=a_0 a_1+1$, $q_1=a_1$. For example, for $\alpha=\pi$ we obtain $$\begin{gathered} \left\{ \frac{p_n}{q_n} \right\}_{n=0}^\infty = \left\{ \frac{3}{1}, \frac{22}{7}, \frac{ 333}{106}, \frac{ 355}{113}, \frac{ 103993}{33102}, \frac{ 104348}{33215}, \frac{ 208341}{66317}, \frac{ 312689}{99532}, \right.\\ \left. \frac{ 833719}{265381}, \frac{ 1146408}{364913}, \frac{ 4272943}{1360120}, \ldots \right\}\end{gathered}$$ Convergents are known to be the best rational approximations of irrational numbers, which can formally be stated as follows. If $p_n/q_n$ is a convergent for an irrational number $\alpha$ and $p/q$ is an arbitrary fraction with $q<q_{n+1}$, different from $p_n/q_n$, then $$\label{bestrational} |q_n \alpha - p_n|< |q \alpha -p|.$$ Elementary proofs of both of the above propositions can be found in [@Wall67].
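Proposition \[defconvergents\] translates directly into code. The sketch below runs the recursion using the standard-library `decimal` module; the 50-digit value of $\pi$ is typed in by hand, since `decimal` provides no built-in constants:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

a0 = int(PI)                         # a_0 = floor(alpha) = 3
a1 = int(1 / (PI - a0))              # a_1 = 7
p = [a0, a0 * a1 + 1]                # p_0, p_1
q = [1, a1]                          # q_0, q_1

for n in range(1, 8):
    # a_{n+1} = floor((alpha*q_{n-1} - p_{n-1}) / (p_n - alpha*q_n)), eq. (acf)
    a = int((PI * q[n - 1] - p[n - 1]) / (p[n] - PI * q[n]))
    p.append(p[n] * a + p[n - 1])
    q.append(q[n] * a + q[n - 1])

print(list(zip(p, q)))
# first pairs: (3, 1), (22, 7), (333, 106), (355, 113), (103993, 33102), ...
```

Note that `int()` truncates toward zero; the quotient in eq. (\[acf\]) is always positive (numerator and denominator alternate in sign together), so this agrees with the floor.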
We also need to recall that convergents $p_n/q_n$ are alternately above and below $\alpha$, so that for odd $n$ we always have $p_n/q_n>\alpha$, and for even $n$, $p_n/q_n<\alpha$. Suppose that we now take some odd convergent $p_{2k+1}/q_{2k+1}$, and further set $p=\lfloor \alpha \rfloor$, $q=1$. Inequality (\[bestrational\]) then becomes $$p_{2k+1}-\alpha q_{2k+1}<\alpha - \lfloor \alpha \rfloor,$$ and hence $$\frac{\alpha -\lfloor \alpha \rfloor}{p_{2k+1}-\alpha q_{2k+1}}>1.$$ This, by the definition of the *genitor* given in eq. (\[defg\]), yields $g_\alpha(p_{2k+1},q_{2k+1})>0$, leading to the following corollary. If $p_{2k+1}/q_{2k+1}$ is an odd convergent of $\alpha$, then $g_\alpha(p_{2k+1},q_{2k+1})>0$, and $R_0=p_{2k+1}$, $S_0=q_{2k+1}$ can be used as initial values in the construction of Kochański’s approximants. In particular, one can generate Kochański’s approximants starting from the first convergent of $\alpha$, by taking $R_0=a_0 a_1+1$, $S_0=a_1$, where $a_0=\lfloor \alpha \rfloor$, $a_1=\lfloor 1/(\alpha - a_0) \rfloor$. Note that Kochański in his paper indeed started from the first convergent of $\pi$, by taking $R_0/S_0=22/7$. Obviously, if one starts from the first convergent $R_0=p_1$, $S_0=q_1$, then the first lower approximant will be the second convergent, $P_1=p_2$, $Q_1=q_2$, and indeed in Kochański’s case $P_1/Q_1=p_2/q_2=333/106$. Other approximants do not have to be convergents, and they normally aren’t, although convergents may occasionally appear in the sequence of lower or upper approximants. For example, in the case of $\alpha=\pi$, $R_1/S_1=p_3/q_3=355/113$ and $P_2/Q_2=p_8/q_8=833719/265381$. We should also add here that the choice of the first convergent as the starting point is the most natural one. Among all pairs $R_0,S_0$ where $S_0<106$, the only cases for which $g_\pi(R_0,S_0)>0$ are $R_0=22k$, $S_0=7k$, where $k\in \{1,2,\ldots,15\}$.
If one wants to obtain fractions expressed by as small integers as possible, then taking $k=1$ is an obvious choice. Concluding remarks ================== We have reconstructed the algorithm for the construction of rationals approximating $\pi$ used in [@Kochanski1685], and we have demonstrated that it can be generalized to produce approximants of an arbitrary irrational number $\alpha$. Under a suitable choice of initial values, the approximants converge to $\alpha$. Using these results, we can generate more terms of the sequence of *genitores* for $\alpha=\pi$, $R_0/S_0=22/7$, going beyond the first four terms found in Kochański’s paper: $$\begin{gathered} \{x_n\}_{n=0}^\infty=\{g_\pi(R_n,S_n)\}_{n=0}^\infty= \{15, 4697, 5548, 14774, 33696, 61072, 111231,\\ 115985, 173819, 563316, 606004,\ldots\}.\end{gathered}$$ We propose to call this sequence the *Kochański sequence*. It has been submitted to the On-Line Encyclopedia of Integer Sequences as A191642 [@A191642], and its entry in the Encyclopedia includes Maple code for generating its consecutive terms. Knowing that $x_n=\left\lfloor \frac{\alpha-\lfloor\alpha\rfloor}{R_n-\alpha S_n} \right\rfloor$, we can also understand why only four terms of the sequence are given in the paper. In order to compute $x_n$, one needs to know $\pi$ with sufficient accuracy. For example, 20 digits after the decimal point are needed in order to compute $x_0$ to $x_3$. Kochański was familiar with the work of Ludolph van Ceulen, who computed 35 digits of $\pi$, and this was more than enough to compute $x_4$. Nevertheless, Kochański in his paper performed all computations keeping track of “only” 25 digits, and this fell just one digit short of the precision needed to compute $x_4$. It is also interesting to notice that the recurrence equations in Definition \[kochapproximants\] strongly resemble recurrence equations for convergents $p_n/q_n$ in Proposition \[defconvergents\].
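The *genitores* above can be regenerated directly from eqs. (\[defg\]) and (\[mainrec\]); a sketch, again with a hand-typed 50-digit value of $\pi$ standing in for the precision Kochański lacked:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
PI = Decimal("3.14159265358979323846264338327950288419716939937510")
A0 = int(PI)                                  # floor(pi) = 3

R, S = 22, 7                                  # R_0/S_0: the first convergent of pi
xs = []
for _ in range(5):
    x = int((PI - A0) / (R - PI * S))         # genitor g_pi(R_n, S_n), eq. (defg)
    xs.append(x)
    R, S = R * (x + 1) + A0, S * (x + 1) + 1  # R_{n+1}, S_{n+1}, eq. (mainrec)

print(xs)   # [15, 4697, 5548, 14774, 33696]
```

Reproducing further terms only requires raising the working precision, since the denominator $R_n-\pi S_n$ must be resolved against the growth of $S_n$.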
Kochański was always adding $3$ and $1$ to the numerator and denominator in his approximants, because, as remarked earlier, he noticed that $$\begin{aligned} \frac{22 \cdot 15 + 3}{7 \cdot 15 +1}=\frac{333}{106}, \,\,\,\,\,\,\, \frac{22 \cdot 16 + 3}{7 \cdot 16 + 1}=\frac{355}{113}.\end{aligned}$$ He apparently failed to notice that $$\frac{333 \cdot 1 + 22}{106 \cdot 1 + 7}=\frac{355}{113},$$ that is, instead of finding the largest $x$ for which $(22x + 3)/(7x + 1)<\pi$, one can take the last two approximants, $22/7$ and $333/106$, and then find the largest $x$ such that $(333 x + 22)/(106 x + 7)>\pi$. If he had done this, he would have discovered convergents and continued fractions. His genitores would then be the $a_n$ values in the continued fraction expansion of $\pi$. As it was, continued fractions and convergents had to wait until 1695, when John Wallis laid the groundwork for their theory in his book *Opera Mathematica* [@Wallis1695]. One puzzling little detail remains, however. If we look at Definition \[kochapproximants\], we notice that the sequence of lower approximants $P_n/Q_n$ starts from $n=1$, not from $n=0$, as is the case for the upper approximants $R_n/S_n$. Indeed, $P_0, Q_0$ are not needed to start the recursion. Nevertheless, in the table of approximants given in [@Kochanski1685], in the second row there is a pair of values corresponding to $n=0$, namely $P_0/Q_0=25/8$ (in the first row of the table he also gives the obvious bounds $3<\pi<4$). These numbers are not needed in any subsequent calculation, and Kochański does not explain where they come from. One can only speculate that perhaps he wanted the table to appear “symmetric”, and thus entered some arbitrary fraction approximating $\pi$ from below as $P_0/Q_0$. Acknowledgements ================ The author acknowledges partial financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) in the form of a Discovery Grant.
He also wishes to thank Danuta Makowiec for help in acquiring relevant literature, and Fr. Bogdan Lisiak SJ for prompt and helpful replies to inquiries regarding A. A. Kochański and his works. [^1]: I will explain the method of generating the aforementioned numbers more completely in *Cogitata & Inventa Polymathematica*, which work, if God prolongs my life, I have decided to put out for public benefit. (transl. H.F.)
--- abstract: 'We present an explicit pseudorandom generator for oblivious, read-once, width-$3$ branching programs, which can read their input bits in any order. The generator has seed length $\tilde{O}( \log^3 n ).$ The previously best known seed length for this model is $n^{1/2+o(1)}$ due to Impagliazzo, Meka, and Zuckerman (FOCS ’12). Our work generalizes a recent result of Reingold, Steinke, and Vadhan (RANDOM ’13) for *permutation* branching programs. The main technical novelty underlying our generator is a new bound on the Fourier growth of width-3, oblivious, read-once branching programs. Specifically, we show that for any $f:{\{0,1\}}^n\rightarrow {\{0,1\}}$ computed by such a branching program, and $k\in [n],$ $$\sum_{s\subseteq [n]: |s|=k} \left| \hat{f}[s] \right| \leq n^2 \cdot (O(\log n))^k,$$ where $\widehat{f}[s] = {\underset{U}{\mathbb{E}}\left[ f[U] \cdot (-1)^{s \cdot U} \right]}$ is the standard Fourier transform over ${\mathbb{Z}}_2^n$. The base $O(\log n)$ of the Fourier growth is tight up to a factor of $\log \log n$.' author: - | Thomas Steinke[^1]\ `tsteinke@seas.harvard.edu` - | Salil Vadhan[^2]\ `salil@seas.harvard.edu` - | Andrew Wan[^3]\ `atw12@seas.harvard.edu` bibliography: - 'fourier.bib' date: 27 May 2014 title: | Pseudorandomness and Fourier Growth Bounds\ for Width 3 Branching Programs --- [^1]: School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge MA. Supported by NSF grant CCF-1116616 and the Lord Rutherford Memorial Research Fellowship. [^2]: School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge MA. Supported in part by NSF grant CCF-1116616, US-Israel BSF grant 2010196, and a Simons Investigator Award. [^3]: Simons Institute for the Theory of Computing, UC Berkeley. Part of this work was completed while at Harvard University.
--- author: - | \ \ \ \ \ title: '[Dynamical Vacuum in Quantum Cosmology]{}' --- ABSTRACT .1cm By regarding the vacuum as a perfect fluid with equation of state $p = - \rho$, de Sitter’s cosmological model is quantized. Our treatment differs from previous ones in that it endows the vacuum with dynamical degrees of freedom, following modern ideas that the cosmological term is a manifestation of the vacuum energy. Instead of being postulated from the start, the cosmological constant arises from the degrees of freedom of the vacuum regarded as a dynamical entity, and a time variable can be naturally introduced. Taking the scale factor as the sole degree of freedom of the gravitational field, stationary and wave-packet solutions to the Wheeler-DeWitt equation are found, whose properties are studied. It is found that states of the Universe with a definite value of the cosmological constant do not exist. For the wave packets investigated, quantum effects are noticeable only for small values of the scale factor, a classical regime being attained at asymptotically large times. .4cm PACS numbers: 98.80.Hw , 04.60.Gw [**1. INTRODUCTION**]{} Quantum cosmology is hopefully relevant to describe quantum gravitational effects in the very early Universe. In view of the nonexistence of a consistent quantum theory of gravity, minisuperspace quantization, which consists in “freezing out” all but a finite number of degrees of freedom of the gravitational field and its sources and quantizing the remaining ones, is expected to provide general insights on what an acceptable quantum gravity should be like. This line of attack, initiated by DeWitt \[1\], has been extensively pursued to quantize model universes with different symmetries and varying matter content, and allows one to conceive theories of initial conditions for the wave function of the Universe \[2\].
Manifold schemes have been devised to quantize gravity coupled to matter in minisuperspace, the commonest of such quantization methods being those that rely on the Wheeler-DeWitt equation, advocate the quantization of only the conformal factor of the spacetime metric, or perform canonical quantization in the reduced phase space. In inflationary cosmology de Sitter’s model plays a fundamental role, since it describes the phase of rapid expansion during which the vacuum energy dominates the energy density of the Universe, and gives rise to a term in the energy-momentum tensor that corresponds to a cosmological constant. In modern cosmology the terms [*vacuum energy*]{} and [*cosmological constant*]{} are used almost synonymously \[3\]. It seems, therefore, of interest to study quantum aspects of de Sitter’s cosmological model by treating the vacuum as a dynamical entity. In such a treatment, the cosmological constant should not be postulated from the start, but should emerge from the dynamical degrees of freedom of the vacuum. A possible way to achieve this is by regarding the vacuum as a perfect fluid with equation of state $p=-\rho$. This approach appears to be fruitful, has several attractive features from the thermodynamic point of view, and leads to interesting consequences in inflationary cosmology \[4\]. The standard way of dealing with de Sitter’s model in quantum cosmology \[5\] is highly questionable because it involves a system with a single degree of freedom and one constraint, so that, strictly speaking, the system has no degrees of freedom at all and is empty of physical content. The assignation of dynamical degrees of freedom to the vacuum circumvents this difficulty and renders our method distinctive in its ability to make room for the introduction of a time variable. Accordingly, we shall adopt Schutz’s canonical formalism \[6\] which describes a relativistic fluid interacting with the gravitational field.
This formalism is especially adequate for our purposes, inasmuch as it has the advantage of ascribing dynamical degrees of freedom to the fluid. As it will be seen, Schutz’s action principle is successful even in the case of the vacuum in the sense that the cosmological constant appears dynamically as a manifestation of the degrees of freedom of the fluid that acts as the vacuum. In the quantum realm the properties of de Sitter’s model will be investigated on the basis of the associated Wheeler-DeWitt equation. Because the super-Hamiltonian constraint is linear in one of the momenta, the Wheeler-DeWitt equation can be reduced to a bona fide Schrödinger equation. This paper is organized as follows. In Section 2 a Hamiltonian treatment of de Sitter’s model is developed on the basis of Schutz’s canonical formalism, which is proved, in the case of the vacuum, to lead to the correct classical equations of motion. In Section 3 the Wheeler-DeWitt equation is written down and is shown to take the form of a genuine Schrödinger equation for an appropriate form of the inner product. In order for the Hamiltonian operator to be self-adjoint its domain must be restricted to wave functions that obey certain boundary conditions. General sets of stationary solutions to the Wheeler-DeWitt equation obeying said boundary conditions are found. Then, in Section 4, normalized wave-packet solutions to the Wheeler-DeWitt equation are found, and their properties analyzed. Section 5 is dedicated to final comments. [**2.
DYNAMICAL VACUUM IN DE SITTER’S COSMOLOGICAL MODEL**]{} The line element for a homogeneous and isotropic universe can be written in the Friedmann-Robertson-Walker form (we take $c=1$) $$ds^2 = g_{\nu\lambda} dx^{\nu} dx^{\lambda} = - N(t)^2 dt^2 + R(t)^2 {\sigma}_{ij} dx^i dx^j \,\,\, , \eqno(2.1)$$\ where ${\sigma}_{ij}$ denotes the metric for a 3-space of constant curvature $k= +1, 0$ or $-1$, corresponding to spherical, flat or hyperbolic spacelike sections, respectively. The matter content will be taken to be a perfect fluid, and Schutz’s canonical formulation of the dynamics of a relativistic fluid in interaction with the gravitational field will be employed \[6\]. The degrees of freedom ascribed to the fluid are five scalar potentials $\varphi , \alpha , \beta , \theta , S$ in terms of which the four-velocity of the fluid is written as $$U_{\nu} = \frac{1}{\mu} \, (\varphi _{,\nu} + \alpha \beta _{,\nu} + \theta S_{,\nu}) \,\,\, , \eqno(2.2)$$\ where $\mu$ is the specific enthalpy. By means of the normalization condition $$g_{\nu\lambda} U^{\nu} U^{\lambda} = - 1 \eqno(2.3)$$\ one can express $\mu$ in terms of the velocity potentials. The action for the gravitational field plus perfect fluid is $$S= \int_M\,d^4x \sqrt{-g}\,\, ^{(4)}R \, + \, 2\int_{\partial M}\,d^3x\sqrt{h}\, h_{ij} K^{ij} \, + \, \int_M\,d^4x \sqrt{-g}\,p \,\,\, \eqno(2.4)$$\ in units such that $c=16\pi G=1$. In the above equation $p$ is the pressure of the fluid, $^{(4)}R$ is the scalar curvature derived from the spacetime metric $g_{\nu\lambda}$, $h_{ij}$ is the 3-metric on the boundary $\partial M$ of the 4-manifold $M$, and $K^{ij}$ is the extrinsic curvature or second fundamental form of the boundary \[7\]. The surface term is necessary in the path-integral formulation of quantum gravity in order to rid the Einstein-Hilbert Lagrangian of second-order derivatives. Variations of the pressure are computed from the first law of thermodynamics. 
Compatibility with the homogeneous spacetime metric is guaranteed by taking all of the velocity potentials of the fluid as functions of $t$ only. We shall take $p = (\gamma - 1)\, \rho$ as equation of state for the fluid, where $\gamma$ is a constant and $\rho$ is the fluid’s energy density (we shall eventually put $\gamma = 0$). In the geometry characterized by (2.1) the appropriate boundary condition for the action principle is to fix the initial and final hypersurfaces of constant time. The second fundamental form of the boundary becomes $K_{ij} = - {\dot{h}}_{ij}/2N$. As described in full detail in \[8\], after inserting the metric (2.1) into the action (2.4), using the equation of state, computing the canonical momenta and employing the constraint equations to eliminate the pair $(\theta, p_{\theta})$, what remains is a reduced action in the Hamiltonian form $$S_r = \int dt \Bigl \{ {\dot R}p_R + {\dot \varphi}p_{\varphi} +{\dot S} p_S - N {\cal H} \Bigr \} \,\,\, \eqno(2.5)$$\ where an overall factor of the spatial integral of $(\det \sigma)^{1/2}$ has been discarded, since it has no effect on the equations of motion. The super-Hamiltonian $\cal H$ is given by $${\cal H} = - \frac{p_R^2}{24R} - 6kR + p_{\varphi}^{\gamma}\, R^{-3(\gamma - 1)}\, e^S \,\,\, . \eqno(2.6)$$\ The lapse $N$ plays the role of a Lagrange multiplier, and upon its variation it is found that the super-Hamiltonian $\cal H$ vanishes. This is a constraint, revealing that the phase-space contains redundant canonical variables. For $\gamma = 0$ the super-Hamiltonian contains neither the fluid’s degree of freedom $\varphi$ nor its conjugate momentum $p_{\varphi}$, so that these canonical variables can be simply dropped. Equivalently, the correct classical equations of motion can be obtained without taking into account the degrees of freedom described by $\varphi , \alpha , \beta$ and $\theta$, that is, they could have been disregarded from the start.
It is a pleasant circumstance that only the physically meaningful entropy density $S$ is relevant for $\gamma = 0$. The action reduces to $$S = \int dt \Bigl \{ {\dot R}p_R + {\dot S} p_S - N {\cal H} \Bigr \} \,\,\, \eqno(2.7)$$\ with $${\cal H} = - \frac{p_R^2}{24R} - 6kR + R^3\, e^S \,\,\, . \eqno(2.8)$$\ This can be put in a more suggestive form by means of the canonical transformation $$T = - e^{-S}\, p_S \,\,\,\,\,\,\,\,\,\, , \,\,\,\,\,\,\,\,\,\, p_T = e^S \,\,\, . \eqno(2.9)$$\ Then $$S = \int dt \Bigl \{ {\dot R}p_R + {\dot T} p_T - N {\cal H} \Bigr \} \,\,\, \eqno(2.10)$$\ where $${\cal H} = - \frac{p_R^2}{24R} - 6kR + R^3\, p_T \,\,\, . \eqno(2.11)$$\ The extended phase-space is generated by $(R,T,p_R,p_T)$. The variable $T$ is such that the Poisson bracket $$\{ T,{\cal H}\} _{{\cal H}=0} = R^3 >0 \,\,\, , \eqno(2.12)$$\ so that $T$ is a “global phase time” or, more precisely, since it does not involve the canonical momenta, a “global time” in accordance with the terminology introduced by Hajicek \[9\]. This is reassuring because the existence of a global time appears to be a necessary condition to prevent violations of unitarity in the quantum domain. The classical equations of motion are $${\dot R} = \frac{\partial (N {\cal H})}{\partial p_R} = - \frac{N p_R}{12 R} \,\,\, , \eqno(2.13a)$$\ $${\dot p}_R = -\frac{\partial (N {\cal H})}{\partial R} = N \Bigl (- \frac{ p_R^2}{24 R^2} +6k - 3 R^2\, p_T \Bigr ) \,\,\, , \eqno(2.13b)$$\ $${\dot T} = \frac{\partial (N {\cal H})}{\partial p_T} = N R^3 \,\,\, , \eqno(2.13c)$$\ $${\dot p}_T =- \frac{\partial (N {\cal H})}{\partial T} = 0 \,\,\, , \eqno(2.13d)$$\ supplemented by the super-Hamiltonian constraint $${\cal H} = - \frac{p_R^2}{24R} - 6kR + R^3\, p_T = 0 \,\,\, . \eqno(2.14)$$\ In order to solve these equations in the case $k=0$ let us choose the gauge $t = T$, so that $ N=R^{-3}$. Since $p_T $ is constant, it follows from ${\cal H} = 0$ that $p_R$ is proportional to $R^2$. 
Insertion of this result in Eq.(2.13a) leads to $$R^2 {\dot R} = constant \,\,\, \Longrightarrow \,\,\, R(t) = (At)^{1/3} \,\,\, , \eqno(2.15)$$\ where $A$ is a positive constant and the origin of the time $t$ has been conveniently chosen. The lapse function is, therefore, $$N(t) = R^{-3} = (At)^{-1} \,\,\, . \eqno(2.16)$$\ In terms of the cosmic time $\tau$ defined by $$d\tau = N(t)\, dt = \frac{dt}{At} \,\,\, \Longrightarrow \,\,\, \tau - {\tau}_0 = \frac{\ln (t)}{A} \eqno(2.17)$$\ one recovers the usual de Sitter solution $$R = R_0 \, e^{\Lambda \, \tau} \,\,\, . \eqno(2.18)$$\ This concludes the verification that the action (2.10) leads to de Sitter’s spacetime solution to Einstein’s equations. Note that if the time variable is chosen as $t=T$ the scale factor vanishes at $t=0$, whereas in the cosmic-time gauge it vanishes only at $\tau = -\infty$. It is also worth mentioning that Eqs.(2.9) and (2.13d) show that the entropy density $S$ remains constant, in agreement with the behavior of inflationary models during the de Sitter phase \[3\]. The general case of arbitrary $k$ can be easily handled in the gauge $N=1$ and leads to the expected solutions, but we shall refrain from considering it here. [**3. QUANTIZED MODEL: A WHEELER-DeWITT DESCRIPTION**]{} It will be convenient to introduce a new parametrization of the lapse function by writing it as $NR$. Then the action retains the form (2.10) but the super-Hamiltonian is now $${\cal H} = - \frac{p_R^2}{24} - 6kR^2 + R^4\, p_T = 0 \,\,\, . \eqno(3.1)$$\ The Wheeler-DeWitt quantization scheme consists in setting $$p_R \rightarrow -i\frac{\partial}{\partial R} \,\,\,\,\,\,\,\,\,\, , \,\,\,\,\,\,\,\,\,\, p_T \rightarrow -i\frac{\partial}{\partial T} \eqno(3.2)$$\ to form the operator ${\hat {\cal H}}$, and imposing the Wheeler-DeWitt equation $${\hat{\cal H}} \, \Psi = 0 \eqno(3.3)$$\ on the wave function of the universe $\Psi$.
In our present case this equation takes the form $$\frac{1}{24} \frac{\partial ^2 \Psi}{\partial R^2} -6kR^2\, \Psi -iR^4 \frac{\partial \Psi}{\partial T} = 0 \,\,\, . \eqno(3.4)$$\ Upon division by $R^4$ this equation takes the form of a Schrödinger equation $$i\frac{\partial \Psi}{\partial T} =\frac{1}{24R^4} \frac{\partial ^2 \Psi}{\partial R^2} -\frac{6k}{R^2}\, \Psi \eqno(3.5)$$\ with $T$ playing the role of time. In order to be able to interpret $T$ as a true time and (3.5) as a genuine Schrödinger equation, the operator $${\hat H} = \frac{1}{24R^4} \frac{\partial ^2 }{\partial R^2} -\frac{6k}{R^2}\eqno(3.6)$$\ must be self-adjoint. The scale factor $R$ is restricted to the domain $R>0$, so that the minisuperspace quantization deals only with wave functions defined on the half-line $(0,\infty )$. It is well known that in such circumstances one has to impose boundary conditions on the allowed wave functions, otherwise the relevant differential operators will not be self-adjoint. The need to impose boundary conditions to ensure self-adjointness has been long recognized by practitioners of the Arnowitt-Deser-Misner (ADM) reduced phase space formalism as applied to quantum cosmology \[8,10-12\], and very recently it has also been seen to have non-trivial cosmological implications in the Wheeler-DeWitt approach \[13\]. In the present case the operator $\hat H$ given by Eq.(3.6) with $k=0$ is self-adjoint in the inner product $$(\psi ,\phi ) = \int_0^{\infty} R^4 \psi ^*(R) \phi (R) dR \eqno(3.7)$$\ if its domain is suitably specified.
The operator $\hat H$ is symmetric if $$(\psi ,{\hat H}\phi )= \int_0^{\infty} \psi ^* (R) \frac{d^2\phi (R)}{dR^2}dR = \int_0^{\infty} \frac{d^2\psi (R)^*}{dR^2}\phi (R)dR = ({\hat H}\psi ,\phi )\,\,\, , \eqno(3.8)$$\ and, as in the case of $ d^2/dR^2$ on $L^2 (0,\infty)$, it is well known that the domain of self-adjointness of the Hamiltonian operator $ \hat H$ comprises only those wave functions that obey $$\psi ^{\prime}(0) = \alpha \psi (0) \eqno(3.9)$$\ with $\alpha \in (-\infty ,\infty ]$. For the sake of simplicity, here we shall address ourselves in detail only to the cases $\alpha =\infty$ and $\alpha =0$, that is, the boundary conditions we shall be mainly concerned with are $$\Psi (0,T) = 0 \eqno(3.10a)$$ or $$\Psi^{'} (0,T) = 0 \,\,\, , \eqno(3.10b)$$\ where the prime denotes partial derivative with respect to $R$. Let us look for stationary solutions to Eq.(3.4), that is, solutions of the form $$\Psi (R,T) = e^{iET} \psi (R) \,\,\, , \eqno(3.11)$$\ where $E$ is a real parameter. Then the equation for $\psi (R)$ becomes $$\frac{1}{24} \frac{d^2 \psi}{dR^2} + (ER^4 - 6kR^2 ) \, \psi = 0 \,\,\, . \eqno(3.12)$$\ The above equation coincides with the time-independent Wheeler-DeWitt equation written by other authors, occasionally with the help of somewhat obscure methods \[14\], with $E$ playing here the role of the cosmological constant $\Lambda$. It should be emphasized that here this equation has been derived from a well-defined action principle and the cosmological constant has appeared dynamically from the vacuum degrees of freedom. In the de Sitter case $(k=0)$ it is easy to show from the above equation that the “cosmological constant" $E$ is positive. 
Indeed, multiplying Eq.(3.12) by $\psi ^*$ and integrating over the half-line, one finds $$-\int_0^{\infty} \psi ^* (R) \frac{d^2 \psi (R)}{dR^2} \, dR = 24E\int_0^{\infty} R^4 \vert \psi (R) \vert ^2 dR \,\,\, \eqno(3.13)$$\ which, after an integration by parts followed by the use of (3.9), yields, for $\alpha \geq 0$, $$E = \frac{1}{24}\frac{\alpha\vert\psi (0)\vert ^2 + \int_0^{\infty}\vert d\psi / dR \vert ^2 dR }{\int_0^{\infty} R^4 \vert \psi (R) \vert ^2 dR} > 0 \,\,\, \eqno(3.14)$$\ as we wished to prove. It should be clear from the above derivation that the general boundary condition (3.9) is not sufficient to allow us to reach the same conclusion. This special property of conditions (3.9) with $\alpha\geq 0$ is not present in other minisuperspace models, and seems to confer on this restricted set of boundary conditions a physically privileged status as compared to the general one with arbitrary $\alpha$. The general solution to Eq.(3.12) with $k=0$ is \[15\] $$\psi_E (R) = \sqrt{R}\, \Bigl [ A\, J_{1/6} (\beta R^3 /3) + B J_{-1/6} (\beta R^3 /3) \Bigr ] \,\,\, , \eqno(3.15)$$\ where $J_{\nu}$ denotes a Bessel function of the first kind and order $\nu$, $A$ and $B$ are arbitrary constants, and $$\beta = \sqrt{24 E} \,\,\, . \eqno(3.16)$$\ For these stationary solutions, the usual interpretation of $R^4 \vert \Psi \vert ^2$ as a probability density implies no correlation between $R$ and $T$. The existence of such solutions to the Wheeler-DeWitt equation is perhaps not surprising since de Sitter’s spacetime may be regarded as static \[16\] or self-similar \[17\]. It follows from the behavior of Bessel functions for small argument that in the case of boundary condition (3.10a) the solution is $$\psi^{(a)}_E (R) = \sqrt{R} \, J_{1/6} (\beta R^3 /3) \,\,\, , \eqno(3.17a)$$\ whereas in the case of boundary condition (3.10b) the solution is $$\psi^{(b)}_E (R) = \sqrt{R} \, J_{-1/6} (\beta R^3 /3) \,\,\, . 
\eqno(3.17b)$$\ From the asymptotic behavior of Bessel functions for small and large argument one easily checks that both solutions are square integrable, but their norm induced by the inner product (3.7) is infinite. Thus, states of the Universe with a definite value of the cosmological constant do not exist. Realizable states can only be constructed by superposition of solutions to the Wheeler-DeWitt equation with different values of the cosmological constant. For any two states $\psi_1$ and $\psi_2$ belonging to the domain of the Hamiltonian operator, that is, obeying condition (3.9), one has $$J_{12}(0) =\frac{i}{2} \Biggl( {\psi_1}^* \frac{\partial{\psi_2}}{\partial R} - {\psi_2} \frac{\partial{\psi_1}^*}{\partial R} \Biggr )_{R=0} = 0 \,\,\, .\eqno(3.18)$$\ Therefore, as in other minisuperspace models \[12,18\], here Vilenkin’s wave function of the universe $\Psi$ is ruled out because it is in conflict with the self-adjointness of the Hamiltonian operator. Indeed, Vilenkin’s tunneling boundary condition \[2,5\] requires the wave function of the universe $\Psi$ to consist only of outgoing modes at singular boundaries of superspace. In the present context this would amount mathematically to $J_{12}(0)>0$ whenever $\psi_1 = \psi_2 =\Psi$, which is impossible. [**4. EVOLUTION OF WAVE PACKETS**]{} The stationary solutions (3.17) have infinite norm and play here a role analogous to that of plane waves in the quantum mechanics of the free particle, that is, finite-norm solutions can be constructed by superposing them. The general solutions to the Wheeler-DeWitt equation (3.5) with $k=0$ are given by the continuous linear combinations $$\Psi^{(\sigma )} (R,T) = \int_0^{\infty} c^{(\sigma )}(E) e^{iET} \psi ^{(\sigma )}_E (R)\, dE\,\,\,\,\, , \,\,\,\,\, \sigma = a,b \,\,\,\,\, , \eqno(4.1)$$\ where the superscript is used to distinguish the wave functions that obey the boundary condition (3.10a) from those that obey (3.10b). 
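Before turning to wave packets, a quick numerical aside (our own sketch, not part of the original text; the trial value $E=1$ and the grid are arbitrary): one can verify with SciPy that the stationary profiles (3.17) indeed solve Eq. (3.12) with $k=0$.

```python
import numpy as np
from scipy.special import jv

E = 1.0                          # trial eigenvalue ("cosmological constant")
beta = np.sqrt(24 * E)           # Eq. (3.16)

def psi(R, nu):
    # stationary solutions, Eqs. (3.17a,b): sqrt(R) * J_nu(beta R^3 / 3)
    return np.sqrt(R) * jv(nu, beta * R**3 / 3)

R = np.linspace(0.5, 1.5, 4001)
h = R[1] - R[0]
for nu in (1/6, -1/6):
    p = psi(R, nu)
    d2 = (p[2:] - 2 * p[1:-1] + p[:-2]) / h**2      # central 2nd derivative
    residual = d2 / 24 + E * R[1:-1]**4 * p[1:-1]   # lhs of Eq. (3.12), k = 0
    assert np.max(np.abs(residual)) < 1e-4
```

The residual of Eq. (3.12) is small everywhere on the grid, up to the finite-difference truncation error.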
According to the Appendix, the probability distribution of values of the cosmological constant is given by $$\rho^{(\sigma)} (E) = \frac{1}{4}\, \vert c^{(\sigma)}(E)\vert ^2 \,\,\, ,\eqno(4.2)$$\ assuming, of course, that $\Psi^{(\sigma )} (R,T)$ is normalized in the inner product (3.7). We shall consider simple but illustrative examples of wave-packet solutions to the Wheeler-DeWitt equation obeying each of the boundary conditions (3.10). Introducing the parameter $$\lambda = \frac{\sqrt{24E}}{3}\eqno (4.3)$$\ we can write (4.1) as $$\Psi^{(\sigma )} (R,T) = \sqrt{R}\int_0^{\infty} a^{(\sigma )}(\lambda ) \, e^{i3{\lambda}^2 T/8}\, J_{\nu}(\lambda R^{3})\, d{\lambda}\eqno(4.4)$$\ where $\nu=+1/6$ or $\nu=-1/6$ according to whether $\sigma = a$ or $b$, and $$a^{(\sigma )}(\lambda)=\frac{3\lambda}{4}\,c^{(\sigma )}(\frac{3{\lambda}^{2}}{8})\, .\eqno(4.5)$$\ The choice $$a^{(\sigma )}(\lambda)={\lambda}^{{\nu}+1} e^{-{\alpha}{\lambda}^{2}}\,\,\,\,\, , \,\, \,\,\, {\alpha}>0 \,\,\,\,\, , \eqno(4.6)$$\ with $\nu = +1/6$ for $\sigma = a$ and $\nu = -1/6$ for $\sigma = b$, is particularly simple because it enables us to perform the integration in (4.4) and express the wave function of the Universe in terms of elementary functions \[19\]. In the case $\nu=1/6$ we find $$\Psi^{(a)} (R,T) = \Bigl[ 2(\alpha - \frac{3iT}{8})\Bigr] ^{-7/6}\, R \, \exp\Bigl(- \frac{R^6}{4(\alpha - \frac{3iT}{8})}\Bigr) \,\,\,\,\, , \eqno(4.7)$$\ whereas for $\nu=-1/6$ the result is $$\Psi^{(b)} (R,T) = \Bigl[ 2(\alpha - \frac{3iT}{8})\Bigr ] ^{-5/6}\, \exp\Bigl(- \frac{R^6}{4(\alpha - \frac{3iT}{8})}\Bigr) \,\,\,\,\, . \eqno(4.8)$$\ The expectation value of the scale factor is given by $$\langle R\rangle_T=\frac{\int_{0}^{\infty} R^4 {\Psi}^{*}(R,T)\, R \,{\Psi}(R,T)dR}{\int_{0}^{\infty} R^4 {\Psi}^{*}(R,T){\Psi}(R,T)dR}\,\,\,\,\, , \eqno(4.9)$$\ and its time dependence reflects the dynamical evolution of the quantized version of de Sitter’s cosmological model. 
For the two types of boundary conditions of interest we find, respectively, $$\langle R\rangle _T^{(a)}=\frac{\Gamma{(\frac{4}{3})}}{\Gamma{(\frac{7}{6})}}\,\, \Bigl[\frac{64{\alpha}^2 + 9T^2}{32\alpha}\Bigr]^{1/6}\,\,\,\,\, , \eqno(4.10)$$\ and $$\langle R\rangle _T^{(b)}=\frac{1}{\Gamma{(\frac{5}{6})}}\,\, \Bigl[\frac{64{\alpha}^2 + 9T^2}{32\alpha}\Bigr]^{1/6}\,\,\,\,\, . \eqno(4.11)$$\ For sufficiently large values of the time $T$ the expectation value $\langle R\rangle _T$ grows at the same rate as that predicted by the classical solution (2.15), that is, the classical regime is attained for asymptotically large times. Quantum effects make themselves felt only for small enough $T$ corresponding to small $R$, as expected. The dispersion of the wave packets, defined by $$(\Delta R)_T^2 = \langle R^2 \rangle _T -\langle R \rangle _T^2 \eqno(4.12)$$\ is readily computed, with the results $$(\Delta R)_T^{(a)} = \Biggl( \frac{\Gamma (3/2)}{\Gamma (7/6)}-\frac{\Gamma (4/3)^2}{\Gamma (7/6)^2}\Biggr)^{1/2} \Bigl[\frac{64{\alpha}^2 + 9T^2}{32\alpha}\Bigr]^{1/6}\,\,\, ,\eqno (4.13a)$$\ $$(\Delta R)_T^{(b)} = \Biggl( \frac{\Gamma (5/6)\Gamma (7/6)-1}{\Gamma (5/6)^2} \Biggr)^{1/2} \Bigl[\frac{64{\alpha}^2 + 9T^2}{32\alpha}\Bigr]^{1/6}\,\,\, .\eqno (4.13b)$$\ The wave packets inevitably disperse as time passes, the minimum width being attained at $T=0$. As in the case of the free particle, the more localized the initial state at $T=0$ the more rapidly the wave packet disperses. It is important to classify the nature of this model as concerns the presence or absence of singularities. For the states (4.7) and (4.8) the expectation value of $R$ never vanishes, showing that these states are nonsingular. The issue of existence or nonexistence of singularities may be addressed from another point of view \[20\]. 
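These closed forms are easy to check numerically. The sketch below (ours, not from the paper; the values of $\alpha$ and $T$ are arbitrary) confirms Eq. (4.10) for the packet (4.7), and also that its (3.7)-norm is conserved in time, as required by the self-adjointness of the Hamiltonian.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 0.7                      # packet width parameter (arbitrary)

def prob(R, T):
    # R^4 |Psi^(a)(R,T)|^2 for the wave packet of Eq. (4.7)
    z = alpha - 3j * T / 8
    Psi = (2 * z)**(-7/6) * R * np.exp(-R**6 / (4 * z))
    return R**4 * abs(Psi)**2

def mean_R(T):
    num, _ = quad(lambda R: R * prob(R, T), 0, np.inf)
    den, _ = quad(lambda R: prob(R, T), 0, np.inf)
    return num / den

T = 1.3
exact = gamma(4/3) / gamma(7/6) * ((64*alpha**2 + 9*T**2) / (32*alpha))**(1/6)
assert abs(mean_R(T) - exact) < 1e-6             # Eq. (4.10)

# the (3.7)-norm of the packet is conserved in time
norm0, _ = quad(lambda R: prob(R, 0.0), 0, np.inf)
normT, _ = quad(lambda R: prob(R, T), 0, np.inf)
assert abs(norm0 - normT) < 1e-7
```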
We can define the probability density $$P^{(\sigma )}(R)=R^4\,\, \vert \psi_E^{(\sigma)}(R)\vert^{2}\,\,\,\,\,\, , \,\,\,\,\, \sigma = a,b \,\,\,\,\, , \eqno(4.14)$$\ for the stationary solutions (3.17a) and (3.17b). The behavior of the Bessel functions for small values of the argument makes it clear that $P^{(\sigma )}(R)\rightarrow 0$ as $R\rightarrow 0$, and thus the singularity is avoided within this model according to this criterion. Whatever the singularity criterion, de Sitter’s quantum cosmological model is nonsingular, just like its classical counterpart. [**5. CONCLUSION**]{} We have shown that, by taking the vacuum as a perfect fluid with equation of state $p = - \rho$, a Hamiltonian description of de Sitter’s cosmological model is possible, which makes subsequent quantization a straightforward process. This circumvents the problem of an insufficient number of degrees of freedom that besets the usual Wheeler-DeWitt quantization of de Sitter’s model. Endowing the vacuum with dynamical degrees of freedom makes possible the introduction of a time variable which, in turn, gives meaning to the dynamical evolution at the quantum level. The cosmological term is not postulated from the beginning, but arises as a manifestation of the vacuum degrees of freedom. In our approach states with a definite value of the cosmological constant are ruled out, and only those states are realizable that are finite-norm superpositions of solutions to the Wheeler-DeWitt equation with different values of the cosmological constant. With the scale factor as the sole degree of freedom of the gravitational field, stationary and simple wave-packet solutions to the Wheeler-DeWitt equation have been found. It turns out that, for the wave packets investigated, quantum effects are significant only for small values of the scale factor, and the classical regime sets in at asymptotically large times. Just like the classical de Sitter model, its quantum counterpart is nonsingular. 
[**ACKNOWLEDGMENT**]{} The authors are grateful to Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil, for financial support. [**APPENDIX: COSMOLOGICAL CONSTANT PROBABILITY DENSITY**]{} Let us start with a normalized state vector $\Psi (R,t)$ for which $$\Vert \Psi \Vert^2 =\int_0^{\infty}R^4 \vert\Psi (R,t)\vert^2dR =$$ $$\int_0^{\infty}dR R^4\int_0^{\infty}c(E)^* e^{-iEt}{\sqrt R}J_{\nu}(\sqrt{24E}R^3/3)dE\int_0^{\infty}c(E^{\prime}) e^{iE^{\prime}t}{\sqrt R}J_{\nu}(\sqrt{24E^{\prime}}R^3/3)dE^{\prime}\,\,\, .\eqno(A.1)$$\ The change of variables $$E= \frac{9}{24}\lambda ^2 \,\,\,\,\, , \,\,\,\,\, E^{\prime}= \frac{9}{24}{\lambda^{\prime}} ^2 \,\,\,\,\, , \,\,\,\,\, x=R^3 \,\,\,\,\, , \,\,\,\,\,g(\lambda ) = c(9\lambda^2/24)\exp (i9\lambda^2 t/24) \,\,\,\,\, ,\eqno(A.2)$$\ leads to $$\Vert \Psi \Vert^2 =\frac{3}{16} \int_0^{\infty}d\lambda \lambda g(\lambda)^*\int_0^{\infty}d\lambda^{\prime} \lambda^{\prime} g(\lambda^{\prime}) \int_0^{\infty} xJ_{\nu}(\lambda x)J_{\nu}(\lambda^{\prime} x)dx\,\,\, . \eqno(A.3)$$\ With the help of Hankel’s integral formula \[21\] $$f(x) = \int_0^{\infty}\, J_{\nu}(tx)tdt\, \int_0^{\infty }f(\lambda )J_{\nu}(\lambda t) \lambda d\lambda \,\,\, ,\eqno(A.4)$$\ which is equivalent to the formal equation $$\int_0^{\infty} xJ_{\nu}(\lambda x)J_{\nu}(\lambda^{\prime} x)dx = \frac{1}{\lambda}\delta (\lambda - \lambda^{\prime})\,\,\, , \eqno(A.5)$$\ one finds $$\Vert \Psi \Vert^2 =\frac{3}{16}\int_0^{\infty}d\lambda \lambda \vert g(\lambda)\vert^2 = \frac{3}{16}\int_0^{\infty}d\lambda \lambda \vert c(\frac{9\lambda^2}{24})\vert^2 = \frac{1}{4}\int_0^{\infty}\vert c(E) \vert ^2 dE \,\,\, , \eqno(A.6)$$\ from which Eq.(4.2) follows. **REFERENCES** [\[1\]]{} B. S. DeWitt, Phys. Rev. [**160**]{}, 1113 (1967). [\[2\]]{} J. A. Halliwell in [*Quantum Cosmology and Baby Universes*]{}, ed. S. Coleman, J. B. Hartle, T. Piran and S. Weinberg (World Scientific, Singapore, 1991). 
This work contains a guide to the literature on quantum cosmology and references to the seminal work of Hawking, Hartle, Vilenkin and Linde. [\[3\]]{} E. W. Kolb and M. S. Turner, [*The Early Universe*]{} (Addison-Wesley, New York, 1994). [\[4\]]{} J. A. S. Lima and A. Maia, Jr., Phys. Rev. [**D52**]{}, 5628 (1995). [\[5\]]{} A. Vilenkin, Phys. Rev. [**D50**]{}, 2581 (1994), and references therein. [\[6\]]{} B. F. Schutz, Phys. Rev. [**D2**]{}, 2762 (1970); [**D4**]{}, 3559 (1971). [\[7\]]{} See, for example, S. W. Hawking in [*Quantum Gravity and Cosmology*]{}, ed. H. Sato and T. Inami (World Scientific, Singapore, 1986); C. W. Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{} (Freeman, San Francisco, 1973). [\[8\]]{} V. G. Lapchinskii and V. A. Rubakov, Theor. Math. Phys. [**33**]{}, 1076 (1977). [\[9\]]{} P. Hajicek, Phys. Rev. [**D34**]{}, 1040 (1986); see also S. C. Beluardi and R. Ferraro, Phys. Rev. [**D52**]{}, 1963 (1995). [\[10\]]{} W. F. Blyth and C. J. Isham, Phys. Rev. [**D11**]{}, 768 (1975). [\[11\]]{} F. J. Tipler, Phys. Rep. [**137**]{}, 231 (1986). [\[12\]]{} N. A. Lemos, J. Math. Phys. [**37**]{}, 1449 (1996). [\[13\]]{} J. Feinberg and Y. Peleg, Phys. Rev. [**D52**]{}, 1988 (1995). [\[14\]]{} M. L. Fil’chenkov, Phys. Lett. [**B354**]{}, 208 (1995). [\[15\]]{} F. B. Hildebrand, [*Advanced Calculus for Applications*]{} (Prentice Hall, Englewood Cliffs, NJ, 1976), sec. 4.10. [\[16\]]{} W. Rindler, [*Essential Relativity*]{} (Springer, New York, 1977). [\[17\]]{} D. W. Sciama, [*Modern Cosmology*]{} (Cambridge University Press, Cambridge, 1975). [\[18\]]{} N. A. Lemos, Phys. Lett. [**A 221**]{}, 359 (1996). [\[19\]]{} I. S. Gradshteyn and I. M. Ryzhik, [*Tables of Integrals, Series, and Products*]{}, Corrected and Enlarged Edition (Academic, New York, 1980), formula 6.631(4). [\[20\]]{} T. Christodoulakis and C. G. Papadopoulos, Phys. Rev. [**D38**]{}, 1063 (1988). [\[21\]]{} Bateman Manuscript Project, [*Higher Transcendental Functions*]{}, vol. 
II, ed. by A. Erdélyi (McGraw-Hill, New York, 1953), sec. 7.10.5.
--- abstract: 'We introduce the concept of Shannon dimensionality $D$ as a new way to quantify bipartite entanglement as measured in an experiment. This is applied to orbital-angular-momentum entanglement of two photons, using two state analyzers composed of a rotatable angular-sector phase plate that is lens-coupled to a single-mode fiber. We can deduce the value of $D$ directly from the observed two-photon coincidence fringe. In our experiment, $D$ varies between 2 and 6, depending on the experimental conditions. We predict how the Shannon dimensionality evolves when the number of angular sectors imprinted in the phase plate is increased and anticipate that $D\simeq 50$ is experimentally within reach.' author: - 'J.B. Pors' - 'S.S.R. Oemrawsingh' - 'A. Aiello' - 'M.P. van Exter' - 'E.R. Eliel' - 'G. W. ’t Hooft' - 'J.P. Woerdman' title: Shannon dimensionality of quantum channels and its application to photon entanglement --- Photons can be entangled in various degrees of freedom. The most extensively studied variety involves the polarization degrees of freedom, of which there are inherently two per photon. In a typical EPR-Bell type experiment, the state analyzers are polarizers, and when their relative orientation is scanned, this gives rise to a sinusoidal coincidence fringe [@Aspect]. This particular shape is characteristic of the two-dimensional nature of polarization entanglement. Recently, much attention has been drawn to bipartite entanglement involving more than two degrees of freedom. With increasing dimensionality, quantum entanglement becomes correspondingly richer. High-dimensional entanglement is predicted to violate locality more strongly and to show more resilience to noise [@Collins; @Kaszlikowski]. From an applications perspective, it holds promise for implementing larger alphabets in quantum information, e.g. quantum cryptography [@Bechmann-Pasquinucci], and for an increased security against eavesdropping [@Zhang]. 
High-dimensional entanglement can be studied employing the frequency-time [@Riedmatten] or position-momentum degrees of freedom, the latter having been demonstrated for both the transverse linear [@Neves; @O'Sullivan-Hale] and orbital-angular-momentum degrees of freedom [@Mair; @Oemrawsingh2005]. It is crucial to have a quantifier of the dimensionality of entanglement as measured in an experiment [@Brunner]. In this Letter we introduce such a quantifier, using concepts from classical information theory in the spirit of Shannon [@ShannonBook]. We apply these ideas to orbital-angular-momentum entanglement, inserting appropriate angular state analyzers in the beamlines of a parametric down-conversion setup. We have realized a Shannon dimensionality $2\leq D \leq 6$ and we argue that $D\simeq50$ is within reach. In classical information theory [@ShannonBook], the number of independent communication channels of a signal is known as the Shannon number. The signal being the state of a physical system, the Shannon number is also referred to as the number of degrees of freedom, or the number of modes, of that system [@Toraldo69; @Gori]. For example, a signal encoded in the polarization degrees of freedom of a light beam has a Shannon number equal to 2. When dealing with a bipartite quantum system in an entangled pure state $|\psi \rangle \in \mathscr{K} = \mathscr{K}_A \otimes \mathscr{K}_B$, the usual measure of the effective dimensionality of the Hilbert space in which the state lives, is given by the Schmidt number $K$ [@Karelin] $$\label{Eq.1} K = \frac{1}{\mathrm{Tr}_A(\rho_A^2)} = \frac{1}{\mathrm{Tr}_B(\rho_B^2)}.$$ Here, $\rho_A = \mathrm{Tr}_B(|\psi \rangle \langle \psi |)$ and $\rho_B= \mathrm{Tr}_A(|\psi \rangle \langle \psi |)$, are the reduced density matrices representing the states of the two sub-systems $A \in \mathscr{K}_A$ and $B \in \mathscr{K}_B$, respectively. 
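As a small numerical illustration of Eq. (1) (our own sketch, not from the paper), the Schmidt number of a random pure state on an $8\times 8$-dimensional bipartite space equals the inverse purity of its Schmidt spectrum, which can be read off from a singular-value decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
# random pure state on C^8 (x) C^8, written as a matrix psi_{ab}
# so that the reduced density matrix is rho_A = psi psi^dagger
psi = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T
K = 1.0 / np.trace(rho_A @ rho_A).real           # Eq. (1)

# equivalently, K is the inverse purity of the Schmidt coefficients
s = np.linalg.svd(psi, compute_uv=False)         # sqrt(lambda_l)
assert abs(K - 1.0 / np.sum(s**4)) < 1e-10
assert 1.0 <= K <= 8.0
```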
Although a system may have infinitely many degrees of freedom, any actual measurement apparatus has effective access only to a finite number of them, say $D$. Such a dimensionality $D$ is referred to as the Shannon number of the measurement apparatus. Consider an experiment measuring correlations between the two subsystems $A$ and $B$. There are two measuring apparatuses, say $\mathscr{P}_A(\alpha)$ and $\mathscr{P}_B(\beta)$, interacting with subsystems $A$ and $B$, respectively, where $\alpha$ and $\beta$ label possible settings of the two apparatuses. For a given setting $\xi \in \{ \alpha, \beta\}$, detector $\mathscr{P}_X(\xi)$ is represented by the projection operator $\hat{\Gamma}(\xi) = |X(\xi)\rangle \langle X(\xi)|$, where $X \in \{A,B\}$, and $|X(\xi)\rangle$ is the state in which the system $X$ is left after measurement. If a Von Neumann-type projective measurement is performed, the set of states $\{ |X(\xi) \rangle \}_\xi$ obtained by varying $\xi$ is complete and orthonormal, namely $$\label{Eq.2} \langle X(\xi) | X(\xi') \rangle = \delta_{\xi \xi'}, \qquad \sum_\xi \hat{\Gamma}(\xi) = \hat{1},$$ where the measurement operators $\hat{\Gamma}(\xi)$ are Hermitian and idempotent. The number of these operators is equal to the dimension of the Hilbert space of the measured quantum system [@Brandt]. However, in many situations non-orthogonal measurements are made and Eqs. (\[Eq.2\]) do not hold [@Thew]. In this case, the number of projection operators $\hat{\Gamma}(\xi)$ does not give the dimension of the Hilbert space of the measured system, and a new criterion must be introduced. Let us therefore consider finite-dimensional systems, say $\dim (\mathscr{K}_X) = L$, and rewrite Eq. (\[Eq.2\]) for the case of non-orthogonal measurements as $$\label{Eq.3} \langle X(\xi) | X(\xi') \rangle = g_{\xi \xi'}, \qquad \sum_\xi \hat{\Gamma}(\xi) = \hat{\gamma},$$ where $G=[g_{\xi \xi'}]$ is a matrix of size $L \times L$, and $\hat{\gamma}$ is a Hermitian operator. 
The eigenvalues $\gamma_l$ of $\hat{\gamma}$ give the detector’s ‘sensitivity’ to the corresponding eigenmodes. In general, a detector will not be equally sensitive to all eigenmodes and some $\gamma_l$ are substantially larger than others. The *effective* dimensionality $D \leq L$ of the Hilbert space $\mathscr{D}$ where the *measured* system lives can be quantified as the Hilbert-Schmidt norm of the eigenvalue distribution [@Pors] $$\label{Eq.4} D \equiv \frac{1}{\text{Tr} (\hat{\gamma}^2)} = \frac{1}{\sum_l \gamma^2_l}.$$ This dimensionality should be interpreted as the effective Shannon number of information channels [@ShannonBook; @Toraldo69]. The isomorphism of Eq. (\[Eq.1\]) and Eq. (\[Eq.4\]) suggests a relation between the Schmidt number $K$ and the Shannon dimensionality $D$. The nature of such a relation becomes clear if one notes that since the operators $\hat{\Gamma}(\xi)$ are Hermitian and positive semidefinite, the operator $\hat{\gamma}$ may be interpreted as a density matrix acting in $\mathscr{K}_X$ [@BengtssonBook]. Thus, if we think of $\hat{\gamma}$ as a reduced density matrix of a bipartite system, then $K$ and $D$ are formally the same. However, it is important to note that while $K$ furnishes the dimensionality of the *generated* entanglement, $D$ gives the effective dimensionality of the space $\mathscr{D}$ that can *potentially* be *probed* and it is a property of the projection apparatus only. The dimensionality of the *measured* entanglement is a joint property of the generated system and analyzers, but simply amounts to $D$ as long as $\mathscr{K}\supset\mathscr{D}$. ![\[fig:1\] (color online). Experimental setup. Orbital-angular-momentum entangled photons are emitted at 826 nm by a BBO crystal, cut for Type-I collinear phase matching. A thin GaP wafer serves to eliminate the pump beam. The two-photon field can be clipped with an aperture. 
The twin-photons are spatially separated by a beam splitter and imaged on the angular phase plates ($f_2=4f_1=40~$cm). Just behind the phase plates, the frequency-degenerate photons are selected by interference filters (not shown), centered around 826 nm with a 10 nm width. The phase plates (shown are quarter-sector plates) are oriented at angles $\alpha$ and $\beta$, and photon counts are rendered by a coincidence circuit. ](Fig_1.eps){width="8.3truecm"} Next, we apply our formal theory to an experiment on orbital-angular-momentum entanglement of two photons, in order to illustrate how detector characteristics bound the measured entanglement to an effective Shannon dimensionality $D$, while probing a generated state with Schmidt number $K \gg D$ (and $\mathscr{K}\supset\mathscr{D}$). Our experimental setup is depicted in Figure \[fig:1\]. Pumping a BBO non-linear crystal with a 150 mW Kr$^+$ laser beam at $\lambda=413$ nm, we produce spatially entangled photons by means of spontaneous parametric down conversion. The state we generate is of the form $|\Psi\rangle = \sum_l \sqrt{\lambda_l} |l\rangle |-l\rangle$, where $|l\rangle$ denotes the orbital angular momentum eigenmode of order $l$: $\langle \phi | l \rangle = \exp(i l \phi)/\sqrt{2\pi}$, with $\phi$ the azimuthal angle [@Walborn]. Employing Type-I collinear phase matching, we collect the full emission cone and with the experimental parameters of our setup (beam half-waist at the position of the crystal $w_\text{0} = 250~ \mu$m and crystal length 1 mm) we obtain an azimuthal Schmidt number $K\simeq31$ [@Law; @Exter_SpaFilt; @Exter_ModCount]. The twin photons are spatially separated by means of a non-polarizing beam splitter. Each arm of the setup contains an angular state analyzer, composed of an angular phase plate that is lens-coupled to a single-mode fiber (see Fig. \[fig:1\]) [@Oemrawsingh2006]. The angular phase plates carry a purely azimuthal variation of the optical thickness. 
As in polarization entanglement [@Aspect], the phase plates are rotated around their normals and the photon coincidence rate is recorded as a function of their independent orientations [@Oemrawsingh2005]. The combined detection state of the two angular-phase-plate analyzers, each acting locally, can be expressed as $$\label{Eq.5} |A(\alpha)\rangle \otimes |B(\beta)\rangle = \left(\sum_l \sqrt{\gamma_l} |l\rangle e^{i l \alpha}\right)_A \otimes \left(\sum_{l} \sqrt{\gamma_{l}} |l\rangle e^{i l \beta}\right)_B,$$ where $\alpha$ and $\beta$ denote the orientations of the two phase plates, respectively [@Footnote1]. The complex expansion coefficients $\sqrt{\gamma_l}$ are fixed by the physical profile of the angular phase plate and obey the normalization condition $\sum_l |\gamma_l|=1$. In general, the detection state constitutes a non-uniform superposition of orbital angular momentum modes. When the angular phase plates are rotated over $\alpha$ or $\beta$, all modes in the superposition rephase with respect to each other, yielding a set of detection states of the type Eq. (\[Eq.3\]). The effective Shannon dimensionality that is so probed is given by Eq. (\[Eq.4\]). It is the average number of modes captured by an analyzer when its phase plate is rotated over $360^\circ$. As we have recently shown in Ref. [@Pors], the Shannon dimensionality is straightforwardly deduced from the shape of the experimental coincidence curve; it is the inverse of the area underneath the peak-normalized coincidence fringe, obtained when rotating one of the phase plates. In our experiment, we have used angular-sector phase plates; these have a single arc sector, characterized by the angle $\delta$, whose optical thickness is $\lambda/2$ greater than that of the remainder of the plate [@Oemrawsingh2006]. The part of the field that crosses this sector thus flips sign. The phase plates are manufactured from fused-quartz plane-parallel plates, having a wedge angle of $0.25"$. 
They are processed by a combination of photolithography, wet etching, deposition and lift-off, resulting in a well-defined mesa structure, with a transition region that is typically $20~\mu$m wide. The insets of Figure \[fig:2\] show two such plates; a half-sector plate ($\delta=\pi$) consisting of two equal halves that are phase shifted by $\pi$; and a quarter-sector plate ($\delta=\pi/2$) having one quadrant $\pi$-phase shifted with respect to the remainder of the plate. For state analyzers that are equipped with such sector phase plates, the Shannon dimensionality is given by [@Pors] $$\label{Eq.6} D(\delta) = \left\{% \begin{array}{ll} \left[1 - 4\frac{\delta}{\pi} + 6\left(\frac{\delta}{\pi}\right)^2 - \frac{8}{3}\left(\frac{\delta}{\pi}\right)^3 \right]^{-1}, & \delta \in [0 ,\pi], \\ D(2 \pi -\delta), & \delta \in [\pi, 2\pi]. \\ \end{array} \right.$$ For $\delta=0$ we find the trivial result $D=1$; a planar plate does nothing. For $\delta=\pi$, i.e. a state analyzer equipped with a half-sector plate, we arrive at $D=3$. For an analyzer equipped with a quarter-sector plate we find $D=6$. This is the maximum value for a single angular-sector phase plate. We note that for our setup indeed $K \gg D$. In the experiment, we scan one angular-sector phase plate over a $360^{\circ}$ rotation, the other remaining fixed, and measure the coincidence rate. In terms of Klyshko’s picture of advanced waves [@Klyshko], valid when $K \gg D$, the resulting shape of the coincidence curves can be explained in terms of the mode overlap of the two state analyzers. Figure \[fig:2\](a) shows experimental results obtained with two half-sector plates $(\delta=\pi)$, having a step height of 0.48 $\lambda$. The data points form a double *parabolic* fringe, consistent with theory (solid curve). The maxima at $0^\circ$ and $180^\circ$ are sharply peaked. The zeros of the fringe are very deep; less than 10 counts per 10 seconds. 
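In the idealized purely azimuthal model (our own sketch; the measured coefficients also contain the radial overlap with the fiber mode, which we ignore here), the $\gamma_l$ of a single $\pi$-phase sector of size $\delta$ are the Fourier weights of the plate transmission, $\gamma_0=(1-\delta/\pi)^2$ and $\gamma_l=(4/\pi^2)\sin^2(l\delta/2)/l^2$ for $l\neq 0$, and Eq. (\[Eq.4\]) then reproduces Eq. (\[Eq.6\]) numerically:

```python
import numpy as np

def gamma_l(delta, lmax=2000):
    # Fourier weights of exp(i*Phi) for a single pi-phase sector of size
    # delta: gamma_0 = (1 - delta/pi)^2 and, for l != 0,
    # gamma_l = (4/pi^2) sin^2(l delta/2) / l^2
    l = np.arange(1, lmax + 1)
    g = (4 / np.pi**2) * np.sin(l * delta / 2)**2 / l**2
    return np.concatenate(([(1 - delta/np.pi)**2], g, g))  # l = 0, +/-1, ...

def D_numeric(delta):
    g = gamma_l(delta)
    return 1.0 / np.sum(g**2)            # Eq. (4)

def D_formula(delta):
    x = delta / np.pi                    # Eq. (6) for delta in [0, pi]
    return 1.0 / (1 - 4*x + 6*x**2 - (8/3)*x**3)

assert abs(D_numeric(np.pi) - 3.0) < 1e-3      # half-sector plate
assert abs(D_numeric(np.pi/2) - 6.0) < 1e-3    # quarter-sector plate
assert abs(D_formula(np.pi/2) - 6.0) < 1e-12
```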
The maximum coincidence rate is of the order of $6.5\times 10^3$ per 10 seconds, compared to $10^5$ single counts. We verified that the coincidence rate depends on the *relative* orientation between the two phase plates only, the fringe visibility being $>99\%$ for all cases studied. This basis independence is the key aspect of quantum entanglement. From the area underneath the data we deduce the experimental value $D=3.0$. Note that a parabolic fringe was also reported in Ref. [@Oemrawsingh2005], obtained with non-integer spiral phase plates. We conclude that also in that case $D=3$. ![\[fig:2\] (color online). Coincidence count rate vs. the relative orientation of the two state analyzers. Points denote experimental data, the curves are theoretical predictions. (a) Half-sector plate. The parabolic fringe (circles) is a signature of a dimensionality larger than two: we find $D=3.0$. Truncating the number of modes, by closing the aperture, gradually reduces the parabola into a sine of dimensionality 2.0 (triangles). (b) Quarter-sector plate. The piece-wise parabolic fringe yields an experimental dimensionality of 5.8 (circles), where theory predicts $D=6$.](Fig_2.eps){width="7.8truecm"} An aperture, positioned inside the telescope, allows us to control the number of detected modes (see Figure \[fig:1\]). Because of the anti-symmetric profile of the half-sector plate, the detection state contains only odd expansion terms (see Eq. (\[Eq.5\])) with $\gamma_l = \gamma_{-l}$. When the aperture size is reduced, higher-order orbital-angular-momentum modes are cut off so that, eventually, only the modes $l=1$ and $l=-1$ survive. We then expect a sinusoidal fringe, analogous to two-dimensional polarization entanglement [@Aspect]. In the experiment, we observe that the coincidence curve is gradually transformed from parabolic to sinusoidal when the aperture gets smaller. 
Using an aperture of 600 $\mu$m diameter, we are in an intermediate regime (squares, $D=2.1$), while using a 400 $\mu$m diaphragm yields a curve that resembles a sine very well (triangles, $D=2.0$). The dashed and dotted curves are theoretical predictions. To achieve $D=6$, we use two quarter-sector plates $(\delta=\pi/2)$, carrying an edge discontinuity deviating less than 3% from $\lambda/2$. The circles in Figure \[fig:2\](b) show our experimental results, revealing a coincidence curve which is parabolic for $|\alpha-\beta| \leq 90^\circ$ and equal to zero otherwise, in agreement with theory (solid curve). We find $D=5.8$, in very good agreement with the expected value of 6 mentioned above. ![\[fig:3\] (color online). Maximum dimensionality that can be accessed with sector phase plates having $2N$ angular sectors alternatingly phase shifted by $\pi$. The insets show the optimized plates for $N=1, N=2$ and $N=3$.](Fig_3.eps){width="7.8truecm"} The maximum value of the Shannon dimensionality that can be achieved with a phase plate having but a single sector is $D=6$. Can one reach higher values of $D$ by using plates with more sectors? To answer this question, we consider plates with $N$ sectors that are phase shifted by $\pi$ with respect to interjacent regions. For each choice of sector angles, we calculate the expansion coefficients $\{\sqrt{\gamma_l}\}$ and, subsequently, $D$ (see Eq. (\[Eq.4\]) and (\[Eq.5\])). Next, we maximize $D$ by adjusting the sector angles using a Monte-Carlo random-search algorithm. The result is plotted in Figure \[fig:3\], showing a graph of the maximum value of $D$ versus the number of mesas $N$. For 10 such sectors, we find $D=49.9$. The insets show the optimal phase plates for $N=1$ (quarter-sector plate), $N=2$, and $N=3$. In conclusion, we have introduced the effective Shannon dimensionality as a novel quantifier of entanglement as measured in an actual experiment. 
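The random search over sector angles can be sketched as follows (our own illustration, not the paper's code; the plate is discretized on an azimuthal grid so that the $\gamma_l$ follow from an FFT, and the number of trials and the grid size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4096                                   # azimuthal grid points

def shannon_D(edges):
    # edges: sorted angles in [0, 2*pi) bounding the pi-shifted mesas;
    # gamma_l are the Fourier weights of the plate transmission +/-1,
    # and D follows from Eq. (4)
    phi = np.linspace(0, 2*np.pi, M, endpoint=False)
    plate = np.ones(M)
    for a, b in zip(edges[::2], edges[1::2]):
        plate[(phi >= a) & (phi < b)] = -1.0
    g = np.abs(np.fft.fft(plate) / M)**2   # gamma_l, Parseval: sum = 1
    return 1.0 / np.sum(g**2)

# sanity check: a single quarter sector should give D close to 6, cf. Eq. (6)
assert abs(shannon_D(np.array([0.0, np.pi/2])) - 6) < 0.1

# crude random search over plates with N = 2 mesas (4 edges)
best = 0.0
for _ in range(2000):
    edges = np.sort(rng.uniform(0, 2*np.pi, 4))
    best = max(best, shannon_D(edges))
```

A real optimization would refine the best random configuration by a local search, and, as in the paper, the measured coefficients would also include the fiber-mode overlap.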
We have demonstrated its significance to the case of two-photon orbital-angular-momentum entanglement. Using angular-sector phase plates, we have achieved Shannon dimensionalities up to $D=6$. We anticipate that it is feasible to probe dimensionalities as high as 50, using multi-sector phase plates. These can be manufactured by means of photo- or e-beam lithography as in diffractive-optics technology. Alternatively, the use of adaptive optical devices, such as spatial light modulators or micro-mirror arrays, seems promising, particularly because of their versatility with regard to plate patterns. However, the ultimate limit to the Shannon dimensionality is constrained by the angular Schmidt number of the source; using periodically poled crystals, such as PPKTP, $K\sim100$ is viable for realistic values of pump-beam waist and crystal length, without loss of count rates [@Fiorentino]. This work was supported by the Stichting voor Fundamenteel Onderzoek der Materie.
--- author: - 'Jérôme Benoit[^1]' - Ana Nunes - Margarida Telo da Gama bibliography: - 'pamds.bib' date: '[[`q-bio/0510005`](http://arxiv.org/abs/q-bio/0510005)]{}' title: Pair Approximation Models for Disease Spread --- Introduction {#sec/introduction} ============ Stochastic Susceptible-Infective-Recovered (<span style="font-variant:small-caps;">SIR</span>) epidemic models on lattices and networks can be mapped onto percolation problems and are well understood [@Grassberger1982; @MooreNewman2000May; @MooreNewman2000Nov]. To describe disease spread and persistence in a community, the model must be extended to include a mechanism for renewal of susceptibles, either births or immunity waning. Models with immunity waning, Susceptible-Infective-Recovered-Susceptible (<span style="font-variant:small-caps;">SIRS</span>), are based on the following transitions: $$\label{ESWN/SIRS/scheme} {S}{\xrightarrow[I_{\eswnNeighbourIndex}]{\mspace{15.0mu}\eswnInfectionRate\mspace{15.0mu}}} {I}{\xrightarrow{\mspace{15.0mu}\eswnRecoveryRate\mspace{15.0mu}}} {R}{\xrightarrow{\mspace{15.0mu}\eswnDemographicRate\mspace{15.0mu}}} {S},$$ meaning that any susceptible individual $S$ can be infected by an infected neighbour $I_{\eswnNeighbourIndex}$ at the infection rate $\eswnInfectionRate$, any infected individual $I$ becomes recovered $R$ at the recovery rate $\eswnRecoveryRate$, and any recovered individual $R$ becomes susceptible $S$ at the immunity loss rate $\eswnDemographicRate$. Following common practice, we choose time units for which $\eswnRecoveryRate=1$.
The <span style="font-variant:small-caps;">SIRS</span> model interpolates between two well known models, the contact process (also known as Susceptible-Infective-Susceptible or <span style="font-variant:small-caps;">SIS</span>) and the <span style="font-variant:small-caps;">SIR</span> model, in the limits $\eswnDemographicRate\rightarrow\infty$ and $\eswnDemographicRate\rightarrow{0}$, respectively, and much is known about its behaviour on regular lattices, both from the point of view of rigorous results [@DurrettNeuhauser1991; @AndjelSchinazi1996; @vdBerg1998] and of assessing the performance of mean field and pair approximations against stochastic simulations [@JooLebowitz2004]. In particular, it is known [@vdBerg1998] that on hypercubic lattices of arbitrary dimension the phase diagram of the model has two critical values, $\eswnCriticalInfectionRate(\infty)$ and $\eswnCriticalInfectionRate(0)$, which are the critical rates of the two limit problems, that is, the contact process and <span style="font-variant:small-caps;">SIR</span>, respectively. For $\eswnInfectionRate<\eswnCriticalInfectionRate(\infty)$ there is disease extinction for every $\eswnDemographicRate$, while for $\eswnCriticalInfectionRate(0)<\eswnInfectionRate$ there is disease persistence for every $\eswnDemographicRate$. For $\eswnCriticalInfectionRate(\infty)<\eswnInfectionRate<\eswnCriticalInfectionRate(0)$ disease persistence occurs only for $\eswnDemographicRate$ above a certain threshold. The region of disease persistence for every $\eswnDemographicRate$ is ‘missing’ in dimension $d=1$, because in this case $\eswnCriticalInfectionRate(0)$ is infinite. In [@JooLebowitz2004] the uncorrelated pair approximation (<span style="font-variant:small-caps;">UPA</span>, see Section \[sec/MFAUPA\]) was applied to the <span style="font-variant:small-caps;">SIRS</span> model on linear and square lattices, and the phase diagrams computed from the corresponding equations of evolution were compared with the mean field phase diagram and with the results of simulations.
It was shown that, by contrast with the mean field approximation, the <span style="font-variant:small-caps;">UPA</span> phase diagram agrees qualitatively with the simulations and the exact results both in one and in two dimensions. Since the <span style="font-variant:small-caps;">UPA</span> does not take into account the lattice dimensionality explicitly, it predicts identical phase diagrams on lattices with the same coordination number $\eswnDegree$, namely on linear and square lattices when next nearest neighbours are considered on the linear lattice, so that $\eswnDegree=4$ in both cases. However, in one dimension the critical infection rate $\eswnCriticalInfectionRate(0)$ is infinite, and the critical line has an asymptote at $\eswnDemographicRate=0$, while in two dimensions the critical line crosses the $\eswnDemographicRate=0$ axis at a finite value of $\eswnCriticalInfectionRate$, which is the result of the <span style="font-variant:small-caps;">UPA</span> for $\eswnDegree=4$. The question is then whether generalized pair approximations can account for the dependence on dimensionality and, in particular, whether they can describe phase diagrams with different qualitative behaviours at fixed coordination numbers. We have addressed this question, and more generally the problem of constructing suitable pair approximations (Section \[sec/MFAUPA\]), in the context of a modification of model (\[ESWN/SIRS/scheme\]), where the mechanism of renewal of susceptibles is demography, rather than immunity waning. This is the natural scenario in the epidemiology of diseases that confer permanent immunity, such as childhood infectious diseases [@AndersonMay; @MurrayII]. For this model infection obeys the same rules as in (\[ESWN/SIRS/scheme\]), immunity is permanent, and all individuals, whatever their state, are subject to birth and death events at a rate $\eswnDemographicRate$.
The stochastic process, which describes the dynamics of this system, is governed by the transitions \[ESWN/scheme\] $$\begin{gathered} \label{ESWN/scheme/disease} {S}{\xrightarrow[I_{\eswnNeighbourIndex}]{\mspace{15.0mu} \eswnInfectionRate\mspace{15.0mu}}} {I}{\xrightarrow{\mspace{15.0mu}\eswnRecoveryRate\mspace{15.0mu}}}{R} , \\ \label{ESWN/scheme/demographic} {\{S,I,R\}}{\xrightarrow{\mspace{15.0mu}\eswnDemographicRate\mspace{15.0mu}}}{S} .\end{gathered}$$ In the limit $\eswnDemographicRate=0$, both models coincide with the <span style="font-variant:small-caps;">SIR</span> model. In the opposite limit the dynamics of the two models are drastically different. While, in the limit $\eswnDemographicRate=\infty$, <span style="font-variant:small-caps;">SIRS</span> coincides with the contact model [@JooLebowitz2004], in the same limit the dynamics of the demographic model is trivial: it is driven by demography, which keeps the entire population susceptible for any $\eswnInfectionRate$, and thus $\eswnCriticalInfectionRate(\infty)=\infty$. We are interested in the regime where $\eswnDemographicRate$ is smaller than the recovery rate, which is the meaningful one for the study of acute disease spread. Although in this regime the dynamics is dominated by the infection and recovery processes, which are identical in both models, the behaviour of the demographic model appears to be different, in a subtle way, from that of <span style="font-variant:small-caps;">SIRS</span> (Section \[sec/MFAUPA\]). We have considered the demographic <span style="font-variant:small-caps;">SIR</span> model on a linear lattice with periodic boundary conditions (ring) and next nearest neighbour interactions ($\eswnDegree=4$). We constructed a family of correlated pair approximations (<span style="font-variant:small-caps;">CPA</span>), parametrized by $\eswnClosedFormParameter$, a measure of the relative contributions of loops and open triplets of connected sites involved in the disease spread (Section \[sec/CPA\]).
For $\eswnClosedFormParameter=0$ the approximation reduces to the standard <span style="font-variant:small-caps;">UPA</span> (Section \[sec/MFAUPA\]). The phase diagrams of the <span style="font-variant:small-caps;">CPA</span> show that as $\eswnClosedFormParameter$ increases from $0$ to $\eswnRemarkableClosedFormParameter$ (see Section \[sec/CPA\]) the <span style="font-variant:small-caps;">CPA</span> interpolates between the <span style="font-variant:small-caps;">UPA</span> critical behaviour and the typical one-dimensional phase behaviour, with $\eswnCriticalInfectionRate(0)=\infty$. Finally, we have simulated the demographic <span style="font-variant:small-caps;">SIR</span> model on a ring with $\eswnDegree=4$. The results of the simulations indicate that while the <span style="font-variant:small-caps;">CPA</span> with a constant value of $\eswnClosedFormParameter$ cannot describe the global phase diagram of the model, a reasonable description of endemic equilibria as well as of the phase diagram is obtained when $\eswnClosedFormParameter$ is allowed to depend on the demographic rate $\eswnDemographicRate$ (Section \[sec/CPA\]). This illustrates that, in addition to describing the dimensional crossover for lattices with a given coordination number, the <span style="font-variant:small-caps;">CPA</span> can be made semi-quantitative, providing an alternative to the stochastic simulations of individual based models. We conclude in Section \[sec/discussion\] with a brief discussion of the results. ![ Endemic infective probability versus infection rate at (a) high demographic rate, and at (b) very high demographic rate: the endemic infective probability is plotted from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed lines), the <span style="font-variant:small-caps;">UPA</span> (dashed dotted lines) and the correlated model with best-fit closed form parameters (solid lines).
The fitting procedure is based on perpendicular offsets and on the assumption that the closed form parameter $\eswnClosedFormParameter$ depends only on the demographic rate $\eswnDemographicRate$. Closed form parameters $\eswnClosedFormParameter$ for (a) and (b) respectively: $0.50$, $0.70$. []{data-label="fig/X/endemic"}](pamds-EndemicInfectivePlots){width="1.0\linewidth"} Mean Field and Uncorrelated Pair Approximations {#sec/MFAUPA} =============================================== In this section we consider the time evolution of the demographic <span style="font-variant:small-caps;">SIR</span> model on regular lattices and review the mean-field and (standard) uncorrelated pair approximations, setting the notation and the stage for the development of the more sophisticated correlated pair approximations. In the demographic <span style="font-variant:small-caps;">SIR</span> model on networks, sites represent individuals and bonds social links. The dynamics is governed by the stochastic process . Denoting by $\prob{A}$ the probability for an individual to be in state $A$ (at time $t$), $\prob{AB}$ the probability for a lattice bond to connect an individual in state $A$ to an individual in state $B$, the time evolution of the singleton probabilities $\prob{A}$ can be described by the set of first order differential equations [@AndersonMay; @MurrayII]: \[ESWN/SDE/1\] $$\begin{aligned} \label{ESWN/SDE/1/S} \difft{\prob{S}}&= +\eswnDemographicRate\: \bigl[ \prob{I} + \prob{R} \bigr] -\eswnInfectionRate\sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}} , \\ \label{ESWN/SDE/1/I} \difft{\prob{I}}&= +\eswnInfectionRate\sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}} -(\eswnDemographicRate+\eswnRecoveryRate)\: \prob{I} , \\ \label{ESWN/SDE/1/R} \difft{\prob{R}}&= +\eswnRecoveryRate\: \prob{I} -\eswnDemographicRate\: \prob{R} ,\end{aligned}$$ where the summations run over the connected neighbours. 
Clearly the set of equations is not closed since it involves pair probabilities without describing their time evolution. This follows from the stochastic process where infection proceeds *via* $SI$ contact pairs. As a matter of fact, the time evolution of the $q$-tuple probabilities is described by a set of first order differential equations expressing their time derivatives as linear combinations of $q$-tuple and $(q\!+\!1)$-tuple probabilities, subject to a normalization condition. In order to proceed, the set of equations must be closed, that is the $(q\!+\!1)$-tuple probabilities must be written in terms of $q$-tuple probabilities. The ‘art’ is to use closures that capture key physical features of the system and are still manageable by symbolic or numerical-symbolic computation. The results of a particular closure, or approximation, may then be checked against rigorous results and/or stochastic simulations. For most closures the $(q\!+\!1)$-tuple probabilities are rational functions of the $q$-tuple probabilities, appropriately normalized, and thus the constrained set of first order differential equations may be replaced by an unconstrained set where the time derivatives of independent $q$-tuple probabilities are expressed as rational functions of these $q$-tuple probabilities. Although the resulting sets of equations are easily integrable by classical numerical methods and admit polynomial systems as steady state equations, their analysis remains cumbersome even at low order $q$. The simplest closure is the mean field approximation (<span style="font-variant:small-caps;">MFA</span>), where the pairs ($2$-tuples) are assumed to be formed by uncorrelated singletons ($1$-tuples): $$\label{ESWN/UCS/approximation} \sum_{\eswnNeighbourIndex}{\prob{{S}{I_{\eswnNeighbourIndex}}}}\approx \eswnDegree\: \prob{S}\prob{I} .$$ For the demographic <span style="font-variant:small-caps;">SIR</span> model the endemic equilibrium (steady state) is computed easily. 
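The mean-field computation just described can be sketched numerically: integrating Eqs. (\[ESWN/SDE/1\]) under the closure (\[ESWN/UCS/approximation\]) drives the system to the endemic steady state $\prob{S}^{*}=(\eswnDemographicRate+1)/(\eswnDegree\eswnInfectionRate)$ when $\eswnInfectionRate$ exceeds the threshold $(\eswnDemographicRate+1)/\eswnDegree$. The sketch below assumes $\eswnDegree=4$ and recovery rate set to one; function names are illustrative.

```python
def mfa_trajectory(lam, gamma, k=4, sigma=1.0, dt=0.01, t_max=500.0):
    """Euler integration of the demographic SIR equations under the MFA
    closure sum_j P(S I_j) ~ k P(S) P(I); sigma is the recovery rate."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(t_max / dt)):
        dS = gamma * (I + R) - k * lam * S * I
        dI = k * lam * S * I - (gamma + sigma) * I
        dR = sigma * I - gamma * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

# Above threshold (lam > (gamma + sigma)/k) the disease persists:
# S* = (gamma + sigma)/(k lam), I* = (1 - S*) gamma/(gamma + sigma).
S_e, I_e, R_e = mfa_trajectory(lam=1.0, gamma=0.1)
# Below threshold the infective probability decays to zero.
_, I_0, _ = mfa_trajectory(lam=0.2, gamma=0.1)
```

For $\eswnInfectionRate=1$, $\eswnDemographicRate=0.1$ the endemic state is $\prob{S}^{*}=0.275$, $\prob{I}^{*}\approx 0.066$, approached through damped oscillations.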
The mean-field endemic infective probabilities are plotted in Figure \[fig/X/endemic\] as a function of the infection rate, at two values of $\eswnDemographicRate$. For any value of $\eswnDemographicRate$, the <span style="font-variant:small-caps;">MFA</span> predicts two different steady states: at infection rates $\eswnInfectionRate$ smaller than the critical infection rate $\eswnCriticalInfectionRate$ there is disease extinction, while at infection rates $\eswnInfectionRate$ greater than the critical infection rate $\eswnCriticalInfectionRate$ there is disease persistence, *i.e.* infected (and recovered) individuals coexist with susceptibles. The two regimes are separated by the mean-field endemic threshold that is plotted in Figure \[fig/UPA/phasediagram\] (dashed line). ![ Phase diagram for the <span style="font-variant:small-caps;">UPA</span>: the no-coexistence phase and the coexistence phase are separated by the critical curve from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed line), and the <span style="font-variant:small-caps;">UPA</span> (thick solid line). Within the coexistence phase, at very low demographic rates $\eswnDemographicRate$, the <span style="font-variant:small-caps;">UPA</span> predicts an oscillatory phase as shown in the inset. []{data-label="fig/UPA/phasediagram"}](pamds-PhaseDiagram-UPA){width="1.0\linewidth"} We anticipate that the results of the <span style="font-variant:small-caps;">MFA</span> will be accurate when the demographic process (\[ESWN/scheme/demographic\]) dominates over the infectious one, since in this regime pairs are continually broken and thus the behaviour of each individual is essentially independent of that of the others. The infection process, governed by *Susceptible*-*Infective* contact pairs, dominates in the opposite regime ($\eswnDemographicRate\ll\eswnRecoveryRate$), which is the relevant one in the epidemiological context.
The appropriate mean field theory is then the uncorrelated pair approximation (<span style="font-variant:small-caps;">UPA</span>). The <span style="font-variant:small-caps;">UPA</span> is for pairs what the <span style="font-variant:small-caps;">MFA</span> is for singletons. In the <span style="font-variant:small-caps;">UPA</span> triplets ($3$-tuples) are assumed to be formed by uncorrelated pairs: $$\label{ESWN/UCP/approximation} \sum_{\eswnNeighbourIndex}{\prob{{AS}{I_{\eswnNeighbourIndex}}}}\approx (\eswnDegree-1)\: \frac{\prob{SA}\prob{SI}}{\prob{S}} .$$ The <span style="font-variant:small-caps;">UPA</span> is expected to outperform the <span style="font-variant:small-caps;">MFA</span> but, in general, its solution is not known in closed form. For the demographic <span style="font-variant:small-caps;">SIR</span> model the calculation of the phase diagram and the stability analysis are still tractable by symbolic computation. For lattices with coordination number $\eswnDegree=4$, the phase diagram is plotted in Figure \[fig/UPA/phasediagram\]. It is clear that the <span style="font-variant:small-caps;">UPA</span> is quantitatively superior to the <span style="font-variant:small-caps;">MFA</span> when compared with the results of simulations (open circles). Both the <span style="font-variant:small-caps;">MF</span> and the <span style="font-variant:small-caps;">UP</span> approximations of the demographic <span style="font-variant:small-caps;">SIR</span> model predict a finite critical infection rate at $\eswnDemographicRate=0$, while the simulations indicate that $\eswnCriticalInfectionRate$ diverges as $\eswnDemographicRate$ tends to $0$. However, the <span style="font-variant:small-caps;">SIRS</span> and the demographic <span style="font-variant:small-caps;">SIR</span> models are different at low (but finite) demographic rates $\eswnDemographicRate$.
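The stochastic simulations referred to above can be sketched as a discrete-time approximation of process (\[ESWN/scheme\]) on a ring with next nearest neighbour infection ($\eswnDegree=4$). This is an illustrative scheme only; the actual simulations behind the figures may use a different (e.g. continuous-time) update rule.

```python
import numpy as np

S, I, R = 0, 1, 2

def simulate_ring(n=400, lam=2.0, gamma=0.1, sigma=1.0, dt=0.05,
                  t_max=20.0, seed=1):
    """Discrete-time approximation of the demographic SIR process on a ring
    with nearest and next nearest neighbour infection (coordination 4)."""
    rng = np.random.default_rng(seed)
    state = np.full(n, S)
    state[rng.choice(n, 5, replace=False)] = I
    for _ in range(int(t_max / dt)):
        inf = (state == I).astype(int)
        # number of infected neighbours among the 4 surrounding sites
        m = (np.roll(inf, 1) + np.roll(inf, -1)
             + np.roll(inf, 2) + np.roll(inf, -2))
        u = rng.random(n)
        new = state.copy()
        new[(state == S) & (u < 1 - np.exp(-lam * m * dt))] = I
        new[(state == I) & (u < 1 - np.exp(-sigma * dt))] = R
        # demography: any site is replaced by a susceptible at rate gamma
        new[rng.random(n) < 1 - np.exp(-gamma * dt)] = S
        state = new
    return state

state = simulate_ring()
counts = np.bincount(state, minlength=3)   # (S, I, R) occupation numbers
```

Averaging such runs over many realizations, and scanning $\eswnInfectionRate$ and $\eswnDemographicRate$, produces the open circles of the figures.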
In the demographic <span style="font-variant:small-caps;">SIR</span> model the mechanism for the renewal of susceptibles is totally random, by contrast with the mechanism of the <span style="font-variant:small-caps;">SIRS</span> model. In our model susceptibles are born anywhere on the lattice, while in the <span style="font-variant:small-caps;">SIRS</span> model only previously infected sites lose immunity. We note that the randomizing effect of the demographic <span style="font-variant:small-caps;">SIR</span> mechanism for the renewal of susceptibles is reminiscent of the randomizing effect of shortcuts in small-world networks of the Watts and Strogatz type [@WattsStrogatz1998; @Hastings2003] where correlations are destroyed and an effective mixing of the population is achieved, with (drastic) consequences on the phase diagram. Finally, it is worth noticing that the <span style="font-variant:small-caps;">UPA</span> also predicts the existence of an oscillatory phase within the survival or coexistence phase (*i.e.*, to the right of $\eswnCriticalInfectionRate(0)$), for small values of $\eswnDemographicRate$ (Figure \[fig/UPA/phasediagram\]). The same is true for the <span style="font-variant:small-caps;">UPA</span> of process (\[ESWN/scheme\]) on the square lattice. This behaviour will be difficult to identify in stochastic simulations, since it may be blurred by large fluctuations and stochastic extinctions. ![ Isoparametric phase diagrams for the correlated model: the no-coexistence and coexistence phases are separated by the critical curve from simulations (open circles), the <span style="font-variant:small-caps;">MFA</span> (long dashed line), the <span style="font-variant:small-caps;">UPA</span> (dashed dotted line) and the correlated model for different $\eswnClosedFormParameter$ (solid lines).
For $\eswnRemarkableClosedFormParameter\!\approx\!{0.3807}$ (bold solid line) the critical infection rate $\eswnCriticalInfectionRate$ tends asymptotically to infinity when the demographic rate $\eswnDemographicRate$ vanishes. Closed form parameters $\eswnClosedFormParameter$ from left to right: $\tfrac{1}{4}$, $\eswnClosedFormParameter^{*}$, $\tfrac{1}{2}$, $\tfrac{5}{8}$, $\tfrac{3}{4}$. []{data-label="fig/CLM/phasediagram/isoparametric"}](pamds-PhaseDiagram-CPA){width="1.0\linewidth"} Correlated pair approximations {#sec/CPA} ============================== In order to construct more realistic pair approximations, we have investigated closure procedures inspired by the geometrical structure of the lattice. Within this perspective and as far as social triplets are concerned, the ring of degree $\eswnDegree=4$ and the triangular lattice are propitious networks since their nearest-neighbour triplets split into two distinct classes: ‘chain-like’ (open) and ‘loop-like’ (closed) triplets. A very naive idea is to take into account the two classes of triplets and to use the probabilities $\eswnClosedFormParameter$ and $1-\eswnClosedFormParameter$ of finding, respectively, a ‘loop-like’ and a ‘chain-like’ triplet, with $\eswnClosedFormParameter$ a parameter to be fitted to simulation results. Thus triplets are assumed to be formed either of uncorrelated (chained) pairs or of correlated (looped) pairs [@VanBaalen2000]: $$\label{ESWN/CPL/approximation} \begin{split} &\qquad \sum_{\eswnNeighbourIndex}{\prob{{AS}{I_{\eswnNeighbourIndex}}}}\approx \\ & \begin{cases} (\eswnDegree-1)\, \Bigl[ \bigl(1-\eswnClosedFormParameter\bigr)\, \tfrac{\prob{SA}\prob{SI}}{\prob{S}} + \, \eswnClosedFormParameter\: \tfrac{\prob{AI}\prob{SA}\prob{SI}}{\prob{A}\prob{S}\prob{I}} \Bigr] &\\ \qquad\text{if ${A}\in\{S,R\}$} , &\\ (\eswnDegree-1)\, \prob{SI} - \sum_{\eswnNeighbourIndex}{ \bigl[ \prob{{SS}{I_{\eswnNeighbourIndex}}} \!+\! \prob{{RS}{I_{\eswnNeighbourIndex}}} \bigr] } &\\ \qquad\text{if ${A}={I}$} .
& \end{cases} \end{split}$$ The demographic <span style="font-variant:small-caps;">SIR</span> version of the <span style="font-variant:small-caps;">CPA</span> is amenable to cumbersome numerical-symbolic computation, although some interesting results may be obtained by symbolic computation alone. The phase diagrams are shown in Figure \[fig/CLM/phasediagram/isoparametric\]. We find that, as $\eswnClosedFormParameter$ increases from $0$ to $\eswnRemarkableClosedFormParameter\!\approx\!{0.3807}$[^2], keeping $\eswnDegree$ fixed, the <span style="font-variant:small-caps;">CPA</span> phase diagrams interpolate between the <span style="font-variant:small-caps;">UPA</span> behaviour and typical one-dimensional phase diagrams with $\eswnCriticalInfectionRate(0)=\infty$. At $\eswnRemarkableClosedFormParameter$ the critical infection rate $\eswnCriticalInfectionRate$ tends asymptotically to infinity as the demographic rate $\eswnDemographicRate$ vanishes. Inspection of Figure \[fig/CLM/phasediagram/isoparametric\] also shows that the closed form parameter $\eswnClosedFormParameter$ cannot be constant if a quantitative description of the global phase diagram is required. If we allow $\eswnClosedFormParameter$ to depend on $\eswnDemographicRate$, reasonable descriptions of the endemic equilibria (Figure \[fig/X/endemic\]) and of the global phase diagram (Figure \[fig/CLM/phasediagram/isoparametric\]) are obtained. For the <span style="font-variant:small-caps;">SIRS</span> model on the square lattice, a <span style="font-variant:small-caps;">CPA</span> obtained by fitting $\eswnClosedFormParameter$ to $\eswnCriticalInfectionRate(0)$ will improve the results of the <span style="font-variant:small-caps;">UPA</span> used in [@JooLebowitz2004] to describe the behaviour of the system at low values of $\eswnDemographicRate$.
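The threshold value $\eswnRemarkableClosedFormParameter\!\approx\!{0.3807}$ quoted above is, per the footnote, the real root of the cubic $27\eswnClosedFormParameter^{3}-18\eswnClosedFormParameter^{2}+87\eswnClosedFormParameter-32=0$; a quick bisection reproduces it:

```python
def f(a):
    return 27 * a**3 - 18 * a**2 + 87 * a - 32

# Bisection on [0, 1]: f(0) = -32 < 0 and f(1) = 64 > 0 bracket the root.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)   # ~0.3807
```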
Discussion {#sec/discussion} ========== We have proposed a simple <span style="font-variant:small-caps;">CPA</span> that was shown to provide a reasonable approximation to the behaviour of stochastic models that are relevant in epidemiology: with a suitable choice of the parameters, the agreement with simulation data is far better than that of the <span style="font-variant:small-caps;">MFA</span> and the <span style="font-variant:small-caps;">UPA</span>. The resulting equations of evolution may be used to approximate phase diagrams, as well as steady state and dynamical behaviours of the associated stochastic models. The <span style="font-variant:small-caps;">CPA</span> takes into account some of the effects of the local lattice structure and yields a clear alternative to heavy stochastic simulations. One of the directions of future work includes the development of <span style="font-variant:small-caps;">CPA</span>s, along the lines of the present work, to account for the local (lattice-like) structure of a class of complex networks, such as the Watts and Strogatz small-world networks, that have been shown to be relevant in epidemiological contexts [@Verdasca2005]. Financial support from the Foundation of the University of Lisbon, under contract BPD-CI/01/04, and from the Portuguese Foundation for Science and Technology (FCT), under contracts POCTI/ESP/44511/2002 and POCTI/ISFL/2/618, is gratefully acknowledged. [^1]: [^2]: The real solution of the cubic equation $27\eswnClosedFormParameter^{3}-18\eswnClosedFormParameter^{2}+87\eswnClosedFormParameter-32=0$.
--- abstract: 'In this work, on the basis of Kadomtsev’s kinetic fluctuation theory, we present a more general expression for the noise-noise correlation function in the effective theory for ultrasoft field modes.' author: - 'Yu.A. Markov' - 'M.A. Markova' title: | Problem of the noise-noise correlation\ function in hot non-Abelian plasma --- Dynamical processes occurring in systems described within the framework of the Standard Model at finite temperature (possibly with the minimal supersymmetric extension) play an essential role in the physics of the early Universe and of heavy ion collisions. In the weak coupling regime hot non-Abelian gauge theories possess several energy scales: the hard scale, corresponding to momenta of order of the temperature $T$; the soft scale $\sim gT$ ($g$ is the gauge coupling); and the ultrasoft scale $\sim g^2 T$. As long as we are interested in collective excitations with wavelength $\sim 1/gT$, we can ignore, at leading order in $g$, collisions among the plasma particles [@blaizot1]. However, collisions become a dominant effect for color excitations with wavelength $\sim 1/g^2 T$. As is known [@linde], the color fluctuations characterized by the momentum scale $g^2T$ are non-perturbative. Their dynamics is of particular interest, because it is responsible for the large rate of baryon number violation in hot electroweak theory due to topology changing transitions of the weak $SU(2)$ gauge fields [@rubakov]. This rate is determined by a certain different-time correlation function of the product of two operators, which in turn are gauge-invariant nonlinear functions of the ultrasoft gauge fields $A^{a}_{\mu}(X)$. At present the only known tools to evaluate real-time dependent quantities are the classical field approximation [@grigoriev] and a possible extension [@mclerran] which contains additional degrees of freedom representing the hard field modes.
For time-dependent correlation functions it is important to find an effective theory for the ultrasoft field modes. The effective theory at the ultrasoft momentum scale ($\omega\sim g^4T,\;|{\bf p}|\sim g^2T$) is generated by a Boltzmann-Langevin equation which includes a collision term for color relaxation and a Gaussian noise term, which keeps the ultrasoft modes in thermal equilibrium. The Boltzmann-Langevin equation has been obtained by different approaches. The first is connected with Bödeker’s effective theory for $|{\bf p}|\ll gT$ field modes [@bodeker1; @bodeker2]. Starting from the collisionless non-Abelian Vlasov equation, which is the result of integrating out the scale $T$ [@blaizot1], Bödeker has shown how one can integrate out the scale $g T$ in an expansion in the coupling $g$. At leading order in $g$, he obtained the linearized Vlasov-Boltzmann equation for the hard field modes, which besides a collision term also includes a Gaussian noise arising from thermal fluctuations of the initial conditions of the soft fields. Afterwards, an alternative derivation of the Boltzmann-Langevin equation was proposed by Litim and Manuel [@litim1; @litim2]. The authors used classical transport theory in the spirit of Heinz [@heinz]. The approach of Litim and Manuel [@litim1; @litim2] provides not only the correct collision term but also the correct noise-noise correlator. This correlator was obtained, similarly to Bödeker [@bodeker1], directly from the microscopic theory without making use of the fluctuation-dissipation theorem. A somewhat different approach, of a more phenomenological character, to the computation of the correlator of the stochastic source was presented by the same authors in [@litim3], where the well known link between a linearized collision integral and the entropy was exploited. Blaizot and Iancu [@blaizot2] presented a detailed derivation of the Vlasov-Boltzmann equation, starting from the Kadanoff-Baym equations.
The derivation is based on the method of gauge covariant gradient expansion first proposed by them for the collective dynamics at the scale $g T$ [@blaizot1]. In work [@blaizot3] Blaizot and Iancu derived the statistics of the noise term in the Boltzmann-Langevin equation by using the fluctuation-dissipation theorem together with the known structure of the collision term in the Boltzmann equation. The purpose of this paper is to show that the Boltzmann-Langevin equation in the form obtained by Blaizot and Iancu [@blaizot2] is somewhat more general than the equation obtained by Bödeker [@bodeker1; @bodeker2]. We use the metric $g^{\mu \nu} = diag(1,-1,-1,-1)$, choose units such that $c=k_{B}=1$, and denote $X=(X_0,{\bf X}).$ On a space-time scale $X\gg (gT)^{-1}$ the ultrasoft colored fluctuation of the gluon density in the adjoint representation $\delta N({\bf k},X)=\delta N^a({\bf k},X)T^a$ $((T^a)^{bc}\equiv -if^{abc})$ satisfies the linearized Boltzmann-Langevin equation[^1] $$[v \cdot D_X, \delta N({\bf k}, X)] + g{\bf v}\cdot {\bf E}(X) \,\frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}} \label{eq:q}$$ $$= \hat{\rm C}_{\bf k}\delta N({\bf k},X) + y({\bf k},X).$$ Here, $v=(1,{\bf v}),\;{\bf v}={\bf k}/\vert{\bf k}\vert$; $D_{\mu} = \partial_{\mu} + igA_{\mu}(X)$; $[\,,\,]$ denotes a commutator; ${\bf k}$ is a momentum of hard thermal gluons; ${\bf E} (X) = {\bf E}^a(X) T^a$ is a chromoelectric field; $N(\epsilon_{\bf k}) = 1/(\exp(\epsilon_{\bf k}/T) - 1)$ is a boson occupation factor, where $\epsilon_{\bf k}\equiv\vert{\bf k}\vert$.
The collision operator $\hat{\rm C}_{\bf k}$ acts on the function on the right according to [@blaizot2] $$\hat{\rm C}_{\bf k}f({\bf k})\equiv g^4N_cT\!\int\!\frac{d {\bf k}^{\prime}}{(2\pi)^3}\, \Phi ({\bf v}\cdot{\bf v}^{\prime}) \label{eq:w}$$ $$\times \biggl\{\frac{dN(\epsilon_{{\bf k}^{\prime}})} {d\epsilon_{{\bf k}^{\prime}}} \,[T^a,[T^a,f({\bf k})]]- \frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}}\,T^a\,{\rm Tr}\, (T^af({\bf k}^{\prime}))\biggr\},$$ where the collision kernel $\Phi ({\bf v}\cdot{\bf v}^{\prime})$ reads $$\Phi ({\bf v}\cdot{\bf v}^{\prime})\!\simeq\! \frac{2}{\pi^2m_D^2} \frac{({\bf v}\cdot{\bf v}^{\prime})^2} {\sqrt{1- ({\bf v}\cdot{\bf v}^{\prime})^2}}\ln\left(\frac{1}{g}\right)\!, \,m_D^2= \frac{1}{3}\,g^2N_cT^2$$ within logarithmic accuracy. The function $y({\bf k},X)=y^a({\bf k},X)T^a$ on the right-hand side of Eq.(\[eq:q\]) is a noise term. This term injects energy, compensating for the energy loss at the scale $g^2T$ due to the damping term. We now write out a general expression for the correlation function of the noise term $y({\bf k},X)$ in the form proposed by Kadomtsev [@kadom], with a minimal extension to the color degrees of freedom $$\ll\!y^a({\bf k}, X) T^a\!\otimes y^b({\bf k}^{\prime},X^{\prime})T^b\!\gg\,$$ $$=-\frac{1}{2N_c}\, \Bigl(\hat{\rm C}_{\bf k}\otimes \hat{\rm I} + \hat{\rm I}\otimes\hat{\rm C}_{{\bf k}^{\prime}}\Bigr)T^a\!\otimes T^a \label{eq:e}$$ $$\times\,(2\pi)^3\delta^{(3)}\!({\bf k}-{\bf k}^{\prime}) N(\epsilon_{{\bf k}^{\prime}})[1+N(\epsilon_{{\bf k}^{\prime}})] \,\delta^{(4)}\!(X-X^{\prime}).$$ Here, the symbol $\otimes$ denotes a direct product in color space, and $\hat{\rm I}$ is an identity operator. We note that in the original work of Kadomtsev [@kadom] the factor $N(\epsilon_{{\bf k}^{\prime}})$ appears in the noise-noise correlation function instead of the factor $N(\epsilon_{{\bf k}^{\prime}}) [1+N(\epsilon_{{\bf k}^{\prime}})]= -T(dN(\epsilon_{{\bf k}^{\prime}})/d\epsilon_{{\bf k}^{\prime}})$.
In [@kadom] a pure classical gas with Maxwell-Boltzmann statistics was considered, while we consider a hot quantum plasma of gluons (in the semiclassical limit), which obey Bose-Einstein statistics. Using the definition of the collision term (\[eq:w\]) and decomposing the momentum $\delta$-function in polar coordinates $$\delta^{(3)}\!({\bf k}-{\bf k}^{\prime})= \frac{1}{4\pi{\bf k}^2}\;\delta(|{\bf k}|-|{\bf k}^{\prime}|)\, \delta^{(S^2)}\!({\bf v}-{\bf v}^{\prime}),$$ where $\delta^{(S^2)}\!({\bf v}-{\bf v}^{\prime})$ is a delta-function on the unit sphere, from (\[eq:e\]) we obtain the following expression for the noise-noise correlation function $$\ll\!y^a({\bf k}, X) y^b({\bf k}^{\prime},X^{\prime})\!\gg = -\,(2\pi)^3 \delta^{ab}\frac{T}{N_c}\, \Biggl\{\gamma\,\frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}}\, \frac{1}{4\pi{\bf k}^2}\;\delta(|{\bf k}|-|{\bf k}^{\prime}|)\, \delta^{(S^2)}\!({\bf v}-{\bf v}^{\prime}) +\, \frac{g^4N_c^2T}{(2\pi)^3}\; \frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}}\, \frac{dN(\epsilon_{{\bf k}^{\prime}})}{d\epsilon_{{\bf k}^{\prime}}}\; \Phi ({\bf v}\cdot{\bf v}^{\prime})\Biggr\}\, \times\,\delta^{(4)}\!(X-X^{\prime}). \label{eq:r}$$ Here, $$\gamma\!=\!m_D^2\,\frac{g^2N_cT}{2}\!\! \int\!\!\frac{d\Omega_{{\bf v}^{\prime}}}{4\pi}\, \Phi ({\bf v}\cdot{\bf v}^{\prime}) \!\simeq\! \frac{g^2N_cT}{2}\,\biggl(\!\ln\Bigl(\frac{m_D}{\mu}\Bigr)+O(1)\!\biggr)$$ is the damping rate for a hard transverse gluon with velocity ${\bf v}$, and $\mu$ is the magnetic screening “mass”, usually introduced by hand to remove the infrared divergence. Equation (4) is the main result of our report. From Eq.(4) we see that the noise term $y({\bf k},X)$ depends on both the velocity ${\bf v}$ (unit vector) and the magnitude $|{\bf k}|$ of the momentum in a nontrivial way, and thus generates (by virtue of the Boltzmann-Langevin equation (\[eq:q\])) a similar dependence for the fluctuation $\delta N({\bf k},X)$.
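The angular average of the collision kernel entering $\gamma$ can be checked numerically: with $x={\bf v}\cdot{\bf v}^{\prime}$ one has $\int_{-1}^{1}x^2/\sqrt{1-x^2}\,dx=\pi/2$, so that, taking the kernel exactly as written, $\gamma$ reduces to $g^2N_cT\ln(1/g)/(4\pi)$. This is an illustrative consistency check only; at logarithmic accuracy the constant under the logarithm is not fixed.

```python
import math

# Angular average of Phi: (1/4pi) \int dOmega' Phi(v.v') reduces, with
# x = v.v', to (2 ln(1/g)/(pi^2 m_D^2)) * (1/2) \int_{-1}^{1} x^2/sqrt(1-x^2) dx.
# The substitution x = sin(t) removes the endpoint singularity:
# x^2/sqrt(1-x^2) dx = sin^2(t) dt on t in (-pi/2, pi/2).
n = 200000
total = 0.0
for i in range(n):
    t = -math.pi / 2 + (i + 0.5) * math.pi / n   # midpoint rule
    total += math.sin(t) ** 2 * (math.pi / n)
# total ~ pi/2

g, Nc, T = 0.1, 3, 1.0
mD2 = g**2 * Nc * T**2 / 3.0                      # Debye mass squared
avg_Phi = (2.0 * math.log(1.0 / g) / (math.pi**2 * mD2)) * (total / 2.0)
gamma_ll = mD2 * (g**2 * Nc * T / 2.0) * avg_Phi  # = g^2 Nc T ln(1/g)/(4 pi)
```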
Note that Litim and Manuel in [@litim3] (Eqs.(23) and (32)) pointed to a possible nontrivial dependence of the noise-noise correlator on the magnitude $|\textbf{k}|$ of the momentum. However, our expression (\[eq:r\]) is more complicated since, contrary to [@litim3], it contains different factors (depending on $|\bf{k}|$ and $|\bf{k^{\prime}}|$) in front of the functions $\delta^{(S^2)}\!({\bf v}-{\bf v}^{\prime})$ and $\Phi ({\bf v}\cdot{\bf v}^{\prime})$. For the calculation of the color current $$j_{\mu}(X) = 2gN_c\!\int\!\!\frac{d{\bf k}}{(2 \pi)^3} \,v_{\mu}\delta N({\bf k},X)$$ we need only the second moment with respect to the magnitude $|{\bf k}|$ of the ultrasoft fluctuation $\delta N({\bf k},X)$. For this and similar physical problems, where the second moment is the only relevant quantity, it is convenient to introduce new functions $W(X,{\bf v})$ and $\nu(X,{\bf v})$, depending on the velocity ${\bf v}$ only, instead of the initial functions $\delta N({\bf k},X)$ and $y({\bf k},X)$, by the rules $$\int\limits_0^{\infty}\!{\bf k}^2d|{\bf k}|\,\delta N({\bf k},X) =-g\,W(X,{\bf v})\! \int\limits_0^{\infty}\!{\bf k}^2d|{\bf k}| \,\frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}}, \label{eq:t}$$ $$\int\limits_0^{\infty}\!{\bf k}^2d|{\bf k}|\,y({\bf k},X) = -g\,\nu(X,{\bf v})\! \int\limits_0^{\infty}\!{\bf k}^2d|{\bf k}|\, \frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}}. \hspace{0.5cm} \label{eq:y}$$ Here, the first relation is an extension of the commonly used parametrization of the off-equilibrium fluctuation [@baym; @blaizot2] $$\delta N({\bf k},X) = -g\,W(X,{\bf v})\, \frac{dN(\epsilon_{\bf k})}{d\epsilon_{\bf k}},$$ which is valid in the absence of the noise term $y({\bf k},X)$ or when this term depends on the velocity ${\bf v}$ only. The general connection, Eq.(\[eq:t\]), between the functions $\delta N({\bf k},X)$ and $W(X,{\bf v})$ involves an integral over ${\bf k}$, which reflects the corresponding integral in the relation (\[eq:y\]) between the noise term $y({\bf k},X)$ and $\nu(X,{\bf v})$.
Multiplying Eq.(\[eq:q\]) and the correlation function (\[eq:r\]) by ${\bf k}^2$ and ${\bf k}^2 {{\bf k}^{\prime}}^2$, respectively, and integrating over $d|{\bf k}|$ and $d|{\bf k}| d|{\bf k}^{\prime}|$ (taking into account (\[eq:t\]) and (\[eq:y\])), instead of Eqs.(\[eq:q\])–(\[eq:e\]) we recover the equations for the functions $W(X,{\bf v})$ and $\nu(X,{\bf v})$ first proposed by Bödeker [@bodeker1; @bodeker2]. Let us stress that such a reduction of the initial system of Eqs.(\[eq:q\])–(\[eq:e\]) to the simpler system for the functions $W(X,{\bf v})$ and $\nu(X,{\bf v})$ does not lead to any loss of information, as long as we restrict our consideration to the second moment with respect to $|{\bf k}|$ of the ultrasoft gluon fluctuations $\delta N({\bf k},X)$. But for the calculation of more general quantities, which also involve the other moments of the ultrasoft fluctuations (e.g., the correlation function of energy-density fluctuations), there is no such one-to-one correspondence between these systems, by virtue of the nontrivial dependence of the noise-noise correlator (\[eq:r\]) on the magnitudes $|{\bf k}|$ and $|{\bf k}^{\prime}|$. In this case it is necessary to use the more complete system of equations (\[eq:q\]), (\[eq:w\]) and (\[eq:r\]). **Acknowledgments** {#acknowledgments .unnumbered} =================== This work was supported by the Russian Foundation for Basic Research (project no. 03-02-16797). J.-P. Blaizot and E. Iancu, Nucl. Phys. [**B417**]{}, 608 (1994). A.D. Linde, Phys. Lett. [**B96**]{}, 289 (1980); D.J. Gross, R.D. Pisarski, and L.G. Yaffe, Rev. Mod. Phys. [**53**]{}, 43 (1981). For a review, see V.A. Rubakov, M.E. Shaposhnikov, Usp. Fiz. Nauk [**166**]{}, 493 (1996); M. Trodden, Rev. Mod. Phys. [**71**]{}, 1463 (1999). D.Yu. Grigoriev and V.A. Rubakov, Nucl. Phys. [**B299**]{}, 67 (1988). D. Bödeker, L. McLerran, and A.V. Smilga, Phys. Rev. D [**52**]{}, 4675 (1995). D. Bödeker, Phys. Lett. [**B426**]{}, 351 (1998); Nucl. Phys. [**B559**]{}, 502 (1999). D. Bödeker, Nucl. Phys. [**B566**]{}, 402 (2000).
D.F. Litim and C. Manuel, Phys. Rev. Lett. [**82**]{}, 4981 (1999); Nucl. Phys. [**B562**]{}, 237 (1999). D.F. Litim and C. Manuel, Phys. Rept. [**364**]{}, 451 (2002). U. Heinz, Ann. Phys. [**161**]{}, 48 (1985); [*ibid.*]{} [**168**]{}, 148 (1986). D.F. Litim and C. Manuel, Phys. Rev. D [**61**]{}, 125004 (2000). J.-P. Blaizot and E. Iancu, Nucl. Phys. [**B557**]{}, 183 (1999). J.-P. Blaizot and E. Iancu, Phys. Rep. [**359**]{}, 355 (2002). B.B. Kadomtsev, Zh. Eksp. Theor. Fiz. [**32**]{}, 943 (1957). G. Baym, H. Monien, C.J. Pethick, and D.G. Ravenhall, Phys. Rev. Lett. [**64**]{}, 1867 (1990). [^1]: This equation is taken in the form suggested by Blaizot and Iancu in Ref.[@blaizot2].
--- abstract: 'This paper aims at investigating the achievable performance and the issues that arise in ultra-dense networks (UDNs), when the signal propagation includes both the Line-of-Sight (LOS) and Non-Line-Of-Sight (NLOS) components. Backed by an analytical stochastic geometry-based model, we study the coverage, the Area Spectral Efficiency (ASE) and the energy efficiency of UDNs with LOS/NLOS propagation. We show that when the LOS/NLOS propagation components are accounted for, the network suffers from low coverage and the ASE gain is lower than linear at high base station densities. However, this performance drop can partially be attenuated by means of frequency reuse, which is shown to improve the ASE vs coverage trade-off of cell densification, provided that we have a degree of freedom on the density of cells. In addition, from an energy efficiency standpoint, cell densification is shown to be inefficient when both LOS and NLOS components are taken into account. Overall, based on the findings of our work, which assumes a more advanced system model compared to the current state of the art, we claim that highly crowded environments of users represent the worst-case scenario for ultra-dense networks. Namely, these are likely to face serious issues in terms of limited coverage.' title: 'Effect of LOS/NLOS Propagation on Ultra-Dense Networks' --- Ultra-dense, LOS/NLOS, Area Spectral Efficiency, partially loaded, energy efficiency, coverage. Introduction {#sect:intro} ============ There is a common and widely shared vision that next generation wireless networks will witness the proliferation of small-cells. As a matter of fact, researchers foresee network densification as one of the key enablers of the 5th generation (5G) wireless networks [@Hwang2013; @Bhushan2014].
Although it refers to a concept rather than being a precise definition, the term *ultra-dense networks* is used to describe networks characterized by a massive and dense deployment of small-cells, in which the number of base stations may grow to the point where it exceeds the number of user devices [@Park2014]. As wireless networks evolve, performance requirements for the new technology are becoming more and more stringent. In fact, the 5G requirements call for a data rate increase of up to 1000-fold with respect to current 4G systems [@Bhushan2014], as well as for high energy efficiency [@Ericsson2015], in order to limit the energy expenditure of network operators. Supported by recent results [@Bhushan2014; @Andrews2011], cell densification has been put forward as the main enabler to achieve these target data rates. For example, the authors in [@Andrews2011] have shown that the throughput gain is expected to grow linearly with the density of base stations per area; this is a result of the simplified system model used in that investigation, namely the assumptions of a single-slope path-loss model and of all base stations in the network being active. Nonetheless, further work on cell densification has shown that, under less ideal assumptions, network performance may differ from what is predicted in [@Andrews2011]. In particular, when path-loss models other than the single slope are used, the actual performance of cell densification is less optimistic than what is estimated with a single-slope path-loss [@Zhang2014; @Bai2014; @Galiotto2015; @Ding2015]. In addition, if the base station density increases beyond the user density, as in a typical ultra-dense scenario, the network has been shown to experience a coverage improvement at the expense of a limited throughput gain [@ChangLi2014; @Park2014]. This implies that a larger number of BSs will need to be deployed to meet a given data rate target, translating into higher network infrastructure costs.
Related Work {#sub:related_work} ------------ In recent years, stochastic geometry has gradually been accepted as a mathematical tool for the performance assessment of wireless networks. In fact, one of the most important contributions to the study of cell densification can be found in [@Andrews2011], where the authors proposed a stochastic geometry-based framework to model single-tier cellular wireless networks; by assuming a single-slope path loss model, the authors observed the independence of the Signal-to-Interference-plus-Noise-Ratio (SINR) and Spectral Efficiency (SE) from the BS deployment density, the main consequence being the linear dependence of the ASE on the cell density. Nevertheless, when the assumption of a single-slope path-loss is dropped, it emerges that the SINR, ASE and coverage exhibit a different behaviour from that found in [@Andrews2011]. The authors in [@Zhang2014; @Bai2014a] considered propagation models for millimeter waves. In [@Zhang2014], the authors extended the stochastic geometry framework proposed in [@Andrews2011] to a multi-slope path loss model. The authors in [@Bai2014a] developed a stochastic geometry framework for path-loss including Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) propagation for millimeter-waves. The effect of NLOS propagation on the outage probability has been studied in [@Bai2014], where the authors propose a function that gives the probability of having LOS at a given point depending on the distance from the source, on the average size of the buildings and on the density of buildings per area. A stochastic geometry-based framework to study the performance of the network with combined LOS/NLOS propagation for micro-waves can be found in some previous work of ours [@Galiotto2015] and in [@Ding2015].
The common picture that emerges from the work in [@Zhang2014; @Bai2014; @Galiotto2015; @Ding2015] is that of a non-linear behavior of the ASE with the cell density; overall, as a consequence of a propagation model different from the single-slope path-loss, the spectral efficiency and coverage of the network do actually depend on the base station density. However, all these studies are based on the assumption of fully loaded networks, i.e., all the base stations are active and have at least one user to serve; thus, the applicability of the work above is limited to networks in which the density of base stations is lower than that of the users. In our paper, we broaden the study of network densification to the case where there is no such constraint on the BS density, i.e., we also tackle partially loaded networks, in which some base stations might be inactive. Work on stochastic geometry for partially loaded networks has been advanced in [@Park2014; @SeunghyunLee2012; @Dhillon2013; @ChangLi2014]. The authors in [@SeunghyunLee2012] studied the coverage in single-tier networks, while multi-tier networks are addressed in [@Dhillon2013]. An analysis of the area spectral efficiency of partially loaded networks has been carried out in [@Park2014], while in [@ChangLi2014] the authors have extended the stochastic geometry-based model further to include multi-antenna transmission, and have also assessed the energy efficiency. Overall, the authors in [@SeunghyunLee2012; @Dhillon2013; @ChangLi2014] have shown that the network coverage improves as the base station density increases beyond the user density; this comes at the price of a lower throughput gain, which grows as a logarithmic function of the cell density. Nonetheless, the authors in [@SeunghyunLee2012; @Dhillon2013; @ChangLi2014] modeled the propagation according to a single-slope path-loss model and did not investigate the effect of LOS/NLOS propagation in partially loaded networks.
To the best of our knowledge, there is currently no work that addresses ultra-dense scenarios with path-loss models other than the single slope for both fully and partially loaded networks. As a result of the combined effect of the path-loss model and of the partial load in ultra-dense networks, the behavior of the Area Spectral Efficiency (ASE), coverage, and energy efficiency as networks become denser has so far been unknown. Our Contribution {#sub:our_contribution} ---------------- This paper seeks to investigate the cell densification process in ultra-dense networks and to evaluate the effect of LOS/NLOS propagation on performance metrics such as coverage, spectral efficiency, area spectral efficiency, and energy efficiency. Specifically, we use *ultra-dense networks* as a term to refer to those networks characterized by a very high density of base stations, covering both the case of *fully loaded networks* (i.e., networks in which all the base stations are active) and that of *partially loaded networks* (i.e., networks in which some base stations might be inactive and not transmit to any user). Overall, the major contributions of our work can be summarized in the following points: **1) Stochastic geometry-based model for ultra-dense networks with LOS/NLOS propagation:** We propose a model based on stochastic geometry that allows us to study the Signal-to-Interference-plus-Noise-Ratio (SINR) distribution, the spectral efficiency and the area spectral efficiency of ultra-dense networks where the propagation has LOS and NLOS components. We build on previous work [@Andrews2011] and adapt the model proposed by Andrews et al. to the case of LOS/NLOS propagation. In addition, our framework also takes into account the partial load of ultra-dense networks, in which a fraction of the base stations may be inactive.
**2) Study of cell densification, partial load and frequency reuse in networks where the signal follows LOS/NLOS propagation:** First, we investigate the effect of network densification on performance metrics such as the SINR, spectral efficiency and area spectral efficiency in networks where the path-loss follows the LOS/NLOS propagation. In particular, we show that the ASE gain becomes lower than linear at high cell densities, meaning that a larger number of BSs would be necessary to achieve a given target than in the case of a single-slope path loss. Moreover, the network coverage drops drastically as the BS density increases. Then, we show that the performance drop due to LOS/NLOS propagation is mitigated by the use of frequency reuse or if the base station density exceeds the user density, as is likely to occur in ultra-dense networks. To the best of our knowledge, the combined effect of LOS/NLOS propagation and partial load/frequency reuse has not been addressed before. **3) Investigation of the minimum transmit power per BS and of the energy efficiency for networks where the signal propagates according to LOS/NLOS path-loss:** As the cell density increases, the transmit (TX) power per base station can be lowered. First, we evaluate the minimum TX power per BS such that the network is guaranteed to be in the interference-limited regime, in which case the performance is not limited by the TX power. Second, we make use of this TX power to determine the energy efficiency of the network when the propagation has LOS and NLOS components. We show that the energy efficiency with LOS/NLOS propagation drops considerably with respect to the case of a single-slope path-loss, making cell densification costly for the network from an energy standpoint. We further extend the study of the energy efficiency to frequency reuse and partial load. Paper Structure {#sub:paper_structure} --------------- The remainder of this paper is organized as follows.
In Section \[sect:SystemModel\] we describe the system model. We present our formulation for computing the SINR, SE and ASE in Section \[sect:SINR\_SE\_ASE\] and we address the energy efficiency in Section \[sec:energy\_Efficiency\]. In Section \[sect:results\] we present and discuss the results, while the conclusions are drawn in Section \[sect:conclusions\]. System model {#sect:SystemModel} ============ In this paper we consider a network of small-cell base stations deployed according to a homogeneous and isotropic Spatial Poisson Point Process (SPPP), denoted as $\Phi\subset \mathbb{R}^2$, with intensity $\lambda$. Further, we assume that each Base Station (BS) transmits with an isotropic antenna and with the same power, $P_{\mathrm{TX}}$, whose value is left unspecified in order to keep our model general and valid for different base station classes (e.g., micro-BSs, pico-BSs, femto-BSs); we focus our analysis on the downlink. Channel model {#subsect:ChannelModel} ------------- In our analysis, we consider the following path loss model: $$\mathrm{PL}(d)=\begin{cases} K_{\mathrm{L}}d^{-\beta_{\mathrm{L}}} & \text{with probability}\; p_{ \mathrm{L}}(d),\\ K_{\mathrm{NL}}d^{-\beta_{\mathrm{NL}}} & \text{with probability}\:1-p_{ \mathrm{L}}(d), \end{cases}\label{eq:propag_outdoor}$$ where $\beta_{\mathrm{L}}$ and $\beta_{\mathrm{NL}}$ are the path-loss exponents for LOS and NLOS propagation, respectively; $K_{\mathrm{L}}$ and $K_{\mathrm{NL}}$ are the signal attenuations at distance $d=1$ for LOS and NLOS propagation,[^1] respectively; $p_{\mathrm{L}}(d)$ is the probability of having LOS as a function of the distance $d$. The model given in (\[eq:propag\_outdoor\]) is used by the 3GPP to model LOS/NLOS propagation, for example, in scenarios with Heterogeneous Networks [@3GPP36814 Table A.2.1.1.2-3]. The incorporation of the NLOS component in the path loss model accounts for possible obstructions of the signal due to large-scale objects (e.g.
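As a minimal sketch, the two-branch model in (\[eq:propag\_outdoor\]) can be sampled as follows; all numerical parameter values ($L$, the exponents and the attenuation constants) are illustrative placeholders, not values taken from the paper.

```python
import math
import random

def path_loss(d, K_L, beta_L, K_NL, beta_NL, p_L, rng):
    """Two-branch path loss: draw the LOS state with probability p_L(d)
    and apply the corresponding single-slope law K * d**(-beta)."""
    if rng.random() < p_L(d):
        return K_L * d ** (-beta_L)
    return K_NL * d ** (-beta_NL)

# Illustrative parameters (hypothetical, for demonstration only).
p_L = lambda d: math.exp(-(d / 50.0) ** 2)   # Gaussian LOS probability, L = 50 m
rng = random.Random(0)
sample = path_loss(100.0, 1.0, 2.5, 1e-2, 4.0, p_L, rng)
```

Averaged over many draws at a fixed distance, the LOS branch is selected with frequency $p_{\mathrm{L}}(d)$, which is exactly what the thinning argument used later in the analysis formalizes.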
buildings), which results in a higher attenuation of the NLOS propagation compared to the LOS path. We further assume that the propagation is affected by Rayleigh fading, whose power is exponentially distributed, $\sim \exp(\mu)$. Regarding the shadow fading, it has been shown that in networks with a deterministic, either regular or irregular, base station distribution affected by log-normal shadow fading, the statistics of the propagation coefficients converge to those of a network with SPPP distribution as the shadowing variance increases [@Blaszczyszyn2015]. In other words, the SPPP intrinsically models the effect of shadow fading. LOS probability function {#subsect:choosing_p_L} ------------------------ To ensure that our formulation and the outcomes of our study are general and not limited to a specific LOS probability pattern, we consider two different LOS probability functions. The first one, which is proposed by the 3GPP [@3GPP36814 Table A.2.1.1.2-3] to assess the network performance in pico-cell scenarios, is given below: $$\label{eq:3GPP_p_L} p_{\mathrm{L,3G}}(d) = 0.5-\min\big(0.5,5e^{-\frac{d_0}{d}}\big)+\min\big(0.5, 5e^{-\frac{d}{d_1}}\big),$$ with $d_0$ and $d_1$ being two parameters that allow the curve to be matched to measurement data. Unfortunately, this function is not practical for an analytical formulation. Therefore, we choose to approximate it with a more tractable one, namely: $$\label{eq:Our_p_L} p_{\mathrm{L}}(d) = \exp\left( -(d/L)^2\right),$$ where $L$ is a parameter that allows (\[eq:Our\_p\_L\]) to be tuned to match (\[eq:3GPP\_p\_L\]). The second function is also suggested by the 3GPP [@3GPP36814 Table A.2.1.1.2-3] and is given below: $$\label{eq:p_L_exp} p_{\mathrm{L}}(d) = \exp(-d/L).$$ From a physical standpoint, the parameter $L$ can be interpreted as the LOS likelihood of a given propagation environment as a function of the distance.
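The tuning of $L$ in (\[eq:Our\_p\_L\]) against the 3GPP curve (\[eq:3GPP\_p\_L\]) can be done by least squares; the sketch below uses a coarse grid search, with $d_0 = 156$ m and $d_1 = 30$ m as example values of the 3GPP parameters (treat them, and the distance range, as illustrative assumptions).

```python
import math

def p_L_3gpp(d, d0=156.0, d1=30.0):
    """LOS probability of the 3GPP-style form; d0, d1 are example values."""
    return 0.5 - min(0.5, 5 * math.exp(-d0 / d)) + min(0.5, 5 * math.exp(-d / d1))

def p_L_gauss(d, L):
    """Tractable approximation exp(-(d/L)**2)."""
    return math.exp(-(d / L) ** 2)

# Least-squares grid search for L over distances 1..300 m.
dists = range(1, 301)

def sse(L):
    return sum((p_L_gauss(d, L) - p_L_3gpp(d)) ** 2 for d in dists)

best_L = min((l / 10.0 for l in range(400, 1601, 2)), key=sse)
```

With these example parameters the fitted $L$ lands in the several-tens-of-meters range, consistent with its interpretation as the LOS likelihood scale of the environment.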
User distribution, fully and partially loaded networks {#subsect:user_distribution} ------------------------------------------------------- In our model, we always assume that: (i) the users are uniformly distributed; (ii) each user’s position is independent of the other users’ positions; and (iii) each user connects only to one base station, namely the one that provides the strongest signal. We denote by $\lambda_{\mathrm{U}}$ the density of users per area; whenever we consider a finite area $A$, $N_{\mathrm{U}}$ indicates the average number of users in the network. We also assume the users are served with full buffer traffic, i.e., the base station always has data to transmit to its users and makes full use of the available resources. Depending on the ratio between the density of users and the density of base stations, we distinguish between two cases, namely fully loaded and partially (or fractionally) loaded networks. By fully loaded networks we refer to the case where each BS has at least one user to serve. With reference to a real scenario, fully loaded networks model the case where there are many more users than base stations, so that each base station serves a non-empty set of users. However, when the density of users is comparable with or even smaller than that of the base stations, some base stations may not have any users to serve and would then be inactive, meaning that they do not transmit and do not generate interference. When this occurs, we say that the network is partially loaded. The network can be modelled as partially loaded to study those scenarios characterized by a high density of base stations and, in particular, scenarios where the density of base stations exceeds the density of users, as in ultra-dense networks. To define formally the concepts of fully and partially loaded networks, we first need to introduce another concept, namely the *probability of a base station being active*.
\[def:probability\_of\_activity\] The probability of a base station being active, denoted as $p_{\mathrm{A}}$, is the probability that a base station has at least one user to serve. This event implies that the base station is active and transmits to its users. \[def:fully\_partially\_loaded\] The network is said to be fully loaded if $p_A=1$; it is said to be partially or fractionally loaded if $p_A<1$. SINR, spectral efficiency and ASE {#sect:SINR_SE_ASE} ================================= In this section we propose an analytical model to compute the SINR Complementary Cumulative Distribution Function (CCDF), which allows us to assess key performance metrics such as coverage, spectral efficiency and ASE. Procedure to compute the SINR CCDF ---------------------------------- In order to compute the SINR tail distribution (i.e., the complementary CDF), we extend the analytical framework first proposed in [@Andrews2011] so as to include the LOS and NLOS components. By Slivnyak’s Theorem [@Haenggi2013 Theorem 8.10], we take the *typical user* as the focus of our analysis, which for convenience is assumed to be located at the origin. The procedure is composed of two steps: (i) we compute the SINR CCDF for the typical user conditioned on the distance $r$ from the user to the serving base station; (ii) using the PDF $f_{r}(R)$ of the distance to the closest BS, which is the serving BS, we average the SINR CCDF over all possible values of the distance $r$. Let us denote the SINR by $\gamma$; formally, the CCDF of $\gamma$ is computed as: $$\label{eq:SINR_general_def} \mathrm{P}\left[\gamma>y\right]=\mathrm{E_{r}}\big[\mathrm{P} \left[\gamma>y|r\right]\big] = \int _{0}^{+\infty} \mathrm{P}\left[\gamma>y|r=R\right]f_{r}(R)\mathrm{d}R.$$ The key elements of this procedure are the PDF of the distance to the nearest base station, $f_r(R)$, and the tail probability of the SINR conditioned on $r$, $\mathrm{P}\left[\gamma>y|r=R\right]$.
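Once those two ingredients are available as functions, the averaging step in (\[eq:SINR\_general\_def\]) reduces to a one-dimensional numerical integral. The sketch below is generic; the toy ingredients used to exercise it (the nearest-neighbour distance PDF of a homogeneous SPPP and a dummy conditional CCDF) are illustrative stand-ins, not the paper's expressions.

```python
import math

def sinr_ccdf(y, ccdf_given_r, f_r, r_max, n=4000):
    """Average the conditional CCDF P[SINR > y | r = R] over the serving-
    distance PDF f_r via the trapezoidal rule on [0, r_max]."""
    h = r_max / n
    vals = [ccdf_given_r(y, i * h) * f_r(i * h) for i in range(n + 1)]
    return h * (0.5 * vals[0] + 0.5 * vals[-1] + sum(vals[1:-1]))

# Toy ingredients (illustrative only).
lam = 1e-2  # BS density per m^2
f_r = lambda R: 2 * math.pi * lam * R * math.exp(-math.pi * lam * R ** 2)
ccdf_toy = lambda y, R: math.exp(-y * R / 50.0)   # dummy conditional CCDF
coverage = sinr_ccdf(1.0, ccdf_toy, f_r, r_max=60.0)
```

A quick consistency check: with the conditional CCDF set identically to one, the integral must return the total mass of $f_r$, i.e., one.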
The methodology to compute each of these elements while modelling the LOS and NLOS path loss components will be exposed next. SPPPs of base stations in LOS and in NLOS with the user ------------------------------------------------------- The set of base station locations forms an SPPP, which we denote by $\Phi=\{x_n\}$.[^2] As a result of the propagation model we have adopted in our analysis (see Section \[subsect:ChannelModel\]), the user can be either in LOS or in NLOS with any base station $x_n$ of $\Phi$. Now, we perform the following mapping: we first define the set of LOS points, namely $\Phi_{\mathrm{L}}$, and the set of NLOS points, $\Phi_{\mathrm{NL}}$. Then, each point $x_n$ of $\Phi$ is mapped into $\Phi_{\mathrm{L}}$ if the base station at location $x_n$ is in LOS with the user, while it is mapped into $\Phi_{\mathrm{NL}}$ if the base station at location $x_n$ is in NLOS with the user. Since the probability that $x_n$ is in LOS with the user is $p_{\mathrm{L}}(\|x_n\|)$, it follows that each point $x_n$ of $\Phi$ is mapped into $\Phi_{\mathrm{L}}$ with probability $p_{\mathrm{L}}(\|x_n\|)$ and into $\Phi_{\mathrm{NL}}$ with probability $p_{\mathrm{NL}}(\|x_n\|) = 1-p_{\mathrm{L}}(\|x_n\|)$. Given that this mapping is performed independently for each point in $\Phi$, it follows from the “Thinning Theorem” [@Haenggi2013 Theorem 2.36] that the processes $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ are SPPPs with density $\lambda_{\mathrm{L}}(x)=\lambda p_{\mathrm{L}}(\|x\|)$ and $\lambda_{\mathrm{NL}}(x)=\lambda\left(1-p_{\mathrm{L}}(\|x\|)\right)$, respectively. Note that, because of the dependence of $\lambda_{\mathrm{L}}(x)$ and $\lambda_{\mathrm{NL}}(x)$ on $x$, $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ are inhomogeneous SPPPs.
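This location-dependent thinning can be illustrated by simulation: sample a homogeneous SPPP on a disc and send each point to $\Phi_{\mathrm{L}}$ with probability $p_{\mathrm{L}}(\|x\|)$. The mean number of LOS points in $B(0,R)$ is then $\int_{B(0,R)}\lambda p_{\mathrm{L}}(\|x\|)\,\mathrm{d}x$. The density, radius and Gaussian $p_{\mathrm{L}}$ below are illustrative.

```python
import math
import random

def poisson(mean, rng):
    """Poisson variate via the inverse-count method (fine for moderate means)."""
    thresh, k, p = math.exp(-mean), 0, 1.0
    while p > thresh:
        k += 1
        p *= rng.random()
    return k - 1

def sample_los_nlos(lam, radius, p_L, rng):
    """Homogeneous SPPP of intensity lam on a disc of given radius, thinned
    into LOS/NLOS sub-processes according to the LOS probability p_L(r)."""
    los, nlos = [], []
    for _ in range(poisson(lam * math.pi * radius ** 2, rng)):
        r = radius * math.sqrt(rng.random())       # uniform point on the disc
        th = 2 * math.pi * rng.random()
        pt = (r * math.cos(th), r * math.sin(th))
        (los if rng.random() < p_L(r) else nlos).append(pt)
    return los, nlos
```

For the Gaussian LOS probability, the mean LOS count over $B(0,R)$ evaluates to $\pi\lambda L^{2}(1-e^{-R^{2}/L^{2}})$, which a Monte Carlo average reproduces.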
To make the formulation more tractable, we consider $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ to be independent processes; because the union of two SPPPs is an SPPP whose density is the sum of the densities of the individual SPPPs [@Baccelli2009 Proposition 1.3.3], the union of $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ is an SPPP with density $\lambda_{\mathrm{L}}(x)+\lambda_{\mathrm{NL}}(x)=\lambda$, i.e., it is an SPPP with the same density as that of the original process $\Phi$. Hence, the assumption of independence between $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ does not alter the nature of the process $\Phi$. Mapping the NLOS SPPP into an equivalent LOS SPPP {#subsect:Mapping} ------------------------------------------------- Given that we have two inhomogeneous SPPP processes, it is not trivial to obtain the distribution of the minimum distance from the user to the serving base station, which will be necessary later on to compute the SINR CDF. In fact, assuming the user to be in LOS with the serving base station at a distance $d_{\mathrm{1}}$, there might be an interfering BS at a distance $d_\mathrm{2}<d_{\mathrm{1}}$ which is in NLOS with the user. This is possible because the NLOS propagation is affected by a higher attenuation than the LOS propagation. Hence, to make our problem more tractable, we map the set of points of the process $\Phi_{\mathrm{NL}}$, which corresponds to the NLOS base stations, into an equivalent LOS process $\Phi_{\mathrm{eq}}$; each point $x\in\Phi_{\mathrm{NL}}$ located at distance $d_{\mathrm{NL}}$ from the user is mapped to a point $x_{\mathrm{eq}}$ located at distance $d_{\mathrm{eq}}$ from the user, so that the BS located at $x_{\mathrm{eq}}$ provides the same signal power to the user with path-loss $K_{\mathrm{L}}d_{\mathrm{eq}}^{-\beta_{\mathrm{L}}}$ as if the base station were located at $x$ with path-loss $K_{\mathrm{NL}}d_{\mathrm{NL}}^{-\beta_{\mathrm{NL}}}$.
\[def:mapping\_function\] We define the mapping function $f_{\mathrm{eq}}:\Phi_{\mathrm{NL}}\rightarrow\Phi_{\mathrm{eq}}$ as: $$\label{eq:directMapping} f_{\mathrm{eq}}(x)=\frac{x}{\|x\|}d_{\mathrm{eq}}\left(\|x\|\right),$$ $$\label{eq:d_eq} d_{\mathrm{eq}}(d)=\left(\frac{K_{\mathrm{L}}}{K_{\mathrm{NL}}}\right)^{1/\beta_{\mathrm{L}}}d^{\beta_{\mathrm{NL}}/\beta_{\mathrm{L}}}.$$ \[def:inv\_mapping\_function\] The inverse function $g_{\mathrm{eq}}=f_{\mathrm{eq}}^{-1}:\Phi_{\mathrm{eq}}\rightarrow\Phi_{\mathrm{NL}}$ is defined as: $$\label{eq:inverseMapping} \qquad g_{\mathrm{eq}}(x)=\frac{x}{\|x\|}d_{\mathrm{eq}}^{-1}\left(\|x\|\right),$$ $$\label{eq:inverse_d_eq} d_{\mathrm{eq}}^{-1}(d) = \left(\frac{K_{\mathrm{NL}}}{K_{\mathrm{L}}}\right)^{1/\beta_{\mathrm{NL}}} d^{ \beta_{\mathrm{L}} / \beta_{\mathrm{NL}}}= K_{\mathrm{eq}} d^{\beta_{\mathrm{eq}}},$$ where $K_{\mathrm{eq}}=\left(\frac{K_{\mathrm{NL}}}{K_{\mathrm{L}}}\right)^{1/\beta_{\mathrm{NL}}}$ while $\beta_{\mathrm{eq}}=\beta_{\mathrm{L}}/\beta_{\mathrm{NL}}$. It is important to notice that, from the “Mapping Theorem” [@Haenggi2013 Theorem 2.34], $\Phi_{\mathrm{eq}}$ is still an SPPP. PDF of the distance from the user to the serving BS {#subsect:PDF_of_distance} --------------------------------------------------- Using the mapping we introduced in Section \[subsect:Mapping\], we can compute the PDF $f_{r}(R)$ of the minimum distance $r$ between the user and the serving BS. We first compute the probability $\mathrm{P}\left[r>R\right]$; the PDF can be ultimately obtained from the derivative of $\mathrm{P}\left[r>R\right]$ as $f_{r}(R)=\frac{\mathrm{d}}{\mathrm{d}R}(1-\mathrm{P}\left[r>R\right]) $. Let $B(0,l)$ be the ball of radius $l$ centred at the origin $(0,0)$. Moreover, we use the notation $\Phi(\mathcal{A})$ to refer to number of points $x\in \Phi$ contained in $\mathcal{A}$ [@Haenggi2013]. 
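The mapping and its inverse are straightforward to implement and to sanity-check: the LOS path loss at $d_{\mathrm{eq}}(d)$ must equal the NLOS path loss at $d$, and the two maps must invert each other. The parameter values below are illustrative placeholders.

```python
def d_eq(d, K_L, K_NL, beta_L, beta_NL):
    """Equivalent LOS distance of an NLOS base station at distance d, so that
    K_L * d_eq**(-beta_L) == K_NL * d**(-beta_NL)."""
    return (K_L / K_NL) ** (1.0 / beta_L) * d ** (beta_NL / beta_L)

def d_eq_inv(d, K_L, K_NL, beta_L, beta_NL):
    """Inverse map K_eq * d**beta_eq, with K_eq = (K_NL/K_L)**(1/beta_NL)
    and beta_eq = beta_L/beta_NL."""
    return (K_NL / K_L) ** (1.0 / beta_NL) * d ** (beta_L / beta_NL)

# Illustrative parameters: K_L = 1, K_NL = 1e-2, beta_L = 2.5, beta_NL = 4.
args = (1.0, 1e-2, 2.5, 4.0)
d = 120.0
de = d_eq(d, *args)
assert abs(args[0] * de ** -args[2] - args[1] * d ** -args[3]) < 1e-15  # equal power
assert abs(d_eq_inv(de, *args) - d) < 1e-9                              # round trip
```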
Using the mapping introduced in Section \[subsect:Mapping\], the probability $\mathrm{P}\left[r>R\right]$ can be found as: $$\mathrm{P}\left[r>R\right]=\mathrm{P}\left[ \Phi_{\mathrm{L}}\left(B(0,R)\right)=0\cap\Phi_{\mathrm{eq}}\left(B\left(0,R\right)\right)=0\right]$$ $$\overset{(a)}{=}\mathrm{P}\left[\Phi_{\mathrm{L}}\left(B(0,R)\right)=0\cap\Phi_{\mathrm{NL}}\left(B\left(0,d_{\mathrm{eq}}^{-1}(R)\right)\right)=0\right]$$ $$\overset{(b)}{=}\mathrm{P}\left[\Phi_{\mathrm{L}}\left(B(0,R)\right)=0\right]\cdot\mathrm{P}\left[\Phi_{\mathrm{NL}}\left(B\left(0,d_{\mathrm{eq}}^{-1}(R)\right)\right)=0\right],$$ where equality $(a)$ comes from the mapping defined in (\[eq:directMapping\]) and in (\[eq:inverseMapping\]), while equality $(b)$ comes from the independence of the processes $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$. By applying the void probability of an inhomogeneous SPPP [@Haenggi2013 Definition 2.10], we obtain the following, $$\label{eq:prob_r_greater_than_R} \mathrm{P}\left[r>R\right]=\exp\bigg(-\int_{B(0,R)}\lambda_{\mathrm{L}}(x)\mathrm{d}x\bigg) \exp\bigg(-\int_{B\left(0,d_{\mathrm{eq}}^{-1}(R)\right)}\lambda_{\mathrm{NL}}(x)\mathrm{d}x\bigg).$$ From (\[eq:prob\_r\_greater\_than\_R\]), we can obtain $f_{r}(R)$ by first evaluating the integrals and then computing the first derivative with respect to $R$. The formulation in (\[eq:prob\_r\_greater\_than\_R\]) is general and can thus be applied to several LOS probability functions $p_{\mathrm{L}}(d)$. Below, we provide the expression of the PDF of the distance from the UE to the serving BS for the LOS probability functions (\[eq:Our\_p\_L\]) and (\[eq:p\_L\_exp\]), respectively.
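For the Gaussian LOS probability (\[eq:Our\_p\_L\]), the two ball integrals in (\[eq:prob\_r\_greater\_than\_R\]) reduce to one-dimensional radial integrals, which can be evaluated numerically and checked against the closed form they admit. The density and propagation constants below are illustrative assumptions.

```python
import math

def p_r_greater(R, lam, L, K_eq, beta_eq, n=4000):
    """P[r > R]: void probabilities of the LOS process on B(0,R) and of the
    NLOS process on B(0, R_eq), via trapezoidal radial integration."""
    R_eq = K_eq * R ** beta_eq
    def integral(f, b):
        h = b / n
        return h * (0.5 * (f(0.0) + f(b)) + sum(f(i * h) for i in range(1, n)))
    I_L = 2 * math.pi * lam * integral(lambda r: r * math.exp(-(r / L) ** 2), R)
    I_NL = 2 * math.pi * lam * integral(lambda r: r * (1 - math.exp(-(r / L) ** 2)), R_eq)
    return math.exp(-I_L - I_NL)

# Illustrative parameters; K_eq and beta_eq as in the NLOS-to-LOS mapping.
lam, L, K_eq, beta_eq = 1e-4, 50.0, 1e-2 ** 0.25, 2.5 / 4.0
```

Evaluating the radial integrals in closed form and cancelling terms gives $\mathrm{P}[r>R]=\exp\!\big(\pi\lambda L^{2}e^{-R^{2}/L^{2}}-\pi\lambda L^{2}e^{-R_{\mathrm{eq}}^{2}/L^{2}}-\pi\lambda R_{\mathrm{eq}}^{2}\big)$, which the numerical routine reproduces.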
\[res:PDF\_p\_L\_exp\_square\] If the LOS probability function is as in (\[eq:Our\_p\_L\]) and if we denote $d_{\mathrm{eq}}^{-1}(R)$ by $R_{\mathrm{eq}}$, the PDF of the distance to the serving BS is: $$\begin{aligned} \label{eq:PDF_exp_square} f_r(R)=-\left(e^{\pi\lambda L^{2}e^{-\frac{R^{2}}{L^{2}}}}\cdot e^{-\pi\lambda L^{2}e^{-\frac{R_{\mathrm{eq}}^{2}}{L^{2}}}}\cdot e^{-\pi\lambda R_{\mathrm{eq}}^{2}}\right) \end{aligned}$$ $$\begin{aligned} \left( -2\pi\lambda Re^{-\frac{R^{2}}{L^{2}}} +\pi\lambda K_{\mathrm{eq}}^{2}2\beta_{\mathrm{eq}}R^{2\beta_{\mathrm{eq}}-1}e^{-\frac{K_{\mathrm{eq}}^{2}R^{2\beta_{\mathrm{eq}}}}{L^{2}}} -\pi\lambda K_{\mathrm{eq}}^{2}2\beta_{\mathrm{eq}}R^{2\beta_{\mathrm{eq}}-1} \right). \end{aligned}$$ \[res:PDF\_p\_L\_exp\] If the LOS probability function is as in (\[eq:p\_L\_exp\]) and if we denote $d_{\mathrm{eq}}^{-1}(R)$ by $R_{\mathrm{eq}}$, the PDF of the distance to the serving BS is: $$\begin{aligned} \label{eq:PDF_exp} f_r(R)=-\left( e^{2\pi\lambda L^{2}e^{-\frac{R}{L}}}\cdot e^{2\pi\lambda LR e^{-\frac{R}{L}}}\cdot e^{-\pi\lambda R_{\mathrm{eq}}^{2}}\cdot e^{-2\pi\lambda L^{2}e^{-\frac{R_{\mathrm{eq}}}{L}}}\cdot e^{-2\pi\lambda LR_{\mathrm{eq}}e^{-\frac{R_{\mathrm{eq}}}{L}}} \right) \end{aligned}$$ $$\begin{aligned} \bigg( -2\pi\lambda Le^{-\frac{R}{L}} +2\pi\lambda(L-R)e^{-\frac{R}{L}} -\pi\lambda K_{\mathrm{eq}}^{2}2\beta_{\mathrm{eq}}R^{2\beta_{\mathrm{eq}}-1} \end{aligned}$$ $$\begin{aligned} + 2\pi\lambda LK_{\mathrm{eq}} \beta_{\mathrm{eq}} R^{\beta_{\mathrm{eq}}-1} e^{-\frac{K_{\mathrm{eq}}R^{\beta_{\mathrm{eq}}}}{L}} + 2\pi\lambda K_{\mathrm{eq}}\beta_{\mathrm{eq}}R^{\beta_{\mathrm{eq}}-1}(K_{\mathrm{eq}}R^{\beta_{\mathrm{eq}}}-L)e^{-\frac{K_{\mathrm{eq}}R^{\beta_{\mathrm{eq}}}}{L}} \bigg). \end{aligned}$$ We refer to the Appendix for the details of the derivation of the expressions for $f_{r}(R)$ given in (\[eq:PDF\_exp\_square\]) and in (\[eq:PDF\_exp\]).
Spatial process of the interfering and of the active base stations {#subsec:partially_loaded_networks}
--------------------------------------------------------------

In this framework, we include the cases of partially loaded networks and of frequency reuse, and we treat them separately. First, we identify the set of BSs that are active, i.e., those having one or more users to serve. As we focus the analysis on the typical user, we can also identify the set of BSs that act as interferers for that user; an active BS (excluding the one serving the user) is seen as an interferer if it transmits over the same band used to serve that user. In the following, we denote by $\Phi_{\mathrm{A}}$ the set of active BSs and by $\Phi_{\mathrm{I}}$ the set of interfering BSs. Let us first consider the case of frequency reuse, in which all the base stations are active, but each of them uses only a portion of the spectrum in order to reduce the interference. Since all the base stations are assumed to be active, the process $\Phi_{\mathrm{A}}$ is the same as $\Phi$. However, we assume each base station selects a channel at random using, for instance, frequency ALOHA spectrum access [@Chandrasekhar2009]. With a frequency reuse factor of $N$, each base station uses $1$ out of $N$ channels, chosen independently of the other base stations. Hence, each BS interferes with a given user with probability $1/N$; this is equivalent to a thinning of the original process $\Phi$ with probability $1/N$, and, from the Thinning Theorem, $\Phi_{\mathrm{I}}$ is a homogeneous process with density $\lambda_{\mathrm{I}}=\lambda/N$. As regards partially loaded networks, we recall from Section \[subsect:user\_distribution\] that a fraction of the base stations might be inactive and, as such, would not generate interference.
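The reuse-$N$ thinning argument above can be illustrated with a toy simulation; a binomial point process stands in for the SPPP, and all values below are illustrative:

```python
import random

def thin(points, p, rng):
    """Independent thinning: each point is kept with probability p,
    independently of the others (hypothesis of the Thinning Theorem)."""
    return [x for x in points if rng.random() < p]

rng = random.Random(42)
side, lam, N = 10.0, 100.0, 3                    # 100 BS/km^2, reuse factor N = 3
bss = [(rng.uniform(0, side), rng.uniform(0, side))
       for _ in range(int(lam * side * side))]   # binomial stand-in for Phi
interferers = thin(bss, 1.0 / N, rng)            # Phi_I
density_I = len(interferers) / side ** 2         # should be close to lam / N
```

With 10,000 points the retained density concentrates tightly around $\lambda/N \approx 33.3$ BSs/km$^2$, as the Thinning Theorem predicts.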
Assuming all the BSs transmit over the same band, in partially loaded networks the active base stations are the only BSs that generate interference to the users, with the exception of the serving BS. Thus, we can write $\Phi_{\mathrm{I}} = \Phi_{\mathrm{A}} \setminus x_{0}$, where $x_{0}$ is the serving base station; moreover, from the Palm Theorem [@Haenggi2013], $\Phi_{\mathrm{I}}$ and $\Phi_{\mathrm{A}}$ are characterized by the same density. To obtain the process of active BSs $\Phi_{\mathrm{A}}$ from the original process $\Phi$, we first assume that each UE deployed in the network connects to the closest BS; [^3] then, only the BSs which are assigned one or more users are active. In effect, this amounts to performing a thinning of the original process to obtain $\Phi_{\mathrm{A}}$. However, whether a base station is picked to be part of $\Phi_{\mathrm{A}}$ (i.e., whether there exists a user for which this BS is the closest one) depends on the positions of the neighbouring base stations, which implies that the base stations are not chosen independently of one another [@SeunghyunLee2012]. As independence is one of the necessary conditions for an SPPP, it follows that $\Phi_{\mathrm{A}}$ is not an SPPP. Although $\Phi_{\mathrm{A}}$ cannot formally be regarded as an SPPP, it has been shown that an SPPP obtained through an independent thinning of $\Phi$ approximates $\Phi_{\mathrm{A}}$ well [@SeunghyunLee2012; @ChangLi2014].
Specifically, the authors in [@SeunghyunLee2012] have shown that (i) the probability $p_{\mathrm{A}}$ of a base station being active (i.e., having users to serve) can be well approximated once the density of users $\lambda_{\mathrm{U}}$ and the density of base stations $\lambda$ are known, and (ii) the process $\Phi_{\mathrm{A}}$ of active base stations can be well approximated by thinning the original process $\Phi$ with probability $p_{\mathrm{A}}$, which is given below [@SeunghyunLee2012]: $$\label{eq:p_active_BS} p_{\mathrm{A}} = 1 - \left(1 + \frac{\lambda _{\mathrm{U}}}{3.5 \lambda } \right)^{-3.5}.$$ From the Thinning Theorem, it follows that the process obtained through the thinning described above is an SPPP with density $\lambda_{\mathrm{A}} = p_{\mathrm{A}}\lambda$; moreover, $\lambda_{\mathrm{I}} = \lambda_{\mathrm{A}}$. In light of these findings, we approximate $\Phi_{\mathrm{A}}$ with an SPPP of density $\lambda_{\mathrm{A}} = p_{\mathrm{A}}\lambda$. Fig. \[fig:prob\_A\_vs\_density\] shows how the probability $p_{\mathrm{A}}$ and the intensity of the interfering BSs $\lambda_{\mathrm{I}}$ vary as functions of the ratio $\lambda / \lambda_{\mathrm{U}}$. ![Probability of a BS being active and density of interfering BS vs BS density for partially loaded networks. The probability $p_{\mathrm{A}}$ drops as the ratio $\lambda / \lambda_{\mathrm{U}}$ is close to or greater than 1, i.e., as $\lambda$ approaches $\lambda_{\mathrm{U}}$.
As a result of this, the density of active BSs as well as the density of interfering BSs converge to $\lambda_{\mathrm{U}}$ as $\lambda$ approaches or exceeds $\lambda_{\mathrm{U}}$.[]{data-label="fig:prob_A_vs_density"}](p_A_vs_lambda){width="\figureSize\columnwidth"}

SINR inverse cumulative distribution function {#subsect:SINR_ICDF_Final}
---------------------------------------------

The probability $\mathrm{P}\left[\gamma>y|r=R\right]$ can be computed as in [@Andrews2011 Theorem 1]; we skip the details and provide the general formulation: $$\label{eq:P_SINR_geq_y_conditioned} \mathrm{P}\left[\gamma>y|r=R\right]=\mathrm{P}\left[\frac{g K_{\mathrm{L}}R^{-\beta_{\mathrm{L}}}}{\sigma^{2}+I_{R}}>y\right] =e^{-\mu y K_{\mathrm{L}}^{-1}R^{\beta_{\mathrm{L}}}\sigma^{2}}\mathcal{L}_{I_{R}}(\mu y K_{\mathrm{L}}^{-1}R^{\beta_{\mathrm{L}}}),$$ where $g$ is the Rayleigh fading power gain, which we assume to be an exponential random variable $\sim \exp(\mu)$; $\sigma^{2}$ is the variance of the additive white Gaussian noise normalized with respect to the transmit power, and $I_{R}$ is the interference conditioned on $R$, i.e., $$I_{R} = \sum\limits _{\{i:\: x_i\in\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},\: \|x_i\|>R\}}g_i K_{\mathrm{L}}\|x_i\|^{-\beta_{\mathrm{L}}} + \sum\limits _{\{j:\: f_{\mathrm{eq}}(x_j)\in \Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},\: \|f_{\mathrm{eq}}(x_j)\| > R\}}g_j K_{\mathrm{L}}\|f_{\mathrm{eq}}(x_j)\|^{-\beta_{\mathrm{L}}}$$ $$= \sum\limits _{\{i:\: x_i\in\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},\: \|x_i\|>R\}}g_i K_{\mathrm{L}}\|x_i\|^{-\beta_{\mathrm{L}}} + \sum\limits _{\{j:\: x_j\in \Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},\: \|x_j\| > d_{\mathrm{eq}}^{-1}(R)\}}g_j K_{\mathrm{NL}}\|x_j\|^{-\beta_{\mathrm{NL}}}$$ where $g_i$ and $g_j$ are independent and identically distributed $\sim \exp(\mu)$ fading coefficients. Note that the overall interference accounts only for the active base stations.
Compared to the formulation of $\mathcal{L}_{I_{R}}(s)$ proposed in [@Andrews2011], in our case we have to deal with two inhomogeneous SPPPs, namely $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$, instead of a single homogeneous SPPP. The Laplace transform $\mathcal{L}_{I_{R}}(s)$ can be written as follows: $$\begin{aligned} \mathcal{L}_{I_{R}}(s) =\mathrm{E}_{\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},\Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},g_i,g_j}\bigg[\exp\bigg(-s\sum\limits _{\{i:\: x_i\in\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},\: \|x_i\|>R\}} g_i K_{\mathrm{L}}\|x_i\|^{-\beta_{\mathrm{L}}}\bigg)\\ \exp\bigg(-s\sum\limits _{\{j:\: x_j\in \Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},\: \|x_j\| > d_{\mathrm{eq}}^{-1}(R)\}}g_j K_{\mathrm{NL}}\|x_j\|^{-\beta_{\mathrm{NL}}}\bigg)\bigg].\end{aligned}$$ Given that $\Phi_{\mathrm{L}}$ and $\Phi_{\mathrm{NL}}$ are two independent SPPPs, we can separate the expectation to obtain: $$\begin{aligned} \label{eq:laplace_notSolved} \mathcal{L}_{I_{R}}(s)=\mathrm{E}_{\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},g_i}\bigg[\exp\bigg(-s\sum\limits_{\{i:\: x_i\in\Phi_{\mathrm{L}}\cap\Phi_{\mathrm{A}},\: \|x_i\|>R\}} g_i K_{\mathrm{L}}\|x_i\|^{-\beta_{\mathrm{L}}}\bigg)\bigg] \\ \nonumber \mathrm{E}_{\Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},g_j}\bigg[\exp\bigg(-s\sum\limits_{\{j:\: x_j\in \Phi_{\mathrm{NL}}\cap\Phi_{\mathrm{A}},\: \|x_j\| > d_{\mathrm{eq}}^{-1}(R)\}}g_j K_{\mathrm{NL}}\|x_j\|^{-\beta_{\mathrm{NL}}}\bigg)\bigg].\end{aligned}$$ By applying the Probability Generating Functional (PGFL) for SPPPs (which holds also in the case of inhomogeneous SPPPs [@Haenggi2013]) to , we obtain the following result: The Laplace transform $\mathcal{L}_{I_{R}}(s)$ for LOS/NLOS propagation with the model given in is: $$\mathcal{L}_{I_{R}}(s)=\exp\Bigg(-2\pi\lambda_{\mathrm{I}}\int\limits _{R}^{+\infty}\left[\frac{sK_{\mathrm{L}}v^{-\beta_{\mathrm{L}}}}{sK_{\mathrm{L}}v^{-\beta_{\mathrm{L}}}+\mu}\right]p_{\mathrm{L}}(v)v\mathrm{d}v\Bigg)$$ $$\label{eq:laplace_solved}
\exp\Bigg(-2\pi\lambda_{\mathrm{I}}\int\limits _{d_{\mathrm{eq}}^{-1}(R)}^{+\infty}\left[\frac{sK_{\mathrm{NL}}v^{-\beta_{\mathrm{NL}}}}{sK_{\mathrm{NL}}v^{-\beta_{\mathrm{NL}}}+\mu}\right]p_{\mathrm{NL}}(v)v\mathrm{d}v\Bigg).$$ The Laplace transform in along with and can be plugged in to obtain the SINR CCDF through numerical integration.

Average Spectral Efficiency and Area Spectral Efficiency
--------------------------------------------------------

Similarly to [@Andrews2011 Section IV], we compute the average spectral efficiency and the ASE of the network. First, we define the ASE as: $$\label{eq:ASE} \eta_{\mathrm{A}}=\frac{\lambda_{\mathrm{A}}\cdot A\cdot\mathrm{BW_{U}}\cdot \mathrm{E}[\mathrm{C}] }{A\cdot\mathrm{BW_{A}}}=\frac{\lambda_{\mathrm{A}}\cdot \mathrm{E}[\mathrm{C}] }{N},$$ where $\mathrm{BW_{A}}$ is the available bandwidth, $\mathrm{BW_{U}}$ is the used bandwidth, $\mathrm{E}[\mathrm{C}]$ is the average spectral efficiency, $A$ is the area, $N_{\mathrm{BS,A}}$ is the number of active base stations within the area $A$, and $N$ is the frequency reuse factor. The average spectral efficiency $\mathrm{E}[\mathrm{C}]$ can be computed as follows [@Andrews2011]: $$\begin{aligned} \mathrm{E}[\mathrm{C}] = \mathrm{E}\left[\log_{2}(1+\gamma)\right]=\int _{0}^{+\infty}\mathrm{P}\left[\log_{2}(1+\gamma)>u\right]\mathrm{d}u \\ \nonumber =\int_{0}^{+\infty}\int _{0}^{+\infty}\mathrm{P}\left[\log_{2}(1+\gamma)>u|r=R\right]f_{r}(R)\mathrm{d}R\mathrm{d}u.\end{aligned}$$ $$\label{eq:rate_final} = \int _{0}^{+\infty}\int _{0}^{+\infty}e^{-\mu(2^{u}-1)K_{\mathrm{L}}^{-1}R^{\beta_{\mathrm{L}}}\sigma^{2}} \mathcal{L}_{I_{R}}\big(\mu(2^{u}-1)K_{\mathrm{L}}^{-1}R^{\beta_{\mathrm{L}}}\big)f_{r}(R)\mathrm{d}R\mathrm{d}u$$ where $\mathcal{L}_{I_{R}}(s)$ is given in . Similarly to the SINR CCDF, can be evaluated numerically.
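For concreteness, the Laplace transform above can be evaluated with elementary quadrature. The sketch below is ours (illustrative parameters, our own function names); it truncates both integrals at a finite radius, which is harmless since $\beta_{\mathrm{L}},\beta_{\mathrm{NL}}>2$ makes the tails negligible:

```python
import math

def laplace_IR(s, R, R_eq, lam_I, mu, K_L, b_L, K_NL, b_NL, p_L,
               v_max=50.0, n=20000):
    """Numerical L_{I_R}(s): LOS factor integrated from R, NLOS factor
    from R_eq = d_eq^{-1}(R); trapezoid rule on [., v_max]."""
    def factor(a, K, b, prob):
        h = (v_max - a) / n
        tot = 0.0
        for i in range(n + 1):
            v = a + i * h
            w = 0.5 if i in (0, n) else 1.0
            g = s * K * v ** (-b) if v > 0 else 0.0
            tot += w * g / (g + mu) * prob(v) * v
        return tot * h
    los = factor(R, K_L, b_L, p_L)
    nlos = factor(R_eq, K_NL, b_NL, lambda v: 1.0 - p_L(v))
    return math.exp(-2.0 * math.pi * lam_I * (los + nlos))

# illustrative setup: exp-squared LOS function, distances in km
L = 0.0825
p_L = lambda d: math.exp(-(d / L) ** 2)
args = dict(R=0.05, R_eq=0.1, lam_I=50.0, mu=1.0,
            K_L=1.0, b_L=2.09, K_NL=0.1, b_NL=3.75, p_L=p_L)
```

Two easy sanity checks follow from the formula itself: $\mathcal{L}_{I_R}(0)=1$, and $\mathcal{L}_{I_R}(s)$ is strictly decreasing in $s$.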
Energy efficiency with LOS/NLOS propagation {#sec:energy_Efficiency}
===========================================

Computing the transmit power per base station {#sub:Computing-the-transmit}
---------------------------------------------

We evaluate the TX power in order to compute the overall power consumption of the wireless network. Ideally, to ensure that the network performance is not limited by the transmit power, $P_{\mathrm{TX}}$ should be set so as to guarantee the interference-limited regime, i.e., the transmit power should be high enough that the thermal noise power at the user receiver can be neglected with respect to the interference power at the receiver. In fact, when the network is in the interference-limited regime, the transmit power is high enough that any further increase would be pointless in terms of enhancing the SINR, since the received power increment would be balanced by the exact same interference increment. Practically, we refer to the outage probability $ \theta = \mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$ as a constraint to set the power necessary to reach the interference-limited regime. When the TX power is low, small increments of $P_{\mathrm{TX}}$ yield large improvements of the outage $\theta$; however, as $P_{\mathrm{TX}}$ increases, the corresponding outage gain reduces, until $\theta$ eventually converges to its optimal value $ \theta^*$, which would be reached in the absence of thermal noise. It is reasonable to assume the network to be in the interference-limited regime when the following condition is met: $$| \theta^*-\theta | \leq \Delta_\theta, \label{eq:Interf_limited_reg}$$ where $\Delta_\theta$ is a tolerance setting the constraint in terms of the maximum deviation of $\theta$ from the optimal value $\theta^*$. Eq. provides us with a metric to compute the transmit power, but does not give any indication on how to find $P_{\mathrm{TX}}$ as a function of the density $\lambda$.
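The tolerance criterion naturally suggests a threshold-driven search: increase the power until the outage is within $\Delta_\theta$ of its no-noise floor. Below is a minimal single-granularity sketch of this idea; the outage function is a mock stand-in for the numerical evaluation of the SINR distribution, and Alg. \[algo:power\] described next additionally refines the step size:

```python
import math

def find_min_power(outage, theta_star, delta_theta,
                   p_start=-90.0, step=1.0, p_max=60.0):
    """Smallest TX power (in dBm, on the step grid) such that
    |theta* - outage(P)| <= delta_theta."""
    p = p_start
    while p <= p_max:
        if abs(theta_star - outage(p)) <= delta_theta:
            return p
        p += step
    raise ValueError("interference-limited regime not reached below p_max")

# mock outage: decays toward its interference-limited floor theta* as power grows
theta_star = 0.10
mock_outage = lambda p_dbm: theta_star + 0.5 * math.exp(-(p_dbm + 90.0) / 10.0)
p_min = find_min_power(mock_outage, theta_star, delta_theta=0.001)
```

In the real pipeline `outage` would be obtained by integrating the SINR CCDF for each candidate power, which is exactly why a coarse-to-fine step schedule pays off.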
Unfortunately, we cannot derive a closed-form expression for the transmit power that satisfies , as we do not have any closed-form solution for the outage probability $ \theta = \mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$. We therefore take a different approach to calculate the minimum transmit power. In Alg. \[algo:power\] we propose a simple iterative algorithm that finds the minimum transmit power satisfying by using numerical integration of . This algorithm computes the outage probability corresponding to a given $P_{\mathrm{TX}}$; starting from a low value of power, it gradually increases $P_{\mathrm{TX}}$ by a given power step $\Delta_{P}$, until is satisfied. To speed up this procedure, the step granularity is adjusted from a coarse step $\mathrm{p}_1$ down to the finest step $\mathrm{p}_{N_{\mathbf{p}}}$, which represents the precision of the power value returned by Alg. \[algo:power\].

INPUTS:

1. Vector of the power steps in dBm $\mathbf{p} = [\mathrm{p}_1,\ \cdots \ \mathrm{p}_{N_{\mathbf{p}}}]$, where $\mathrm{N}_{\mathbf{p}}$ is the length of vector $\mathbf{p}$;

2. Outage SINR threshold $\gamma_{\mathrm{th}}$;

3. Outage tolerance $\Delta_{\theta}$.

Initialize variables:

1. $P_{curr} = P_{N_0}$, where $P_{N_0}$ is the AWGN power in dBm over the bandwidth $\mathrm{BW}_{\mathrm{U}}$;

2. $P_{\mathrm{fin}} = P_{curr}$.

Steps:

1. Find the optimal outage $\theta^* = \mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$ by integrating with parameter $\sigma^2=0$.

2. Find $\theta(P_{curr}) = \mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$ by integrating with parameter $\sigma^2=10^{-\frac{P_{curr}}{10}}$.

3. For each granularity $k=1,\dots,\mathrm{N}_{\mathbf{p}}$: set the power step $\Delta_{P} = \mathrm{p}_k$; while is not satisfied, increase the current power, i.e., $P_{curr} = P_{curr} + \Delta_{P}$, find $\theta(P_{curr})$ by integrating with parameter $\sigma^2=10^{-\frac{P_{curr}}{10}}$, and update the final value of power, i.e., $P_{\mathrm{fin}} = P_{curr}$; before moving to the next (finer) granularity, remove the last power increment, i.e., $P_{curr} = P_{curr} - \Delta_{P}$.

OUTPUT: $P_{\mathrm{fin}}$ is the power in dBm s.t. is satisfied.

Energy efficiency {#sec:Energy_Efficiency}
-----------------

In this subsection, we characterize the energy efficiency of the network as a function of the base station density, in order to identify the trade-off between the area spectral efficiency and the power consumed by the network. We define the *energy efficiency* as the ratio between the overall throughput delivered by the network and the total power consumed by the network, i.e.: $$\eta_{\mathrm{EE}}(\lambda)\triangleq\frac{T(\lambda)}{P_{\mathrm{TOT}}(\lambda)},\label{eq:eff_definition}$$ where $T(\lambda)$ is the network throughput, which can be written as $T(\lambda)= A\cdot\mathrm{BW}\cdot \eta_{\mathrm{A}}(\lambda)$, with $\mathrm{BW}$ denoting the bandwidth and $\eta_{\mathrm{A}}(\lambda)$ denoting the area spectral efficiency; $P_{\mathrm{TOT}}$ is the overall power consumption of the network. When we compute the power consumption of each BS, we need to take into account that a fraction of the base stations may be inactive and model the power consumption accordingly.
For an active base station, we model the power consumption $P_{\mathrm{BS,A}}$ as the sum of two components, i.e., $P_{\mathrm{BS,A}}=P_{0}+P_{\mathrm{RF}}$. The first, denoted by $P_{0}$, accounts for the energy necessary for signal processing and for powering up the base station circuitry; $P_{0}$ is modelled as a component independent of the transmit power and of the base station load [@Auer2011]. The second component, denoted by $P_{\mathrm{RF}}$, accounts for the power fed into the power amplifier, which is then radiated for signal transmission. The power $P_{\mathrm{RF}}$ is considered to be proportional to the power transmitted by the base station; we can thus write $P_{\mathrm{RF}}=K_{\mathrm{RF}}P_{\mathrm{TX}}$, where $K_{\mathrm{RF}}$ accounts for the losses of the power amplifier (i.e., we assume $K_{\mathrm{RF}}$ to be the inverse of the power amplifier efficiency). In the case of inactive base stations, we assume that the BS switches to a stand-by state for energy saving purposes [@Ashraf2011], in which it does not transmit (i.e., $P_{\mathrm{RF}}=0$) and reduces the circuitry power consumption. Therefore, the power required to maintain the stand-by state can be modelled as $P_{\mathrm{BS,S}}=\rho P_0 $, where $\rho$ is a power saving factor that describes the relative power consumption of the circuitry with respect to the active case; note that $0<\rho<1$.
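This two-state per-BS model can be written down compactly; in the sketch below, $P_{\mathrm{TX}}=1$ W and $\rho=0.5$ are illustrative values of ours, while $P_0$ and $K_{\mathrm{RF}}$ follow the order of magnitude of [@Auer2011]:

```python
def bs_power(active, P0=10.0, K_RF=10.0, P_TX=1.0, rho=0.5):
    """Per-BS consumption in W: P0 + K_RF*P_TX when active, rho*P0 in stand-by."""
    return P0 + K_RF * P_TX if active else rho * P0

# e.g. a small cluster with 3 active and 2 stand-by BSs
total = 3 * bs_power(True) + 2 * bs_power(False)
```

Summing this quantity over active and stand-by stations is exactly what yields the network-wide consumption used next.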
Finally, the overall power consumption of the network due to both active and inactive base stations can be expressed as follows: $$P_{\mathrm{TOT}}= A \lambda_{\mathrm{A}} P_{\mathrm{BS,A}} + A(\lambda - \lambda_{\mathrm{A}})P_{\mathrm{BS,S}}$$ $$\label{eq:power_tot} = A \lambda_{\mathrm{A}} P_0 + A \lambda_{\mathrm{A}} P_{\mathrm{TX}} K_{\mathrm{RF}} + A (\lambda-\lambda_{\mathrm{A}})\rho P_0.$$ The energy efficiency for the cases of fully loaded and partially loaded networks is addressed in the following sub-sections.

Energy efficiency for fully loaded networks {#subsec:energy_eff_fully_loaded}
-------------------------------------------

In this section we study the trend of the energy efficiency $\eta_{\mathrm{EE}}(\lambda)$ as a function of $\lambda$; we focus on fully loaded networks, i.e., $p_{\mathrm{A}}=1$ and $\lambda_{\mathrm{A}}=\lambda$. Unfortunately, the analysis of the derivative of $\eta_{\mathrm{EE}}$ is not straightforward, as we have a closed-form solution neither for the throughput $T(\lambda)$ nor for the transmit power $P_{\mathrm{TX}}(\lambda)$. One feasible way around this difficulty is to approximate $T(\lambda)$ and $P_{\mathrm{TX}}(\lambda)$ with functions of the form: $$f(z)=az^b.\label{eq:linear_reg_func}$$ The model in has two advantages: (i) it can easily be differentiated and is thus apt for investigating the existence of optima; (ii) it is well suited to fit the non-linear behaviour of the ASE and of the TX power. In fact, we have shown in our previous work [@Galiotto2014] that both $T(\lambda)$ and $P_{\mathrm{TX}}(\lambda)$ can be approximated with a piece-wise function of the form , where the parameters $a$ and $b$ can be obtained, for instance, by linear regression in the logarithmic domain for a given range of values of $\lambda$.
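The fit of the power-law model can be carried out by ordinary least squares on log-transformed data; a minimal self-contained sketch using the standard regression formulas:

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b via linear regression on (log x, log y)."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    return math.exp(my - b * mx), b   # (a, b)
```

On exact power-law data the parameters are recovered to machine precision; on measured ASE or TX-power curves the fit is applied piecewise over density ranges, as done in Section \[sect:results\].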
Backed by the conclusions from our previous work [@Galiotto2014], we approximate the throughput as $T(\lambda)=A T_{0}\lambda^{\alpha}$ and the transmit power as $P_{\mathrm{TX}}(\lambda)=P_{\mathrm{T}}\lambda^{\delta}$, within a given interval of $\lambda$. Under these assumptions, the energy efficiency becomes: $$\eta_{\mathrm{EE}}(\lambda)=\frac{T_{0}\lambda^{\alpha}}{\lambda P_{0}+\lambda K_{\mathrm{RF}} P_{\mathrm{T}} \lambda^{\delta}}=\frac{T_{0}\lambda^{\alpha-1}}{P_{0}+ K_{\mathrm{RF}} P_{\mathrm{T}}\lambda^{\delta}}. \label{eq:eff_pathloss_full_load}$$ The derivative of $\eta_{\mathrm{EE}}(\lambda)$ is given below: $$\frac{\mathrm{d}\eta_{EE}(\lambda)}{\mathrm{d}\lambda}=\frac{T_{0}P_{0}(\alpha -1)\lambda^{\alpha-2}+T_{0} K_{\mathrm{RF}} P_{\mathrm{T}}(\alpha-\delta-1)\lambda^{\alpha+ \delta-2}}{\left(P_{0}+K_{\mathrm{RF}} P_{\mathrm{T}}\lambda^{\delta}\right)^{2}}.$$ Let us note that $T_{0}$, $P_{0}$, $K_{\mathrm{RF}}$ and $P_T$ are positive; moreover, it is reasonable to assume that $\alpha>0$ (i.e., the area spectral efficiency is an increasing function of the density) and that $\delta<0$ (i.e., the transmit power per BS is a decreasing function of the density). In the following paragraphs, we study the behaviour of the energy efficiency as a function of the density $\lambda$ by analyzing the derivative $\eta^\prime_{\mathrm{EE}}(\lambda)$. We distinguish the following three cases.

### The energy efficiency is a monotonically increasing function {#subsect:energy_eff_increasing}

This occurs if the ASE growth is linear or superlinear, i.e., if $\alpha\geq 1$. From this, it follows that $\alpha\geq 1>1+\delta$ also holds; in this case, $\eta^\prime_{\mathrm{EE}}(\lambda)$ is strictly positive, meaning that the energy efficiency increases as the density increases.
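The sign argument can be checked numerically from the derivative above; in the sketch below all parameter values are illustrative, with the default $\alpha = 1.1 \geq 1$ matching this case:

```python
def d_eta_EE(lam, T0=1.0, P0=10.0, K_RF=10.0, P_T=1.0, alpha=1.1, delta=-0.5):
    """Derivative of eta_EE(lam) = T0*lam**(alpha-1) / (P0 + K_RF*P_T*lam**delta)."""
    num = (T0 * P0 * (alpha - 1.0) * lam ** (alpha - 2.0)
           + T0 * K_RF * P_T * (alpha - delta - 1.0) * lam ** (alpha + delta - 2.0))
    return num / (P0 + K_RF * P_T * lam ** delta) ** 2
```

With $\alpha \geq 1$ both numerator terms are non-negative, so the derivative is positive for every $\lambda$; choosing a sublinear $\alpha$ instead makes it eventually negative, matching the remaining cases.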
### The energy efficiency is a monotonically decreasing function {#subsect:energy_eff_decreasing}

This occurs if the ASE growth is sublinear, i.e., if $\alpha<1$, and, in addition, $\alpha<1+\delta$. Then, $\eta^\prime_{\mathrm{EE}}(\lambda)$ is strictly negative and the energy efficiency is a monotonically decreasing function of the density $\lambda$.

### The energy efficiency exhibits an optimum point {#subsect:energy_eff_maximum}

If the ASE growth is sublinear (i.e., $\alpha<1$) but its slope $\alpha$ is sufficiently high (i.e., $\alpha>1+\delta$), then the derivative $\eta^\prime_{\mathrm{EE}}(\lambda)$ vanishes at $$\label{eq:optimal_en_eff_point} \lambda_{0}=\left(\frac{P_{0}\left(1-\alpha\right)}{K_{\mathrm{RF}} P_{\mathrm{T}} \left(\alpha-\delta-1\right)}\right)^{1/\delta},$$ is positive for $\lambda<\lambda_0$, and is negative for $\lambda>\lambda_0$. Therefore, $\lambda_0$ is a global maximum of the energy efficiency. As a whole, the behavior of the energy efficiency is determined by how the growth of the ASE and that of the TX power relate to each other as the base station density increases. If the ASE grows rapidly enough to counterbalance the total power increase of the network given by the addition of base stations, then the energy efficiency increases with the BS density; this means that adding base stations is profitable in terms of energy efficiency. Otherwise, adding BSs is not profitable from an energy efficiency point of view.

Energy efficiency for partially loaded networks {#subsec:energy_eff_part_loaded}
-----------------------------------------------

For partially loaded networks, we only analyze the case where $\lambda > \lambda_{\mathrm{U}}$, as the opposite case of $\lambda <\lambda_{\mathrm{U}}$ leads back to fully loaded networks. By using L’Hôpital’s rule, one can show that can be approximated by $p_{\mathrm{A}} \cong \lambda_{\mathrm{U}}\lambda^{-1}$ for $\lambda$ sufficiently greater than $\lambda_{\mathrm{U}}$.
By applying this approximation to , we obtain: $$\label{eq:P_tot_approx_1} P_{\mathrm{TOT}} = \lambda_{\mathrm{U}} P_0 (1-\rho) + \lambda \rho P_0 + \lambda_{\mathrm{U}} K_{\mathrm{RF}} P_{\mathrm{T}} \lambda^{\delta}.$$ It is known from [@Auer2011] that, as the BS density increases, the main contribution to the total power consumption is due to the circuitry power $P_0$, while the transmit power becomes negligible in the overall power balance. Therefore, to make the problem more tractable, we can further approximate the total power in as $P_{\mathrm{TOT}} \cong \lambda_{\mathrm{U}} P_0 (1-\rho) + \lambda \rho P_0$. From , by using the approximation $T(\lambda)=A T_{0}\lambda^{\alpha}$ for the throughput and $P_{\mathrm{TOT}} \cong \lambda_{\mathrm{U}} P_0 (1-\rho) + \lambda \rho P_0$ for the power, we obtain the following expression for the energy efficiency: $$\eta_{\mathrm{EE}}(\lambda)\cong \frac{T_{0}\lambda^{\alpha}}{ \lambda_{\mathrm{U}} P_0 (1-\rho) + \lambda \rho P_0 }.\label{eq:eff_pathloss_part_load}$$ To analyze the behaviour of the energy efficiency as a function of $\lambda$, we follow the same approach as in Section \[subsec:energy\_eff\_fully\_loaded\] and compute the derivative of $\eta_{\mathrm{EE}}(\lambda)$, which is given below: $$\frac{\mathrm{d}\eta_{EE}(\lambda)}{\mathrm{d}\lambda}=\frac{T_{0} P_0 \lambda^{\alpha-1}\left( \lambda \rho (\alpha-1) +\alpha \lambda_{\mathrm{U}} (1-\rho ) \right)}{ \left(\lambda_{\mathrm{U}} P_0 (1-\rho) + \lambda \rho P_0 \right) ^2}.$$ As the ASE is known to be sub-linear for partially loaded networks [@Park2014; @ChangLi2014], we assume $0<\alpha <1$; moreover, the power saving factor $\rho$ satisfies $0<\rho <1$. Therefore, the derivative $\eta_{EE}^\prime$ vanishes at $$\label{eq:optimal_lambda_partial_load} \lambda^* = \frac{ \alpha \lambda_{\mathrm{U}} (1-\rho) }{ \rho(1-\alpha )},$$ is positive for $\lambda < \lambda^*$, and is negative for $\lambda > \lambda^*$.
Hence, $\lambda^*$ is a local maximum of the energy efficiency for partially loaded networks, and the energy efficiency decreases for densities $\lambda > \lambda^*$. Note that this result holds for $\lambda$ sufficiently greater than $\lambda_{\mathrm{U}}$.

Results {#sect:results}
=======

In this section we present and discuss the results we obtained by numerically integrating the expressions of the outage probability, of the Spectral Efficiency (SE), and of the ASE. In Sections \[subsect:rate\_ASE\_density\], \[sub:frequency\_reuse\] and \[sub:partially\_loaded\_results\] we assume the network to be interference-limited (i.e., we set the thermal noise power to 0), while the noise is taken into account in Sections \[sub:TXPowerPerBS\_results\] and \[sub:energy\_efficiency\_results\]. The parameters we used to obtain our results are specified in Table \[table:parameters\].

  **Parameter**                   **Value**
  ------------------------------- -------------------------------------------------------------------------------------------
  Path-loss - Single slope        
  Path-loss - Combined LOS/NLOS   See ; with $d$ in km, $K_{\mathrm{L}}=10^{10.38}$, $\beta_{\mathrm{L}}=2.09$, $K_{\mathrm{NL}}=10^{14.54}$, $\beta_{\mathrm{NL}}=3.75$, $d_0 =0.156\mathrm{km}$, $d_1 = 0.03\mathrm{km}$ [@3GPP36814]
  Parameter $L$                   82.5m, set so that (2) and (3) intersect at the point corresponding to probability 0.5
  Bandwidth $\mathrm{BW}$         10 MHz centered at 2 GHz
  Noise                           Additive White Gaussian Noise (AWGN) with -174 dBm/Hz Power Spectral Density
  Noise Figure                    9 dB
  Antenna at BS and UE            Omni-directional with 0 dBi gain
  $K_{\mathrm{RF}}$               $10$ [@Auer2011]
  $P_{0}$                         $10$W [@Auer2011]

  : Parameters for results section[]{data-label="table:parameters"}

Spectral efficiency, outage probability and ASE {#subsect:rate_ASE_density}
-----------------------------------------------

In this subsection we assume the network to be fully
loaded and with frequency reuse 1. We compare the results for two LOS probability functions, namely and ; we also compare the results for LOS/NLOS propagation with those obtained with the single-slope path-loss model. We first analyze the outage probability (defined as $\theta = \mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$) results, which have been obtained by numerical integration of . We show the outage probability results in Fig. \[fig:outage\_full\_load\], where we can see the different effect of the LOS/NLOS propagation with respect to the single-slope Path-Loss (PL). With the single-slope PL, the outage is constant with the BS density. In contrast, with LOS/NLOS propagation, there is a minimum in the outage curves, which is achieved for densities $\lambda$ = 50-100 BSs/km$^2$, depending on the LOS probability function. Within this range of densities, the user is likely to be in LOS with the serving BS and in NLOS with most of the interfering BSs, meaning that the interference power is lower than the received power. At densities $\lambda$ greater than 300 BSs/km$^2$, the outage starts growing drastically and, depending on the LOS likelihood, can reach 32-43%. This is due to the fact that more and more interfering BSs are likely to enter the LOS region, causing an overall interference growth and thus a reduction of the SIR. At densities $\lambda$ smaller than 100 BSs/km$^2$, the serving BS as well as the interfering BSs are likely to be in NLOS with the user. Because of this, both the received power and the overall interference increase at the same pace[^4] and, as a consequence, the SIR remains constant, and so does the outage. Note that the LOS probability function affects the outage curves at intermediate values of the BS density (e.g., 10-300 BSs/km$^2$).
At low densities, all the BSs are likely to be in NLOS with the user, while at high densities the serving BS and the strongest interferers are likely to be in LOS with the user. The results of the ASE are shown in Fig. \[fig:ASE\_full\_load\]. Compared to the single-slope PL, which shows a linear growth of the ASE with the density $\lambda$, with the LOS/NLOS propagation we observe a different behaviour of the ASE. In particular, we observe a lower steepness of the ASE curve at high BS densities, which is due to the effect of the interfering BSs entering the LOS region and, thus, increasing the total interference power. To assess the steepness of the ASE, we can use linear regression to interpolate the ASE curve with the model given in . In particular, we can approximate the ASE $\eta_{\mathrm{A}}(\lambda)$ with a piece-wise function of the kind $\eta_{\mathrm{A}}(\lambda)=\eta_{\mathrm{A},0}\lambda^\alpha$, where $\eta_{\mathrm{A},0}$ and $\alpha$ are given for given intervals of $\lambda$. We specifically focus on $\alpha$, which gives the steepness of the ASE curve. With reference to the ASE curve (solid-blue curve in Fig. \[fig:ASE\_full\_load\]) obtained with as the LOS probability function, the value of the parameter $\alpha$ turns out to be 1.15 within the range $\lambda$ = 1-50 BSs/km$^2$, 0.48 within the range 50-500 BSs/km$^2$, and 0.81 within the range 500-10000 BSs/km$^2$.

Frequency reuse {#sub:frequency_reuse}
---------------

To have a comprehensive view of the frequency reuse as an interference mitigation scheme, we need to assess the trade-off between the ASE and the network coverage probability, defined as $1-\mathrm{P}\left[ \gamma \leq \gamma_{\mathrm{th}} \right]$. The results of this trade-off are shown in Fig. \[fig:ASE\_coverage\_tradeOff\], where we plot the network coverage against the ASE for different frequency reuse factors and base station densities. ![ASE vs coverage trade-off for frequency reuse.
The trade-off curves have been plotted for BS densities equal to 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000 BSs/km$^2$, and compare the combined LOS/NLOS model with the single-slope one.[]{data-label="fig:ASE_coverage_tradeOff"}](ASE_vs_coverage_tradeOff_Freq_reuse_v2){width="0.52\columnwidth"} Firstly, we focus on the LOS/NLOS propagation; we can notice from this plot that, if we fix the BS density, higher frequency reuse factors enhance the network coverage but, on the other hand, cause a drop of the ASE. This is in line with what one would expect from frequency reuse. Nonetheless, if we have no constraint on the choice of the BS density, the ASE vs coverage trade-off improves as the frequency reuse factor increases. In fact, the trade-off curve we obtain for a given reuse factor $K$ lies to the top-right of the curve for reuse factor $K-1$. This means that, by increasing the reuse factor and the base station density at the same time, it is possible to achieve better performance than with a lower frequency reuse factor; note, though, that this is true when there is no constraint in terms of BS density. By looking at the single-slope PL curve in Fig. \[fig:ASE\_coverage\_tradeOff\], it appears that higher frequency reuse factors should still be preferred in order to improve the ASE vs coverage trade-off. However, unlike with the LOS/NLOS path loss, increasing the BS density enhances the ASE with no loss in terms of network coverage. Thus, modelling the signal propagation with the combined LOS/NLOS path loss yields different results than the single-slope PL.

Partially loaded networks {#sub:partially_loaded_results}
--------------------------

In this subsection we show the results of the cell densification for partially loaded networks with LOS/NLOS propagation.
Differently from the case of fully loaded networks, we recall that a fraction of the BSs may be inactive and, thus, the density of interfering BSs $\lambda_{\mathrm{I}}$ does not necessarily follow the trend of the BS density $\lambda$ (see Section \[subsec:partially\_loaded\_networks\] and Fig. \[fig:prob\_A\_vs\_density\]). In Fig. \[fig:outage\_partial\_load\] and \[fig:ASE\_partial\_load\] we show the outage probability and the ASE curves, respectively, as a function of the base station density for different user densities. To better understand the effect of the partial load on the network performance, we compare these curves with those for fully loaded networks. We report the values of the probability $p_{\mathrm{A}}$ of a BS being active over the outage and ASE curves. We observe that, as long as $p_{\mathrm{A}} \geq 0.9$, the deviation from the fully loaded network case is minimal. However, as soon as $\lambda$ approaches the user density $\lambda_{\mathrm{U}}$, the probability $p_{\mathrm{A}}$ drops and, as a consequence, the density of interfering BSs $\lambda_{\mathrm{I}}$ grows slowly with $\lambda$, up to the point where it saturates and converges to $\lambda_{\mathrm{U}}$ (see Fig. \[fig:prob\_A\_vs\_density\]). At the same time, as $\lambda$ increases, the distance from the UE to the serving BS tends to decrease, leading to an increment of the received power. Overall, the fact that $\lambda_{\mathrm{I}}$ saturates whereas the received power keeps growing as $\lambda$ increases has a positive impact on the SIR; as a result, the outage probability (see Fig. \[fig:outage\_partial\_load\]) and the spectral efficiency improve once the density $\lambda$ approaches or overcomes $\lambda_{\mathrm{U}}$. In regards to the ASE trend, we show the results in Fig. \[fig:ASE\_partial\_load\]. According to , the ASE trend is the combined outcome of the increase of the spectral efficiency and of the density of the active base stations.
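The piece-wise power-law approximation $\eta_{\mathrm{A}}(\lambda)=\eta_{\mathrm{A},0}\lambda^\alpha$ used throughout this section amounts to a linear regression in the log-log domain. A minimal sketch (with synthetic power-law ASE samples, not the paper's simulation data):

```python
import numpy as np

def fit_ase_exponent(density, ase):
    """Least-squares fit of log(ase) = alpha*log(density) + log(eta0);
    alpha is the slope of the ASE curve on a log-log plot."""
    alpha, log_eta0 = np.polyfit(np.log(density), np.log(ase), 1)
    return np.exp(log_eta0), alpha

lam = np.linspace(500.0, 10000.0, 30)      # BS density samples, BSs/km^2
ase = 0.9 * lam ** 0.46                    # synthetic ASE obeying an exact power law
eta0, alpha = fit_ase_exponent(lam, ase)   # regression recovers eta0 and alpha
```

Applied interval by interval, this is how a single fitted exponent such as $\alpha=0.46$ summarizes the steepness of an ASE curve over a given density range.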
As the density of base stations increases and approaches the user density $\lambda_{\mathrm{U}}$, the density of active base stations will converge to $\lambda_{\mathrm{U}}$ (see Fig. \[fig:outage\_partial\_load\]); given that the density of active BSs remains constant, the only contribution to the ASE increase will be given by the spectral efficiency improvement. As a matter of fact, we can see that, with respect to the case of fully loaded networks, the ASE curves show a lower gain when the density $\lambda$ approaches $\lambda_{\mathrm{U}}$. To assess the steepness of the ASE, we applied linear regression to the ASE curves in order to obtain the value of the parameter $\alpha$ corresponding to different intervals of $\lambda$; we specifically consider the approximation for the curve corresponding to $\lambda_{\mathrm{U}}=1000$ UEs/km$^2$ (red curve in Fig. \[fig:ASE\_partial\_load\]). These values are $\alpha = 1.15$ within the density range 1-50 BSs/km$^2$, $\alpha = 0.43$ within the density range 50-500 BSs/km$^2$ and $\alpha = 0.46$ within the density range 500-10000 BSs/km$^2$.

Transmit power per base station {#sub:TXPowerPerBS_results}
--------------------------------

In Fig. \[fig:transmit\_power\] we show the simulation results of the transmit power per base station $P_{\mathrm{TX}}(\lambda)$, which has been computed by using Algorithm \[algo:power\] as explained in Section \[sub:Computing-the-transmit\]. In this figure we compare the results we obtained using the *single slope* and the *combined LOS/NLOS* path loss models. ![Transmit power per BS. The power has been obtained with an SINR threshold $\gamma_{\mathrm{th}}=-8$dB and for tolerance $\Delta_{\theta}= 0.1\%$. The plot compares the TX power per BS for single-slope and LOS/NLOS path-loss for fully loaded networks.
It also provides the curve for frequency reuse factor 2 and 3 and for a partially loaded network with $\lambda_{\mathrm{U}}=1000$ UEs/km$^2$.[]{data-label="fig:transmit_power"}](power_per_BS_v2){width="0.52\columnwidth"} As we can see from this plot, the behaviour of the transmit power as a function of the BS density $\lambda$ is different in the two cases of single slope and combined LOS/NLOS propagation. With reference to Fig. \[fig:transmit\_power\], with single slope path loss, the power decreases linearly (in logarithmic scale) with the density; in the case of combined LOS/NLOS propagation, the transmit power exhibits different slopes as the base station density increases. We used linear regression to assess the slopes of the TX power curves (indicated by $\delta$, as explained in Section \[subsec:energy\_eff\_fully\_loaded\]) within different density intervals. With reference to the curve corresponding to fully loaded networks with LOS/NLOS propagation (solid-blue curve in Fig. \[fig:transmit\_power\]), the values of ($P_\mathrm{T}$, $\delta$) are $(9.3\cdot 10^{-9}, -1.9)$ within the $\lambda$ range 1-60 BSs/km$^2$, $(4.4\cdot 10^{-17}, -3.9)$ within the $\lambda$ range 60-300 BSs/km$^2$ and $(1.15\cdot 10^{-9}, -1.44)$ within the range 300-10000 BSs/km$^2$. The fact that the transmit power per base station decays more or less steeply with the density $\lambda$ depends on how quickly the interference power increases or decreases with $\lambda$. As we explained in Section \[sub:Computing-the-transmit\], the transmit power per base station $P_{\mathrm{TX}}(\lambda)$ has to be set so that the network is interference limited. Thus, if the channel attenuation between the interferer and the user decreases quickly as the density increases, a lower transmit power will be enough to guarantee that the interference power is greater than the noise power.
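Algorithm \[algo:power\] itself is not reproduced in this section; the following is only a hedged sketch of the underlying idea, using a toy model in which the aggregate interference is linear in the per-BS power, $I = P\cdot g$, for a density-dependent aggregate gain $g$ (an assumption made for illustration). One bisects on $P$ until the noise is at most a fraction $\Delta_\theta$ of the interference, i.e. the network is interference limited within the tolerance.

```python
def min_tx_power(agg_gain, noise, delta=1e-3, p_lo=0.0, p_hi=1e3, iters=200):
    """Smallest TX power P such that noise <= delta * (P * agg_gain),
    found by bisection. agg_gain models the aggregate interferer-to-user
    channel gain at the current BS density (toy linear model)."""
    for _ in range(iters):
        p_mid = 0.5 * (p_lo + p_hi)
        if noise <= delta * p_mid * agg_gain:
            p_hi = p_mid   # already interference limited: a lower power works
        else:
            p_lo = p_mid
    return p_hi

# Denser network -> larger aggregate gain -> lower required TX power per BS.
p_sparse = min_tx_power(agg_gain=1e-9, noise=1e-13)
p_dense = min_tx_power(agg_gain=1e-7, noise=1e-13)
```

In this toy model the required power is simply `noise / (delta * agg_gain)`, so how fast $P_{\mathrm{TX}}(\lambda)$ falls with density is set entirely by how fast the interferer-to-user channel gain grows, mirroring the discussion above.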
In other words, if the interferer-to-user channel attenuation tends to decrease quickly as the density increases, so does the transmit power and vice-versa. For instance, for $\lambda \in[60,300]$ BSs/km$^2$, the probability of having interferers in LOS with the user rises and, as a consequence, we have a lower attenuation of the channel between the interfering base station and the user. Hence, the $P_{\mathrm{TX}}(\lambda)$ which guarantees the interference-limited regime will also decrease steeply with $\delta=-3.9$ as $\lambda$ increases. On the contrary, for $\lambda>300$ BSs/km$^{2}$, most of the interferers will have already entered the LOS zone, meaning that the interferer-to-user channel attenuation drops less rapidly than for $\lambda<300$ BSs/km$^{2}$; for this reason, $P_{\mathrm{TX}}(\lambda)$ will also decrease less rapidly, with $\delta = -1.44$. Let us note that, with increasing reuse factor $N$, the TX power decreases, as indeed a smaller bandwidth is used and, thus, the noise power is lower.

Energy efficiency {#sub:energy_efficiency_results}
------------------

One of the most surprising outcomes of our study on LOS/NLOS propagation for ultra-dense networks is the effect of cell densification on the energy efficiency of the fully loaded network, of which we show the results in Fig. \[fig:energy\_efficiency\_v1\]. The difference between the energy efficiency with single-slope and with LOS/NLOS path-loss is noticeable. In the case of single-slope PL, due to the linear growth of the ASE, $\eta_{\mathrm{EE}}(\lambda)$ is a monotonically increasing function of the density $\lambda$ (see Section \[subsect:energy\_eff\_increasing\]). In the case of LOS/NLOS propagation, from Fig. \[fig:energy\_efficiency\_v1\] we observe that the energy efficiency exhibits a maximum, which is achieved for a given density $\lambda_0$. To explain this, we consider the case of frequency reuse $N=1$ (solid-blue curve in Fig.
\[fig:energy\_efficiency\_v1\]); from and with the values of the parameters $P_0$ (given in Table \[table:parameters\]), $P_\mathrm{T}$ and $\delta$ (given in Section \[sub:TXPowerPerBS\_results\]), and $\alpha$ (given in Section \[subsect:rate\_ASE\_density\]), the optimal point $\lambda_0$ is approximately 100 BSs/km$^2$. Beyond this point, the ASE gain is too low to compensate for the power consumption increase in the network, leading to a drop in terms of energy efficiency. From Fig. \[fig:energy\_efficiency\_v1\], we can note that frequency reuse reduces the energy efficiency compared to $N=1$. As a result of the lower ASE achieved at higher frequency reuse factors $N$, the energy efficiency drops as $N$ increases. In Fig. \[fig:energy\_efficiency\_part\_load\] we show the energy efficiency for partially loaded networks, for a user density $\lambda_{\mathrm{U}}$ of 1000 UEs/km$^2$. As we are dealing with partially loaded networks, we are interested in the BS densities $\lambda>\lambda_{\mathrm{U}}$, where the energy efficiency strongly depends on the power saving factor $\rho$ of the BSs in stand-by state. This is because the parameter $\rho$ determines the energy saving of the inactive BSs, which become more numerous as the density $\lambda$ increases. Depending on the value of $\rho$, according to , a local maximum may even occur at $\lambda^* = \frac{ \alpha \lambda_{\mathrm{U}} (1-\rho) }{ \rho(1-\alpha )}$. With $\rho=0.1$ and with the values of $\alpha$ given in Section \[sub:partially\_loaded\_results\], the local maximum turns out to be $\lambda^*\cong 7300$ BSs/km$^2$. For higher values of $\rho$, $\lambda^*$ is smaller than or too close to $\lambda_{\mathrm{U}}$ to be considered a reliable estimate of a maximum; we recall from Section \[subsec:energy\_eff\_part\_loaded\] that this estimate can be reckoned reliable only if $\lambda^*$ is sufficiently greater than $\lambda_{\mathrm{U}}$. In fact, we observe from Fig.
\[fig:energy\_efficiency\_part\_load\] that there is no local maximum beyond $\lambda_{\mathrm{U}}$ for $\rho=0.3$ or $0.6$.

Conclusions {#sect:conclusions}
===========

In this paper, we have proposed a stochastic geometry-based framework to model the outage probability and the Area Spectral Efficiency (ASE) of fully loaded and partially loaded Ultra-Dense Networks (UDNs), where the signal propagation accounts for LOS and NLOS components. We also studied the energy efficiency of UDNs resulting from this propagation model. As the main findings of our work, we have shown that, with LOS/NLOS propagation, massive cell densification causes a deterioration of the network coverage at high cell densities, if the network is fully loaded. Moreover, the ASE grows less steeply than a linear function at high cell densities, which implies that a larger number of base stations would be required to achieve a given throughput target with respect to the case of single slope path-loss. In regards to the energy efficiency, cell densification turns out to be inefficient for the network from an energetic point of view. In partially loaded networks, when the base station density exceeds that of the users, cell densification results in a coverage improvement. Overall, based on our findings, we can conclude that UDNs are likely to face coverage issues in highly crowded environments with many users, which represent the worst case scenario for ultra-dense networks.
[^1]: The parameters $K_{\mathrm{L}}$ and $K_{\mathrm{NL}}$ can either refer to the signal attenuations at distance $d=1$ m or $d=1$ km; this depends on the actual values given for the parameters of the channel model.

[^2]: Whenever there is no chance of confusion, we drop the subscript $n$ and use $x$ instead of $x_n$ for convenience of notation.
[^3]: As we recall from Section \[subsect:Mapping\], with LOS/NLOS propagation the serving BS might not be the closest one to the user. Nonetheless, this does not alter the validity of the explanation we are giving in this section.

[^4]: If both the serving BS and the interfering BS are in NLOS with the user, the path-loss exponents of the serving BS-to-user channel and of the interfering BS-to-user channels are the same and, therefore, the power of the interference and of the received signal varies with the same slope as a function of the density.
ITP-UH-22/96 September 1996\ OUTP-9633S cond-mat/9610222

[Exact solution of a $t$-$J$ chain with impurity]{}\ $^1$ [^1], [Fabian H.L. Eßler]{}$^2$[^2] [and Holger Frahm]{}$^1$[^3]\ $^1$[*Institut für Theoretische Physik, Universität Hannover\ D-30167 Hannover, Germany*]{}\ $^2$[*Department of Physics, Theoretical Physics, Oxford University\ 1 Keble Road, Oxford OX1 3NP, Great Britain*]{}\

ABSTRACT

> We study the effects of an integrable impurity in a periodic $t$-$J$ chain. The impurity couples to both spin and charge degrees of freedom and has the interesting feature that the interaction with the bulk can be varied continuously without losing integrability. We first consider ground state properties close to half-filling in the presence of a small bulk magnetic field. We calculate the impurity contributions to the (zero temperature) susceptibilities and the low temperature specific heat and determine the high-temperature characteristics of the impurity. We then investigate transport properties by computing the spin and charge stiffnesses at zero temperature. Finally the impurity phase–shifts are calculated and the existence of an impurity bound state in the holon sector is established.

PACS-numbers: 71.27.+a  75.10.Lp  05.70.Jk

Introduction
============

Theoretical investigations of strongly correlated electron systems have shown that the low temperature properties of such one dimensional systems have to be described in terms of a Luttinger liquid rather than a Fermi liquid. Of particular interest also from an experimental point of view are the transport properties of these systems in the presence of boundaries and potential scatterers.
Several attempts have been made to describe such a situation: the transport properties of a 1D interacting electron gas in the presence of a potential barrier have been first studied using renormalization group techniques [@lupe:74; @matt:74]. Triggered by a more recent study of this problem by Kane and Fisher [@kafi:92], different approaches such as boundary conformal field theory [@car:89] and an exact solution by means of a mapping to the boundary sine-Gordon model [@fesa:95; @felsa; @saleur; @tsv0] have been applied to this problem. In particular the low temperature properties of magnetic (Kondo) impurities in a Luttinger liquid [@aff:90; @afflu:91; @frjo:95; @frjo:96] have been investigated in great detail. In the present work we will investigate the effects of a particular type of potential impurity in a Luttinger liquid (where both spin- and charge degrees of freedom are gapless) by means of an exact solution through the Quantum Inverse Scattering Method (QISM). Attempts to study effects due to the presence of impurities in many-body quantum systems in the framework of integrable models have a long successful history [@ajo:84; @vewo:92; @miami; @bares:94; @kondo; @mck]. As far as lattice models are concerned, the basic mechanisms underlying these exact solutions are based on the fact that the QISM allows for the introduction of certain “inhomogeneities” into vertex models without spoiling integrability. Two approaches are possible and have been studied for various models:\ Since the ${\cal L}$-operators defining the *local* vertices satisfy a Yang-Baxter equation with an ${\cal R}$ matrix depending on the difference of spectral parameters only, one can build families of vertex models with site-dependent shifts of the spectral parameters.
This has been widely used in solving models for particles with an internal degree of freedom by means of the nested Bethe Ansatz [@yang:67] and, more recently, also for the construction of systems with integrable impurities [@bares:94] (for a particularly simple case see [@schm]).\ A second possibility is to introduce an impurity by choosing an ${\cal L}$-operator intertwining between *different* representations of the underlying algebra (possibly with an additional shift of the spectral parameter). This has first been used by Andrei and Johannesson to study the effect of a spin $S$ site in a spin-${1\over2}$ Heisenberg chain [@ajo:84]. Later this approach was used to study chains with alternating spins [@vewo:92; @miami]. Our work is based on the second approach. A novel feature as compared to [@ajo:84] is the presence of a continuously varying free parameter describing the strength of the coupling of the impurity to the host chain. The presence of the free parameter is based on properties of the underlying symmetry algebra of the model, which in our case is the graded Lie algebra $gl(2|1)$ (see below). Recently, vertex models and the corresponding quantum chains invariant under the action of graded Lie algebras have attracted considerable interest [@yue; @schlott; @eks:92a; @ek; @eks; @martins1; @martins2]. In addition to the well known supersymmetric $t$–$J$ model [^4] [@suth:75; @schl:87; @bares:91; @esko:92; @foka:93] (which is based on the fundamental three-dimensional representation of $gl(2|1)$) a model for electrons with correlated hopping has been constructed using the one-parametric family of four-dimensional representations of $gl(2|1)$ [@bglz:94; @bglz:95; @befr:95d; @bariev; @ghlz]. In this paper we study the properties of the supersymmetric $t$–$J$ model with one vertex replaced by an ${\cal L}$ operator acting on a four–dimensional quantum space. 
This preserves the supersymmetry of the model but at the same time lifts the restriction of no double occupancy present in the $t$-$J$ model at the impurity site. The free parameter associated with the four–dimensional representation of the superalgebra allows one to tune the coupling of the impurity to the host chain. Note that the present model allows for the study of a situation which in some respects is more general than the ones mentioned above: the impurity introduced here couples to [*both*]{} spin- and charge degrees of freedom of the bulk Luttinger liquid. The extension of our calculation to the case of many impurities is straightforward (and will in fact be used in some parts of the paper). The paper is organized as follows: In the following section we give a brief review of the Quantum Inverse Scattering Method and the Bethe Ansatz for $gl(2|1)$-invariant models. Then we study the ground state properties, low temperature specific heat and transport properties and show how they are affected by the presence of the impurity. Finally we compute the phase shifts acquired by the elementary excitations, holons and spinons, when scattered off the impurity.

Construction of the model
=========================

The impurity model is constructed by means of the graded version of the Quantum Inverse Scattering Method [@vladb; @Kul; @SklKul]. We start with the ${\cal R}$-matrix of the supersymmetric $t$-$J$ model [@esko:92] $${\cal R}_{tJ}={\cal R}_{33}=\frac{\lambda}{\lambda+i}\,\Pi+\frac{i}{\lambda+i}\,1 \ , \label{eq:Rtj}$$ with $\Pi^{a_1b_1}_{a_2b_2}= \delta_{a_1b_2}\delta_{a_2b_1}(-1)^{\epsilon_{b_1}\epsilon_{b_2}}$ being the graded permutation operator acting in the tensor product of two “matrix spaces” isomorphic to the three-dimensional “quantum space” spanned by $(\uparrow,\downarrow,0)$ with the respective grading $\e_\uparrow=\e_\downarrow=1$ and $\e_0=0$.
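The graded permutation operator is easy to check numerically. The sketch below (an illustration, not part of the paper) builds $\Pi$ on ${\mathbb C}^3\otimes{\mathbb C}^3$ with the grading $\e_\uparrow=\e_\downarrow=1$, $\e_0=0$, so that two fermionic indices are exchanged with a minus sign while $\Pi^2=1$ still holds:

```python
import numpy as np

eps = [1, 1, 0]   # grading of (up, down, 0): fermionic, fermionic, bosonic

# Pi (e_a (x) e_b) = (-1)^{eps_a eps_b} e_b (x) e_a  on C^3 (x) C^3
Pi = np.zeros((9, 9))
for a in range(3):
    for b in range(3):
        Pi[3 * b + a, 3 * a + b] = (-1) ** (eps[a] * eps[b])

def basis(a, b):
    """Basis vector e_a (x) e_b of C^3 (x) C^3."""
    v = np.zeros(9)
    v[3 * a + b] = 1.0
    return v
```

Since the sign appears twice when permuting back and forth, $\Pi^2=1$ independently of the grading; the grading only decides which exchanges pick up a minus sign.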
The corresponding $t$-$J$ ${\cal L}$-operator is given by ${\cal L}_{33}=\Pi {\cal R}_{33}$ and satisfies the following (graded) intertwining relation (7,3) (0,1.5)[(2,1)[2]{}]{} (0,2)[(2,-1)[2]{}]{} (1.8,0.75)[(-1,5)[0.4]{}]{} (6,2)[(-2,-1)[2]{}]{} (6,1.5)[(-2,1)[2]{}]{} (4.2,0.75)[(1,5)[0.4]{}]{} (3.1,1.75)[$=$]{} (-2.2,1.70)[${\cal R}_{33}(\lambda-\mu)$]{} (6.3,1.70)[${\cal R}_{33}(\lambda-\mu) \quad \ .$]{} (1.65,2.7)[${\cal L}_{33}(\lambda)$]{} (1.75,0.5)[${\cal L}_{33}(\mu)$]{} (4.7,2.7)[${\cal L}_{33}(\lambda)$]{} (4.4,0.5)[${\cal L}_{33}(\mu)$]{} In components this equation reads $${\cal R}_{33}(\lambda-\mu)^{a_1 c_1}_{a_2 c_2}\, {\cal L}_{33}(\lambda)^{c_1 b_1}\, {\cal L}_{33}(\mu)^{c_2 b_2}\, (-1)^{\epsilon_{c_2}(\epsilon_{c_1}+\epsilon_{b_1})} = {\cal L}_{33}(\mu)^{a_1 c_1}\, {\cal L}_{33}(\lambda)^{a_2 c_2}\, (-1)^{\epsilon_{a_2}(\epsilon_{a_1}+\epsilon_{c_1})}\, {\cal R}_{33}(\lambda-\mu)^{c_1 b_1}_{c_2 b_2} \ . \label{ir}$$ The monodromy matrix ${\cal T}_{tJ}$ is defined on the graded tensor product of quantum spaces and its matrix elements are given by $${\cal T}_{tJ}(\lambda)^{a\, \alpha_1 \dots \alpha_L}_{b\, \beta_1 \dots \beta_L}= {\cal L}_{33}(\lambda)^{a c_L}_{\alpha_L \beta_L}\, {\cal L}_{33}(\lambda)^{c_L c_{L-1}}_{\alpha_{L-1} \beta_{L-1}} \cdots {\cal L}_{33}(\lambda)^{c_2 b}_{\alpha_1 \beta_1}\, (-1)^{\sum_{j=2}^L(\epsilon_{\alpha_j} +\epsilon_{\beta_j})\sum_{i=1}^{j-1}\epsilon_{\alpha_i}} \ .$$ The hamiltonian of the $t$-$J$ model is then given as the logarithmic derivative of the transfer matrix $\tau(\lambda)=\mbox{str}[{\cal T}_{tJ}(\lambda)]:=\sum_{a=1}^3(-1)^{\epsilon_a} [{\cal T}_{tJ}(\lambda)]^{aa}$ at zero spectral parameter [@esko:92] $${\cal H}_{tJ}=-i\,\partial_\lambda \ln\tau(\lambda)\big|_{\lambda=0}-2{\cal N}_e =-\sum_{k=1}^{L}\left(\Pi_{k,k+1}-1\right) -2{\cal N}_e \ .$$ The ${\cal R}_{33}$-matrix can be constructed as the intertwiner of the three–dimensional fundamental representation of the superalgebra $gl(2|1)$.
An impurity model can then be constructed in a way analogous to the spin-$S$ impurity in a spin-$\frac{1}{2}$ Heisenberg chain [@ajo:84] by considering the intertwiner of the three–dimensional representation with the typical four–dimensional representation corresponding to an impurity site with four possible states ($\up,\da,\up\da=2,0$). In this way we obtain an ${\cal L}$-operator ${\cal L}_{34}$ with the same auxiliary space dimension as the ${\cal R}_{33}$-matrix satisfying the following equation: (7,3) (0,1.5)[(2,1)[2]{}]{} (0,2)[(2,-1)[2]{}]{} (6,2)[(-2,-1)[2]{}]{} (6,1.5)[(-2,1)[2]{}]{} (1.8,0.75)[(-1,5)[0.4]{}]{} (1.9,0.75)[(-1,5)[0.4]{}]{} (4.1,0.75)[(1,5)[0.4]{}]{} (4.2,0.75)[(1,5)[0.4]{}]{} (3.1,1.75)[$=$]{} (-2.2,1.70)[${\cal R}_{33}(\lambda-\mu)$]{} (6.3,1.70)[${\cal R}_{33}(\lambda-\mu) \quad \ .$]{} (1.65,2.7)[${\cal L}_{34}(\lambda)$]{} (1.77,0.5)[${\cal L}_{34}(\mu)$]{} (4.7,2.7)[${\cal L}_{34}(\lambda)$]{} (4.4,0.5)[${\cal L}_{34}(\mu)$]{} The double line denotes the four–dimensional space $(\uparrow,\downarrow,2,0)$ with the respective grading $\e_\uparrow=\e_\downarrow=1$ and $\e_2=\e_0=0$. So double occupancy is possible at the impurity. 
The ${\cal L}_{34}$-matrix is given by \_[34]{}= + , where ${\cal L}$, expressed in terms of projection operators (the so called ‘Hubbard Operators’) $X^{ab}=|a\rangle\langle b|$ with $a,b=\uparrow,\downarrow,2,0$, is given by [ $${\cal L}= \left( \begin{array}{ccc} X_2^{\downarrow \downarrow}+X_2^{00} & - X_2^{\downarrow \uparrow} & \sqrt{\a+1}X_2^{0 \uparrow}-\sqrt{\a}X_2^{\downarrow 2} \\ - X_2^{\uparrow \downarrow} & X_2^{\uparrow \uparrow}+X_2^{00} & \sqrt{\a}X_2^{\uparrow 2}+\sqrt{\a+1}X_2^{0 \downarrow} \\ \sqrt{\a+1}X_2^{\uparrow 0}-\sqrt{\a}X_2^{2 \downarrow} & \sqrt{\a}X_2^{2 \uparrow}+\sqrt{\a+1}X_2^{\downarrow 0} & \a+ X_2^{\uparrow \uparrow}+X_2^{\downarrow \downarrow}+2 X_2^{00}\\ \end{array} \right) \ ,$$]{} The parameter $\a>0$ is associated with the four–dimensional representation of $gl(2|1)$[^5] [@bglz:94; @maas:95]. By inserting an ${\cal L}_{34}$-operator at one site (for example site 2) we arrive at the following monodromy matrix for the impurity model (see Fig. \[fig:halg\]) \_[imp]{}()\^[a \_1 \_[L+1]{}]{}\_[b \_1 \_[L+1]{}]{}= [L]{}\_[33]{}()\^[ac\_[L+1]{}]{}\_[\_[L+1]{} \_[L+1]{}]{} [L]{}\_[33]{}()\^[c\_[L+1]{} c\_[L]{}]{}\_[\_[L]{} \_[L]{}]{} \_[34]{}()\^[c\_3 c2]{}\_[\_2 \_2]{} [L]{}\_[33]{}()\^[c\_2 b]{}\_[\_1 \_1]{} (-1)\^[\_[j=2]{}\^[L+1]{} (\_[\_j]{}+\_[\_j]{}) \_[i=1]{}\^[j-1]{}\_[\_i]{}]{} The hamiltonian is given by the logarithmic derivative of the transfer matrix at zero spectral parameter \_[tJ-Imp]{}=-i |\_[ł=0]{}  . \[hamil\] Due to the fact that ${\cal L}_{34}$ has no shift-point the hamiltonian contains three-site interactions of the impurity with the two neighbouring sites. This makes the local hamiltonian rather complicated. However, as is shown in Appendix C by direct computation, the hamiltonian constructed in the way described above is invariant under the graded Lie-algebra $gl(2|1)$. Also in the continuum limit only a comparably small number of terms relevant in the renormalization group sense will survive. 
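The Hubbard operators appearing in ${\cal L}$ are simple rank-one projectors, and their product rule $X^{ab}X^{cd}=\delta_{bc}X^{ad}$ can be verified numerically. A short illustrative sketch (not from the paper), with the impurity basis ordered as in the text, $(\uparrow,\downarrow,2,0)$:

```python
import numpy as np

STATES = ("up", "down", "2", "0")        # impurity basis ordering
E = np.eye(4)
KET = {s: E[i] for i, s in enumerate(STATES)}

def X(a, b):
    """Hubbard operator X^{ab} = |a><b| on the 4-dim impurity space."""
    return np.outer(KET[a], KET[b])
```

The diagonal operators $X^{aa}$ are the occupation projectors, and they resolve the identity on the impurity space, which is what makes expressions such as $1-2X_1^{00}$ in the hamiltonian well defined proj                     combinations.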
The impurity contributions to the hamiltonian are found to be 1-([L]{}\_[34]{}\^[-1]{}(0))\^[\_1 \_1]{}\_[\_2 2]{} \^[\_1 \_1]{}\_[\_3 \_3]{} ([L]{}\_[34]{}(0))\^[\_1 \_1]{}\_[\_2 \_2]{} (-1)\^[\[(\_[\_3]{}+\_[\_3]{})\_[2]{}\]]{} -i([L]{}\_[34]{}\^[-1]{}(0))\^[\_1 \_1]{}\_[\_2 2]{} ([L]{}\_[34]{}’(0))\^[\_1 \_1]{}\_[\_2 \_2]{} , which can be expressed in terms of the Hubbard operators as $$\begin{aligned} {\cal H}_{1,2,3} &=& 1+\frac{\a+1}{({\a \over 2}+1)^2}-{1 \over ({\alpha \over 2}+1)^2} \times \nonumber \\ &&\Big[ \tilde{\Pi}_{12} +\a X_1^{00}+H_{12}+{\a^2 \over 4}\left(1-2 X_1^{00}\right) \Pi_{13}(1-2 X_1^{00}) \nonumber \\ &&+\left(\tilde{\Pi}_{23}+X_2^{22}(1+\Pi_{13})\right) -{\a \over 2}\left[\left(1-2 X_1^{00}\right)\Pi_{13}\tilde{\Pi}_{12} +h.c. \right] \nonumber \\ &&-{\a \over 2}\left[\left(1-2 X_1^{00}\right)\Pi_{13}H_{12} +h.c.\right] + H_{12} \Pi_{13} H_{12}+\left(H_{12}\Pi_{13}\tilde{\Pi}_{12}+h.c.\right) \Big] \ .\nonumber \end{aligned}$$ Here $\tilde{\Pi}_{12}$ denotes a modified permutation operator \_[12]{}=(-X\_2\^[22]{}+(1-X\_2\^[22]{})\_[12]{}(1-X\_2\^[22]{})). The operator $H_{12}$ is given by [H\_[12]{}&=& X\_1\^[0]{}+X\_1\^[0]{}+[ h.c.]{} ]{} [From]{} the explicit form of the hamiltonian it is clear that the impurity can be thought of as being attached from the outside to the $t$-$J$ chain as is depicted in Fig. \[fig:halg\] In the limiting cases $\a \to \infty$ and $\a \to 0$ the impurity contribution simplifies essentially \_[0]{} H\_[1,2,3]{} &=& (1-n\_[2,]{}n\_[2,]{})(2-\_[12]{}-\_[23]{}) (1-n\_[2,]{}n\_[2,]{})+ n\_[2,]{}n\_[2,]{}(3-\_[13]{}) , \_ H\_[1,2,3]{} &=& 1-(1-2 X\_1\^[00]{})\_[13]{}(1-2X\_1\^[00]{}). \[hgl\] It is shown below that for $\a=0$ the impurity site behaves like an “ordinary” $t$-$J$ site in the ground state below half-filling. In the limit $\a=\infty$ the impurity becomes doubly occupied and causes the hopping amplitude between the two neighbouring $t$-$J$ sites to switch sign. 
The situation is equivalent to a decoupled impurity and a $t$-$J$ chain with $L$ sites and a twist of $-1$ in the boundary conditions. These two situations are shown in Fig. \[fig:hainf\] and Fig. \[fig:hnull\] respectively. For the half–filled case (two electrons at the impurity site) the impurity decouples from the host chain and we obtain the following hamiltonian (see Fig. \[fig:hhalb\]) H\_[1,2,3]{}\^[[1 2]{} - filling]{} = -\_[13]{}+1+ , \[hgr\] If we wish to consider the case of many impurities the above construction will go through as long as every ${\cal L}$-operator ${\cal L}_{34}$ is sandwiched by two ${\cal L}$-operators ${\cal L}_{33}$. The resulting hamiltonian will consist of a sum over terms of the form $H_{k,k+1,k+2}$ and $1-\Pi_{jj+1}$. The maximal number of impurities is thus bounded by the number of $t$-$J$ sites. Algebraic Bethe Ansatz ====================== The Bethe Ansatz equations can be derived as in [@esko:92]. The monodromy matrix is a $3 \times 3$ matrix of quantum operators acting on the entire chain \_[imp]{}= ( [ccc]{} A\_[11]{}(ł) & A\_[12]{}(ł) & B\_[1]{}(ł)\ A\_[21]{}(ł) & A\_[22]{}(ł) & B\_[2]{}(ł)\ C\_[1]{}(ł) & C\_[2]{}(ł) & D(ł) ) . Like in the case of the $t$-$J$ model one constructs the eigenstates of the hamiltonian by successive application of the $C_i$-operators on the bosonic reference state $|0\rangle$ |ł\_1,…,ł\_n|F=C\_[a\_1]{}(ł\_1)C\_[a\_2]{}(ł\_2) …C\_[a\_n]{}(ł\_n)|0F\^[a\_n …a\_1]{} \[state\] The values of the spectral parameters $\l_i$ and the coefficients $F^{a_n \dots a_1}$, which are constructed by means of a nested algebraic Bethe ansatz (NABA), are determined by requiring the cancellation of the so-called “unwanted terms”. For the construction of the NABA one only needs the intertwining relation (\[ir\]) (and the existence of the reference state). Due to the fact that the corresponding ${\cal R}$-matrix is the same for the impurity model and the $t$-$J$ model the resulting equations are very similar. 
The only modification of the Bethe equations arises from a different eigenvalue of the $A_{ii}$-operators on the reference state leading to the following form for the Bethe equations $$\begin{aligned} \left( \frac{\l_j-{i\over 2}}{\l_j+{i\over 2}} \right)^{L} \left( \frac{\l_j-{\alpha+1 \over 2}i}{\l_j+{\alpha+1 \over 2}i} \right) &=& \prod^{N_\downarrow}_{\alpha=1} \frac {\l_j-\lambda^{(1)}_\alpha-{i\over 2}} {\l_j-\lambda^{(1)}_\alpha+{i\over 2}}\ , \quad j=1,\ldots,N_e=N_\uparrow+N_\downarrow \ , \nonumber \\ \prod^{N_e}_{j=1} \frac{\lambda^{(1)}_\alpha-\l_j+{i\over 2}} {\lambda^{(1)}_\alpha-\l_j -{i\over 2}} &=& - \prod^{N_\downarrow}_{\beta=1} \frac{\lambda^{(1)}_\alpha-\lambda^{(1)}_\beta+i} {\lambda^{(1)}_\alpha-\lambda^{(1)}_\beta-i}\ , \quad \alpha=1,\ldots,N_\downarrow \ . \label{eq:bagi}\end{aligned}$$ The energy of a Bethe state of the form (\[state\]) with spectral parameters $\{\l_j,\ j=1\dots N_e\}$, $\{\l^{(1)}_\g\ , \g=1\dots N_\da\}$ in the grand canonical ensemble is given by E=\_[j=1]{}\^[N\_e]{}[1 ł\_j\^2+[14]{}]{} -N\_e -H  . Here $\mu$ is the chemical potential and $H$ is a bulk magnetic field. Properties of the ground state and excitations as well as the thermodynamics can now as usual be determined [*via*]{} an analysis of the Bethe equations. In order to analyze ground state and excitations we first have to discuss some details concerning the lattice length. We note that the length of the lattice is $L+1$, where $L$ is the length of the $t$-$J$ “host” chain. It is known that the ground state of the $t$-$J$ model changes if the length of the lattice is increased by one. The situation is similar in the present model. A unique antiferromagnetic ground state exists for odd (equal) numbers of up and down spins. If we require a smooth limit to the half-filled band with a doubly occupied impurity we find that the length of the host chain must be of the form $L=0\ {\rm mod}\ 4$.
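In the simplest sector the Bethe equations can be checked directly. With a single spin-up electron ($N_e=1$, $N_\downarrow=0$) the right-hand side of the first equation in (\[eq:bagi\]) is an empty product, and taking logarithms turns the equation into a strictly monotonic phase condition with one real root per quantum number. The sketch below (with illustrative values of $L$ and $\a$) finds the $L$ real roots by bisection and verifies them against the original product form.

```python
import numpy as np

L, alpha = 8, 1.5   # illustrative host length and impurity parameter

def phase(lam):
    """Continuous total phase of
       ((lam - i/2)/(lam + i/2))^L * ((lam - i(alpha+1)/2)/(lam + i(alpha+1)/2))
    for real lam; built from arctan2 so that it increases monotonically
    from -2*pi*(L+1) (lam -> -inf) to 0 (lam -> +inf)."""
    return -2.0 * (L * np.arctan2(0.5, lam)
                   + np.arctan2(0.5 * (alpha + 1.0), lam))

def bisect(f, a, b, n=200):
    """Plain bisection; f(a) and f(b) must have opposite signs."""
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def root(k):
    """Real rapidity with quantum-number offset k (1 <= k <= L)."""
    return bisect(lambda x: phase(x) + 2.0 * np.pi * k, -1e3, 1e3)

for k in range(1, L + 1):
    lam = root(k)
    lhs = ((lam - 0.5j) / (lam + 0.5j))**L \
        * (lam - 0.5j * (alpha + 1.0)) / (lam + 0.5j * (alpha + 1.0))
    assert abs(lhs - 1.0) < 1e-9   # solves the one-particle Bethe equation
print("verified", L, "one-particle Bethe roots")
```

The extra impurity factor simply adds one unit of phase winding, which is the $L\to L+1$ counting effect discussed above.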
If we consider the case of $n_i$ impurities and $N$ $t$-$J$ sites discussed above the left-hand-side of the first set of Bethe equations changes to $\left( \frac{\l_j-{i\over 2}}{\l_j+{i\over 2}} \right)^{N} \left( \frac{\l_j-{\alpha+1 \over 2}i}{\l_j+{\alpha+1 \over 2}i} \right)^{n_i}$ whereas the other equations remain unchanged.

Ground State Properties
=======================

In order to study ground state properties we need to know the configuration of $\l_j$’s and $\l^{(1)}$’s corresponding to the lowest energy state for given $\mu$ and $H$. It can be found in complete analogy to the $t$-$J$ model without impurity: for finite magnetic field the ground state is described in terms of two filled Fermi seas of

- complex $\l$-$\l^{(1)}$- “strings” $\l_\pm=\l^{(1)} \pm \frac{i}{2}$ [@schl:87] (where $\l^{(1)}$ are real and fill all vacancies between the Fermi rapidity $B$ and $\infty$ and $-\infty$ and $-B$). As we approach half-filling $B$ tends to $0$. Removing one $\l$-$\l^{(1)}$-“string” from the Fermi sea and placing it outside leads to a particle-hole excitation involving only charge degrees of freedom (“holon-antiholon” excitation)[@bares:91].

- real solutions $\l_j$ associated with the spin degrees of freedom. They fill all vacancies between $A$ and $\infty$ and $-\infty$ and $-A$. We note that $A\rightarrow\infty$ as $H\rightarrow 0$.

At zero temperature the dressed energies $\Psi(\l)$ and $\varepsilon(\l)$ of the excitations associated with charge and spin degrees of freedom respectively have to satisfy the following coupled integral equations [@schl:87] (ł) &=& 2a\_2(ł)-2-\_[B]{}\^ + \_[-]{}\^[-B]{} d a\_2(ł-) ()-\_[A]{}\^+ \_[-]{}\^[-A]{} d a\_1(ł-) () , (ł) &=& 2a\_1(ł)---\_[B]{}\^+ \_[-]{}\^[-B]{} d a\_1(ł-) () where $a_n(\l) = \frac{1}{2\pi} \frac{n}{\l^2+\frac{n^2}{4}}$.
The functions $\Psi$ and $\varepsilon$ are negative in the intervals $(-\infty,-B) \cup (B,\infty)$ and $(-\infty,-A) \cup (A,\infty)$ as they are monotonically decreasing functions of $|\l |$. The ground state energy per site is given by = -(0)-2+2+(-2+2- - \_[-B]{}\^B d a\_() () ). \[energy\] The first part on the r.h.s. is the contribution of the $L$ sites of the host $t$-$J$ chain, whereas the second term is the contribution from the impurity. Transforming the integral equations into the complementary integration intervals we obtain the following equations \_s(ł)&=& -2a\_1(ł)+H-\_[-A]{}\^[A]{}d a\_2(ł-) \_s() + \_[-B]{}\^B d a\_1(ł-) \_c() , \_c(ł)&=& -+\_[-A]{}\^[A]{}d a\_1(ł-)\_s() , \[dden2\] where $\eps_s=-\eps$ and $\eps_c=-\Psi$. The ground state energy per site is now given by &=& \_c(0)-2+2+ ( \_ + - 2) , \[Egs\] where \_=\_[-B]{}\^B dł \_c(ł) a\_(ł) . We now split (\[Egs\]) into two parts: the contribution of the $t$-$J$ host chain $e_{host}$ and the contribution of the impurity $E_{imp}$. Note that in (\[Egs\]) we divided by the length of the host chain instead of the length of the lattice. The impurity contribution to the ground state energy is then given by E\_[imp]{} = (\_ + - 2). \[Eimp\] Before we turn to an analysis of the impurity contributions to the ground state energy we give a brief review of the properties of the $t$-$J$ host chain. The ground state properties of the $t$-$J$ model have been studied in great detail in [@kaya:91; @bares:92]: below a critical density $n_c$, which is related to the magnetic field by H=4\^2([n\_c 2]{}), \[hc\] the integration boundary $B$ in is $\infty$ and the system behaves like a Fermi-liquid with all spins up. The particle density is a function of $A$ in this case. For densities larger than $n_c$ the system is a Luttinger liquid exhibiting spin and charge separation.
For general band-fillings the integral equations can be solved only numerically (in the almost empty band it is possible to reduce them to a system of coupled Wiener-Hopf equations but we do not pursue this avenue here). For densities slightly above $n_c$ ($B \gg 1$) and $B \gg A$ the integral equations can be solved analytically. Evaluating equation (\[dden2\]) in this case leads to: \_s(ł)&=& -2a\_1(ł)+H+(- [H 2]{}) (1- [1 B]{}) +[O]{} ([1 B\^3]{}) , \_c(ł)&=& -+\_[-A]{}\^[A]{}d a\_1(ł-)\_s() , \[Bgga\] Via $\eps_s(A)=0$ and $\eps_c(B)=0$ the integration boundaries depend on the magnetic field and the chemical potential in the following way A\_c=+, B\_c=+

Using the densities $\rho_c$ and $\rho_s$ (see (\[rn\])) the zero–temperature density and magnetization per site are given by: D\_[bulk]{}=1-(1-) (2A\_c) , m\_[bulk]{}=-(1+) (2A\_c) \[tjc\] Taking the limit $H \to 0$ with finite $B_c$ is not permitted in (\[tjc\]) as this would imply $A \to \infty$ in contradiction to $B \gg A$[^6]. For the almost half-filled band and a small magnetic field (\[dden2\]) can be solved analytically as well [@schl:87]: combining a Wiener–Hopf analysis for small magnetic field $(A\gg 1)$ and an iterative solution near half-filling $(B\ll 1)$ one obtains [@es:96] \_c(0)= --2a-2(2) B\^3 where $\mb=2\ln(2)-\mu,\ |\mb|\ll 1$ and a= [e]{}e\^[-2A]{}= (1-+…) , B\^2= . \[eq:a\]

From these results we can now determine bulk and impurity contributions to the ground state energy. In order to give a reference frame for the impurity properties we start by giving the results for the (bulk) zero-temperature magnetization per site, magnetic susceptibility, density and compressibility: m\_[bulk]{}&=& (1-) (1+) +… ,\_[H,bulk]{}&=& (1-)(1+ ) &+& (1-) +… ,D\_[bulk]{}&=& 1 - +… ,\_[c,bulk]{}&=& +… . \[bulk\] The physical implications of (\[bulk\]) have been discussed extensively in the literature [@kaya:91; @bares:92]. We merely note the divergences in the susceptibilities as we approach half-filling.
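As a concrete illustration of such a numerical solution, the following sketch discretizes coupled equations of the type (\[dden2\]) with trapezoid weights and solves them by fixed-point iteration. The Fermi points $A$, $B$, the field $H$ and the constant charge driving term are illustrative placeholders, not fitted values; only the spin driving term $-2\pi a_1(\l)+H$ is taken from the text.

```python
import numpy as np

def a(n, lam):
    """Standard kernel a_n(lam) = (1/2pi) n / (lam^2 + n^2/4)."""
    return (n / (2.0 * np.pi)) / (lam**2 + 0.25 * n * n)

def solve_dressed(A, B, g_s, g_c, n=401, tol=1e-12, max_iter=5000):
    """Fixed-point solution of equations of the form (dden2):
       eps_s(l) = g_s(l) - int_{-A}^{A} a_2(l-m) eps_s(m) dm
                         + int_{-B}^{B} a_1(l-m) eps_c(m) dm
       eps_c(l) = g_c(l) + int_{-A}^{A} a_1(l-m) eps_s(m) dm
    on trapezoid grids."""
    ls, lc = np.linspace(-A, A, n), np.linspace(-B, B, n)
    ws = np.full(n, ls[1] - ls[0]); ws[[0, -1]] *= 0.5
    wc = np.full(n, lc[1] - lc[0]); wc[[0, -1]] *= 0.5
    Kss = a(2, ls[:, None] - ls[None, :]) * ws
    Ksc = a(1, ls[:, None] - lc[None, :]) * wc
    Kcs = a(1, lc[:, None] - ls[None, :]) * ws
    es, ec = g_s(ls), g_c(lc)
    for _ in range(max_iter):
        es_new = g_s(ls) - Kss @ es + Ksc @ ec
        ec_new = g_c(lc) + Kcs @ es
        err = max(np.abs(es_new - es).max(), np.abs(ec_new - ec).max())
        es, ec = es_new, ec_new
        if err < tol:
            return ls, es, lc, ec
    raise RuntimeError("fixed point did not converge")

# illustrative parameters; the true driving terms fix the constants via mu, H
H = 0.2
ls, es, lc, ec = solve_dressed(A=2.0, B=0.5,
                               g_s=lambda l: -2 * np.pi * a(1, l) + H,
                               g_c=lambda l: np.full_like(l, 0.8))
assert np.allclose(es, es[::-1], atol=1e-8)       # even in lambda
assert abs(es[len(es) // 2] - es.min()) < 1e-12   # most negative at lambda = 0
```

The kernels have total weight below one on finite intervals, so the iteration is contractive; a direct linear solve of the discretized system would work equally well.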
Ground State Properties of the Impurity
---------------------------------------

The impurity contribution to the ground state energy is of the same form as the surface energy of an open $t$-$J$ chain with boundary chemical potential studied in [@es:96] (more precisely the impurity contribution is of the same form as the boundary chemical potential dependent part of the surface energy). The parameter $\a$ plays a role similar to the boundary chemical potential in the open chain. The leading order impurity contributions for small bulk magnetic fields close to half-filling can thus be calculated in the same way as in [@es:96] with the result \_ &=& \[epsi\] Together with this gives the leading impurity contribution to the ground state energy. Note that by differentiating the impurity contribution to the ground state energy w.r.t. $\a$ it is possible to evaluate the expectation values of various operators at the impurity. However due to the complicated structure of these operators it is difficult to extract useful information from the expectation values and we therefore do not present these calculations here. For densities below and slightly above $n_c$ analytical results for particle number and magnetization can be obtained as well, whereas for generic values of $\a$ the integral equations can be solved only numerically. Below we first present the aforementioned analytical results and then turn to the numerical solution of the integral equations.

### Large $\a$ close to half-filling

By taking the appropriate derivatives of the impurity contribution to the ground state energy we can evaluate the impurity contribution to the magnetization, particle number and susceptibilities. On physical grounds it is reasonable to assume that the impurity contributions to magnetization and particle number are concentrated in the vicinity of the impurity (see also Appendix B).
We find M\_&=&(1-) +… , \_[H,]{}&=& +(1-) +… , N\_&=&2-+…, &=&+… \[agr\] By inspection of (\[agr\]) we find that close to half-filling the impurity is on average almost doubly occupied and thus only weakly magnetized. The susceptibilities exhibit the same types of singularities as the bulk. In comparison to the bulk the following ratios are found: &=&  ,=  . The analytic (\[agr\]) and numerical (Fig. \[fig:ni\]) results show that the impurity is doubly occupied in the limit $\a \to \infty$. Thus the effective ground state hamiltonian in this case is a $t$-$J$ model of $L$ sites with twisted boundary conditions (see (\[hgl\])) as represented by Fig. \[fig:hainf\]. This is consistent with the twisted boundary conditions occurring in (\[eq:bagi\]) in this limit.

### Small $\a$ close to half-filling

For small values of $\a$ such that $\a\ll B$ we find the following results for the impurity contributions to magnetization, particle number and susceptibilities M\_&=& (1-) (1-) (1+) +… , \_[H,]{}&=& (1-)(1-) + (1-)+…, N\_&=& 1+- (1-) +… , &=& (1-) ++… \[akb\] Comparing this to (\[bulk\]) we see that the impurity behaves almost like an “ordinary” site of the lattice and the model reduces to a $t$-$J$ chain on $L+1$ sites in the limit $\a \to 0$ (see (\[hgl\])) as represented by Fig. \[fig:hnull\].

### Densities below the critical electron density $n_c$

Below the critical electron–density (\[hc\]) the particle number at the impurity–site is given by (see Fig.
\[fig:ni\]) N\_(n\_e)=1-( ) \[idifer\] In this regime only the states $|0\rangle $ and $| \uparrow \rangle$ are realized in the ground state, hence the hamiltonian ${\cal H}_{1,2,3}$ simplifies to \_[1,2,3]{}&=&X\_3\^+X\_1\^+ [2 1+[2]{}]{}X\_2\^ &-&[1 1+[2]{}]{}

### Densities slightly above the critical electron density $n_c$

Slightly above $n_c$ the impurity contributions to magnetization and particle number are found to be N\_&=& 1-() (1-)++…,M\_&=& -() (1+)-+… \[ncc\] Combining (\[ncc\]) and (\[tjc\]) we obtain the following result |\_[n\_e=n\_c\^+]{}= . \[abnc\]

### Numerical Results for general band filling and magnetic field

For general magnetic fields and band fillings the impurity magnetization and particle number can be determined by numerically solving the relevant integral equations. Some results for the impurity magnetization as a function of the magnetic field for different values of $\a$ and band fillings are shown in Fig. \[fig:impmag1\] and Fig. \[fig:impmag2\]. Only magnetic fields below the critical values are considered. For comparison the analytical results obtained above are included in the figures. The shape of the magnetization curves is entirely different from the ones obtained in the Kondo model where (as a function of $H$) there is a crossover from a linear behaviour for small fields to a constant value at large fields. From (\[idifer\]) one finds that the impurity magnetization at the critical magnetic field $H_c$ is an increasing function of $\a$. Numerical results show that this behaviour persists for $H<H_c$, small $\a$ and not too large densities (see Fig. \[fig:impmag1\] and Fig. \[fig:nhfest\]). On the other hand the impurity is almost doubly occupied and hence only weakly magnetized for large $\a$ and magnetic fields sufficiently smaller than $H_c$. This leads to a decrease of the magnetization with $\a$, which for small densities and small magnetic fields is shown in Fig. \[fig:impmag1\] and Fig. \[fig:nhfest\].
Near half–filling the magnetization is a decreasing function of $\a$ (see (\[agr\]) and (\[akb\])) for all magnetic fields not too close to $H_c$ (see Fig. \[fig:impmag2\] and Fig. \[fig:nhfest\]). The impurity particle number $N_\a$ as a function of band filling for various values of $\a$ is shown in Fig. \[fig:ni\]. For small $\a$ $N_\a$ at first increases linearly with band-filling and then curves upwards, reaching $2$ at half-filling. For very small $\a$ the deviation from the $N_\a=n_e$-behaviour occurs very close to half-filling. For large values of $\a$ and densities above $n_c$ $N_\a$ increases rapidly with band-filling, then evens out and eventually reaches $N_\a=2$. The crossover between these two regimes occurs roughly for $\a\approx 1$ for small magnetic field. The magnetic-field dependence of the impurity particle number is shown in Fig. \[fig:na5\], where $N_\a$ is shown for the case $\a=5$ and several values of the magnetic field. We see that apart from the shift in critical density $n_c$ the behaviour of $N_\a$ is qualitatively unchanged as $H$ is increased. The derivative of $N_\a$ at $n_c$ is given by (\[abnc\]).

Fermi velocities
----------------

As we will now show, the impurity contributions to the susceptibilities are related to a modification of the Fermi velocities by the impurity. The bulk Fermi velocities are defined by v\_c=  ,v\_s= , where the root densities $\rho_s(\l)$ and $\rho_c(\l)$ are solutions of the integral equations \_s(ł)&=& a\_1(ł)-\_[-A]{}\^[A]{}d a\_2(ł-) \_s() + \_[-B]{}\^B d a\_1(ł-) \_c() ,\_c(ł)&=& \_[-A]{}\^[A]{}d a\_1(ł-) \_s() . \[rn\] The Fermi velocities can be calculated for small magnetic field near half filling (using the same techniques as above) leading to the results v\_c=B+... , v\_s=(1--)+... where $H_0=\sqrt{8\pi^3/e}$. Near the critical electron density $n_c$ we obtain v\_c=-+... , v\_s=+...
The respective Fermi velocities of the entire system (bulk+impurity) $\tilde{v}_c$ and $\tilde{v}_s$ are given by \_== ,=s,c , \[fv\] where $A_s=A$, $A_c=B$ and $\rho^{(1)}_\b$ satisfy the following integral equations \_s\^[(1)]{}(ł)&=& -\_[-A]{}\^[A]{}d a\_2(ł-)\_s\^[(1)]{}() +\_[-B]{}\^[B]{}d a\_1(ł-)\_c\^[(1)]{}() \_c\^[(1)]{}()&=& a\_() +\_[-A]{}\^[A]{}d’ a\_1(-’) \_s\^[(1)]{}(’)For later convenience we define the ratios f\_=  . \[fb\] In the bulk the Fermi velocities are related to the susceptibilities [*via*]{} [@bares:92] \_[c,bulk]{}&=&, \_[H,bulk]{}&=&. \[chiba\] Here $Z_{\a \b}$ are the elements of the so-called dressed charge matrix (see e.g. [@woyn:89]) $${\bf Z}= \left( \begin{array}{cc} Z_{cc} & Z_{sc} \\ Z_{cs} & Z_{ss} \end{array} \right)= \left( \begin{array}{cc} \xi_{cc}(B) & \xi_{sc} (B) \\ \xi_{cs} (A) & \xi_{ss} (A) \end{array} \right) \ , \label{eq:z}$$ where $\xi_{\a\b}(\l)$ are solutions of the integral equations $$\left( \begin{array}{cc} \xi_{cc}(\vartheta) & \xi_{sc}(\vartheta) \\ \xi_{cs}(\lambda) & \xi_{ss}(\lambda) \end{array} \right)= \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)+ \left( \begin{array}{cc} 0 & \int_{-A}^{A}d\mu\ a_1(\vartheta-\mu) \\ \int_{-B}^{B}d\mu\ a_1(\l-\mu) & -\int_{-A}^{A}d\mu\ a_2(\l-\mu) \end{array} \right)* \left( \begin{array}{cc} \xi_{cc}(\mu) & \xi_{sc}(\mu) \\ \xi_{cs}(\mu) & \xi_{ss}(\mu) \end{array} \right)\ .$$ Replacing the velocities in (\[chiba\]) by $\tilde{v}_c$ and $\tilde{v}_s$ we see that the impurity contribution to the magnetic susceptibility is given by \_[H,]{}&=&. \[chihi\] There are two impurity contributions to the compressibility: one is due to the change of the electron density $n_e$, the other to the modification of the Fermi velocities \_[c,]{}=- \_[c,bulk]{}+. \[chici\] The second part can be identified with ${1 \over L n_e^2} \frac{\partial N_{\alpha}}{\partial \mu}$. 
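The $\xi$-equations are linear and lend themselves to the same discretization. The sketch below (illustrative grid sizes and Fermi points, not the parameters used for the figures) iterates the four coupled equations and reads off the dressed charge matrix at the integration boundaries; as a sanity check, $Z$ reduces to the identity when $A,B\to 0$, where all integrals vanish.

```python
import numpy as np

def a(n, lam):
    """Kernel a_n(lam) = (1/2pi) n / (lam^2 + n^2/4)."""
    return (n / (2.0 * np.pi)) / (lam**2 + 0.25 * n * n)

def dressed_charge(A, B, n=301, iters=4000, tol=1e-12):
    """Fixed-point solution of the xi-equations on trapezoid grids.
    Returns Z = [[xi_cc(B), xi_sc(B)], [xi_cs(A), xi_ss(A)]]."""
    th = np.linspace(-B, B, n)             # charge rapidities
    la = np.linspace(-A, A, n)             # spin rapidities
    wb = np.full(n, th[1] - th[0]); wb[[0, -1]] *= 0.5
    wa = np.full(n, la[1] - la[0]); wa[[0, -1]] *= 0.5
    K1_tl = a(1, th[:, None] - la[None, :]) * wa   # integral over [-A, A]
    K1_lt = a(1, la[:, None] - th[None, :]) * wb   # integral over [-B, B]
    K2_ll = a(2, la[:, None] - la[None, :]) * wa
    xcc, xsc = np.ones(n), np.zeros(n)
    xcs, xss = np.zeros(n), np.ones(n)
    for _ in range(iters):
        new = (1.0 + K1_tl @ xcs,
               K1_tl @ xss,
               K1_lt @ xcc - K2_ll @ xcs,
               1.0 + K1_lt @ xsc - K2_ll @ xss)
        err = max(np.abs(x - y).max() for x, y in zip(new, (xcc, xsc, xcs, xss)))
        xcc, xsc, xcs, xss = new
        if err < tol:
            return np.array([[xcc[-1], xsc[-1]], [xcs[-1], xss[-1]]])
    raise RuntimeError("no convergence")

# sanity check: for A, B -> 0 all integrals vanish and Z is the identity
assert np.allclose(dressed_charge(1e-8, 1e-8), np.eye(2), atol=1e-6)

Z = dressed_charge(2.0, 0.5)
print(Z)   # illustrative values at A = 2, B = 0.5
```

Feeding such a $Z$ together with the velocities into (\[chiba\]) then reproduces the bulk susceptibilities numerically.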
Let us consider the two cases for which we presented analytical results above in more detail:

- [$A \gg 1, B \ll 1$]{} corresponding to a small bulk magnetic field and an almost half-filled band. The leading contributions to the dressed charge matrix are given by [Z\_[cc]{} = 1+B + …&,& Z\_[sc]{}=+ …, Z\_[cs]{} = 2B &,& Z\_[ss]{}=(1-) +B+…\[dcl\] ]{} where $A,B,a$ are given by (\[eq:a\]). With the aid of the Wiener-Hopf technique we can obtain the ratios $f_c$ and $f_s$ as well. For $\a \gg B$ they are given by f\_c=+ +…, f\_s= +… . Inserting these results in (\[chihi\]) and (\[chici\]) we reproduce the results (\[agr\]) (to ${\cal O}({1 \over \ln(H)})$). For $\a \ll B$ we find f\_c=1+(1+[a (2)]{})+…, f\_s=1-+…which leads to the same results as (\[akb\]).

- [$0<n_e-n_c\ll 1$]{} corresponding to densities slightly above the critical density $n_c$. We find Z\_[cc]{} = 1+(1-[1 B\_c]{}) [A\_c B\_c\^2]{}+ … &,& Z\_[sc]{}= [A\_c B\_c\^2]{}+ …, Z\_[cs]{} = 1- [1 B\_c]{}&,& Z\_[ss]{}=1+…. \[dcnc\] The ratios $f_\b$ are given by f\_c=, f\_s=.

The Impurity at Finite Temperatures
===================================

The thermodynamics of the supersymmetric $t$-$J$ model was studied by Schlottmann in [@schl:87]. The TBA equations for the dressed energies are the same in the presence of the impurity in complete analogy with [*e.g.*]{} [@ajo:84], so that we can simply quote the result from [@schl:87] &=&2a\_1--+Ta\_1\*(1+e\^[-]{}) -T \_[n=1]{}\^a\_n\*(1+e\^[-]{})&=&-2+2a\_2+Ta\_1\*(1+e\^[-]{}) +T a\_2\*(1+e\^[-]{})\_n&=&nH-T(1+e\^[-]{})+ Ta\_n\*(1+e\^[-]{}) +T \_[m=1]{}\^\_[nm]{}\*(1+e\^[-]{}) , \[tba\] where \_[nm]{}(ł)=\_[-]{}\^de\^[-i]{}(||) (e\^[-|(m-n)|]{}-e\^[-|(m+n)|]{}). The bulk free energy is given by [@schl:87] F\_[bulk]{}= -(0)-2, whereas the impurity contribution to the free energy can be cast in the form F\_[imp]{}=-2--\_[-]{}\^dł  a\_(ł). We note that in the zero temperature limit this reproduces correctly the ground state energy.
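A useful benchmark for the finite-temperature description is the atomic limit of decoupled sites: a bulk $t$-$J$ site carries the three states $|0\rangle,|\!\uparrow\rangle,|\!\downarrow\rangle$ and the impurity additionally the doubly occupied state. The grand-canonical partition functions of such isolated sites (a hedged consistency check with illustrative values of $T$, $H$, $\mu$, not derived from the TBA equations) give $N_{imp}\to 2$, $M_{imp}\to 0$ and $m_{bulk}\to\frac{1}{2}\tanh\frac{H}{2T}$ as $\mu\to\infty$, and entropies $\ln 3$ resp. $\ln 4$ per site as $T\to\infty$.

```python
import numpy as np

def logZ_bulk(T, mu, H):
    """Isolated t-J site: states |0>, |up>, |down>."""
    return np.log(1.0 + np.exp((mu + H / 2) / T) + np.exp((mu - H / 2) / T))

def logZ_imp(T, mu, H):
    """Isolated impurity site: additionally the doubly occupied state."""
    return np.log(1.0 + np.exp((mu + H / 2) / T)
                      + np.exp((mu - H / 2) / T) + np.exp(2.0 * mu / T))

T, H, mu, d = 1.0, 0.3, 40.0, 1e-5   # large mu mimics half-filling
m_bulk = T * (logZ_bulk(T, mu, H + d) - logZ_bulk(T, mu, H - d)) / (2 * d)
N_imp  = T * (logZ_imp(T, mu + d, H) - logZ_imp(T, mu - d, H)) / (2 * d)
M_imp  = T * (logZ_imp(T, mu, H + d) - logZ_imp(T, mu, H - d)) / (2 * d)

assert abs(m_bulk - 0.5 * np.tanh(H / (2 * T))) < 1e-6
assert abs(N_imp - 2.0) < 1e-6 and abs(M_imp) < 1e-6
# T -> infinity entropies: ln 3 per bulk site, ln 4 for the impurity
assert abs(logZ_bulk(1e8, 1.0, 0.3) - np.log(3.0)) < 1e-6
assert abs(logZ_imp(1e8, 1.0, 0.3) - np.log(4.0)) < 1e-6
print("atomic-limit checks passed")
```

These are exactly the limits that the high-temperature expansion of the TBA equations is required to reproduce.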
In the high-temperature limit the TBA equations turn into algebraic equations that can be solved by Takahashi’s method [@taka:71]. The leading terms of the high-temperature expansion are given by &=&-T (1+e\^2) F\_[imp]{}&=& --T (1+e\^+e\^2). We see that these yield the correct values of the entropy of a system of $L$ sites with $3$ degrees of freedom and one site with four degrees of freedom in the limit $T\rightarrow\infty$. We also note that the parameter $\a$ enters only in a trivial way into the leading term of the high-temperature expansion. By taking the appropriate derivatives we can compute the mean values of particle number and magnetization D\_[bulk]{}&=& , m\_[bulk]{}= ,N\_[imp]{}&=&  , M\_[imp]{}=  . Half-filling corresponds to the limit $\mu\rightarrow\infty$, in which the impurity is on average doubly occupied and unmagnetized whereas the bulk exhibits a magnetization per site of $\frac{1}{2}\tanh\frac{H}{2T}$.

Low Temperature Specific Heat
=============================

In order to calculate the contributions of the impurity to the low–temperature specific heat we need to consider different characteristic regions (see Fig. \[fig:phase\]) of the $t$-$J$ model as it was done for the Hubbard model by Takahashi [@tak:74]. As in [@tak:74] we assume $H \gg T$ so we can neglect the effects of the string solutions. (An alternative approach which overcomes this restriction has recently been used in [@juku:96] to compute bulk thermodynamic properties in the $t$-$J$ model).

[**Region A:**]{} The region is characterized by $\mu < -{H \over 2}$. For $T=0$ the electron density is zero as $A=B=\infty$.
The low temperature free energy per site is given by = &=&-T \_[-]{}\^dł a\_2(ł)(1+e\^[-T]{}) -T \_[-]{}\^dł a\_1(ł)(1+e\^[-T]{}) && -[2 T ]{} \_[0]{}\^dł [1 ł\^2]{} (1+e\^[2 T]{}e\^[-2 ł\^2 T]{}) -[T ]{} \_[0]{}\^ dł [1 ł\^2]{}(1+e\^[+ H/2T]{} e\^[-1 ł\^2 T]{}) && -[1 ]{}T\^[3/2]{} e\^[2 T]{} -[1 2 ]{}T\^[3/2]{} e\^[+H/2 T]{} The impurity contribution to the low temperature free energy is F\_=F\_[imp]{}-F\_[imp]{}(T=0)=-T \_[-]{}\^ dł a\_(ł)(1+e\^[-T]{}) -[2 ]{}T\^[3/2]{} e\^[2 T]{} .

[**Region B:**]{} For $T=0$ this region, characterized by $-H/2 < \mu < H/2$, is the ferromagnetic phase with electron density varying as $0 < n_e < 1$. The right boundary line is defined by $B=\infty$, the left one by $A=B=\infty$. The low temperature free energies are given by -[T\^2 6]{}[1 v\_s]{} F\_-[T\^2 6]{}[f\_s v\_s]{} .

[**Region C:**]{} In this region, characterized by $0< A,B < \infty $, the electron density varies between $0 < n_e < 1$ at zero temperature. The right boundary line defined by $B=0$ can be calculated with the aid of (\[eq:a\]) for small magnetic field and with an iterative solution of (\[dden2\]) for $H \stackrel{<}{\sim} 4$. The low temperature free energies are given by -[T\^2 6]{} F\_-[T\^2 6]{}

[**Region D:**]{} This region is characterized by $\mu > H/2 > 2$. For $T=0$ we obtain the ferromagnetic half–filled band. The low temperature free energy is given by = -T \_[-]{}\^dła\_1(ł)(1+e\^[-\_s T]{}) -[4 T ]{} \_[0]{}\^dł (1+e\^[4-H T]{}e\^[-16 ł\^2 T]{}) -[1 2 ]{}T\^[3/2]{} e\^[4-H T]{} . The impurity contribution to the low temperature free energy is F\_=-T \_[-]{}\^dł a\_(ł) (1+e\^[-\_c T]{})-T e\^[H/2-T]{} .

[**Region E:**]{} For $T=0$ this region is half-filled and non-ferromagnetic. The low temperature free energy of the bulk is given by -[T\^2 6]{}[1 v\_s]{} The impurity contribution in this region cannot be calculated in closed form.
In the two limiting cases $A \ll 1$ and $A \gg 1$ we obtain the following results F\_ -[2 ]{}T\^[3/2]{} { [ll]{} [1 ]{} e\^[2(2)-+2a T]{} & A 1\ e\^[H/2-+[2 3 ]{} (4-H)\^[3/2]{} T]{} & A 1 .

[**Wilson ratio in Region C:**]{} The specific heat of the bulk and the impurity in region C are given by C\_[v,bulk]{}=(+) C\_[v,]{}=(+) \[cv\] In deriving these results we assumed that $H\gg T$. Defining R= = , and assuming that the limits $T\rightarrow 0$ and $H\rightarrow 0$ commute[^7], we can calculate a “Wilson ratio” R\_W=\_[H 0]{} R= . Unlike the case of the Kondo model (where spin and charge degrees of freedom decouple and the impurity couples only to the spin) there is no reason to expect $R_W$ to be universal for the present model. From our analytical calculations we find for $B\ll 1$, $B \ll \a$ that $R_W=1-n_e\ll 1$. Near the empty band the ratio $R_W$ tends to one because $v_c$ approaches zero. Numerical calculations show that the limiting value one is most rapidly approached for small values of $\a$ (see Fig. \[fig:rw\]). For comparison we quote the result for a Kondo impurity in a Luttinger liquid [@frjo:96], for which $R_W=\frac{4}{3}(1+\frac{v_s}{v_c})$.

Transport Equations
===================

Following Shastry and Sutherland [@shas:90] (see also [@zyv]) we will now calculate spin- and charge stiffnesses from the finite-size corrections to the ground state energy of the model with twisted boundary conditions. For the $t$-$J$ model the bulk stiffnesses were determined in [@kaya:91]. Our analysis follows closely the discussion given in [@bares:92]. Our goal is to evaluate the ground state energy as a function of the twist angles $\phi_\uparrow$ and $\phi_\downarrow$.
Imposing twisted boundary conditions the BAE (\[eq:bagi\]) are modified in the following way $$\begin{aligned} \left( \frac{\l_j-{i\over 2}}{\l_j+{i\over 2}} \right)^{L} \left( \frac{\l_j-{\alpha+1 \over 2}i}{\l_j+{\alpha+1 \over 2}i} \right) &=& e^{-i \phi_\uparrow} \prod^{N_\downarrow}_{\gamma=1} \frac {\l_j-\lambda^{(1)}_\gamma-{i\over 2}} {\l_j-\lambda^{(1)}_\gamma+{i\over 2}}\ , \quad j=1,\ldots,N_e \ , \nonumber \\ \prod^{N_e}_{j=1} \frac{\lambda^{(1)}_\gamma-\l_j+{i\over 2}} {\lambda^{(1)}_\gamma-\l_j -{i\over 2}} &=& - e^{i(\phi_\uparrow-\phi_\downarrow)}\prod^{N_\downarrow}_{\beta=1} \frac{\lambda^{(1)}_\gamma-\lambda^{(1)}_\beta+i} {\lambda^{(1)}_\gamma-\lambda^{(1)}_\beta-i}\ , \quad \gamma=1,\ldots,N_\downarrow \ . \label{eq:bagitwi}\end{aligned}$$ For technical reasons it is convenient to use a different representation of the BAE first introduced by Sutherland for the $t$-$J$ model without impurity [@suth:75]. This can be obtained by a particle–hole transformation in the space of rapidities which is done in Appendix A. The final BAE are given by [^8] [$$\begin{aligned} \left( \frac{\l_j+{i\over 2}}{\l_j-{i\over 2}} \right)^{L} &=& -e^{i\phi_s} \prod^{M^{(1)}+N_\downarrow-1}_{k=1} \frac {\l_j-\l_k+i} {\l_j-\l_k-i}\ \prod^{M^{(1)}}_{\g=1} \frac{\l_j-\l_\g^{(1)}-{i\over 2}}{\l_j-\l_\g^{(1)}+{i\over 2}}, \quad j=1,\ldots, L+1-N_\uparrow\ , \nonumber \\ \prod^{M^{(1)}+N_\downarrow-1}_{j=1} \frac{\l_\g^{(1)}-\l_j-{i\over 2}}{\l_\g^{(1)}-\l_j+{i\over 2}} &=& e^{-i\phi_c} \frac{\l_\g^{(1)}+i{\alpha \over 2}}{\l_\g^{(1)}-i{\alpha \over 2}} \quad \g=1,\ldots,M^{(1)}=L+2-N_\uparrow-N_\downarrow \ , \label{eq:bagist}\end{aligned}$$ ]{} where the twist–angles are given by $\phi_s=\phi_\uparrow-\phi_\downarrow$ and $\phi_c=\phi_\downarrow$ [@bares:92]. 
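In the same one-electron sector as before ($N_e=1$, $N_\downarrow=0$), the only effect of the twist in (\[eq:bagitwi\]) is to shift the monotonic phase condition by $\phi_\uparrow$. A short sketch (illustrative $L$ and $\a$; the bare energy $-2\pi a_1(\l)$ is used with additive constants dropped) solves the twisted equation and exhibits the flux dependence of the root:

```python
import numpy as np

L, alpha = 8, 1.5
c = 0.5 * (alpha + 1.0)   # impurity zero/pole at lam = +/- i(alpha+1)/2

def phase(lam):
    return -2.0 * (L * np.arctan2(0.5, lam) + np.arctan2(c, lam))

def bisect(f, a, b, n=200):
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def twisted_root(k, phi):
    """One-electron root of
       ((lam-i/2)/(lam+i/2))^L * ((lam-ic)/(lam+ic)) = exp(-i*phi)."""
    return bisect(lambda x: phase(x) + phi + 2.0 * np.pi * k, -1e3, 1e3)

for phi in (0.0, 0.3, -0.7):
    lam = twisted_root(4, phi)
    lhs = ((lam - 0.5j) / (lam + 0.5j))**L * (lam - 1j * c) / (lam + 1j * c)
    assert abs(lhs - np.exp(-1j * phi)) < 1e-9

# the flux moves the rapidity continuously; with the bare energy
# e(lam) = -2*pi*a_1(lam) = -1/(lam^2 + 1/4)   (constants dropped)
# the O(phi^2) shift of E is the quantity entering the stiffness below
e = lambda lam: -1.0 / (lam**2 + 0.25)
d = 1e-3
curvature = (e(twisted_root(4, d)) - 2.0 * e(twisted_root(4, 0.0))
             + e(twisted_root(4, -d))) / d**2
print("d^2 E / d phi^2 =", curvature)
```

In the full many-body problem the corresponding curvature of the ground state energy is what defines the charge and spin stiffnesses later in this section.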
The equations can be simplified by making use of the ‘string-hypothesis’[^9], which states that for $L\rightarrow\infty$ all solutions are composed of real $\1l_\g$’s whereas the $\l$’s are distributed in the complex plane according to the description ł\^[n,j]{}\_= ł\^[n]{}\_+ i(-j) ,j=1…n , \[strings\] where $\b=1\ldots M_n$ labels different ‘strings’ of length $n$ and $\l^{n}_\b$ is real. The imaginary parts of the $\l$’s can now be eliminated from via . Taking the logarithm of the resulting equations (for $M_n$ strings of length $n$ and $M^{(1)}$ $\l^{(1)}$’s (note that $\sum_{n=1}^\infty nM_n= L+1-N_\uparrow$)) we arrive at I\^n\_&=& () - \_[(m)]{}\_[mn]{}(ł\^n\_-ł\^m\_) + \_[=1]{}\^[M\^[(1)]{}]{}() +, =1…M\_n J\_&=& \_[(n)]{}(l\_-ł\^n\_) + ()+ , =1…M\^[(1)]{} , \[baelog\] where $I^n_\b$ and $J_\g$ are integer or half-odd integer numbers, $\theta(x)=2\arctan(2x)$ and \_[n,m]{}(x) = (1-\_[m,n]{})([x]{}) + 2 ([x]{})+…+2 ([x]{}) + ([x]{}) . For vanishing twist angles the ranges of the “quantum numbers” $I^{n}_\b$ and $J_\g$ are given by |I\^n\_|&& (L+M\^[(1)]{}+M\_n-2\_[m=1]{}\^{m,n}M\_m-1)  ,|J\_|&& (\_[n=1]{}\^M\_n -1) . \[int\] Ground state and excitations can now be constructed by specifying sets of integer (half-odd integer) numbers $I^n_\b$ and $J_\g$ and turning equations into sets of coupled integral equations in the thermodynamic limit. The antiferromagnetic ground state for zero twist angles is obtained by filling consecutive quantum numbers $I^1_\b$ and $J_\g$ symmetrically around zero, which corresponds to filling two Fermi seas of spin and charge degrees of freedom respectively. The effect of an infinitesimally small flux is to shift the distribution of roots ([*i.e.*]{} the rapidities in the Fermi seas) by a constant amount. This shift leads to a twist-angle dependent contribution to the ground state energy. 
The ground state (in the presence of flux) is described in terms of the root densities $\rho_\a(\l)$, which are solutions of the integral equations \_s(ł)&=& a\_1(ł)-\_[\_s\^-]{}\^[\_s\^+]{}da\_2(ł-) \_s() +\_[\_c\^-]{}\^[\_c\^+]{}d a\_1(ł-)\_c()+[O]{}(L\^[-2]{}) \_c()&=& \_[\_s\^-]{}\^[\_s\^+]{}d’a\_1(-’) \_s(’) + +[O]{}(L\^[-2]{}). \[gss\] Here $\tilde{\La}_\b^\pm$ are the Fermi points for the finite system in the presence of the flux. We denote the Fermi points for the infinite system [*without*]{} flux by $\pm\La_{\b,0}$ (note that the distribution of roots is symmetric around zero in that case). For further convenience we define the quantities D\_= - \_[\_\^+]{}\^ - \_[-]{}\^[\_\^-]{} d\_()  . In order to evaluate the stiffnesses we need to consider infinitesimal flux only, which yields a correction of order $\frac{1}{L}$ to the ground state energy. Following through the standard steps [@woyn:89; @frko:90] and taking into account the $\frac{1}{L}$-corrections from the infinitesimal flux we obtain[^10] E(\_s,\_c,)=L e(\_[,0]{})+f(\_[,0]{})-(v\_s+v\_c)+2L D\^T [**Z**]{} [**V**]{} [**Z**]{}\^T D +o(), \[fscorr\] where ${\bf Z}$ is the dressed charge matrix (\[eq:z\]), $D=(D_c,D_s)^\top$ and $V={\rm diag}(v_c,v_s)$. Here $e(\La_{\b,0})$ and $f(\La_{\b,0})$ are the ground state energy per site and impurity energy in the infinite system without flux. The quantity $D_\b$ is given by D\_= . \[dk\] In order to study the charge and spin conductivities and the respective currents we consider the energy-difference $\Delta E=E(\phi_s,\phi_c,\a)-E(0,0,\a)$. It can be cast in the form E= [1 L]{} \_D\_ \_  + [subleading terms]{}. \[De\] According to [@shas:90] the charge (spin) stiffness $D^{(\rho)}$ ($D^{(\sigma)}$) is defined as D\^[()]{}=[L 2]{} [\^2 \^2]{}E(\_c=,\_s=0,) ,D\^[()]{}=[L 2]{} [\^2 \^2]{}E(\_c=,\_s=-2,). \[dd\] Using the expressions in and we find that the stiffnesses are not modified by the impurity to leading order in $\frac{1}{L}$. 
It is clear from the above analysis that there are corrections due to the impurity in the subleading terms. The precise form of these expressions is not of particular interest from a physical point of view and as the extension of the above finite-size analysis to the subleading orders is rather difficult we refrain from determining them. The important point is that despite the presence of the impurity the stiffnesses are still finite and the dc-conductivity is thus infinite. We believe that this fact is due to the integrability and the related absence of backscattering. This means that the integrable impurity considered here is of a completely different nature than the “weak link”-type potential impurity considered in [@lupe:74; @matt:74; @kafi:92; @tsv0]: as the electron-electron interactions are repulsive in the supersymmetric $t$-$J$ model a weak link drives the system to a strong coupling fixed point characterized by the vanishing of the conductivity.

Stiffnesses for finite density of impurities
--------------------------------------------

Let us now turn to transport properties for the system with a finite density of impurities. The necessary steps are the same as above, the main difference being the change of integral equations describing the ground state from (\[gss\]) to \_s(ł)&=& (1-n\_i)a\_1(ł)-\_[\_s\^-]{}\^[\_s\^+]{}d  a\_2(ł-) \_s()+\_[\_c\^-]{}\^[\_c\^+]{} d a\_1(ł-) \_c() ,\_c()&=& n\_ia\_()+\_[\_s\^-]{}\^[\_s\^+]{}d’a\_1(-’) \_s(’) , \[gss2\] where $n_i$ is the concentration of impurities. As the integral equations for the dressed energies remain unchanged, the dressed charge is not modified and $A(\mu,H)$ and $B(\mu,H)$ are not changed. However, the presence of a finite density of impurities leads to changes in the electron density, which is now given by $n_e=(1-n_i)D_{bulk}+n_i N_\a$ and the Fermi velocities, which are found to be of the form \_= . Here $v_\b$ are the velocities of the normal $t$-$J$-chain and the $f_\b$ are defined in (\[fb\]).
[From]{} (\[dd\]) the following form of the stiffnesses is easily deduced D\^[()]{}= [1 2 ]{}(\_c Z\^2\_[cc]{}+\_s Z\^2\_[cs]{}), D\^[()]{}= [1 2 ]{}(\_c (Z\_[cc]{}-2Z\_[sc]{})\^2 +\_s (Z\_[cs]{}-2Z\_[ss]{})\^2 ). \[stiff\] Using the results of the above sections we can evaluate these expressions analytically for small magnetic field close to “maximal filling” ($t$-$J$ sites singly occupied, impurities doubly occupied) and near the critical electron density $n_c$. This is done in the following two subsections. Finally we present numerical results for the general case.

### Stiffnesses for small magnetic field near maximal filling

Close to maximal filling and for $\a \gg B$ the leading term of the charge stiffness is found to be D\^[()]{}\_c Z\^2\_[cc]{}= +…\[drho\] Combining (\[drho\]) with the result for the electron density we obtain the following limiting value for the slope of the charge stiffness as a function of density |\_[n\_e=1+n\_i]{}=-[3 (3) 8 \^2(2)]{} [1 \^2]{}+…In the strong-coupling limit this turns into \_[D\^[()]{} n\_e]{}|\_[n\_e=1+n\_i]{}= -[3 (3) 8 \^2(2)]{} [1 (1-n\_i)\^2]{} =[1 (1-n\_i)\^2]{} [D\^[()]{}\_[tJ]{} n\_e]{}|\_[n\_e=1]{} \[dgr\] The leading term of the spin stiffness for $B \ll 1$ and $\a \gg B$ is given by D\^[()]{}\_s (Z\_[cs]{}-2Z\_[ss]{})\^2 =[v\_s ]{}(1-[1 4 (H/H\_0)]{})\^2 [1 1+n\_i(f\_s-1)]{} . In the limit $\a \to \infty$ we find |\_[n\_e=1+n\_i]{}=(1-[1 2 (H/H\_0)]{}) [1 (1-n\_i)\^2]{} =[1 (1-n\_i)\^2]{}[D\^[()]{}\_[tJ]{} n\_e]{}|\_[n\_e=1]{} . \[dgs\]

### Stiffnesses slightly above $n_c$

As pointed out above it is possible to derive analytic expressions for the stiffnesses for densities slightly above $n_c$. However the resulting expressions are found to be rather complicated so that we refrain from listing them here. The derivative of the spin–stiffness with respect to the electron density at $n_e=n_c^+$ is always positive, taking its maximum at $\a=0$ and its minimum $0$ in the limit $\a \to \infty$.
The derivative of the charge–stiffness with respect to the electron density at $n_e=n_c^+$ changes sign as a function of $\a$ (see Fig. \[fig:drs\]). ### Numerical Results The results for the charge stiffness in systems with impurity density $n_i=0.2$ and two different values of the bulk field $H$ are depicted in Fig. \[fig:dc2\] and Fig. \[fig:dc2hp5\]. The charge stiffness for $n_i=0.5$ is shown in Fig. \[fig:dcp5\]. For comparison we plot the result for the charge–stiffness $D^\rho_{tJ}$ for the $t$-$J$ model without impurities as calculated in [@bares:92]. We note that the maximal allowed band-filling is larger than one as the impurity sites can be doubly occupied. We see that for small band-fillings above the critical density the charge-stiffness is reduced as compared to the pure $t$-$J$ case. For larger band fillings the stiffness is found to increase in the presence of impurities. This is easily understood: due to the constraint of single occupancy the stiffness vanishes as we approach half-filling in the $t$-$J$ chain. The impurity sites can be doubly occupied, which gives the electrons “space to move” and leads to an increase in the stiffness. For large fillings the stiffness increases with increasing $\a$ because (as can be deduced from the $\a\rightarrow\infty$ limit) the impurity sites become (on average) closer and closer to being doubly occupied, which again makes it easier for the electrons to move along the $t$-$J$ sites. Last but not least, let us discuss the limiting case $\a=\infty$ at impurity density $n_i$, [*i.e.*]{} there are $Ln_i$ impurity sites and $L(1-n_i)$ $t$-$J$ sites. For electron densities $n_e$ smaller than $n_i$ the (spin-up) electrons (in the ground state) occupy only impurity sites which do not interact with the $t$-$J$ sites. Thus the stiffness is identically zero.
For electron densities $n_i<n_e<n_i+(1-n_i)n_c$ the saturated ferromagnetic ground state on the $t$-$J$ sites is formed whereas all impurity sites are singly occupied. The stiffness is completely determined by the $t$-$J$ sites. For densities in the interval $n_i+(1-n_i)n_c<n_e<2n_i+(1-n_i)n_c$ the impurity sites become doubly occupied and the stiffness does not change. For $n_e>2n_i+(1-n_i)n_c$ all impurity sites are doubly occupied and the stiffness follows (up to a rescaling by $\frac{1}{1-n_i}$) the $t$-$J$ curve above the critical density $n_c$ (see (\[dgs\]) and (\[dgr\])). The spin-stiffness for impurity density $0.2$ is shown in Fig. \[fig:ds2\]. We see that the stiffness is decreased at low fillings (this decrease is more pronounced for larger values of $\a$) and approaches the “pure” $t$-$J$ value for large fillings. The behaviour in the limit $\a \to \infty$ is the same as for the charge stiffness. Impurity Phase-Shifts ===================== In this section we evaluate the phase shifts acquired by the elementary excitations, holons and spinons, when scattering off the impurity. The results give a good measure of the effects of the impurity on excited states. In particular we can infer from the phase-shifts how the impurity couples to spin and charge degrees of freedom. We start by briefly reviewing some important facts about the low-lying excitations in the $t$-$J$ model [@bares:91]. The elementary excitations are collective modes of spin or charge degrees of freedom. The spin excitations are called spinons and carry spin $\pm \frac{1}{2}$ and zero electric charge. They are very similar to the spin-waves in the Heisenberg XXX chain. The charge excitations are called holons and antiholons, carry zero spin and charge $\mp e$. Thus holons correspond to physical holes. At half-filling only holons can be excited as the charge Fermi sea is completely empty. The excitation energies are given by $\e_{c,s}$ defined in (\[dden2\]). 
The respective physical momenta are given in terms of the solutions of the following set of coupled integral equations \_s(ł)&=& -(ł)-\_[-A]{}\^Ad a\_2(ł-) [p]{}\_s() + \_[-B]{}\^B d a\_1(ł-) [p]{}\_c() ,\_c(ł)&=& \_[-A]{}\^A d a\_1(ł-) [p]{}\_s(). \[mtm\] The momentum of [*e.g.*]{} a holon-antiholon excitation is given by $P_{c{\bar c}}={\tt p}_c(\La^p)-{\tt p}_c(\La^h)$ where $\La^p$ and $\La^h$ are the spectral parameters of the holon and antiholon respectively. We would thus define the physical holon momentum as $p_c(\La^p) = {\tt p}_c(\La^p)-{\tt p}_c(B)$. At half-filling the spinon ($p_s$) and holon ($p_c$) momenta are given by p\_s(ł)&=& 2((ł))- ,p\_c(ł)&=& +i ( ). \[mtmhf\] The scattering matrix has been determined by means of Korepin’s method [@vladb; @kor:79] in [@bares:91]. At half-filling the spinon-spinon S-matrix $S(\l)$, the spinon-holon ($sc$) and holon-holon ($cc$) scattering phases are given by S(ł) &=& i (I - P) ,(i\_[sc]{}(ł))&=& -i  ,(i\_[cc]{}(ł))= \[smhf\] where $I$ and $P$ are the $4\times 4$ identity and permutation matrices respectively. Below half-filling the S-matrices are given in terms of the solution of integral equations. The impurity phase-shifts can be computed by the standard method of Korepin [@kor:79] and Andrei [*et al.*]{} [@al; @ande:84]. In the most general case of a bulk magnetic field and arbitrary filling factor the phase-shifts can be expressed only in terms of the solution of a set of coupled integral equations, the analysis of which is rather difficult. We therefore restrict ourselves to the case of a microscopic number of holes in the half-filled ground state in the absence of a bulk magnetic field.
The basic ingredient for computing impurity phase-shifts is the quantization condition for factorized scattering of two particles with rapidities $\l_{1,2}$ on a ring of length $N$ (including the impurity site) (iNp(ł\_1))S(ł\_1-ł\_2)e\^[i(ł\_1)]{}=1 , \[qc\] where $p(\l)$ is the expression for the physical momentum of the corresponding (infinite) periodic system, $S(\l)$ are the bulk scattering matrices for scattering of particles $1$ and $2$, and $\psi(\l_1)$ is the phase-shift acquired by particle $1$ when scattering off the impurity. We note that the condition (\[qc\]) incorporates the fact that there is no backscattering at the impurity. For the present model the absence of backscattering follows from the conservation laws for the rapidities: although momentum is not a good quantum number for the ring with impurity, excited states can still be characterized by the rapidity variables (see below for an example). We would expect that if the impurity contained a backscattering term, mixing of states with different rapidities would occur. This is not the case in the present model, which indicates the absence of backscattering. We note that the absence of backscattering on the level of the “bare” Bethe Ansatz equations (which describe the scattering of excitations over the empty ground state) is not sufficient to deduce the absence of backscattering over the antiferromagnetic ground state because the impurity gets dressed by the holons and spinons in the ground state Fermi seas, and the two-particle scattering processes between holons and spinons do contain backscattering terms. Therefore the treatment of [@HPE] does not apply in the present case.
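A scalar caricature of the quantization condition (\[qc\]) makes the role of the impurity phase transparent: taking the logarithm, each allowed momentum is shifted by $-\psi/N$ relative to the clean ring. The phase values used below are hypothetical, not the actual $t$-$J$ phase shifts.

```python
import numpy as np

def allowed_momenta(N, phi_bulk, psi_imp, branches=range(-3, 4)):
    # scalar caricature of (qc): exp(i N p) exp(i phi_bulk) exp(i psi_imp) = 1,
    # i.e.  N p + phi_bulk + psi_imp = 2 pi n  for integer n
    return np.array([(2.0 * np.pi * n - phi_bulk - psi_imp) / N
                     for n in branches])

p_no_imp = allowed_momenta(64, 0.3, 0.0)
p_with_imp = allowed_momenta(64, 0.3, 0.8)
```

Every level is shifted rigidly by $-\psi/N$, which is the scalar analogue of the counting-function analysis carried out below.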
In what follows we will extract the holon and spinon impurity phase-shifts from the spinon-holon scattering state, for which the condition (\[qc\]) turns into scalar equations for the scattering phases, which after taking the logarithm read Np\_s(ł\^h)+\_[sc]{}(ł\^h-\^p)+\_s(ł\^h)=0 [mod]{} 2 ,Np\_c(\^p)+\_[sc]{}(\^p-ł\^h)+\_c(\^p)=0 [mod]{} 2 . \[qchf2\] Here $\l^h$ and $\La^p$ are the rapidities of the spinon and holon respectively. Comparing these conditions with certain quantities (“counting functions”) that can be calculated from the Bethe Ansatz solution one can then read off the boundary phase-shifts $\psi_{s,c}$. Let us start by constructing the half-filled antiferromagnetic ground state for even length $L$ of the host chain, where we furthermore assume that $\frac{L}{2}$ is even. The ground state is obtained by choosing $M_1=\frac{L}{2}$, $M^{(1)}=0$ and filling the half-odd integers $I^1_\b$ symmetrically around zero. In the thermodynamic limit this corresponds to filling a Fermi sea of rapidities $\l^1_\b$ between $-\infty$ and $\infty$, where the root density $\rho_s(\l^1_\b)=\lim_{L\to\infty}\frac{1}{(L+1)(\l^1_{\b+1}-\l^1_{\b})}$ is given in terms of the integral equation \_s(ł) = a\_1(ł)(1-) -\_[-]{}\^da\_2(ł-) \_s() . The spinon-holon scattering state is characterized by choosing $M_1=\frac{L}{2}, M^{(1)}=1$ in the Bethe equations. There are $\frac{L}{2}+1$ vacancies for the integers $I^1_\a$ and thus one hole in the Fermi sea of $\l^1_\b$. We denote the rapidity corresponding to this hole by $\l^h$. The rapidity corresponding to the holon is denoted by $\La^p$. The Bethe equations read I\_&=& (1-) (ł\_) - \_[\^=1]{}\^[+1]{}() +, J &=& \_[=1]{}\^[+1]{} (\^p-ł\_)+ () -(\^p-ł\^h), \[baehs\] where $J$ is a half-odd integer number.
In the limit $L\rightarrow\infty$ the distribution of roots $\l_\b$ is described by a single integral equation for the density of roots $\rho_s(\l)$, which is of Wiener-Hopf form but cannot be solved in a form sufficiently explicit for the purpose of determining the impurity phase-shifts. The main complication is that we need to take into account all contributions of order $\frac{1}{L+1}$ and thus must deal with the fact that the roots are distributed not between $-\infty$ and $\infty$ but between two finite, $L$-dependent values $-A$ and $A$. It can, however, be checked numerically that the contributions due to the shift of the integration boundaries are of higher order in $\frac{1}{L+1}$ as far as the impurity phase-shifts are concerned, so that taking $A=\infty$ yields the correct result. The integral equation then can be solved by Fourier transform () = \_0()+\_1()e\^[ił\^h]{}-\_0()\[1-2e\^[i\^p]{}, \[dens\] where $\widetilde{\rho_s}(\omega)$ is the Fourier transform of $\rho_s(\l)$ and where $\gt_x(\omega)=\frac{\exp(-\frac{x}{2}|\omega|)}{2\cosh(\frac{\omega}{2})}$. For the further analysis it is convenient to define counting functions $z_s(\l)$ and $z_c(\l)$ z\_s(ł) &=& (ł) - \_[=1]{}\^[+1]{}() +,z\_c()&=& \_[=1]{}\^[+1]{} (-ł\_)-. \[count\] Note that for any root, [*e.g.*]{} $\l_\a$, the counting function takes the integer value $z_s(\l_\a)=I_\a$ by construction. In the thermodynamic limit $\frac{1}{L+1}$ times the derivative of $z_s$ yields the distribution function of rapidities $\rho_s(\l)$.
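The Fourier conventions used in (\[dens\]) pair densities of the form $1/(2\cosh\pi\l)$ with kernels of the form $1/(2\cosh\frac{\omega}{2})$; this transform pair is easily confirmed by direct quadrature. This is a consistency check on the conventions only, not part of the derivation.

```python
import numpy as np

def ft_sech(omega, cutoff=40.0, npts=400001):
    # numerical Fourier transform of 1/(2 cosh(pi lambda));
    # the exact result is 1/(2 cosh(omega/2))
    lam = np.linspace(-cutoff, cutoff, npts)
    h = lam[1] - lam[0]
    f = np.exp(1j * omega * lam) / (2.0 * np.cosh(np.pi * lam))
    return (np.sum(f) * h).real
```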
Straightforward integration of the density $\rho_s(\l)$ yields the following results for the counting functions in the thermodynamic limit evaluated at the rapidities of the spinon and holon respectively 2z\_s(ł\^h)&=&(L+1)p\_s(ł\^h) + \_[sc]{}(ł\^h-\^p) +\_s(ł\^h)=0 [mod]{} 2 ,-2z\_c(\^p)&=&(L+1)p\_c(\^p) + \_[sc]{}(\^p-ł\^h) + \_c(\^p)= [mod]{} 2 , \[cf\] where $p_{s,c}(\l)$ are the spinon/holon momenta, $\varphi_{sc}(\l)$ is the bulk phase-shift for spinon-holon scattering, and \_s(ł)=-p\_s(ł) ,\_c(ł)=-p\_c(ł)+-() . From these equations we can now infer the boundary phase shifts by comparing them with the quantization condition (\[qc\]), which yields e\^[i\_s(ł)]{}&=& C   ,e\^[i\_c(ł)]{}&=&-C\^[-1]{}  , where $C$ is an overall constant factor of unit modulus that cannot be determined within the Bethe Ansatz framework. Setting $C=-i$ we find that e\^[i\_s(ł)]{}= e\^[-ip\_s(ł)]{} ,e\^[i\_c(ł)]{}= - e\^[-ip\_c(ł)]{} , where $p_s$ and $p_c$ are the spinon and holon momenta at half-filling respectively. This result is interpreted in the following way: for the half-filled band doped with a finite number of holes the impurity site essentially decouples from the host chain in the sense that spinons and holons bypass it, which effectively shortens the lattice by one site (see Fig. \[fig:hhalb\])!\ For the spinons this is the complete picture, whereas the holons still acquire a phase shift due to the fact that the impurity site is charged (recall that it is on average almost doubly occupied) and therefore interacts with the holons passing it by. The holon scattering phase has a pole at $\l=i\frac{\a}{2}$, which for $-2\leq\a \leq-1$ lies on the physical sheet and therefore corresponds to an impurity bound state. The restriction $\a<-1$ is imposed in order to have a hermitean hamiltonian [@befr:95d]. The impurity therefore has the interesting property of leading to a holon bound state for sufficiently small negative $\a$ at half-filling.
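The window in which the pole at $\l=i\frac{\a}{2}$ corresponds to an actual bound state, once the hermiticity restriction $\a<-1$ is imposed, can be summarized as a one-line check. This is merely a restatement of the conditions above, not an independent derivation.

```python
def has_holon_bound_state(alpha):
    # pole of the holon impurity phase at lambda = i*alpha/2 lies on the
    # physical sheet for -2 <= alpha <= -1; hermiticity requires alpha < -1,
    # leaving the window -2 <= alpha < -1
    return (-2.0 <= alpha <= -1.0) and (alpha < -1.0)
```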
Conclusion ========== In this paper we have studied the effects of an integrable impurity in a periodic $t$-$J$ model. The impurity couples to both spin and charge degrees of freedom and the coupling strength $\a$ can be varied continuously without losing integrability. The two limiting cases $\a=0$ and $\a=\infty$ have been shown to be described by effective (ground state) hamiltonians of a $t$-$J$ model with one extra site, and a decoupled impurity in a $t$-$J$ chain with twisted boundary conditions. At zero temperature we have calculated the impurity magnetization and particle number for arbitrary band filling and bulk magnetic field. The impurity susceptibilities have been shown to exhibit the same types of singularities as the corresponding bulk susceptibilities. Similarly the low-temperature specific heat of impurity and bulk have the same temperature dependence. Transport properties have been determined through the calculation of spin and charge stiffnesses, and finally the impurity phase shifts have been calculated for the half-filled band. The supersymmetric $t$-$J$ model belongs to the class of Luttinger liquids with repulsive electron-electron interactions. The effects of potential impurities of the “weak-link” type were first investigated in [@lupe:74; @matt:74]. It was found that the system flows to a strong-coupling fixed point characterized by the vanishing of the dc conductivity. The physics of the impurity studied here is quite different: the dc-conductivity is unchanged by the presence of a single impurity. As argued above, the type of impurity considered here does not seem to contain backscattering terms on the level of the dressed excitations (holons and spinons). It would be interesting to verify this assertion by explicitly constructing the continuum limit of the model. However, due to the complicated structure of the impurity hamiltonian this is a difficult undertaking.
The argument given above suggests that it is impossible to construct an impurity model containing backscattering off the impurity by means of the Quantum Inverse Scattering Method through the standard intertwining relation “$RTT=TTR$”: the rapidities of the elementary excitations will always be conserved quantities and are not affected by the scattering off the impurity. Clearly “generic” impurities ought to contain backscattering as only special potentials are reflectionless. From that point of view the integrable impurity considered in the present work is very special. The situation is similar to the (multichannel) Kondo model (viewed as a $1-d$ system). One may speculate that, as in the case of a Kondo impurity in a Luttinger liquid, a backscattering term will drive the system to a new fixed point [@frjo:96], so that the integrable impurity would represent an unstable fixed point in the sense of the renormalisation group. This is known to be the case for the spin system of Andrei and Johannesson [@soea:93]. As we have seen, the integrable impurity nevertheless leads to interesting physical consequences. [Acknowledgements]{} We are grateful to A. Tsvelik and A. Jerez for important discussions and suggestions. F.H.L.E. is supported by the EU under Human Capital and Mobility fellowship grant ERBCHBGCT940709. He thanks the ITP at Hannover, where part of this work was performed, for hospitality. This work has been supported in part by the Deutsche Forschungsgemeinschaft under Grant No. Fr 737/2–1. Transformation of the BAE ========================= To show the equivalence of the two sets of BAE (\[eq:bagitwi\]) and (\[eq:bagist\]) we use a method due to Woynarovich [@woyn:83] and Bares [*et al.*]{} [@bares:92].
We express the second set of (\[eq:bagist\]) as a polynomial of degree $M^{(1)}+N_\downarrow$ P(w)=\_[j=1]{}\^[M\^[(1)]{}+N\_-1]{} (w-ł\_j-[i2]{})(w-i[2]{})- e\^[-i\_c]{}\_[=1]{}\^[M\^[(1)]{}+N\_-1]{}(w-ł\_j+[i 2]{})(w+i[2]{})=0 \[p1\] and identify the first $M^{(1)}$ roots of (\[p1\]) $w_1,\ldots,w_{M^{(1)}}$ with $\l_1^{(1)},\ldots,\l_{M^{(1)}}^{(1)}$. Using the residue theorem we obtain the following equality: \_[j=1]{}\^[M\^[(1)]{}]{}[1 i]{}=\_[j=1]{}\^[M\^[(1)]{}]{}[1 2i]{} \_[[C]{}\_j]{}dz[1 i]{} (P(z)) where ${\cal C}_j$ is a small contour including $\l_j^{(1)}$. Deforming the contour and denoting the $N_\downarrow$ other roots of (\[p1\]) with $w'_1,\ldots,w'_{N_\downarrow}$ we arrive at the following equality \_[j=1]{}\^[M\^[(1)]{}]{}[1 i]{}= -\_[j=1]{}\^[N\_]{}[1 i]{}+[1 i]{} , \[i1\] where the last term comes from integration around the branch cut extending from $z_n=\l_l+i/2$ to $z_p=\l_l-i/2$. Using the form of $P(w)$ and substituting (\[i1\]) into the first equation of (\[eq:bagist\]) we obtain the first equation of (\[eq:bagitwi\]) with the corresponding twist angle ( )\^[L]{} ( )= e\^[-i(\_c+\_s)]{} \^[N\_]{}\_[=1]{}  , j=1,…,N\_e  , The second equation of (\[eq:bagitwi\]) can be obtained by the same steps starting with the first equation of (\[eq:bagist\]). The twist angles are related by $\phi_s=\phi_\uparrow-\phi_\downarrow$ and $\phi_c=\phi_\downarrow$ [@bares:92]. In [@esko:92; @foka:93] it was shown that the BAE (\[eq:bagist\]) for the $t$-$J$ model without impurity can be obtained by means of the QISM starting with a fermionic vacuum with all spins up. The corresponding vacuum state of the impurity model is given by a bosonic doubly occupied impurity site and all other sites occupied with spin up electrons. The algebraic Bethe–Ansatz starting from this vacuum can also be constructed.
The three site model ==================== The Bethe ansatz states do not form the complete set of eigenstates of the system but are the highest-weight states of the $gl(2|1)$ superalgebra (taking the Lai solutions of the BAE). Complementing the Bethe ansatz states with those obtained by the action of the $gl(2|1)$ lowering operators one obtains additional eigenstates. The completeness of this extended Bethe ansatz has been proven for some models such as the spin-${1 \over 2}$ Heisenberg chain, the supersymmetric $t$-$J$ model and the Hubbard model [@foka:93; @fata:84; @eks:92]. In this appendix we present a completeness analysis for the impurity system considered here on a chain with three sites. This nontrivial example shows that the picture of [@foka:93; @fata:84; @eks:92] seems to hold in the present model as well. A detailed analysis of the general case is outside the scope of the present paper. We need to consider the action (on states given by the Bethe Ansatz) of the spin lowering operator $S^-=\sum_{i=1}^{L+1} X_i^{\downarrow \uparrow} $ and the supersymmetry operators $Q_\sigma^\dagger$ ($Q_\sigma$ in the Sutherland case), which are given by Q\_= \_[i=1,i 2]{}\^[L+1]{} X\_i\^[0 ]{} +X\_2\^[0 ]{} +X\_2\^[2]{} ,Q\_= \_[i=1,i 2]{}\^[L+1]{}X\_i\^[0 ]{} +X\_2\^[0 ]{} -X\_2\^[2]{} . They are seen to satisfy the commutation relations {Q\_,Q\_}=0 , Q\_\^2=0 , \[[H]{},Q\_\]=0 . The BA states obtained by the Lai solution starting with empty sites are characterized by $Q_\sigma |\Psi_{Lai}\rangle=0$. The respective Sutherland solutions satisfy $Q_\sigma^\dagger |\Phi_{Suth.}\rangle=0$.
Solving the BAE (\[eq:bagitwi\]) and (\[eq:bagist\]) with vanishing twist angles for the simplest case $L=2$ (recall that $L$ is the length of the host chain) and then constructing the corresponding $gl(2|1)$ multiplet by acting with all possible combinations of raising generators we obtain the following complete set of eigenstates ($\lambda = {1 \over 2}\sqrt{\a+1 \over \a+3}$) .3cm ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- Energy $\sharp$ BA Lai BA Sutherland ----------------------- ---------- --------------------------------------------------------------- -------------------------------------------------------------------- 0 4 vacuum  $\l_1=-\l_2=\lambda \quad \l_1^{(1)}=-\l_2^{(1)}=\sqrt{\a \over \a+3}$ $|\Psi_0\rangle=|0\rangle$ $ |\Phi_0\rangle = Q_\uparrow^\dagger Q_\downarrow^\dagger |\Psi_0\rangle$ ${6+2\a \over \a+2}$ 8 $\l_1=\l \quad g=-{1 \over $\l_1=\l \quad 2}\left[\sqrt{\a+1}+i\sqrt{\a+3}\right]$ \l_1^{(1)}=\l {\a \over \a+1} $ $|\Psi_1\rangle=g^*|\uparrow 0 0\rangle +g |0 0 \uparrow $ |\Phi_1\rangle= \rangle +|0 \uparrow 0\rangle $ Q_\uparrow^\dagger Q_\downarrow^\dagger |\Psi_1\rangle$ ${6+2\a \over \a+2}$ 8 $\l_1=-\l$ $\l_1=-\l \quad \l_1^{(1)}=-\l {\a \over \a+1} $ $|\Psi_2\rangle=g |\uparrow 0 0\rangle +g^* |0 0 \uparrow $ |\Phi_2\rangle = \rangle +|0 \uparrow 0\rangle $ Q_\uparrow^\dagger Q_\downarrow^\dagger |\Psi_2\rangle$ ${12+4\a \over \a+2}$ 12 $\l_1=-\l_2=\l $ vacuum $|\Psi_3\rangle=|\uparrow \uparrow $ |\Phi_3\rangle =Q_\uparrow^\dagger 0\rangle-\sqrt{\a+1}|\uparrow 0 \uparrow\rangle + |0 \uparrow Q_\downarrow^\dagger |\Psi_3\rangle= |\uparrow 2\uparrow \rangle $ \uparrow\rangle$ ${4 \over \a+2}$ 4 $\l_1=-\l_2={i \over 2} \quad \l_1^{(1)}= 0 $ $\l_1=0$ $|\Psi_4\rangle = Q_\uparrow Q_\downarrow $|\Phi_4\rangle =|\uparrow 2 \downarrow\rangle |\Phi_4\rangle $ -|\downarrow 2 \uparrow\rangle $ 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------- .3cm The Sutherland solutions for the states $|\Phi_{0,1,2}\rangle$ and the Lai solution for $|\Psi_4\rangle$ [^11] do not exist in the case $\a=0$. In this case the $Q_\sigma ^{(\dagger)}$ commute with $X_2^{22}$ and the states decompose into the 27 states corresponding to the Lai-states without double occupancy and the 9 states of the Sutherland solution with doubly occupied impurity site [^12]. This is in agreement with the form of the hamiltonian in the limit $\a \to 0$ given in (\[hgl\]). Let us conclude this appendix with some simple considerations concerning the question of whether the impurity contributions to particle number and magnetization are indeed concentrated at the impurity. The lowest energy state with one electron on a lattice of arbitrary length is given by $Q_\sigma^\dagger |0\rangle$. Using this fact we are able to directly compute the electron density at the impurity for this state and we find that $\langle n_2 \rangle=\frac{1+\a}{L+1+\a}$. In order to compare this with (\[idifer\]) we need to take into account that (\[idifer\]) is obtained in the thermodynamic limit, [*i.e.*]{} we need to take $L\gg \a+1$. We then find that $N_\a=\frac{1+\a}{L}$, which is in agreement with $\langle n_2 \rangle$. Similarly the lowest energy state above the critical density $n_c$ is given by $Q_\downarrow^\dagger|\Psi_{Ferr.}\rangle$. By the action of the $Q$-operator a spin–down electron is generated with probability $(\a+1) \times (1-N_\a)$ and a doubly occupied impurity–site with probability $\a \times N_\a$. Taking into account the normalization–factor ${1 \over L(1-n_e)}$ we obtain the following result |\_[n\_e=n\_c\^+]{}= [N\_n\_e]{}=L N\_= [+1-N\_1-n\_e]{}+o([1 L]{}). This coincides with (\[abnc\]). 
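The comparison between the finite-chain expectation value $\langle n_2 \rangle=\frac{1+\a}{L+1+\a}$ and the thermodynamic-limit result $N_\a=\frac{1+\a}{L}$ for $L\gg\a+1$ is elementary and can be confirmed numerically; the value $\a=2$ below is arbitrary.

```python
def n2_exact(L, alpha):
    # impurity-site density in the one-electron state on a finite chain
    return (1.0 + alpha) / (L + 1.0 + alpha)

def N_alpha_limit(L, alpha):
    # thermodynamic-limit impurity particle number, valid for L >> alpha + 1
    return (1.0 + alpha) / L

diffs = [abs(n2_exact(L, 2.0) - N_alpha_limit(L, 2.0))
         for L in (100, 10**4, 10**6)]
```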
From these simple examples we deduce that the assumption that the impurity contributions to magnetization and particle number are located at the impurity is a very reasonable one. $gl(2|1)$ Invariance of the Model ================================= In this appendix we show by explicit computation that the model is $gl(2|1)$-invariant. We start by expanding $R$-matrix and L-operators around infinite spectral parameters R\_[33]{}(ł)&\~& \^[00]{} +(I-)\^[00]{} ,\_[33]{}\^n(ł)&\~& I\^[0n]{}+(-I)\^[0n]{} ,\_[34]{}(ł)&\~& I\^[02]{}+([L]{}-(2+)I)\^[02]{} , \[c1\] where we denoted the auxiliary space by $0$, $n$ labels the quantum spaces over the $t$-$J$-like sites, and the impurity sits at site $2$. This leads to the following expansion of the monodromy matrix T(ł)\~I+=:I+Y . \[c2\] Inserting and into the intertwining relation R\_[33]{}(ł-)(T(ł)T())= (T()T(ł))R\_[33]{}(ł-) we obtain the following equations &&(-1)\^[\_a\_]{}T()\^[a]{}+\_[a]{} T()\^[b]{}=&&(-1)\^[\_\_+\_b(\_+\_a)]{} T()\^[a]{} +(-1)\^[(\_a+\_)\_b]{} \_[b]{} T()\^[a]{} . \[c3\] Setting $a=\bp$, multiplying by $(-1)^{\eps_a(\eps_a+\eps_\ap)}$ and then summing over $a$ we arrive at = 0 , where $\tau(\mu)$ is the transfer matrix of the system. Dropping some constants we therefore find that = 0 ,Q\_[ab]{} = \_n X\_n\^[ab]{}+[L]{}\^[ab]{},a,b=1…3, where we use the correspondences $1\leftrightarrow\up$, $2\leftrightarrow\da$, $3\leftrightarrow0$ for $X^{ab}$. The operators $Q_{ab}$ form a complete set of generators for $gl(2|1)$, which establishes the invariance of our model. [10]{} A. Luther and I. Peschel, Phys. Rev. Lett. 32, 992 (1974). D. C. Mattis, Phys. Rev. Lett. 32, 714 (1974). C. L. Kane and M. P. A. Fisher,Phys. Rev. [**B46**]{}, 15233 (1992). J. L. Cardy, Nucl. Phys. [**B324**]{}, 581 (1989). P. Fendley and H. Saleur,Phys. Rev. Lett. [**75**]{}, 4492 (1995). P. Fendley, A. W. W. Ludwig, and H. Saleur, Phys. Rev. B [**52**]{}, 8934 (1995). F. Lesage, H. Saleur and S. 
Skorik, [preprint]{} [cond-mat/9603043]{} and references therein. A. Tsvelik, J. Phys. A [**28**]{}, L625, (1995). I. Affleck, Nucl. Phys. [**B336**]{}, 517 (1990). I. Affleck and A. W. W. Ludwig,Nucl. Phys. [**B352**]{}, 849 (1991); [**B360**]{},641 (1991) ,Phys. Rev. Lett. [**75**]{}, 300 (1995). ,Phys. Rev. [**B53**]{},3211 (1996). N. Andrei and H. Johannesson, Phys. Lett. A [**100**]{}, 108 (1984). H. J. [de Vega]{} and F. Woynarovich, J. Phys. A [**25**]{}, 4499 (1992). H. J. [de Vega]{}, L. Mezincescu and R. Nepomechie, Phys. Rev. [**B49**]{} 13223 (1994), Int. J. Mod. Phys. [**B8**]{} 3473 (1994). P. A. Bares, cond-mat/9412011 (unpublished). A. Tsvelik, P. Wiegmann, Adv. Phys.[**32**]{} 453 (1983),\ N. Andrei, K. Furuya, J. H. Lowenstein, Rev. Mod. Phys. [**55**]{} 331 (1983). N. Andrei, A. Jerez, Phys. Rev. Lett. [**74**]{} 4507 (1995). C. N. Yang, Phys. Rev. Lett. [**19**]{}, 1312 (1967). P. Schmitteckert, P. Schwab and U. Eckern, Europhys. Lett. [**30**]{}, [534]{} (1995). R. Yue, H. Fan, B. Hou, Nucl. Phys. [**B462**]{}, 167 (1996). P. Schlottmann, Phys. Rev. Lett. [**69**]{}, 2396 (1992). F. H. L. Eßler, V. E. Korepin, K. Schoutens, Phys. Rev. Lett. [**68**]{} 2960 (1992). F. H. L. Eßler, V.E. Korepin, F. H. L. Eßler, V. E. Korepin, K. Schoutens, M. J. Martins, . M. J. Martins, Phys. Rev. Lett. [**74**]{} 3316 (1995). eds V. E. Korepin and F. H. L. Eßler, World Scientific, Singapore 1994. B. Sutherland, Phys. Rev. B [**12**]{}, 3795 (1975). P. Schlottmann, Phys. Rev. B [**36**]{}, 5177 (1987). P. A. Bares, G. Blatter, and M. Ogata, Phys. Rev. B [**44**]{}, 130 (1991). F. H. L. E[ß]{}ler and V. E. Korepin, Phys. Rev. B [**46**]{}, 9147 (1992). A. Foerster and M. Karowski, Nucl. Phys. B [**396**]{}, 611 (1993). A. J. Bracken, G. W. Delius, M. D. Gould, and Y.-Z. Zhang, J. Phys. A [**27**]{}, 6551 (1994). A. J. Bracken, M. D. Gould, J. R. Links, and Y.-Z. Zhang, Phys. Rev. Lett. [ **74**]{}, 2768 (1995). G. Bed[ü]{}rftig and H. Frahm, J. Phys. 
A [**28**]{}, 4453 (1995). R. Z. Bariev, A. Klümper and J. Zittartz, Europhys. Lett. [**32**]{}, 85 (1995). M. D. Gould, K. E. Hibbert, J. R. Links, Y.-Z. Zhang, Phys. Lett. [**A212**]{} 156 (1996). V. E. Korepin, A. G. Izergin, and N. M. Bogoliubov, [*[Quantum Inverse Scattering Method, Correlation Functions and Algebraic Bethe Ansatz]{}*]{} (Cambridge University Press, Cambridge, 1993). P. P. Kulish, J. Sov. Math. [**35**]{} [2648]{} (1985). P. P. Kulish and E. K. Sklyanin, J. Sov. Math. [**19**]{}, 1596 (1982). Z. Maassarani, J. Phys. A [**28**]{}, 1305 (1995). N. Kawakami and S.-K. Yang, J. Phys. Condens. Matter [**3**]{}, 5983 (1991). P. A. Bares, J. M. P. Carmelo, J. Ferrer, and P. Horsch, Phys. Rev. B [**46**]{}, 14624 (1992). G. Jüttner and A. Klümper, [preprint]{} [cond-mat/9606192]{}. F. H. L. E[ß]{}ler, J. Phys. A [**29**]{}, 6183 (1996). F. Woynarovich, J. Phys. A [**22**]{}, 4243 (1989). M. Takahashi, Prog. Theor. Phys. [**46**]{}, 1388 (1971). M. Takahashi, Prog. Theor. Phys. [**52**]{}, 103 (1974). B. S. Shastry and B. Sutherland, Phys. Rev. Lett. [**65**]{}, 243 (1990). A. A. Zvyagin, Sov. Phys. Solid State [**32**]{}, 906 (1990). H. Frahm and V. E. Korepin, Phys. Rev. B [**42**]{}, 10553 (1990). V. E. Korepin, Theor. Mat. Phys. [**41**]{}, 169 (1979). N. Andrei, J. H. Lowenstein, Phys. Lett. A [**80**]{}, 401 (1980). N. Andrei and C. Destri, Nucl. Phys. B [**231**]{}, 455 (1984). A. Punnoose, H. P. Eckle and R. A. Römer, preprint [cond-mat/9512139]{}. E. S. S[ø]{}rensen, S. Eggert, and I. Affleck, J. Phys. A [**26**]{}, 6757 (1993). F. Woynarovich, J. Phys. C [**16**]{}, 6593 (1983). L. D. Faddeev and L. A. Takhtajan, J. Sov. Math. [**24**]{}, 241 (1984), \[Zap. Nauch. Semin. LOMI [**109**]{}, 134 (1981)\]. F. H. L. E[ß]{}ler, V. E. Korepin, and K. Schoutens, Nucl. Phys. B [**384**]{}, 431 (1992). 
**Figure Captions**

[^1]: e-mail: [bed@itp.uni-hannover.de]{}

[^2]: e-mail: [fab@thphys.ox.ac.uk]{}

[^3]: e-mail: [frahm@itp.uni-hannover.de]{}

[^4]: For a collection of reprints see [@rep].

[^5]: Other parameter regions may be possible, but will not be considered here.

[^6]: The correct result would be $D_{bulk}={2 \sqrt{\mu} \over \pi}$ and not $D_{bulk}={\sqrt{\mu} \over \pi}$ (see [@juku:96]).

[^7]: This holds in all known cases for the specific heat in integrable spin chains where the ground state contains only real roots of the Bethe equations. The assumption is also supported by the findings of [@juku:96] where it was shown to be true for the bulk specific heat.

[^8]: Considering the half–filled case $N_h=0$ we see that the impurity leaves the BAE (\[eq:bagist\]) unchanged, which is not immediately obvious from the other set of BAE (\[eq:bagitwi\]).

[^9]: Note that we do not consider string solutions to the BAE in order to determine the stiffnesses and the results are thus independent of the precise form of the strings.

[^10]: Similar expressions are obtained for the corrections to excited state energies.

[^11]: The solution of the BAE exists but the norm of the corresponding state vanishes.

[^12]: The multiplet of dimension 12 splits in one 7 and one 5 dimensional multiplet as $Q_\downarrow^\dagger (\a=0) |\uparrow \uparrow \uparrow\rangle =0$
--- abstract: | The Nonlinear Schrödinger (NLS) equation is used to model surface waves in wave tanks of hydrodynamic laboratories. Analysis of the linearized NLS equation shows that its harmonic solutions with a small amplitude modulation have a tendency to grow exponentially due to the so-called Benjamin–Feir instability. To investigate this growth in detail, we relate the linearized solution of the NLS equation to a fully nonlinear, exact solution, called the soliton on finite background. As a result, we find that in the range of instability the maximum amplitude increase is finite and can be at most three times the initial amplitude. **Keywords** : Nonlinear Schrödinger equation, Benjamin–Feir instability, soliton on finite background, maximum amplitude increase. author: - | **Natanael Karjanto[^1], E. van Groesen[^2], and Pearu Peterson[^3]**\ Department of Applied Mathematics, University of Twente,\ P.O. Box 217, 7500 AE Enschede, The Netherlands bibliography: - 'Karyanto.bib' title: '**Investigation of the maximum amplitude increase from the Benjamin–Feir instability**' --- Introduction ============ This is an initial work on 'Extreme Waves in Hydrodynamics Laboratory'. Extreme waves here refer to very high, steep waves that can appear suddenly from a relatively calm sea. Although extreme waves are very rare and unpredictable, they are very dangerous to any ships that encounter them. We model the problem of extreme waves using dispersive wave modes. The specific property of dispersion is that waves with different wavelengths propagate with different phase velocities. In the following we assume that the wave field has a frequency spectrum that is localized around one frequency. Then the envelope of the wave field is described by the NLS equation [@LD94]. The NLS equation is an amplitude equation describing the evolution of the envelope of a wave group.
This equation is very instrumental in understanding various nonlinear wave phenomena: it arises in studies of unidirectional propagation of wave packets in an energy-conserving dispersive medium at the lowest order of nonlinearity [@AS99]. In this paper we analyze the behavior of the wave group envelope using both linear and nonlinear theories. Linear theory predicts exponential growth of the amplitude when certain conditions are satisfied—the Benjamin–Feir instability [@AS99]. However, when the amplitude becomes large, nonlinear effects must be taken into account; these, as it turns out, prevent further exponential growth. The aim of this paper is to find the maximum amplitude of waves when the amplitude growth is triggered by the Benjamin–Feir instability. We also investigate how this maximum amplitude depends on the growth rate parameter from the Benjamin–Feir instability. Modelling of the Wave Envelope =========================== Linear Theory ------------- In the linear theory of water waves, we can restrict the analysis of a surface elevation $\eta(x,t)$ to a *one-mode solution* of the form $\eta(x,t) = a\,e^{i(k x - \omega t)} + c.c.$ (complex conjugate), where $a$ is a constant amplitude, $k$ is the wavenumber, and $\omega$ is the frequency. Then a general solution is a superposition of one-mode solutions. For surface waves on water of constant depth $h$, the parameters $k$ and $\omega$ are related by the following *linear dispersion relation* [@LD94]: $$\omega^{2} = g\,k\,\tanh k\,h,$$ where $g$ is the gravitational acceleration. We also write the linear dispersion relation as $\omega \equiv \Omega(k) = k\,\sqrt{\frac{g\,\tanh k\,h}{k}}$. This dispersion relation can be derived from the linearized equations of the full set of equations for water waves. The *phase velocity* is defined as $\frac{\Omega(k)}{k}$ and the *group velocity* is defined as $\frac{d\Omega}{dk} = \Omega'(k)$.
Since the linear wave system has elementary solutions of the form $e^{\,i\,(k\,x - \Omega(k)\,t)}$, it is often convenient to write the general solution of an initial value problem as an integral of its Fourier components [@AS99]: $$\eta(x,t) := \int^{\,\infty}_{\!\!-\infty}\!\!\alpha(k)\:e^{\,i\,(k\,x - \Omega(k)\,t)} \,dk, \label{Fourier}$$ where $\alpha(k)$ is the Fourier transform of $\eta(x,0)$. Writing the dispersion relation $\Omega(k)$ as a power series (Taylor expansion) about a fixed wavenumber $k_{0}$ and neglecting $\cal{O}$$(\kappa^{3})$ terms, we find $$\beta\,\kappa^{2} = \Omega(k_{0}+\kappa) - \Omega(k_{0}) - \Omega'\!(k_{0})\,\kappa,$$ where $\beta = \frac{1}{2}\,\Omega^{''}\!(k_{0})$. Let us define $k = k_{0} + \kappa$, $\tau = t$, and $\xi = x - \Omega'\!(k_{0})\,t$. Then equation (\[Fourier\]) can be written as $$\eta(x,t) = e^{\,i[k_{0}\,x - \Omega(k_{0})\,t]}\!\! \int^{\,\infty}_{\!\!-\infty}\!\!\alpha(k_{0} + \kappa)\:e^{\,i\,\kappa\,\xi}\,e^{-i\,\beta\,\kappa^{2}\,\tau} \,d\kappa. \label{eta}$$ Denoting the integral in (\[eta\]) by $\psi(\xi,\tau)$, we find that $\psi(\xi,\tau)$ satisfies $$i\,\frac{\partial \psi}{\partial \tau} + \beta\,\frac{\partial^{2} \psi}{\partial \xi^{2}} = 0. \label{linschro}$$ This is the *linear Schrödinger* equation for *narrow–banded* spectra. Equation (\[linschro\]) is a partial differential equation that describes the time evolution of the envelope of a linear wave packet [@AS99]. Equation (\[linschro\]) has a monochromatic mode solution $\psi(\xi,\tau) = e^{i\,(\kappa\, \xi\, - \,\nu\, \tau )}$ where $\nu = \beta\,\kappa^{2}$.
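The quantities entering the linear theory (the carrier frequency, the phase and group velocities, and the dispersion coefficient $\beta$) can be evaluated numerically from the dispersion relation. A small sketch, with hypothetical helper names and derivatives approximated by finite differences; the values $g = 9.8$ m/s$^2$ and $h = 5$ m anticipate the example used later in the paper:

```python
import math

g, h = 9.8, 5.0   # gravitational acceleration [m/s^2] and water depth [m]

def Omega(k):
    """Linear dispersion relation Omega(k) = sqrt(g k tanh(k h))."""
    return math.sqrt(g * k * math.tanh(k * h))

def dOmega(k, eps=1e-6):
    """Group velocity Omega'(k), by a central finite difference."""
    return (Omega(k + eps) - Omega(k - eps)) / (2 * eps)

def beta(k, eps=1e-3):
    """Dispersion coefficient beta = Omega''(k)/2, by a second difference."""
    return 0.5 * (Omega(k + eps) - 2 * Omega(k) + Omega(k - eps)) / eps**2

k0 = 0.23
print(Omega(k0))        # carrier frequency
print(Omega(k0) / k0)   # phase velocity
print(dOmega(k0))       # group velocity (smaller than the phase velocity here)
print(beta(k0))         # beta < 0 for gravity waves (Omega is concave)
```

The group velocity comes out below the phase velocity, as expected for surface gravity waves.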
Nonlinear Theory ---------------- Assuming narrow-banded spectra, we consider the wave elevation in the following form $\eta(x,t) = \psi(\xi,\tau)\, e^{i(k_{0} x - \omega_{0} t)} + c.c.$, where $\tau = t$, $\xi = x - \Omega'\!(k_{0})\,t$, $\psi(\xi,\tau)$ is a complex valued function (called the *complex amplitude*), and $k_{0}$ and $\omega_{0}$ are the central wavenumber and frequency, respectively. The evolution of the wave elevation is a weakly nonlinear deformation of a nearly harmonic wave with the fixed wavenumber $k_{0}$. If we substitute $\eta$ into the equations that describe the physical motion of the water waves (see below), one finds that the complex amplitude $\psi(\xi,\tau)$ satisfies the ***nonlinear Schrödinger (NLS) equation***. As an example, the NLS equation can be derived from the modified KdV equation $\eta_{t} + i\,\Omega\,(-i\partial_{x})\eta + \frac{3}{4}\,\partial_{x}\eta^2 = 0$ [@EC02]. With $\tau = t$ and $\xi = x - \Omega'\!(k_{0})\,t$, the corresponding NLS equation reads $$i\frac{\partial \psi}{\partial \tau}+\beta\,\frac{\partial ^{2}\psi}{\partial \xi^{2}}+\gamma\,\left| \psi\right| ^{2}\psi = 0, \quad \beta, \gamma \in \mathbb{R}, \label{generalNLS}$$ where $\beta$ and $\gamma$ depend only on $k_{0}$: $$\begin{aligned} \beta = \frac{1}{2}\,\Omega^{''}\!\!(k_{0}),\end{aligned}$$ $$\gamma = - \frac{9}{4}\,k_{0}\,\left(\frac{1}{\Omega^{'}\!(k_{0}) - \Omega^{'}\!(0)} + \frac{k_{0}}{2\,\Omega(k_{0}) - \Omega(2k_{0})} \right),$$ where $\omega = \Omega(k)$ is the linear dispersion relation [@EG98]. The coefficients $\beta$ and $\gamma$ in this paper have opposite signs compared to the corresponding coefficients in [@EG98]. This equation arose as a model for packets of waves on deep water. There are two types of NLS equations: - If $\beta$ and $\gamma$ have the same sign, i.e.
$\beta\,\gamma > 0$, then (\[generalNLS\]) is called the *focusing* NLS equation (an attractive nonlinearity, modulationally unstable \[Benjamin–Feir instability\]) [@SS99]. - If $\beta$ and $\gamma$ have different signs, i.e. $\beta\,\gamma < 0$, then (\[generalNLS\]) is called the *de-focusing* NLS equation (a repulsive nonlinearity, stable solution) [@SS99]. The wavenumber $k_{\textmd{\scriptsize{crit}}}$, for which $\beta\,\gamma = 0$ holds, is called the *critical wavenumber* or the *Davey–Stewartson value*. For the physical value $g = 9.8 $ m/s$^{2}$ and water depth $h = 5$ m, the product $\beta\,\gamma > 0$ for wavenumbers $k_{0} > k_{\textmd{\scriptsize{crit}}} = 0.23$, and the NLS equation is of focusing type. The corresponding critical wavelength is 27.41 m. In the following, we consider only focusing NLS equations. The NLS equation (\[generalNLS\]) has a *plane–wave* solution $$A(\tau) = r_{0}\,e^{\,i\,\gamma\,r_{0}^{2}\,\tau}. \label{solutiondependontime}$$ In physical variables, the surface wave elevation is given as $\eta(x,t) = 2\,r_{0} \cos\,(k_{0}x - \omega_{0}t + \gamma r_{0}^2 t)$. Note that the corresponding phase velocity is $\frac{\omega_{0} - \gamma\,r_{0}^2}{k_{0}}$. In the following section, we analyze the stability of this *plane–wave* solution. Benjamin–Feir Instability {#BF} ========================= To investigate the instability of the NLS plane–wave solution, we perturb the $\xi$–independent function $A(\tau)$ with a small perturbation of the form $\epsilon(\xi,\tau) = A(\tau)\,B(\xi,\tau)$. We look for the cases where, under a small perturbation, the amplitude of the plane wave solution grows in time [@LD94]: $$\psi(\xi,\tau) = A(\tau)[1 + B(\xi,\tau)]. \label{perturbation}$$ Substituting (\[perturbation\]) into (\[generalNLS\]), and ignoring nonlinear terms, we obtain the *linearized* NLS equation: $$i B_{\tau} + \beta\,B_{\xi\xi} + \gamma\,r_{0}^{2}(B + B^{*}) = 0.
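The critical wavenumber quoted above can be reproduced numerically from the formulas for $\beta$ and $\gamma$: the Davey–Stewartson value is where $\gamma$ changes sign. A sketch locating that sign change by plain bisection (derivatives by finite differences; we use that $\Omega'(0) = \sqrt{g\,h}$):

```python
import math

g, h = 9.8, 5.0

def Omega(k):
    return math.sqrt(g * k * math.tanh(k * h))

def dOmega(k, eps=1e-6):
    return (Omega(k + eps) - Omega(k - eps)) / (2 * eps)

def gamma(k0):
    """Nonlinear NLS coefficient; note Omega'(0) = sqrt(g h)."""
    return -2.25 * k0 * (1.0 / (dOmega(k0) - math.sqrt(g * h))
                         + k0 / (2 * Omega(k0) - Omega(2 * k0)))

# gamma changes sign at the critical (Davey-Stewartson) wavenumber:
lo, hi = 0.1, 0.5        # gamma(0.1) > 0 and gamma(0.5) < 0
for _ in range(60):      # plain bisection
    mid = 0.5 * (lo + hi)
    if gamma(lo) * gamma(mid) <= 0:
        hi = mid
    else:
        lo = mid

k_crit = 0.5 * (lo + hi)
print(k_crit)                  # ~0.23
print(2 * math.pi / k_crit)    # critical wavelength, ~27.4 m
```

Since $\beta < 0$ for these waves, the focusing condition $\beta\,\gamma > 0$ indeed holds exactly where $\gamma$ becomes negative, i.e. for $k_{0}$ above the computed root.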
\label{linearNLS}$$ We seek the solutions of (\[linearNLS\]) in the form $$B(\xi,\tau) = B_{1}e^{(\sigma \tau + i \kappa \xi)} + B_{2}e^{(\sigma^{*}\tau - i \kappa \xi)}, \label{perturbationsolution}$$ where $B_{1}, B_{2} \in \mathbb{C}$, $k_{0} + \kappa$ is the local wavenumber, $\kappa$ is the modulation wavenumber, and $\sigma \in \mathbb{C}$ is called the *growth rate*. If Re($\sigma) > 0$, then the perturbed solution of the NLS equation grows exponentially. This is the criterion for the so–called **Benjamin–Feir instability** of a one–wave mode with modulation wavenumber $\kappa$ [@LD94]. Substituting the function $B(\xi,\tau)$ in (\[perturbationsolution\]) into (\[linearNLS\]) yields a pair of coupled equations that can be written in matrix form as follows $$\left ( \begin{array}{cc} i \sigma - \beta\,\kappa^{2} + \gamma\,r_{0}^{2} & \gamma\,r_{0}^{2} \\ \gamma\,r_{0}^{2} & -i \sigma - \beta\,\kappa^{2} + \gamma\,r_{0}^{2} \end{array} \right ) \left( \begin{array}{c} B_{1} \\ B_{2}^{*} \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right). \label{matrix}$$ A nontrivial solution to (\[matrix\]) can exist only if the determinant of the left hand side matrix is zero. This condition reads as follows $$\sigma^2 = \beta\,\kappa^{2}(2\,\gamma\,r_{0}^{2} - \beta\,\kappa^{2}).$$ We have the following cases: - The growth rate $\sigma$ is real and positive if $\kappa^{2} < 2\frac{\gamma}{\beta}\,r_{0}^{2}$. This corresponds to **Benjamin–Feir instability**. For specified values of $\kappa$, the perturbation amplitude is exponentially amplified in time [@LD94]. - The growth rate $\sigma$ is purely imaginary if $\kappa^{2} > 2\frac{\gamma}{\beta}\,r_{0}^{2}$. This corresponds to a plane–wave solution that has bounded amplitude for all time [@LD94].
Thus, the range of instability is given by $$0 < |\kappa| < \left|\kappa_{\textmd{\scriptsize{crit}}}\right| = \sqrt{\frac{2\,\gamma}{\beta}}\,r_{0}.$$ It is easy to find that the ’strongest’ instability occurs at $\kappa_{\textmd{\scriptsize{max}}} = \sqrt{\frac{\gamma}{\beta}}\,r_{0},$ where the maximum growth rate is $\sigma_{\textmd{\scriptsize{max}}} = \gamma\,r_{0}^2$. Figure \[plotomega\] shows the plot of the growth rate $\sigma$ as a function of the modulation wavenumber $\kappa$ for $r_{0} = \beta = \gamma = 1$. We can write the solution of the linearized NLS equation as $$\psi(\xi,\tau) = r_{0}\,e^{\,i\,\gamma\,r_{0}^2\,\tau} [ 1 + e^{\sigma(\kappa)\,\tau}(B_{1} e^{i\,\kappa\,\xi} + B_{2} e^{-i\kappa\,\xi})],$$ with $\sigma(\kappa) = \kappa\,\sqrt{2\,\beta\,\gamma\,r_{0}^{2} - \beta^{2}\,\kappa^{2}}$. Since this solution is obtained from the linearized NLS equation, it is valid only while the amplitudes are small. As time increases, the amplitude grows exponentially and the linearized theory becomes invalid. Therefore, we cannot use the solution of the linearized equation to investigate the behavior of the maximum amplitude increase due to the Benjamin–Feir instability. Fortunately, there exists an exact solution of the NLS equation that describes the exact behavior of the wave profile and that corresponds to the Benjamin–Feir instability. Modulational Instability ======================== In this section we investigate the relation between the maximum amplitude of a certain solution of the NLS equation and the modulation wavenumber $\kappa$ in the instability interval. For simplicity, we choose the amplitude $r_{0} = 1$, and the coefficients $\beta = \gamma = 1$.
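With this normalization, the instability range, the strongest modulation wavenumber, and the maximum growth rate can be checked directly from the formula for $\sigma(\kappa)$; a minimal sketch:

```python
import math

r0, beta, gamma = 1.0, 1.0, 1.0   # normalization used in this section

def sigma(kappa):
    """Benjamin-Feir growth rate; real only inside the instability range."""
    s2 = beta * kappa**2 * (2 * gamma * r0**2 - beta * kappa**2)
    return math.sqrt(s2) if s2 > 0 else 0.0

kappa_crit = math.sqrt(2 * gamma / beta) * r0   # edge of the instability range
kappa_max = math.sqrt(gamma / beta) * r0        # strongest instability
print(kappa_crit, kappa_max, sigma(kappa_max))  # sqrt(2), 1, and sigma_max = 1

# a numerical scan over the instability range confirms the analytic maximum
grid = [i / 1000 for i in range(1, 1500)]
best = max(grid, key=sigma)
print(best, sigma(best))
```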
An exact solution, the so–called *soliton on finite background*, in short SFB (sometimes also called *the second most important solution*), of the NLS equation is given by [@AA97]: $$\psi(\xi,\tau) := \frac{(\kappa^{2} - 1) \cosh (\sigma(\kappa)\,\tau) + \sqrt{\frac{2-\kappa^{2}}{2}}\cos (\kappa \xi) + i \sigma(\kappa) \sinh (\sigma(\kappa)\,\tau)} {\cosh (\sigma(\kappa)\,\tau) - \sqrt{\frac{2-\kappa^{2}}{2}} \cos (\kappa \xi)}\; e^{\,i\,\tau}, \label{exact2}$$ where $0 < \kappa < \sqrt{2}$ and $\sigma(\kappa) = \kappa \sqrt{2-\kappa^{2}}$.\ For $\kappa = 1,$ we have $$\psi(\xi,\tau) := \frac{\cos \xi + i \sqrt{2} \sinh \tau }{\sqrt{2} \cosh \tau - \cos \xi}\;e^{\,i\,\tau}. \label{exact1}$$ Figure \[plot3d\] shows the plot of $\left|\psi \right|$ from (\[exact1\]) as a function of $\xi$ and $\tau$. Note that $\psi(\xi,\tau)$ is a $2\pi$–periodic function with respect to the $\xi$ variable. In the following, we analyze the solution (\[exact2\]) in detail. The behavior of this SFB as $\tau \rightarrow \pm \infty$ is given by $$\lim_{\tau \rightarrow \infty} \; \left|\psi(\xi,\tau)\right| = \sqrt{(\kappa^2 -1)^2 + \kappa^2\,(2 - \kappa^2)} = 1 .$$ Because of this property the solution (\[exact2\]) is called a soliton on finite background. For a ’normal’ soliton, the elevation vanishes at infinity: the ’normal’ soliton is exponentially confined. For the SFB, the solution is a similar elevation on top of a finite (nonzero) background level, here the normalized value 1. Write the solution in the form $\psi(\xi,\tau) = u(\xi,\tau)\,e^{\,i\,\tau}$, where $u$ describes the amplitude and the exponential part expresses oscillations in time. Let us investigate the behavior of $u(\xi,\tau)$ in time. For that, consider the limiting behavior of $\partial_{\tau}u$ as $\tau \rightarrow \pm \infty$.
We have $$\frac{\partial u}{\partial \tau}(\xi,\tau) = \frac{i\sigma^{2}(\kappa) - \sigma(\kappa)\,\sqrt{\frac{2-\kappa^2}{2}} \cos (\kappa \,\xi)\left[\kappa^2 \,\sinh \left(\sigma(\kappa)\,\tau \right) + i\,\sigma(\kappa)\,\cosh \left(\sigma(\kappa)\,\tau\right) \right]}{\left[\cosh \left(\sigma(\kappa)\,\tau\right) - \sqrt{\frac{2-\kappa^2}{2}} \cos \left(\kappa\,\xi\right)\right]^{2}}.$$ If $\cos(\kappa\,\xi) \neq 0$, then $$\frac{\partial u}{\partial \tau} \approx - 2\,\sigma(\kappa)\,\sqrt{\frac{2-\kappa^2}{2}}\,\cos(\kappa\,\xi)\,(\kappa^2 + i\,\sigma(\kappa))\, e^{-\sigma(\kappa)\,\tau} \quad \textmd{if} \; \tau \gg 0,$$ and $$\frac{\partial u}{\partial \tau} \approx 2\,\sigma(\kappa)\,\sqrt{\frac{2-\kappa^2}{2}}\,\cos(\kappa\,\xi)\,(\kappa^2 - i\,\sigma(\kappa))\, e^{\,\sigma(\kappa)\,\tau} \quad \textmd{if} \; \tau \ll 0.$$ If $\cos(\kappa\,\xi) = 0$, e.g. $\xi = \frac{\pi}{2\,\kappa}$, then $$\frac{\partial u}{\partial \tau} \approx 4\,i\,\sigma^2(\kappa) \,e^{-2\,\sigma(\kappa)\,\tau} \quad \textmd{for} \; \tau \gg 0,$$ and $$\frac{\partial u}{\partial \tau} \approx 4\,i\,\sigma^2(\kappa)\, e^{\,2\,\sigma(\kappa)\,\tau} \quad \textmd{for} \; \tau \ll 0.$$ In all cases $\partial_{\tau}u$ decays exponentially: at the rate $\sigma(\kappa)$ away from the zeros of $\cos(\kappa\,\xi)$, and at the rate $2\,\sigma(\kappa)$ at those zeros. Next, let us find the relation between the maximum amplitude of the exact solution (\[exact2\]) and the modulation wavenumber $\kappa$, where $0 < \kappa < \sqrt {2}$. The maximum value of the complex amplitude is attained at $\xi \equiv 0\;(\textmd{mod}\;2\pi/\kappa)$ and $\tau = 0$.
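The limiting background level and the location of the peak can be verified numerically from (\[exact2\]); a sketch, using the normalization $r_{0} = \beta = \gamma = 1$ adopted above:

```python
import cmath, math

def sfb(xi, tau, kappa):
    """Soliton on finite background, normalization r0 = beta = gamma = 1."""
    s = kappa * math.sqrt(2 - kappa**2)        # growth rate sigma(kappa)
    c = math.sqrt((2 - kappa**2) / 2)
    num = ((kappa**2 - 1) * math.cosh(s * tau) + c * math.cos(kappa * xi)
           + 1j * s * math.sinh(s * tau))
    den = math.cosh(s * tau) - c * math.cos(kappa * xi)
    return num / den * cmath.exp(1j * tau)

for kappa in (1.0, 0.5, 0.1):
    print(abs(sfb(0.3, 50.0, kappa)))   # background level: tends to 1
    print(abs(sfb(0.0, 0.0, kappa)))    # peak amplitude, at xi = 0, tau = 0
```

The printed peak values grow as $\kappa$ decreases, in line with the analysis of the maximum amplitude that follows.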
So, we have $$|\psi|_{\textmd{\scriptsize{max}}} = |\psi(0,0\,;\,\kappa)| = \frac{\kappa^{2} - 1 + \sqrt{1 - \frac{1}{2} \kappa^{2}}}{1 - \sqrt{1 - \frac{1}{2} \kappa^{2}}}.$$ Using the approximation $\sqrt{1+a} \approx 1 + \frac{1}{2} a$ for small $a$, and applying it to our formula (with $a = - \frac{1}{2} \kappa^{2}$), we obtain $$\begin{aligned} \lim_{\kappa \rightarrow 0}\;|\psi(0,0;\kappa)| = 3.\end{aligned}$$ As a result, the maximum factor of the amplitude amplification is $$\lim_{\kappa \rightarrow 0}\;\frac{|\psi(0,0;\kappa)|}{\lim_{\tau \rightarrow \pm \infty}|\psi(\xi,\tau;\kappa)|} = \frac{3}{1} = 3.$$ Figure \[plotpsi\] shows the plot of the maximum amplitude $|\psi|_{\textmd{\scriptsize{max}}}$ as a function of the modulation wavenumber $\kappa$. To summarize, Figure \[plotgabung\] compares the plots of the dispersion relation $\Omega$ and its quadratic approximation (which leads to the NLS equation), the growth rate $\sigma$ (which is related to the Benjamin–Feir instability), and the maximum amplitude of $\psi$ as functions of the wavenumber $k$. Conclusion ========== In this paper we modelled surface waves in wave tanks of hydrodynamics laboratories using the NLS equation. We analyzed the linearized NLS equation and found that its solutions have a tendency to grow exponentially. We also considered the exact solution of the NLS equation known as the SFB, which is the nonlinear continuation of this linear instability. Using this solution, we found the maximum amplitude in space and time when the modulation wavenumber is in the interval of the Benjamin–Feir instability. The main result of this paper is that the maximum factor of the amplitude amplification due to the Benjamin–Feir instability is three. As we can see from Figure \[plotgabung\], the growth rate from the Benjamin–Feir instability does not determine the maximum amplitude amplification of the SFB. The results of this paper will be used in a further study relating the Benjamin–Feir instability and the Phase–Amplitude equations.
**Acknowledgement** This work was carried out at the University of Twente, The Netherlands, as part of the project ’Extreme Waves’ TWI.5374 of the Netherlands Organization of Scientific Research NWO, subdivision Applied Sciences STW. [^1]: E-mail: n.karjanto@math.utwente.nl [^2]: E-mail: groesen@math.utwente.nl [^3]: On leave from Center of Nonlinear Studies, Institute of Cybernetics at Tallinn Technical University, Akadeemia Road 21, 12618 Tallinn, Estonia, e-mail: pearu@cens.ioc.ee
--- abstract: | Separation Logic ($\seplog$) is a well-known assertion language used in Hoare-style modular proof systems for programs with dynamically allocated data structures. In this paper we investigate the fragment of first-order $\seplog$ restricted to the Bernays-Schönfinkel-Ramsey quantifier prefix $\exists^*\forall^*$, where the quantified variables range over the set of memory locations. When this set is uninterpreted (has no associated theory) the fragment is <span style="font-variant:small-caps;">PSPACE</span>-complete, which matches the complexity of the quantifier-free fragment [@CalcagnoYangOHearn01]. However, $\seplog$ becomes undecidable when the quantifier prefix belongs to $\exists^*\forall^*\exists^*$ instead, or when the memory locations are interpreted as integers with linear arithmetic constraints, thus setting a sharp boundary for decidability within $\seplog$. We have implemented a decision procedure for the decidable fragment of $\exists^*\forall^*\seplog$ as a specialized solver inside a DPLL($T$) architecture, within the CVC4 SMT solver. The evaluation of our implementation was carried out using two sets of verification conditions, produced by unfolding inductive predicates, and a weakest precondition-based verification condition generator. Experimental data shows that automated quantifier instantiation has little overhead, compared to manual model-based instantiation. author: - Andrew Reynolds - Radu Iosif - Cristina Serban bibliography: - 'refs.bib' title: 'Reasoning in the Bernays-Schönfinkel-Ramsey Fragment of Separation Logic' --- Introduction {#sec:intro} ============ Separation Logic ($\seplog$) is a popular logical framework for program verification, used by a large number of methods, ranging from static analysis [@Predator; @Xisa; @Infer] to Hoare-style proofs [@Sleek] and property-guided abstraction refinement [@SplInter]. 
The salient features that make $\seplog$ particularly attractive for program verification are the ability to define recursive data structures using small and natural inductive definitions, weakest pre- and post-condition calculi that capture the semantics of programs with pointers, and compositional verification methods, based on the principle of local reasoning (analyzing separately pieces of a program working on disjoint heaps). Consider, for instance, the following inductive definitions, describing an acyclic and a possibly cyclic list segment, respectively: $$\begin{array}{ll} \widehat{\mathsf{ls}}(\xterm,\yterm) \equiv \emp \wedge \xterm=\yterm ~\vee~ \xterm \neq \yterm \wedge \exists \zterm ~.~ \xterm \mapsto \zterm * \widehat{\mathsf{ls}}(\zterm,\yterm) & \text{ acyclic list segment from $\xterm$ to $\yterm$} \\ \mathsf{ls}(\xterm,\yterm) \equiv \emp \wedge \xterm=\yterm ~\vee~ \exists \uterm ~.~ \xterm \mapsto \uterm * \mathsf{ls}(\uterm,\yterm) & \text{ list segment from $\xterm$ to $\yterm$} \end{array}$$ Intuitively, an acyclic list segment is either empty, in which case the head and the tail coincide ($\emp \wedge \xterm=\yterm$), or it contains at least one element which is disjoint from the rest of the list segment. We denote by $\xterm \mapsto \zterm$ the fact that $\xterm$ is an allocated memory location, which points to $\zterm$, and by $\xterm \mapsto \zterm * \widehat{\mathsf{ls}}(\zterm,\yterm)$ the fact that $\xterm \mapsto \zterm$ and $\widehat{\mathsf{ls}}(\zterm,\yterm)$ hold over disjoint parts of the heap. The constraint $\xterm\neq\yterm$, in the inductive definition of $\widehat{\mathsf{ls}}$, captures the fact that the tail of the list segment is distinct from every allocated cell in the list segment, which ensures the acyclicity condition.
Since this constraint is omitted from the definition of the second (possibly cyclic) list segment $\mathsf{ls}(\xterm,\yterm)$, its tail $\yterm$ is allowed to point inside the set of allocated cells. Automated reasoning is the key enabler of push-button program verification. Any procedure that checks the validity of a logical entailment between inductive predicates requires checking the satisfiability of formulae from the base (non-inductive) assertion language, as shown by the example below. Consider the following fragment of the inductive proof showing that any acyclic list segment is also a list segment: $$\infer[]{\widehat{\mathsf{ls}}(\xterm,\yterm) \vdash \mathsf{ls}(\xterm,\yterm)}{ \infer[\begin{array}{l}\xterm \neq \yterm \wedge \xterm \mapsto \zterm \models \exists \uterm ~.~ \xterm \mapsto \uterm \\ \text{by instantiation } \uterm \leftarrow \zterm \end{array}]{ \xterm \neq \yterm \wedge \xterm \mapsto \zterm * \widehat{\mathsf{ls}}(\zterm,\yterm) \vdash \exists \uterm ~.~ \xterm \mapsto \uterm * \mathsf{ls}(\uterm,\yterm) }{\widehat{\mathsf{ls}}(\zterm,\yterm) \vdash \mathsf{ls}(\zterm,\yterm)}}$$ The first (bottom) inference in the proof corresponds to one of the two cases produced by unfolding both the antecedent and consequent of the entailment (the second case $\emp \wedge \xterm=\yterm \vdash \emp \wedge \xterm=\yterm$ is trivial and omitted for clarity). The second inference is a simplification of the sequent obtained by unfolding, to a sequent matching the initial one (by renaming $\zterm$ to $\xterm$), and allows us to conclude this branch of the proof by an inductive argument, based on the principle of infinite descent [@BrotherstonSimpson11].
The simplification applied by the second inference above relies on the validity of the entailment $\xterm \neq \yterm \wedge \xterm \mapsto \zterm \models \exists \uterm ~.~ \xterm \mapsto \uterm$, which reduces to the (un)satisfiability of the formula $\xterm \neq \yterm \wedge \xterm \mapsto \zterm \wedge \forall \uterm ~.~ \neg \xterm \mapsto \uterm$. The latter falls into the Bernays-Schönfinkel-Ramsey fragment, defined by the $\exists^*\forall^*$ quantifier prefix, and can be proved unsatisfiable using the instantiation of the universally quantified variable $\uterm$ with the existentially quantified variable $\zterm$ (or a corresponding Skolem constant). In other words, this formula is unsatisfiable because the universally quantified subformula asks that no memory location is pointed to by $\xterm$, which is contradicted by $\xterm \mapsto \zterm$. The instantiation of $\uterm$ that violates the universal condition is $\uterm\leftarrow\zterm$, which is carried over in the rest of the proof. The goal of this paper is to mechanize satisfiability checking for the Bernays-Schönfinkel-Ramsey fragment of $\seplog$, without inductively defined predicates[^1]. This fragment is defined by the quantifier prefix of the formulae in prenex normal form. We consider formulae $\exists x_1 \ldots \exists x_m \forall y_1 \ldots \forall y_n ~.~ \phi(x_1,\ldots,x_m,y_1,\ldots,y_n)$, where $\phi$ is any quantifier-free formula of $\seplog(T)_{\locs,\data}$, consisting of pure formulae from a given base theory $T$, and points-to atomic propositions relating terms of $T$, combined with unrestricted Boolean and separation connectives, and the quantified variables range essentially over the set of memory locations.
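The instantiation argument above can be replayed on explicit finite heaps. A toy sketch, in which a heap is modelled as a Python dict from locations to values and all names are hypothetical:

```python
# A heap is a finite partial map (dict) from locations to values.
NIL = 0   # interpretation of the constant nil

def points_to(h, x, v):
    """I,h |= x |-> v : the heap is exactly the cell {(x, v)} and x != nil."""
    return x != NIL and h == {x: v}

# The formula  x != y /\ x |-> z /\ forall u. not(x |-> u)  from the example:
# on the heap h = {x: z} the quantifier-free part is satisfied, ...
x, y, z = 1, 2, 3
h = {x: z}
assert x != y and points_to(h, x, z)

# ... but the universal conjunct is refuted by the instantiation u <- z,
# which is exactly the conflicting instance carried over in the proof.
conflict = [u for u in (NIL, x, y, z) if points_to(h, x, u)]
print(conflict)   # [3]: u = z falsifies  forall u. not(x |-> u)
```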
In a nutshell, the contributions of the paper are two-fold:

- We draw a sharp boundary between decidability and undecidability, proving essentially that the satisfiability problem for the Bernays-Schönfinkel-Ramsey fragment of $\seplog$ is <span style="font-variant:small-caps;">PSPACE</span>-complete if the domain of memory locations is an uninterpreted set, whereas interpreting memory locations as integers with linear arithmetic constraints leads to undecidability. Moreover, undecidability occurs even for uninterpreted memory locations if we extend the quantifier prefix to $\exists^*\forall^*\exists^*$.
- We have implemented an effective decision procedure for quantifier instantiation, based on counterexample-driven learning of conflict lemmas, integrated within the DPLL($T$) architecture [@GanzingerHagenNieuwenhuisOliverasTinelli04] of the CVC4 SMT solver [@CVC4-CAV-11]. Experimental evaluation of our implementation shows that the overhead of the push-button quantifier instantiation is negligible, compared to the time required to solve a quantifier-free instance of the problem, obtained manually by model inspection.

#### **Related Work**

The first theoretical results on the decidability and computational complexity of $\seplog$ (without inductive definitions) were found by Calcagno, Yang and O’Hearn [@CalcagnoYangOHearn01]. They showed that the satisfiability problem for $\seplog$ is undecidable, in the presence of quantifiers, assuming that each memory location can point to two other locations, i.e. using atomic propositions of the form $\xterm \mapsto (\yterm,\zterm)$. Decidability can be recovered by considering the quantifier-free fragment, proved to be <span style="font-variant:small-caps;">PSPACE</span>-complete, by a small model argument [@CalcagnoYangOHearn01]. Refinements of these results consider decidable fragments of $\seplog$ with one record field (atomic points-to propositions $\xterm \mapsto \yterm$), and one or two quantified variables.
In a nutshell, $\seplog$ with one record field and separating conjunction only is decidable with non-elementary time complexity, whereas adding the magic wand adjoint leads to undecidability [@BrocheninDemriLozes11]. Decidability, in the presence of the magic wand operator, is recovered by restricting the number of quantifiers to one, in which case the logic becomes <span style="font-variant:small-caps;">PSPACE</span>-complete [@DemriGalmicheWendlingMery14]. This bound is sharp, because allowing two quantified variables leads to undecidability, and to decidability with non-elementary time complexity if the magic wand is removed [@DemriDeters15]. SMT techniques were applied to deciding the satisfiability of $\seplog$ in the work of Piskac, Wies and Zufferey [@Piskac2013; @Piskac2014]. They considered quantifier-free fragments of $\seplog$ with separating conjunction in positive form (not occurring under negation) and without magic wand, and allowed for hardcoded inductive predicates (list and tree segments). In a similar spirit, we previously defined a translation to multi-sorted second-order logic combined with counterexample-driven instantiation for set quantifiers to obtain a decision procedure for the quantifier-free fragment of $\seplog$ [@ReynoldsIosifKingSerban16]. In a different vein, a tableau-based semi-decision procedure is given by Méry and Galmiche [@MeryGalmiche07]. Termination of this procedure is guaranteed for the (decidable) quantifier-free fragment of $\seplog$, yet no implementation is available for comparison. A number of automated theorem provers have efficient and complete approaches for the Bernays-Schönfinkel-Ramsey fragment of first-order logic, also known as effectively propositional logic (EPR) [@DBLP:journals/ijait/BaumgartnerFT06; @DBLP:conf/cade/Korovin08]. A dedicated approach for EPR in the SMT solver Z3 was developed in [@DBLP:journals/jar/PiskacMB10].
An approach based on finite model finding is implemented in CVC4 [@ReyEtAl-1-RR-13], which is model-complete for EPR. Our approach is based on counterexample-guided quantifier instantiation, which has been used in the context of SMT solving in previous works [@GeDeM-CAV-09; @ReynoldsDKBT15Cav]. Preliminaries ============= We consider formulae in multi-sorted first-order logic. A *signature* $\Sigma$ consists of a set $\ssorts{\Sigma}$ of sort symbols and a set $\sfuns{\Sigma}$ of (sorted) *function symbols* $f^{\sigma_1 \cdots \sigma_n \sigma}$, where $n \geq 0$ and $\sigma_1, \ldots, \sigma_n, \sigma \in \ssorts{\Sigma}$. If $n=0$, we call $f^\sigma$ a *constant symbol*. In this paper, we consider signatures $\Sigma$ containing the Boolean sort, and write $\top$ and $\bot$ for the Boolean constants *true* and *false*. For this reason, we do not consider predicate symbols as part of a signature, as predicates are viewed as Boolean functions. Additionally, we assume for any finite sequence of sorts $\sigma_1, \ldots, \sigma_n \in \ssorts{\Sigma}$, the *tuple* sort $\sigma_1 \times \ldots \times \sigma_n$ also belongs to $\ssorts{\Sigma}$, and that $\sfuns{\Sigma}$ includes the $i^{th}$ tuple projection function for each $i = 1, \ldots, n$. For each $k > 0$, let $\sigma^k$ denote the $k$-tuple sort $\sigma \times \ldots \times \sigma$. Let $\vars$ be a countable set of first-order variables, each $x^\sigma \in \vars$ having an associated sort $\sigma$. First-order terms and formulae over the signature $\Sigma$ (called $\Sigma$-terms and $\Sigma$-formulae) are defined as usual. For a $\Sigma$-formula $\varphi$, we denote by $\fv{\varphi}$ the set of free variables and constant symbols in $\varphi$, and by writing $\varphi(x)$ we mean that $x \in \fv{\phi}$. Whenever $\fv{\phi} \cap \vars = \emptyset$, we say that $\phi$ is a *sentence*, i.e. $\phi$ has no free variables. 
A *$\Sigma$-interpretation $\I$* maps: each sort symbol $\sigma \in \Sigma$ to a non-empty set $\sigma^\I$, each function symbol $f^{\sigma_1,\ldots,\sigma_n,\sigma} \in \Sigma$ to a total function $f^\I : \sigma^\I_1 \times \ldots \times \sigma^\I_n \rightarrow \sigma^\I$ where $n > 0$, and to an element of $\sigma^\I$ when $n = 0$, and each variable $x^\sigma \in \vars$ to an element of $\sigma^\I$. For an interpretation $\I$ a sort symbol $\sigma$ and a variable $x$, we denote by $\I[\sigma \leftarrow S]$ and, respectively $\I[x \leftarrow v]$, the interpretation associating the set $S$ to $\sigma$, respectively the value $v$ to $x$, and which behaves like $\I$ in all other cases[^2]. For a $\Sigma$-term $t$, we write $t^\I$ to denote the interpretation of $t$ in $\I$, defined inductively, as usual. A satisfiability relation between $\Sigma$-interpretations and $\Sigma$-formulas, written $\I \models \varphi$, is also defined inductively, as usual. We say that $\I$ is *a model of $\varphi$* if $\I$ satisfies $\varphi$. A (multi-sorted first-order) *theory* is a pair $T = (\Sigma, \mods)$ where $\Sigma$ is a signature and $\mods$ is a non-empty set of $\Sigma$-interpretations, the *models* of $T$. We assume that $\Sigma$ always contains the equality predicate, which we denote by $\teq$, as well as projection functions for each tuple sort. A $\Sigma$-formula $\varphi$ is *$T$-satisfiable* if it is satisfied by some interpretation in $\mods$. We write $\euf$ to denote the empty theory (with equality), whose signature consists of a sort $U$ with no additional function symbols, and $\lia$ to denote the theory of linear integer arithmetic, whose signature consists of the sort $\Int$, the binary predicate symbol $\geq$, function $+$ denoting addition, and the constants $0,1$ of sort $\Int$, interpreted as usual. In particular, there are no uninterpreted function symbols in $\lia$. 
By $\euflia$ we denote the theory obtained by extending the signature of $\lia$ with the sort $U$ of $\euf$ and the equality over $U$. Let $T = (\Sigma, \mods)$ be a theory and let $\locs$ and $\data$ be two sorts from $\Sigma$, with no restriction other than the fact that $\locs$ is always interpreted as a countable set. Also, we consider that $\Sigma$ has a designated constant symbol $\nil^\locs$. The *Separation Logic* fragment $\seplog(T)_{\locs,\data}$ is the set of formulae generated by the following syntax: $$\begin{array}{lcl} \varphi & := & \phi \mid \emp \mid \tterm \mapsto \uterm \mid \varphi_1 * \varphi_2 \mid \varphi_1 \wand \varphi_2 \mid \neg \varphi_1 \mid \varphi_1 \wedge \varphi_2 \mid \exists x^\sigma ~.~ \varphi_1(x) \end{array}$$ where $\phi$ is a $\Sigma$-formula, and $\tterm$, $\uterm$ are $\Sigma$-terms of sorts $\locs$ and $\data$, respectively. As usual, we write $\forall x^\sigma ~.~ \varphi(x)$ for $\neg\exists x^\sigma ~.~ \neg\varphi(x)$. We omit specifying the sorts of variables and constants when they are clear from the context. Given an interpretation $\I$, a *heap* is a finite partial mapping $h : \locs^\I \rightharpoonup_{\mathrm{fin}} \data^\I$. For a heap $h$, we denote by $\dom(h)$ its domain. For two heaps $h_1$ and $h_2$, we write $h_1 \# h_2$ for $\dom(h_1) \cap \dom(h_2) = \emptyset$ and $h = h_1 \uplus h_2$ for $h_1 \# h_2$ and $h = h_1 \cup h_2$. We define the *satisfaction relation* $\I,h \models_{\tinyseplog} \phi$ inductively, as follows: $$\begin{array}{lcl} \I,h \models_{\tinyseplog} \phi & \iff & \I \models \phi \text{ if $\phi$ is a $\Sigma$-formula} \\ \I,h \models_{\tinyseplog} \emp & \iff & h = \emptyset \\ \I,h \models_{\tinyseplog} \tterm \mapsto \uterm & \iff & h = \{(\tterm^\I,\uterm^\I)\} \text{ and } \tterm^\I\not\teq\nil^\I \\ \I,h \models_{\tinyseplog} \phi_1 * \phi_2 & \iff & \text{there exist heaps } h_1,h_2 \text{ s.t. 
} h=h_1\uplus h_2 \text{ and } \I,h_i \models_{\tinyseplog} \phi_i, i = 1,2 \\ \I,h \models_{\tinyseplog} \phi_1 \wand \phi_2 & \iff & \text{for all heaps } h' \text{, if } h'\#h \text{ and } \I,h' \models_{\tinyseplog} \phi_1 \text{ then } \I,h'\uplus h \models_{\tinyseplog} \phi_2 \\ \I,h \models_{\tinyseplog} \exists x^S . \varphi(x) & \iff & \I[x \leftarrow s],h \models_{\tinyseplog} \varphi(x) \text{, for some }s \in S^\I \end{array}$$ The satisfaction relation for $\Sigma$-formulae, Boolean connectives $\wedge$, $\neg$, and linear arithmetic atoms is the classical one from first-order logic. Notice that the range of a quantified variable $x^S$ is the interpretation of its associated sort $S^\I$. A formula $\varphi$ is said to be *satisfiable* if there exists an interpretation $\I$ and a heap $h$ such that $\I,h \models_{\tinyseplog} \varphi$. The $(\seplog,T)$-*satisfiability problem* asks, given an $\seplog$ formula $\varphi$, whether there exists an interpretation $\I$ of $T$ and a heap $h$ such that $\I,h \models_{\tinyseplog} \varphi$. We write $\varphi \models_{\tinyseplog} \psi$ if for every interpretation $\I$ and heap $h$, if $\I,h \models_{\tinyseplog} \varphi$ then $\I,h \models_{\tinyseplog} \psi$, and we say that $\varphi$ *entails* $\psi$ in this case. #### **The Bernays-Schönfinkel-Ramsey Fragment of $\seplog$** In this paper we address the satisfiability problem for the class of sentences $\phi \equiv \exists x_1 \ldots \exists x_m \forall y_1 \ldots \forall y_n ~.~ \varphi(x_1, \ldots, x_m, y_1, \ldots, y_n)$, where $\varphi$ is a quantifier-free formula of $\seplog(T)_{\locs,\data}$. We shall denote this fragment by $\exists^*\forall^*\seplog(T)_{\locs,\data}$. It is easy to see that any sentence $\phi$, as above, is satisfiable if and only if the sentence $\forall y_1 \ldots \forall y_n ~.~ \varphi[c_1/x_1, \ldots, c_m/x_m]$ is satisfiable, where $c_1,\ldots,c_m$ are fresh (Skolem) constant symbols.
The latter is called the *functional form* of $\phi$. As previously mentioned, $\seplog$ is mainly used to specify properties of a program’s heap. If the program under consideration uses pointer arithmetic, as in C or C++, it is useful to consider $\lia$ as the theory of memory addresses. Otherwise, if the program only compares the values of the pointers for equality, as in Java, one can use $\euf$ for this purpose. This distinction led us to consider the satisfiability problem for $\exists^*\forall^*\seplog(T)_{\locs,\data}$ in the following cases:

1. $\locs$ is interpreted as the sort $U$ of $\euf$ and $\data$ as $U^k$, for some $k\geq1$. The satisfiability problem for the fragment $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ is <span style="font-variant:small-caps;">PSPACE</span>-complete, and the proof follows a small model property argument.

2. As above, with the further constraint that $U$ is interpreted as an infinite countable set, i.e. of cardinality $\aleph_0$. In this case, we prove a cut-off property stating that all locations not in the domain of the heap and not used in the interpretation of constants are equivalent from the point of view of an $\seplog$ formula. This satisfiability problem reduces to the unconstrained one above, and is also <span style="font-variant:small-caps;">PSPACE</span>-complete.

3. Both $\locs$ and $\data$ are interpreted as $\Int$, equipped with addition and total order, in which case $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$ is undecidable.

4. $\locs$ is interpreted as the sort $U$ of $\euf$, and $\data$ as $U \times \Int$. Then $\exists^*\forall^*\seplog(\euflia)_{U,U \times \Int}$ is undecidable.

Additionally, we prove that the fragment $\exists^*\forall^*\exists^*\seplog(\euf)_{U,U^k}$, with two quantifier alternations, is undecidable if $k\geq2$. The question whether the fragment $\exists^*\forall^*\seplog(\euflia)_{U,\Int}$ is decidable is currently open and left for future work.
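To make the satisfaction relation above concrete, here is a minimal executable sketch over a tiny finite universe, for $k = 1$ and ground formulas represented as nested tuples. The encoding, the universe, and the helper names are ours; the brute-force enumeration of heaps in the $\wand$ case is only feasible because the universe is tiny:

```python
from itertools import product, combinations

NIL = 0
LOCS = (0, 1, 2)   # a tiny location universe; 0 plays the role of nil

def heaps():
    """Enumerate every heap over LOCS: finite partial maps loc -> loc."""
    for vals in product((None,) + LOCS, repeat=len(LOCS)):
        yield {l: v for l, v in zip(LOCS, vals) if v is not None}

def sat(h, f):
    """The satisfaction relation I,h |= phi for ground formulas (k = 1)."""
    op = f[0]
    if op == 'emp':   return h == {}
    if op == 'false': return False
    if op == 'not':   return not sat(h, f[1])
    if op == 'and':   return sat(h, f[1]) and sat(h, f[2])
    if op == 'pto':   return h == {f[1]: f[2]} and f[1] != NIL
    if op == 'sep':   # split h into two disjoint sub-heaps
        dom = list(h)
        return any(sat({l: h[l] for l in d1}, f[1]) and
                   sat({l: h[l] for l in h if l not in d1}, f[2])
                   for r in range(len(dom) + 1)
                   for d1 in combinations(dom, r))
    if op == 'wand':  # every disjoint extension satisfying f1 satisfies f2
        return all(sat({**h, **hp}, f[2])
                   for hp in heaps()
                   if not set(hp) & set(h) and sat(hp, f[1]))
    raise ValueError(op)

def alloc(x):
    # the known encoding alloc(x) = x |-> x -* false: x is in dom(h)
    return ('wand', ('pto', x, x), ('false',))

h = {1: 2, 2: NIL}                       # the two-cell list 1 -> 2 -> nil
assert sat(h, ('sep', ('pto', 1, 2), ('pto', 2, NIL)))
assert sat(h, alloc(1)) and not sat({1: 2}, alloc(2))
```

The last assertion exercises the encoding $\mathsf{alloc}(x) \equiv x \mapsto x \wand \bot$, which holds exactly when the interpretation of $x$ is in the heap's domain.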
Decidability and Complexity Results
===================================

This section defines the decidable cases of the Bernays-Schönfinkel-Ramsey fragment of $\seplog$, with matching undecidable extensions. The decidable fragment $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ relies on a small model property given in Section \[sec:small-model\]. Undecidability of $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$ is obtained by a refinement of the undecidability proof for Presburger arithmetic with one monadic predicate [@Halpern91], in Section \[sec:undecidability\].

Small Model Property {#sec:small-model}
--------------------

The decidability proof for the quantifier-free fragment of $\seplog$ [@CalcagnoYangOHearn01; @YangPhd] relies on a small model property. Intuitively, no quantifier-free $\seplog$ formula can distinguish between heaps in which the number of invisible locations (those not in the range of the set of free variables) exceeds a certain threshold, linear in the size of the formula. Consequently, a formula is satisfiable iff it has a heap model of size linear in the size of the input formula. To keep the presentation self-contained, we recall a number of definitions and results from [@YangPhd]. Some of them are slightly modified for our purposes, but these changes have no effect on the validity of the original proofs of Lemmas \[lemma:small-model-existence\] and \[lemma:heap-equivalence\] below. In the rest of this section, we consider formulae of $\seplog(\euf)_{U,U^k}$, meaning that $\locs = U$ and there exists an integer $k>0$ such that $\data = U^k$, where $U$ is the (uninterpreted) sort of $\euf$. We fix $k$ for the rest of this section. [@YangPhd Definition 90] Given a set of locations $S$, the equivalence relation $=_S$ between $k$-tuples of locations is defined as: $\langle v_1, \ldots, v_k \rangle =_S \langle v'_1, \ldots, v'_k\rangle$ if and only if, for all $i = 1, \ldots, k$, we have that $v_i \in S$ implies $v_i=v'_i$, and $v_i \not\in S$ implies $v'_i \not\in S$.
Intuitively, $=_S$ restricts the equality to the elements in $S$. Observe that $=_S$ is an equivalence relation and that $S \subseteq T$ implies $=_T ~\subseteq~ =_S$. In the following, for a set $S$, we write $\card{S}$ for its cardinality.

[@YangPhd Definition 91]\[def:heap-equivalence\] Given an interpretation $\I$, an integer $n>0$, a set of variables $X \subseteq \vars$ and a set of locations $S \subseteq U^\I$, for any two heaps $h,h' : U^\I \rightharpoonup_{\mathrm{fin}} (U^\I)^k$, we define $h \sim^\I_{n,X,S} h'$ if and only if:

1. $\I(X) \cap \dom(h) = \I(X) \cap \dom(h')$,
2. for all $\ell \in \I(X) \cap \dom(h)$, we have $h(\ell) =_{\I(X)\cup S} h'(\ell)$,
3. if $\card{\dom(h) \setminus \I(X)} < n$ then $\card{\dom(h) \setminus \I(X)} = \card{\dom(h') \setminus \I(X)}$,
4. if $\card{\dom(h) \setminus \I(X)} \geq n$ then $\card{\dom(h') \setminus \I(X)} \geq n$.

Observe that, for any $n \leq m$ and $S \subseteq T$, we have $\sim^\I_{m,X,T} ~\subseteq~ \sim^\I_{n,X,S}$. In addition, for any integer $k>0$, subset $S \subseteq U^\I$ and location $\ell \in U^\I$, we consider the function $\mathit{prun}_{k,S}^\ell(\ell_1, \ldots, \ell_k)$, which replaces each value $\ell_i \not\in S$ in its argument list by $\ell$.

[@YangPhd Lemma 94]\[lemma:small-model-existence\] Given an interpretation $\I$ and a heap $h : U^\I \rightharpoonup_{\mathrm{fin}} (U^\I)^k$, for each integer $n>0$, each set of variables $X \subseteq \vars$, each set of locations $L \subseteq U^\I$ such that $L \cap \I(X) = \emptyset$ and $\card{L}=n$, and each location $v \in U^\I \setminus (\I(X) \cup \{\nil^\I\} \cup L)$, there exists a heap $h' : U^\I \rightharpoonup_{\mathrm{fin}} (U^\I)^k$ with the following properties:

1. $h \sim^\I_{n,X,L} h'$,
2. $\dom(h') \setminus \I(X) \subseteq L$,
3. for all $\ell \in \dom(h')$, we have $h'(\ell) = \mathit{prun}_{k,\I(X)\cup L}^v(h(\ell))$.
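The pruning function admits a direct rendering in code; a small illustrative sketch (tuples and sets model the $k$-tuples of locations and the visible set $S$, and the heap-wide variant mirrors the last property of the lemma):

```python
def prun(values, S, l):
    """prun^l_{k,S}: replace every component of the tuple that is
    outside the visible set S by the default location l."""
    return tuple(v if v in S else l for v in values)

def prune_heap(h, S, l):
    """Rewrite every image tuple of a heap, as in the last property
    of the small-model-existence lemma."""
    return {loc: prun(vals, S, l) for loc, vals in h.items()}

# visible locations {1, 2}; the invisible values 7 and 9 collapse to v = 5
assert prun((1, 7, 2, 9), {1, 2}, 5) == (1, 5, 2, 5)
assert prune_heap({1: (2, 7)}, {1, 2}, 5) == {1: (2, 5)}
```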
Next, we define the following measure on quantifier-free $\seplog$ formulae: $$\begin{array}{ccccccc} \len{\phi*\psi} = \len{\phi}+\len{\psi} & \hspace*{2mm} & \len{\phi \wand \psi} = \len{\psi} & \hspace*{2mm} & \len{\phi\wedge\psi} = \max(\len{\phi},\len{\psi}) & \hspace*{2mm} & \len{\neg\phi} = \len{\phi} \\ \len{\tterm \mapsto \uterm} = 1 & \hspace*{2mm} & \len{\emp} = 1 & \hspace*{2mm} & \len{\phi} = 0 \text{ if $\phi$ is a $\Sigma$-formula} \end{array}$$ Intuitively, $\len{\varphi}$ is the maximum number of invisible locations (those not in $\I(\fv{\varphi})$) that can be distinguished by the quantifier-free $\seplog(\euf)_{U,U^k}$ formula $\varphi$. The crux of the <span style="font-variant:small-caps;">PSPACE</span>-completeness proof for quantifier-free $\seplog(\euf)_{U,U^k}$ is that two heaps equivalent up to $\len{\varphi}$ invisible locations are also equivalent from the point of view of satisfiability of $\varphi$, which provides a small model property for this fragment [@YangPhd; @CalcagnoYangOHearn01]. [@YangPhd Prop. 95]\[lemma:heap-equivalence\] Given a quantifier-free $\seplog(\euf)_{U,U^k}$ formula $\varphi$, an interpretation $\I$, and two heaps $h$ and $h'$, if $h \sim^\I_{\len{\varphi},\fv{\varphi},\emptyset} h'$ and $\I,h \models_{\tinyseplog} \varphi$ then $\I,h' \models_{\tinyseplog} \varphi$. Our first aim is to extend this result to $\exists^*\forall^*\seplog(\euf)_{U,U^k}$. This new small model property is given by the next lemma. \[lemma:small-model-property\] Let $\varphi(x_1^U,\ldots,x_n^U)$ be a quantifier-free $\seplog(\euf)_{U,U^k}$-formula, and $\varphi^\forall \equiv \forall x_1^U \ldots \forall x_n^U ~.~ \varphi(x_1^U,\ldots,x_n^U)$ be its universal closure.
Then $\varphi^\forall$ has a model if and only if there exists an interpretation $\I$ and a heap $h : U^\I \rightharpoonup_{\mathrm{fin}} (U^\I)^k$ such that $\I,h \models_{\tinyseplog} \varphi^\forall$ and:

1. $\card{U^\I} \leq \len{\varphi} + \card{\fv{\varphi^\forall}} + n$,
2. $\dom(h) \subseteq L \cup \I(\fv{\varphi^\forall})$,
3. for all $\ell \in \dom(h)$, we have $h(\ell) \in (\I(\fv{\varphi^\forall}) \cup \{\nil^\I\} \cup L \cup \{v\})^k$,

where $L \subseteq U^\I \setminus \I(\fv{\varphi^\forall})$ is a set of locations such that $\card{L} = \len{\varphi}+n$ and $v \in U^\I \setminus (\I(\fv{\varphi^\forall}) \cup \{\nil^\I\} \cup L)$ is an arbitrary location.

We are ready to prove two decidability results, based on the above small model property, concerning the cases where $\locs$ is interpreted as a countable set with equality, and where $\locs$ is interpreted as an infinite countable set with no operations other than equality.

Uninterpreted Locations without Cardinality Constraints
-------------------------------------------------------

In this section, we consider the satisfiability problem for the fragment $\exists^*\forall^*\seplog(\euf)_{U,U^k}$, where the location sort $U$ can be interpreted by any (possibly finite) countable set, with no operations other than equality, and the data sort consists of $k$-tuples of locations. \[thm:ul-bsr\] The satisfiability problem for $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ is <span style="font-variant:small-caps;">PSPACE</span>-complete. This result is somewhat surprising, because the classical Bernays-Schönfinkel fragment of first-order formulae with predicate symbols (but no function symbols) and quantifier prefix $\exists^*\forall^*$ is known to be <span style="font-variant:small-caps;">NEXPTIME</span>-complete [@Lewis80 §7]. The explanation lies in the fact that the interpretation of an arbitrary predicate symbol $P(\xterm_1,\ldots,\xterm_n)$ cannot be captured using only points-to atomic propositions, e.g.
$\xterm_1 \mapsto (\xterm_2, \ldots, \xterm_n)$, between locations and tuples of locations, due to the interpretation of points-to’s as heaps[^3] (finite partial functions). The following lemma sets a first decidability boundary for $\seplog(\euf)_{U,U^k}$, by showing how extending the quantifier prefix to $\exists^*\forall^*\exists^*$ leads to undecidability. \[lemma:exists-forall-exists-undecidability\] The satisfiability problem for $\exists^*\forall^*\exists^*\seplog(\euf)_{U,U^k}$ is undecidable, if $k\geq2$. Observe that the result of Lemma \[lemma:exists-forall-exists-undecidability\] sets a fairly tight boundary between the decidable and undecidable fragments of $\seplog$. On the one hand, simplifying the quantifier prefix to $\exists^*\forall^*$ yields a decidable fragment (Theorem \[thm:ul-bsr\]), whereas $\seplog(\euf)_{U,U}$ ($k=1$) without the magic wand ($\wand$) is decidable with non-elementary time complexity, even when considering an unrestricted quantifier prefix [@BrocheninDemriLozes11]. Uninterpreted Locations with Cardinality $\aleph_0$ --------------------------------------------------- We consider the stronger version of the satisfiability problem for $\exists^*\forall^*\seplog(\euf)_{U,U^k}$, where $U$ is interpreted as an infinite countable set (of cardinality $\aleph_0$) with no function symbols, other than equality. Instances of this problem occur when, for instance, the location sort is taken to be $\Int$, but no operations are used on integers, except for testing equality. Observe that this restriction changes the satisfiability status of certain formulae. For instance, $\exists \xterm \forall \yterm ~.~ \yterm \not\teq \nil \Rightarrow (\yterm \mapsto \xterm * \top )$ is satisfiable if $U$ is interpreted as a finite set, but becomes unsatisfiable when $U$ is infinite. 
The reason is that this formula requires every location from $U^\I$ apart from $\nil$ to be part of the domain of the heap, which is impossible due to the fact that only finite heaps are considered by the semantics of $\seplog$. In the following proof, we use the formula $\mathsf{alloc}(\xterm) \equiv \xterm \mapsto (\xterm,\ldots,\xterm) \wand \bot$, expressing the fact that a location variable $\xterm$ is *allocated*, i.e. its interpretation is part of the heap’s domain [@BrocheninDemriLozes11]. Intuitively, we reduce any instance of the $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ satisfiability problem, with $U$ of cardinality $\aleph_0$, to an instance of the same problem without this restriction, by the following cut-off argument: if a free variable is interpreted as a location that is neither part of the heap’s domain nor equal to the interpretation of some constant, then it is not important which particular location is chosen for that interpretation. \[thm:ul-aleph-zero-bsr\] The satisfiability problem for $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ is <span style="font-variant:small-caps;">PSPACE</span>-complete if $U$ is required to have cardinality $\aleph_0$.

Integer Locations with Linear Arithmetic {#sec:undecidability}
----------------------------------------

In the rest of this section we show that the Bernays-Schönfinkel-Ramsey fragment of $\seplog$ becomes undecidable as soon as we use integers to represent the set of locations and combine $\seplog$ with linear integer arithmetic ($\lia$). The proof relies on an undecidability argument for a fragment of Presburger arithmetic with one monadic predicate symbol, interpreted over finite sets. Formally, we denote by $(\exists^*\forall^* \cap \forall^*\exists^*)-\lia$ the set of formulae consisting of a conjunction of two linear arithmetic formulae, one with quantifier prefix in the language $\exists^*\forall^*$ and another with quantifier prefix $\forall^*\exists^*$.
\[thm:bsr-presburger\] The satisfiability problem is undecidable for the fragment $(\exists^*\forall^* \cap \forall^*\exists^*)-\lia$, with one monadic predicate symbol, interpreted over finite sets of integers. We consider now the satisfiability problem for the fragment $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$, where both $\locs$ and $\data$ are taken to be the $\Int$ sort, equipped with addition and total order. Observe that, in this case, the heap consists of a set of lists, possibly with aliases and circularities. Without loss of generality, we consider that $\Int$ is interpreted as the set of positive integers[^4]. The above theorem cannot be directly used for the undecidability of $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$, by interpreting the (unique) monadic predicate as the (finite) domain of the heap. The problem lies with the \[eq:sqr\] formula, which defines the interpretation of the monadic predicate as a set of consecutive perfect squares $0,1,\ldots,n^2$, and whose quantifier prefix lies in the $\forall^*\exists^*$ fragment. We overcome this problem by replacing the \[eq:sqr\] formula with a definition of such sets in $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$. Let us first consider the following properties expressed in $\seplog$ [@BrocheninDemriLozes11]: $$\begin{array}{rcl} \sharp\xterm\geq1 & \equiv & \exists\uterm ~.~ \uterm \mapsto \xterm * \top \\ \sharp\xterm\leq1 & \equiv & \forall\uterm\forall\tterm ~.~ \neg(\uterm \mapsto \xterm * \tterm \mapsto \xterm * \top) \end{array}$$ Intuitively, $\sharp\xterm\geq1$ states that $\xterm$ has at least one predecessor in the heap, whereas $\sharp\xterm\leq1$ states that $\xterm$ has at most one predecessor. We use $\sharp\xterm=0$ and $\sharp\xterm=1$ as shorthands for $\neg(\sharp\xterm\geq1)$ and $\sharp\xterm\geq1 \wedge \sharp\xterm\leq1$, respectively.
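The intended meaning of these predecessor-count formulas can be checked directly on concrete heaps; a small illustrative sketch (heaps as Python dicts mapping integer locations to integer values, not the paper's encoding):

```python
def preds(h, x):
    """The predecessors of x in heap h: locations whose image is x."""
    return sorted(u for u, v in h.items() if v == x)

h = {1: 2, 3: 2, 2: 4}
assert len(preds(h, 2)) >= 1   # h satisfies  #2 >= 1
assert len(preds(h, 2)) > 1    # ...but violates  #2 <= 1 (two predecessors)
assert len(preds(h, 4)) == 1   # h satisfies  #4 = 1
```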
The formula below states that the heap can be decomposed into a list segment starting with $\xterm$ and ending in $\yterm$, and several disjoint cyclic lists: $$\begin{array}{rcl} \xterm \arrow{\circlearrowleft}{}^+ \yterm & \equiv & \sharp\xterm=0 \wedge \mathsf{alloc}(\xterm) \wedge \sharp\yterm=1 \wedge \neg\mathsf{alloc}(\yterm) ~\wedge \\ && \forall \zterm ~.~ \zterm \not\teq \yterm \Rightarrow (\sharp\zterm=1 \Rightarrow \mathsf{alloc}(\zterm)) \wedge \forall \zterm ~.~ \sharp\zterm \leq 1 \end{array}$$ We forbid the existence of circular lists by adding the following arithmetic constraint: $$\forall \uterm \forall \tterm ~.~ \uterm \mapsto \tterm * \top \Rightarrow \uterm < \tterm \tag{$\mathsf{nocyc}$}\label{eq:no-cycles}$$ We ask, moreover, that the elements of the list segment starting in $\xterm$ are consecutive perfect squares: $$\mathsf{consqr}(\xterm) \equiv \xterm=0 \wedge \xterm \mapsto 1 * \top \wedge \forall \zterm \forall \uterm \forall \tterm ~.~ \zterm \mapsto \uterm * \uterm \mapsto \tterm * \top \Rightarrow \tterm-\uterm=\uterm-\zterm+2 \tag{$\mathsf{consqr}$}\label{eq:consecutive-squares}$$ Observe that the formula $\exists\xterm\exists\yterm ~.~ \xterm \arrow{\circlearrowleft}{}^+ \yterm \wedge \mathsf{nocyc} \wedge \mathsf{consqr}(\xterm)$ belongs to $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$. \[thm:il-undec\] The satisfiability problem for $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$ is undecidable. It is tempting, at this point, to ask whether interpreting locations as integers and considering subsets of $\lia$ instead may help recover decidability. For instance, it has been found that the Bernays-Schönfinkel-Ramsey class is decidable in the presence of integers with difference bounds arithmetic [@WeidenbachVoigt15], and the same type of question can be asked about the fragment of $\exists^*\forall^*\seplog(\lia)_{\Int,\Int}$ with difference bounds constraints only.
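To see why the $\mathsf{consqr}$ constraint pins down consecutive perfect squares, one can unfold the recurrence it imposes on the list values: successive differences grow by $2$ starting from $1$, so the values are $0, 1, 4, 9, \ldots$. A quick Python check of this arithmetic:

```python
def consqr_chain(n):
    """The values forced by (consqr): x = 0, x |-> 1, and each triple of
    consecutive cells z |-> u |-> t satisfies t - u = u - z + 2."""
    chain = [0, 1]
    while len(chain) < n:
        z, u = chain[-2], chain[-1]
        chain.append(2 * u - z + 2)   # t = u + (u - z) + 2
    return chain

# successive differences 1, 3, 5, ... sum to exactly the perfect squares
assert consqr_chain(6) == [0, 1, 4, 9, 16, 25]
assert all(v == i * i for i, v in enumerate(consqr_chain(10)))
```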
Finally, we consider a variant of the previous undecidability result, in which locations are the (uninterpreted) sort $U$ of $\euf$ and the data consists of tuples of sort $U \times \Int$. This fragment of $\seplog$ can be used to reason about lists with integer data. The undecidability of this fragment can be proved along the same lines as Theorem \[thm:il-undec\]. \[thm:ulid-undec\] The satisfiability problem for $\exists^*\forall^*\seplog(\euflia)_{U,U\times\Int}$ is undecidable. A Procedure for $\exists^*\forall^*$ Separation Logic in an SMT Solver ====================================================================== This section presents a procedure for the satisfiability of $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ inputs [^5]. Our procedure builds upon our previous work [@ReynoldsIosifKingSerban16], which gave a decision procedure for quantifier-free $\seplog(T)_{\locs,\data}$ inputs for theories $T$ where the satisfiability problem for quantifier-free $T$-constraints is decidable. Like existing approaches for quantified formulas in SMT [@GeDeM-CAV-09; @ReynoldsDKBT15Cav], our approach is based on incremental quantifier instantiation based on a stream of candidate models returned by a solver for quantifier-free inputs. Our approach for this fragment exploits the small model property given in Lemma \[lemma:small-model-property\] to restrict the set of quantifier instantiations it considers to a finite set. $\funcsolve( \exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y ) )$ where $\vec x = ( x_1, \ldots, x_m )$ and $\vec y = ( y_1, \ldots, y_n )$: 1. Let $\vec k = ( k_1, \ldots, k_m )$ and $\vec e = ( e_1, \ldots, e_n )$ be fresh constants of the same type as $\vec x$ and $\vec y$. 2. Let $L = L' \cup \{ k_1, \ldots, k_m \}$ where $L'$ is a set of fresh constants s.t. $\card{L'} = \len{\varphi( \vec x, \vec y )} + n$. 3. Return $\funcsmtsolve( \exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y ), \emptyset, L )$. 
$\funcsmtsolve( \exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y ), \Gamma, L )$: 1. If $\Gamma$ is $(\seplog,\euf)$-unsat, return “unsat". 2. Assume $\exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y )$ is equivalent to $\exists \vec x\, \forall \vec y\, \varphi_1( \vec x, \vec y ) \wedge \ldots \wedge \forall \vec y\, \varphi_p( \vec x, \vec y )$. 3. If $\Gamma'_j = \Gamma \cup \{ \neg \varphi_j( \vec k, \vec e ) \wedge \displaystyle\bigwedge^n_{i=1} \bigvee_{t \in L} e_i \teq t \}$ is $(\seplog,\euf)$-unsat for all $j=1,\ldots,p$, return “sat". 4. Otherwise, let $\I,h \models_{\tinyseplog} \Gamma'_j$ for some $j \in \{ 1,\ldots,p\}$. 5. Let $\vec t = ( t_1, \ldots, t_n )$ be such that $e_i^\I = t_i^\I$ and $t_i \in L$ for each $i=1,\ldots,n$. 6. Return $\funcsmtsolve( \exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y ), \Gamma \cup \{ \varphi_j( \vec k, \vec t ) \}, L )$. Figure \[fig:smt-inst-sept\] gives a counterexample-guided approach for establishing the satisfiability of input $\exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y )$. We first introduce tuples of fresh constants $\vec k$ and $\vec e$ of the same type as $\vec x$ and $\vec y$ respectively. Our procedure will be based on finding a set of instantiations of $\forall \vec y\, \varphi( \vec k, \vec y )$ that are either collectively unsatisfiable or are satisfiable and entail our input. Then, we construct a set $L$ which is the union of constants $\vec k$ and a set $L'$ of fresh constants whose cardinality is equal to $\len{\varphi( \vec x, \vec y )}$ (see Section 3.1) plus the number of universal variables $n$ in our input. Conceptually, $L$ is a finite set of terms from which the instantiations of $\vec y$ in $\forall \vec y\, \varphi( \vec k, \vec y )$ can be built. After constructing $L$, we call the recursive subprocedure $\funcsmtsolve$ on $\Gamma$ (initially empty) and $L$. 
This procedure incrementally adds instances of $\forall \vec y\, \varphi( \vec k, \vec y )$ to $\Gamma$. In step 1, we first check if $\Gamma$ is $(\seplog,T)$-unsatisfiable using the procedure from [@ReynoldsIosifKingSerban16]. If so, our input is $(\seplog,T)$-unsatisfiable. Otherwise, in step 2 we consider the *miniscoped* form of our input $\exists \vec x\, \forall \vec y\, \varphi_1( \vec x, \vec y ) \wedge \ldots \wedge \forall \vec y\, \varphi_p( \vec x, \vec y )$, that is, where quantification over $\vec x$ is distributed over conjunctions. In the following, we may omit quantification on conjuncts $\varphi_j$ that do not contain variables from $\vec y$. Given this formula, in step 3, for each $j=1,\ldots,p$, we check the $(\seplog,T)$-satisfiability of the set $\Gamma'_j$ containing $\Gamma$, the negation of $\forall \vec y\, \varphi_j( \vec k, \vec y )$ where $\vec y$ is replaced by fresh constants $\vec e$, and a conjunction of constraints saying that each $e_i$ must be equal to at least one term in $L$, for $i=1,\ldots,n$. If $\Gamma'_j$ is $(\seplog,T)$-unsatisfiable for each $j=1,\ldots,p$, our input is $(\seplog,T)$-satisfiable. Otherwise, in steps 4 and 5, given an interpretation $\I$ and heap $h$ satisfying $\Gamma'_j$, we construct a tuple of terms $\vec t = ( t_1, \ldots, t_n )$ used for instantiating $\forall \vec y\, \varphi_j( \vec k, \vec y )$. For each $i=1,\ldots,n$, we choose $t_i$ to be a term from $L$ whose interpretation is the same as that of $e_i$. The existence of such a $t_i$ is guaranteed by the fact that $\I$ satisfies the constraint from $\Gamma'_j$ which tells us that $e_i$ is equal to at least one such term. This selection ensures that the instantiations chosen on each iteration come from a finite set of possibilities and are unique. In practice, the procedure terminates, both for unsatisfiable and satisfiable inputs, before considering all $\vec t$ from $L^n$ for each $\forall \vec y\, \varphi_j( \vec x, \vec y )$.
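The refinement loop can be illustrated on a toy analogue, where the quantifier-free solver is replaced by brute-force search over a finite universe. This sketch is ours and keeps only the $\Gamma$-guided instantiation skeleton, not the SL-specific instantiation set $L$:

```python
def cegqi(phi, universe):
    """Toy counterexample-guided loop deciding  exists x . forall y . phi(x, y)
    over a finite universe. Gamma collects instantiations phi(x, t); a candidate
    model of Gamma is refined with a counterexample until "sat" or "unsat"."""
    gamma = []                                    # instantiated values for y
    while True:
        # find a candidate witness k satisfying every instance in Gamma
        k = next((x for x in universe
                  if all(phi(x, t) for t in gamma)), None)
        if k is None:
            return None                           # Gamma unsat: input "unsat"
        # search for a counterexample e to  forall y . phi(k, y)
        e = next((y for y in universe if not phi(k, y)), None)
        if e is None:
            return k                              # no counterexample: "sat"
        gamma.append(e)                           # instantiate and iterate

# exists x . forall y . x <= y  holds over {0..4} with witness 0
assert cegqi(lambda x, y: x <= y, range(5)) == 0
# exists x . forall y . x < y  fails (y = x is always a counterexample)
assert cegqi(lambda x, y: x < y, range(5)) is None
```

As in the termination argument for $\funcsmtsolve$, each counterexample $e$ is new (the candidate satisfies all previous instances but not $e$), so the loop performs at most $\card{\text{universe}}$ iterations.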
\[thm:solveslt\] Let $U$ be an uninterpreted sort belonging to the signature of $\euf$. For all $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ formulae $\psi$ of the form $\exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y )$, $\funcsolve( \psi )$: 1. \[it:sltunsat\] Answers “unsat" only if $\psi$ is $(\seplog,\euf)$-unsatisfiable. 2. \[it:sltsat\] Answers “sat" only if $\psi$ is $(\seplog,\euf)$-satisfiable. 3. \[it:sltterm\] Terminates. To show (\[it:sltunsat\]), note that $\Gamma$ contains only formulas of the form $\varphi_j( \vec k, \vec t )$, which are consequences of our input. Thus, when $\Gamma$ is $(\seplog,\euf)$-unsatisfiable, our input is $(\seplog,\euf)$-unsatisfiable as well. To show (\[it:sltsat\]), we have that $\Gamma$ is $(\seplog,\euf)$-satisfiable and $\Gamma' = \Gamma \cup \{ \neg \varphi_j( \vec k, \vec e ) \wedge A \}$ is $(\seplog,\euf)$-unsatisfiable for each $j=1,\ldots,p$, where: $$\begin{array}{c} A = \displaystyle\bigwedge^n_{i=1} \bigvee_{t \in L} e_i \teq t \end{array}$$ where $L = L' \cup \{ k_1, \ldots, k_m \}$ and $L'$ is a set of fresh constants s.t. $\card{L'} = \len{\varphi( \vec x, \vec y )} + n$. In other words, we have that all models of $\Gamma$ satisfy $( \varphi_1( \vec k, \vec e ) \wedge \ldots \wedge \varphi_p( \vec k, \vec e ) ) \vee \neg A$, which is equivalent to $\varphi( \vec k, \vec e )\vee \neg A$. Since $\vec e$ is not contained in $\Gamma$, we have that all models of $\Gamma$ satisfy $\forall \vec y\, (\varphi( \vec k, \vec y ) \vee \neg A \{ \vec e \mapsto \vec y \})$. Since $\Gamma$ is $(\seplog,\euf)$-satisfiable, we have that $\forall \vec y\, (\varphi( \vec k, \vec y ) \vee \neg A \{ \vec e \mapsto \vec y \})$ is $(\seplog,\euf)$-satisfiable as well. Consider the formula $\forall \vec y\, \varphi( \vec k, \vec y )$. 
By Lemma \[lemma:small-model-property\], $\forall \vec y\, \varphi( \vec k, \vec y )$ has a model if and only if there exists an interpretation $\I$ and heap $h$ such that $\I,h \models_{\tinyseplog} \forall \vec y\, \varphi( \vec k, \vec y )$ and $\card{U^\I} \leq \len{\varphi} + \card{\fv{\forall \vec y\,\varphi( \vec x, \vec y )}} + n$. Due to the construction of $L$, this implies that $\forall \vec y\, (\varphi( \vec k, \vec y ) \vee \neg A \{ \vec e \mapsto \vec y \})$ is $(\seplog,\euf)$-satisfiable if and only if $\forall \vec y\, \varphi( \vec k, \vec y )$ is $(\seplog,\euf)$-satisfiable. Thus, $\exists \vec x\, \forall \vec y\, \varphi( \vec x, \vec y )$ is $(\seplog,\euf)$-satisfiable. To show (\[it:sltterm\]), clearly only a finite number of possible formulas can be added to $\Gamma$ as a result of the procedure, since all terms $\vec t$ belong to the finite set $L$ and $p$ is finite. Furthermore, on every iteration, for any $j$, $\I$ satisfies $\Gamma$ and $\neg \varphi_j( \vec k, \vec e )$. Since $e_i^\I = t_i^\I$ for each $i = 1, \ldots, n$, we have that $\varphi_j( \vec k, \vec t ) \not\in \Gamma$, and thus a new formula is added to $\Gamma$ on every call. Thus, only a finite number of recursive calls are made to $\funcsmtsolve$. Since the $(\seplog,\euf)$-satisfiability of quantifier-free formulae is decidable, all steps in the procedure are terminating, and thus $\funcsolve$ terminates. We discuss a few important details regarding our implementation of the procedure. #### Matching Heuristics When constructing the terms $\vec t$ for instantiation, it may be the case that $e_i^\I = u^\I$ for multiple $u \in L$ for some $i \in \{ 1,\ldots,n \}$. In such cases, the procedure will choose one such $u$ for instantiation. To increase the likelihood of the instantiation being relevant to the satisfiability of our input, we use heuristics for selecting the best possible $u$ among those whose interpretation is equal to $e_i$ in $\I$.
In particular, if $e_i^\I = u_1^\I = u_2^\I$, and $\Gamma'$ contains predicates of the form $e_i \mapsto v$ and $u_1 \mapsto v_1$ for some $v, v_1$ where $v^\I = v_1^\I$ but no predicate of the form $u_2 \mapsto v_2$ for some $v_2$ where $v^\I = v_2^\I$, then we strictly prefer term $u_1$ over term $u_2$ when choosing term $t_i$ for $e_i$. #### Finding Minimal Models Previous work [@ReyEtAl-1-RR-13] developed efficient techniques for finding small models for uninterpreted sorts in CVC4. We have found these techniques to be beneficial to the performance of the procedure in Figure \[fig:smt-inst-sept\]. In particular, we use these techniques to find $\Sigma$-interpretations $\I$ in $\funcsmtsolve$ that interpret $U$ as a finite set of minimal size. When combined with the aforementioned matching heuristics, these techniques lead to finding useful instantiations more quickly, since more terms are constrained to be equal to $e_i$ for $i=1,\ldots,n$ in interpretations $\I$. #### Symmetry Breaking The procedure in Figure \[fig:smt-inst-sept\] introduces a set of fresh constants $L$, which in turn introduce the possibility of discovering $\Sigma$-interpretations $\I$ that are isomorphic, that is, identical up to renaming of constants in $L'$. Our procedure adds additional constraints to $\Gamma$ that do not affect its satisfiability, but reduce the number of isomorphic models. In particular, we consider an ordering $\prec$ on the constants from $L'$, and add constraints that ensure that all models $(\I,h)$ of $\Gamma$ are such that if $\ell_1^\I \not\in \dom(h)$, then $\ell_2^\I \not\in \dom(h)$ for all $\ell_2$ such that $\ell_1 \prec \ell_2$. Say we wish to show the validity of the entailment $\xterm \neq \yterm \wedge \xterm \mapsto \zterm \models_{\tinyseplog} \exists \uterm ~.~ \xterm \mapsto \uterm$, from the introductory example (Section \[sec:intro\]), where $\xterm,\yterm,\zterm,\uterm$ are of sort $U$ of $\euf$. 
This entailment is valid iff the $\exists^*\forall^*\seplog(\euf)_{U,U^k}$ formula $\exists \xterm \exists \yterm \exists \zterm \forall \uterm ~.~ \xterm \not\teq \yterm \wedge \xterm \mapsto \zterm \wedge \neg \xterm \mapsto \uterm$ is $(\seplog,\euf)$-unsatisfiable. A run of the procedure in Figure \[fig:smt-inst-sept\] on this input constructs tuples $\vec k = ( k_x, k_y, k_z )$ and $\vec e = ( e_u )$, and set $L = \{ k_x, k_y, k_z,\ell_1,\ell_2 \}$, noting that $\len{\xterm \not\teq \yterm \wedge \xterm \mapsto \zterm \wedge \neg \xterm \mapsto \uterm}=1$. We then call $\funcsmtsolve$ where $\Gamma$ is initially empty. By miniscoping, our input is equivalent to $\exists \xterm \exists \yterm \exists \zterm ~.~ \xterm \not\teq \yterm \wedge \xterm \mapsto \zterm \wedge \forall \uterm ~.~ \neg \xterm \mapsto \uterm$. On the first two recursive calls to $\funcsmtsolve$, we may add $k_x \not\teq k_y$ and $k_x \mapsto k_z$ to $\Gamma$ by trivial instantiation of the first two conjuncts. On the third recursive call, $\Gamma$ is $(\seplog,\euf)$-satisfiable, and we check the satisfiability of: $$\begin{array}{c} \Gamma' = \{ k_x \neq k_y, k_x \mapsto k_z, k_x \mapsto e_u \wedge ( e_u \teq k_x \vee e_u \teq k_y \vee e_u \teq k_z\vee e_u \teq \ell_1\vee e_u \teq \ell_2 ) \} \end{array}$$ Since $k_x \mapsto k_z$ and $k_x \mapsto e_u$ are in $\Gamma'$, all $\Sigma$-interpretations $\I$ and heaps $h$ such that $\I,h \models_{\tinyseplog} \Gamma'$ are such that $e_u^\I =k_z^\I$. Since $k_z \in L$, we may choose to add the instantiation $\neg k_x \mapsto k_z$ to $\Gamma$, after which $\Gamma$ is $(\seplog,\euf)$-unsatisfiable on the next recursive call to $\funcsmtsolve$. Thus, our input is $(\seplog,\euf)$-unsatisfiable and the entailment is valid. 
$\blacksquare$ A modified version of the procedure in Figure \[fig:smt-inst-sept\] can be used for $\exists^*\forall^*\seplog(T)_{\locs,\data}$-satisfiability for theories $T$ beyond equality, and where $\locs$ and $\data$ are not restricted to uninterpreted sorts. Notice that in such cases, we cannot restrict $\Sigma$-interpretations $\I$ in $\funcsmtsolve$ to interpret each $e_i$ as a member of the finite set $L$, and hence we modify $\funcsmtsolve$ to omit the constraint restricting variables in $\vec e$ to be equal to a term from $L$ in the check in Step 2. This modification results in a procedure that is sound both for "unsat" and "sat", but is no longer terminating in general. Nevertheless, it may be used as a heuristic for determining $\exists^*\forall^*\seplog(T)_{\locs,\data}$-(un)satisfiability. Experimental Evaluation ======================= We implemented the $\funcsolve$ procedure from Figure \[fig:smt-inst-sept\] within the CVC4 SMT solver[^6] (version 1.5 prerelease). This implementation was tested on two kinds of benchmarks: finite unfoldings of inductive predicates, mostly inspired by benchmarks used in the SL-COMP’14 solver competition [@sl-comp14], and verification conditions automatically generated by applying the weakest precondition calculus of [@IshtiaqOHearn01] to the program loops in Figure \[fig:loops\]. All experiments were run on a 2.80GHz Intel(R) Core(TM) i7 CPU machine with 8MB of cache [^7].
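Schematically, the counterexample-guided loop that drives $\funcsolve$ (and its run in the example above) can be sketched as follows. This is a hedged sketch, not CVC4 code: `check_sat` stands in for the ground $\seplog(\euf)$ solver, `psi_neg` for the negated universal body over the fresh constants $\vec e$, and `choose_inst` for the heuristic selection of a ground instance from a countermodel; the heap semantics is abstracted away entirely.

```python
# Illustrative counterexample-guided instantiation loop (all names are
# hypothetical stand-ins; formulas are treated as opaque objects).

def cegqi_solve(gamma, psi_neg, choose_inst, check_sat):
    """Decide an exists*forall* input by successive refinement.

    gamma       -- set of ground instances accumulated so far
    psi_neg     -- the negated universal body, over fresh constants e
    choose_inst -- maps a countermodel to a new ground instance
    check_sat   -- returns (True, model) or (False, None)
    """
    while True:
        sat, _ = check_sat(gamma)
        if not sat:
            return "unsat"     # the accumulated instances refute the input
        sat, model = check_sat(gamma | {psi_neg})
        if not sat:
            return "sat"       # no countermodel: the universal body holds
        gamma = gamma | {choose_inst(model)}  # refine and retry
```

This roughly mirrors the run above, where $\Gamma$ grows from the empty set, gains the instance $\neg k_x \mapsto k_z$ selected from a countermodel, and the first check then reports "unsat".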
1: `while` $\wterm \neq \nil$ `do`\ 2: $\mathbf{assert}(\wterm.\datap = c_0)$\ 3: $\vterm := \wterm$;\ 4: $\wterm := \wterm.\nextp$;\ 5: $\mathbf{dispose}(\vterm)$;\ 6: `od`\ \ 1: `while` $\uterm \neq \nil$ `do`\ 2: $\mathbf{assert}(\uterm.\datap = c_0)$\ 3: $\wterm := \uterm.\nextp$;\ 4: $\uterm.\nextp := \vterm$;\ 5: $\vterm := \uterm$;\ 6: $\uterm := \wterm$;\ 7: `od`\ ---------------------- -------------- ---------------------------------------------------------- ----------------------- -------------- ------------------------------------------------------------------ $\mathsf{list}^0(x)$ $\triangleq$ $\emp \land x = \nil$ $\mathsf{zlist}^0(x)$ $\triangleq$ $\emp \land x = \nil$ $\mathsf{list}^n(x)$ $\triangleq$ $\exists y \, . \, x \mapsto y * \mathsf{list}^{n-1}(y)$ $\mathsf{zlist}^n(x)$ $\triangleq$ $\exists y \, . \, x \mapsto (c_0, y) * \mathsf{zlist}^{n-1}(y)$ ---------------------- -------------- ---------------------------------------------------------- ----------------------- -------------- ------------------------------------------------------------------ We compared our implementation with the results of applying the CVC4 decision procedure for the quantifier-free fragment of $\seplog$ [@ReynoldsIosifKingSerban16] to a variant of the benchmarks, obtained by manual quantifier instantiation, as follows. Consider checking the validity of the entailment $\exists \vec{x} ~.~ \phi(\vec{x}) \models_{\tinyseplog} \exists \vec{y} ~.~ \psi(\vec{y})$, which is equivalent to the unsatisfiability of the formula $\exists \vec{x} \forall \vec{y} ~.~ \phi(\vec{x}) \wedge \neg \psi(\vec{y})$. We first check the satisfiability of $\phi$. If $\phi$ is not satisfiable, the entailment holds trivially, so let us assume that $\phi$ has a model. Second, we check the satisfiability of $\phi \wedge \psi$. Again, if this is unsatisfiable, the entailment cannot hold, because there exists a model of $\phi$ which is not a model of $\psi$.
Else, if $\phi \wedge \psi$ has a model, we add an equality $x=y$ for each pair of variables $(x,y) \in \vec{x}\times\vec{y}$ that are mapped to the same term in this model, the result being a conjunction $E(\vec{x},\vec{y})$ of equalities. Finally, we check the satisfiability of the formula $\phi \wedge \neg\psi \wedge E$. If this formula is unsatisfiable, the entailment is valid, otherwise, the check is inconclusive. The times in Table \[tab:experiments\] correspond to checking satisfiability of $\exists \vec{x} \forall \vec{y} ~.~ \phi(\vec{x}) \wedge \neg \psi(\vec{y})$ using the $\funcsolve$ procedure (Figure \[fig:smt-inst-sept\]), compared to checking satisfiability of $\phi \wedge \neg\psi \wedge E$, where $E$ is manually generated. In the first set of experiments (Table \[tab:experiments\]) we have considered inductive predicates commonly used as verification benchmarks [@sl-comp14]. Here we check the validity of the entailment between $\mathsf{lhs}$ and $\mathsf{rhs}$, where both predicates are unfolded $n=1,2,3,4,8$ times. The entailment between $\mathsf{pos}_2^1$ and $\mathsf{neg}_4^1$ is skipped because it is not valid (since the negated formula is satisfiable, we cannot generate the manual instantiation). The second set of experiments considers the verification conditions of the forms $\varphi \Rightarrow \mathsf{wp}(\mathbf{l},\phi)$ and $\varphi \Rightarrow \mathsf{wp}^n(\mathbf{l},\phi)$, where $\mathsf{wp}(\mathbf{l},\phi)$ denotes the weakest precondition of the $\seplog$ formula $\phi$ with respect to the sequence of statements $\mathbf{l}$, and $\mathsf{wp}^n(\mathbf{l},\phi) = \mathsf{wp}(\mathbf{l}, \ldots\mathsf{wp}(\mathbf{l}, \\ \mathsf{wp}(\mathbf{l},\phi)) \ldots)$ denotes the iterative application of the weakest precondition $n$ times in a row. 
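The manual-instantiation baseline just described can be sketched end to end. To keep the sketch self-contained and runnable, it replaces the quantifier-free $\seplog$ solver by brute-force enumeration over a finite pool of candidate assignments, so the heap semantics is abstracted away; all names are illustrative stand-ins, not the actual tooling.

```python
# Hedged sketch of the three-check baseline: (1) is phi satisfiable,
# (2) is phi /\ psi satisfiable, (3) is phi /\ not psi /\ E unsatisfiable,
# where E equates variables identified by a model of phi /\ psi.

def manual_entailment_check(phi, psi, xs, ys, assignments):
    """Approximate check of  exists xs. phi  |=  exists ys. psi.

    phi, psi    -- Python predicates over an assignment dict (solver stand-in)
    xs, ys      -- variable names of the two formulas
    assignments -- finite pool of candidate assignments (candidate models)
    """
    if not any(phi(m) for m in assignments):
        return "valid"              # phi unsatisfiable: trivially valid
    both = [m for m in assignments if phi(m) and psi(m)]
    if not both:
        return "invalid"            # some model of phi falsifies psi
    m0 = both[0]
    # E: equalities suggested by the model of phi /\ psi
    E = [(x, y) for x in xs for y in ys if m0[x] == m0[y]]
    counter = any(phi(m) and not psi(m) and all(m[x] == m[y] for x, y in E)
                  for m in assignments)
    return "inconclusive" if counter else "valid"
```

As in the text, the final answer is only "valid" or "inconclusive": the equalities $E$ are a heuristic instantiation, so a surviving countermodel does not disprove the entailment.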
We consider the loops depicted in Figure \[fig:loops\], where, for each loop [**l**]{}, we consider the variant [**zl**]{} as well, which tests that the data values contained within the memory cells are equal to a constant $c_0$ of sort $\locs$, by the assertions on line 2. The postconditions are specified by finite unfoldings of the inductive predicates $\mathsf{list}$ and $\mathsf{zlist}$. We observed that, compared to checking the manual instantiation, the fully automated solver was less than $0.5$ seconds slower on $72\%$ of the test cases, and less than $1$ second slower on $79\%$ of the test cases. The automated solver experienced $3$ timeouts, where the manual instantiation succeeds (for $\widehat{\mathsf{tree}}$ vs $\mathsf{tree}$ with $n=8$, $\widehat{\mathsf{ts}}$ vs $\mathsf{ts}$ with $n=3$, and $\mathsf{list}^{n}(u) * \mathsf{list}^{0}(v)$ vs $\mathsf{wp}^n(\mathbf{rev}, u = \nil \land \mathsf{list}^{n}(v))$ with $n=8$). These timeouts are caused by the first call to the quantifier-free $\seplog$ decision procedure, which fails to produce a model in less than $300$ seconds (time not accounted for in the manually produced instance of the problem). [|c|c|c|c|c|c|c|c|]{} & [**rhs**]{} & & $n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=8$\ \ $\scriptstyle \widehat{\mathsf{ls}}(x, y) \triangleq \emp \land x = y \lor$ & $\scriptstyle \mathsf{ls}(x, y) \triangleq \emp \land x = y \lor $ & [$\funcsolve$]{} & [$<0.01$s]{} & [0.02s]{} & [0.03s]{} & [0.05s]{} & [0.21s]{}\ $\scriptstyle \exists z \, . \, x \neq y \land x \mapsto z * \widehat{\mathsf{ls}}(z, y)$ & $\scriptstyle \exists z \, . 
\, x \mapsto z * \mathsf{ls}(z, y)$ & [manual]{} & [$<0.01$s]{} & [$<0.01$s]{} & [$<0.01$s]{} & [$<0.01$s]{} & [$<0.01$s]{}\ $\scriptstyle \widehat{\mathsf{tree}}(x) \triangleq \emp \land x = \mathsf{nil} \lor$ & $\scriptstyle \mathsf{tree}(x) \triangleq \emp \land x = \mathsf{nil} \lor$ & [$\funcsolve$]{} & [$<0.01$s]{} & [0.04s]{} & [1.43s]{} & [23.42s]{} & [$>300$s]{}\ $\scriptstyle \exists l \exists r \, . \, l \neq r \land x \mapsto (l, r) * \mathsf{tree}(l) * \mathsf{tree}(r)$ & $\scriptstyle \exists l \exists r \, . \, x \mapsto (l, r) * \mathsf{tree}(l) * \mathsf{tree}(r)$ & [manual]{} & [$<0.01$s]{} & [$<0.01$s]{} & [$<0.01$s]{} & [$<0.01$s]{} & [0.09s]{}\ $\scriptstyle \widehat{\mathsf{ts}}(x, a) \triangleq \emp \land x = \nil \lor $ & $\scriptstyle\mathsf{ts}(x, a) \triangleq \emp \land x = \nil \lor$ & [$\funcsolve$]{} & [$<0.01$s]{} & [0.81s]{} & [$>300$s]{} & [$>300$s]{} & [$>300$s]{}\ $\scriptstyle \exists l \exists r \, . \, x \neq y \land x \mapsto (l, r) * \widehat{\mathsf{ts}} (l, y) * \mathsf{tree}(r) \lor$ & $\scriptstyle \exists l \exists r \, . \, \land x \mapsto (l, r) * \mathsf{ts} (l, y) * \mathsf{tree}(r) \lor$ & [manual]{} & [$<0.01$s]{} & [0.03s]{} & [103.89s]{} & [$>300$s]{} & [$>300$s]{}\ $\scriptstyle \exists l \exists r \, . \, x \neq y \land x \mapsto (l, r) * \mathsf{tree} (l) * \widehat{\mathsf{ts}}(r, y)$ & $\scriptstyle \exists l \exists r \, . \, \land x \mapsto (l, r) * \mathsf{tree} (l) * \mathsf{ts}(r, y)$ & & & & & &\ $\scriptstyle \mathsf{pos_1}(x, a) \triangleq x \mapsto a \lor \exists y \exists b \, . \,$ & $\scriptstyle \mathsf{neg_1}(x, a) \triangleq \lnot x \mapsto a \lor \exists y \exists b \, . 
\,$ & [$\funcsolve$]{} & [0.34s]{} & [0.01s]{} & [0.31s]{} & [0.76s]{} & [21.19s]{}\ $\scriptstyle x \mapsto a * \mathsf{pos_1}(y, b)$ & $\scriptstyle x \mapsto a * \mathsf{neg_1}(y, b)$ & [manual]{} & [0.04s]{} & [0.05s]{} & [0.08s]{} & [0.12s]{} & [0.53s]{}\ $\scriptstyle \mathsf{pos_1}(x, a) \triangleq x \mapsto a \lor \exists y \exists b \, . \,$ & $\scriptstyle \mathsf{neg_2}(x, a) \triangleq x \mapsto a \lor \exists y \exists b \, . \,$ & [$\funcsolve$]{} & [0.03s]{} & [0.12s]{} & [0.23s]{} & [0.46s]{} & [3.60s]{}\ $\scriptstyle x \mapsto a * \mathsf{pos_1}(y, b)$ & $\scriptstyle \lnot x \mapsto a * \mathsf{neg_2}(y, b)$ & [manual]{} & [0.05s]{} & [0.08s]{} & [0.08s]{} & [0.12s]{} & [0.54s]{}\ $\scriptstyle \mathsf{pos_2}(x, a) \triangleq x \mapsto a \lor \exists y \, . \,$ & $\scriptstyle \mathsf{neg_3}(x, a) \triangleq \lnot x \mapsto a \lor \exists y \, . \,$ & [$\funcsolve$]{} & [0.04s]{} & [0.13s]{} & [0.28s]{} & [0.48s]{} & [4.20s]{}\ $\scriptstyle x \mapsto a * \mathsf{pos_2}(a, y)$ & $\scriptstyle x \mapsto a * \mathsf{neg_3}(a, y)$ & [manual]{} & [0.01s]{} & [0.03s]{} & [0.05s]{} & [0.09s]{} & [0.45s]{}\ $\scriptstyle \mathsf{pos_2}(x, a) \triangleq x \mapsto a \lor \exists y \, . \,$ & $\scriptstyle \mathsf{neg_4}(x, a) \triangleq x \mapsto a \lor \exists y \, . 
\,$ & [$\funcsolve$]{} & — & [0.08s]{} & [0.15s]{} & [0.26s]{} & [1.33s]{}\ $\scriptstyle x \mapsto a * \mathsf{pos_2}(a, y)$ & $\scriptstyle \lnot x \mapsto a * \mathsf{neg_4}(a, y)$ & [manual]{} & — & [0.03s]{} & [0.06s]{} & [0.09s]{} & [0.46s]{}\ \ $\scriptstyle \mathsf{list}^{n}(w)$ & $\scriptstyle\mathsf{wp}(\mathbf{disp}, \mathsf{list}^{n-1}(w))$ & [$\funcsolve$]{} & [0.01s]{} & [0.03s]{} & [0.08s]{} & [0.19s]{} & [1.47s]{}\ & & [manual]{} & [$<0.01$s]{} & [0.01s]{} & [0.02s]{} & [0.05s]{} & [0.26s]{}\ $\scriptstyle\mathsf{list}^{n}(w)$ & $\scriptstyle\mathsf{wp}^{n}(\mathbf{disp},\emp \land w = \nil)$ & [$\funcsolve$]{} & [0.01s]{} & [0.06s]{} & [0.17s]{} & [0.53s]{} & [7.08s]{}\ & & [manual]{} & [$<0.01$s]{} & [0.02s]{} & [0.08s]{} & [0.14s]{} & [2.26s]{}\ $\scriptstyle\mathsf{zlist}^{n}(w)$ & $\scriptstyle\mathsf{wp}(\textbf{zdisp}, \mathsf{zlist}^{n-1}(w))$ & [$\funcsolve$]{} & [0.04s]{} & [0.05s]{} & [0.09s]{} & [0.19s]{} & [1.25s]{}\ & & [manual]{} & [$<0.01$s]{} & [0.01s]{} & [0.02s]{} & [0.04s]{} & [0.29s]{}\ $\scriptstyle\mathsf{zlist}^{n}(w)$ & $\scriptstyle\mathsf{wp}^n(\textbf{zdisp}, \emp \land w = \nil)$ & [$\funcsolve$]{} & [0.01s]{} & [0.10s]{} & [0.32s]{} & [0.87s]{} & [11.88s]{}\ & & [manual]{} & [0.01s]{} & [0.02s]{} & [0.07s]{} & [0.15s]{} & [2.20s]{}\ $\scriptstyle\mathsf{list}^{n}(u) * \mathsf{list}^{0}(v)$ & $\scriptstyle\mathsf{wp}(\mathbf{rev}, \mathsf{list}^{n-1}(u) * \mathsf{list}^{1}(v))$ & [$\funcsolve$]{} & [0.38s]{} & [0.06s]{} & [0.11s]{} & [0.16s]{} & [0.56s]{}\ & & [manual]{} & [0.07s]{} & [0.03s]{} & [0.07s]{} & [0.11s]{} & [0.43s]{}\ $\scriptstyle\mathsf{list}^{n}(u) * \mathsf{list}^{0}(v)$ & $\scriptstyle\mathsf{wp}^n(\mathbf{rev}, u = \nil \land \mathsf{list}^{n}(v))$ & [$\funcsolve$]{} & [0.38s]{} & [0.07s]{} & [0.30s]{} & [68.68s]{} & [$>300$s]{}\ & & [manual]{} & [0.08s]{} & [0.06s]{} & [0.11s]{} & [0.23s]{} & [1.79s]{}\ $\scriptstyle\mathsf{zlist}^{n}(u) * \mathsf{zlist}^{0}(v)$ & 
$\scriptstyle\mathsf{wp}(\mathbf{zrev}, \mathsf{zlist}^{n-1}(u) * \mathsf{zlist}^{1}(v))$ & [$\funcsolve$]{} & [0.22s]{} & [0.07s]{} & [0.15s]{} & [0.21s]{} & [0.75s]{}\ & & [manual]{} & [0.04s]{} & [0.02s]{} & [0.04s]{} & [0.06s]{} & [0.31s]{}\ $\scriptstyle\mathsf{zlist}^{n}(u) * \mathsf{zlist}^{0}(v)$ & $\scriptstyle\mathsf{wp}^n(\mathbf{zrev}, u = \nil \land \mathsf{zlist}^{n}(v))$ & [$\funcsolve$]{} & [0.23s]{} & [0.09s]{} & [0.17s]{} & [0.30s]{} & [2.06s]{}\ & & [manual]{} & [0.04s]{} & [0.02s]{} & [0.05s]{} & [0.09s]{} & [0.48s]{}\ \ Conclusions and Future Work =========================== We present theoretical and practical results for the existence of effective decision procedures for the fragment of Separation Logic obtained by restriction of formulae to quantifier prefixes in the set $\exists^*\forall^*$. The theoretical results range from undecidability, when the set of memory locations is taken to be the set of integers and linear arithmetic constraints are allowed, to <span style="font-variant:small-caps;">PSPACE</span>-completeness, when locations and data in the cells belong to an uninterpreted sort, equipped with equality only. We have implemented a decision procedure for the latter case in the CVC4 SMT solver, using an effective counterexample-driven instantiation of the universal quantifiers. The procedure is shown to be sound, complete and termination is guaranteed when the input belongs to a decidable fragment of $\seplog$. As future work, we aim at refining the decidability chart for $\exists^*\forall^*\seplog(T)_{\locs,\data}$, by considering the case where the locations are interpreted as integers, with weaker arithmetics, such as sets of difference bounds, or octagonal constraints. These results are likely to extend the application range of our tool, to e.g. solvers working on $\seplog$ with inductive definitions and data constraints. 
The current implementation should also benefit from improvements of the underlying quantifier-free $\seplog$ and set theory solvers. [^1]: Strictly speaking, the Bernays-Schönfinkel-Ramsey class refers to the $\exists^*\forall^*$ fragment of first-order logic with equality and predicate symbols, but no function symbols [@Lewis80]. [^2]: By writing $\I[\sigma \leftarrow S]$ we ensure that all variables of sort $\sigma$ are mapped by $\I$ to elements of $S$. [^3]: If $\xterm_1 \mapsto (\xterm_2, \ldots, \xterm_n)$ and $\xterm_1 \mapsto (\xterm'_2, \ldots, \xterm'_n)$ hold, this forces $\xterm_i=\xterm'_i$, for all $i=2,\ldots,n$. [^4]: Extending the interpretation of $\locs$ to include negative integers does not make any difference for the undecidability result. [^5]: The procedure is incorporated into the master branch of the SMT solver CVC4 (<https://github.com/CVC4>), and can be enabled by the command line parameter `--quant-epr`. [^6]: Available at <http://cvc4.cs.nyu.edu/web/>. [^7]: The CVC4 binary and examples used in these experiments are available at\ [<http://cs.uiowa.edu/~ajreynol/VMCAI2017-seplog-epr>]{}.
--- abstract: 'Local scale invariance has been investigated in the nonequilibrium kinetic Ising model exhibiting an absorbing phase transition of PC type in 1+1 dimensions. Numerical evidence has been found that this symmetry is satisfied, and estimates for the critical ageing exponents are given.' address: 'Research Institute for Technical Physics and Materials Science, H-1525 Budapest, P.O.Box 49, Hungary' author: - Géza Ódor title: Local scale invariance in the parity conserving nonequilibrium kinetic Ising model --- Introduction ============ The classification of the universality classes of non-equilibrium phase transitions is still an open problem of statistical physics [@Dick-Mar; @Hin2000; @dok; @kam]. In equilibrium, conformal invariance (CI) [@FQS; @CardyConf; @HenkelConf] makes such a classification possible in two-dimensional critical systems, as a consequence of invariance under a group (the conformal group) larger than the mere scale transformations. Recently, generalizations of the generators of CI (albeit without invariance under time translations) have been proposed for anisotropic, dynamical models [@LSIbev; @HP05]. The corresponding invariance is the so-called local scale invariance (LSI). Since it is intended to extend dynamical scale transformations to such systems, it may serve as a convenient tool for classifying universality classes of non-equilibrium systems as well. The quantities of main interest are the two-time autocorrelation function $C(t,s)$ and the auto-response function $R(t,s)$, which describe ageing phenomena (for recent reviews see [@ageing]) $$\begin{aligned} C(t,s) &=& \left\langle \phi(t,\vec{r}) \phi(s,\vec{r})\right\rangle \\ R(t,s) &=& \left.
\frac{\delta\langle \phi(t,\vec{r})\rangle}{\delta h(s,\vec{r})} \right|_{h=0} \:=\: \left\langle \phi(t,\vec{r}) \widetilde\phi(s,\vec{r})\right\rangle\end{aligned}$$ where $\phi$ and $\widetilde\phi$ are the fields in the Janssen-de Dominicis formalism [@deDo78; @Jans92] and $h$ is the magnetic field conjugate to $\phi$. For $t,s\to\infty$ [*and*]{} $y=t/s>1$ one expects the scaling forms $$\begin{aligned} \label{CRforms} C(t,s) &=& s^{-b} f_C(t/s) \\ R(t,s) &=& s^{-1-a} f_R(t/s) ,\end{aligned}$$ where $a$ and $b$ are ageing exponents and $f_C$ and $f_R$ are scaling functions such that $f_{C,R}(y)\sim y^{-\lambda_{C,R}/Z}$ for $y\gg 1$. Here $\lambda_C$ and $\lambda_R$ are the auto-correlation [@ace] and auto-response [@are] exponents, respectively; they are independent of the equilibrium exponents and of the dynamical exponent $Z$ (defined as usual by $Z = \nu_{||} / \nu_{\perp}$). As in the case of CI, one expects that LSI fully determines the functional form of the scaling functions. Henkel et al. derived $R(t,s)$ in general and the form of $C(t,s)$ for $Z=2$ by identifying the quasi-primary operators of the theory [@HPGL; @HEP0605]. The generalized form of $R(t,s)$ also takes into account the difference between physical observables defined in lattice models and the associated quasi-primary scaling operators of the underlying field theory. This ansatz reads $$\label{Rfullform} R(t,s) = s^{-1-a} \left(\frac{t}{s}\right)^{1+a'-\lambda_R/Z} \left( \frac{t}{s}-1\right)^{-1-a'} \ ,$$ where $a'\ne a$ is an independent ageing exponent in general. Several systems with detailed balance symmetry have been analyzed recently and found to satisfy (\[Rfullform\]) [@HP05; @Bert99; @Maze04; @Godr00; @Maye06; @Maye04; @Plei05] with $a\ne a'$. On the other hand, renormalization-group results for some important universality classes concluded that $a=a'$ should hold.
In particular, explicit two-loop field-theoretical computations of $R(t,s)$ for the $O(N)$ universality class with Model A dynamics at the critical point claim $a=a'$ [@CG02; @CGJPA]. Recently, numerical simulations of the non-equilibrium contact process (CP) did not completely satisfy this form [@HinLSICP], and Hinrichsen argued that LSI is not a generic property of ageing phenomena but is restricted to diffusive models ($Z=2$) or to systems above the upper critical dimension. In [@HEP0605] Henkel et al. suggested that there is a crossover in the case of nonequilibrium critical dynamics, because both the ageing regime ($t-s \sim O(s)$) and the quasi-stationary regime ($t-s \ll s$) display scaling with the same length scale $L(t) \propto t^{1/Z}$. For a more detailed discussion of these results see a very recent review [@HenkLSIrev]. In this paper I present simulation results for another nonequilibrium critical model, the parity conserving (PC) nonequilibrium kinetic Ising model (NEKIM), in $1+1$ dimensions. I provide numerical evidence that in this model $C(t,s)$ and $R(t,s)$ can be fitted with the forms of Eqs.(\[CRforms\]),(\[Rfullform\]), hence this nonequilibrium critical model exhibits LSI scaling invariance. The PC class NEKIM model ======================== The NEKIM model was first introduced and analyzed by Menyhárd [@Nora94] as a generalization of the kinetic Ising model [@Glauber], obtained by adding spin-exchange updates in between the spin-flip steps of the Glauber Ising model. In one dimension the domain walls (kinks) between up and down regions can be considered as particles.
The spin-flip dynamics can be mapped onto particle movement $$\uparrow\downarrow\downarrow\stackrel{w_i}{\rightleftharpoons} \uparrow\uparrow\downarrow \ \ \sim \ \ \bullet\circ \rightleftharpoons \circ\bullet$$ or onto the annihilation of neighboring particles $$\uparrow\downarrow\uparrow\stackrel{w_o}{\rightarrow} \uparrow\uparrow\uparrow \ \ \sim \ \ \bullet\bullet \rightarrow \circ\circ$$ Therefore the $T=0$ Glauber dynamics is equivalent to the annihilating random walk (ARW). This phase is doubly degenerate: an initial state decays algebraically to the stationary state, which is one of the absorbing ones (all spins up or all spins down, provided the initial state has an even number of kinks). By mapping the spin-exchange dynamics in the same way, more complicated particle dynamics emerges, for example: $$\uparrow\uparrow\downarrow\downarrow\stackrel{p_{ex}}{\rightleftharpoons} \uparrow\downarrow\uparrow\downarrow \ \ \sim \ \ \circ\bullet\circ \rightleftharpoons \bullet\bullet\bullet$$ one particle may give birth to two others, or three particles may coagulate into one. Therefore this model is equivalent to branching and annihilating random walks with an even number of offspring [@Taka; @Cardy-Tauber]. By increasing the spin-exchange rate, a second-order phase transition takes place [@Nora94] for the [*kinks*]{} from the absorbing to the active state, which belongs to the parity conserving (PC) universality class [@Gras84; @Cardy-Tauber; @Canet]. In [@cpccikk] this model was investigated by high-precision cluster simulations with the parameterization $$\begin{aligned} p_{ex}=1-2\Gamma \\ w_i=\Gamma (1-\delta)/2 \\ w_o=\Gamma (1+\delta) \end{aligned}$$ originating from the Glauber Ising model [@Glauber]. In this paper I present simulations at the critical point determined in previous works [@Nora94; @meorcikk; @MeOd96; @MeOd97; @cpccikk]. The parameters chosen are: $\Gamma=0.35$, $p_{ex}=0.3$, $\delta_c=-0.3928(2)$.
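For concreteness, these dynamics can be sketched as a toy Monte Carlo code at the quoted parameters. This is an illustrative simplification, not the original implementation: the update schedule (two-sublattice spin flips followed by $L$ random exchange attempts per MCS, as recalled in the next section) is one possible reading of the published description, and all helper names are mine.

```python
import random

def nekim_mcs(s, gamma, delta, p_ex, rng=random):
    """One illustrative Monte Carlo step for spins s[i] in {-1, +1} (periodic chain)."""
    L = len(s)
    w_i = gamma * (1 - delta) / 2.0   # domain-wall (kink) random walk
    w_o = gamma * (1 + delta)         # pairwise kink annihilation
    # two-sublattice spin-flip sweep at T = 0 (no kink creation by flips)
    for parity in (0, 1):
        for i in range(parity, L, 2):
            left, right = s[(i - 1) % L], s[(i + 1) % L]
            if left == right == -s[i]:
                p = w_o               # up-down-up -> up-up-up: two kinks annihilate
            elif left != right:
                p = w_i               # up-down-down <-> up-up-down: a kink hops
            else:
                continue
            if rng.random() < p:
                s[i] = -s[i]
    # store the spin states, then L random nearest-neighbour exchange attempts
    frozen = s[:]
    for _ in range(L):
        i = rng.randrange(L)
        j = (i + 1) % L
        if frozen[i] != frozen[j] and rng.random() < p_ex:
            s[i], s[j] = s[j], s[i]
    return s

def kink_density(s):
    L = len(s)
    return sum(s[i] != s[(i + 1) % L] for i in range(L)) / L
```

Started from the alternating (fully ordered kink) configuration at $\Gamma=0.35$, $\delta=-0.3928$, $p_{ex}=0.3$, the kink density decays from $1$ while the parity of the kink number is conserved, as required for the PC class.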
Here the kink ($n_i \in (0,1)$) density decays as $$\label{densdec} <n_i(t)> \propto t^{-0.285(2)}$$ as can be seen in the inset of Fig.\[LSIC\]. In previous works the NEKIM algorithm was implemented as follows. The spin-flip part was applied using two-sub-lattice updating. Following this, the states of the spins are stored, and $L$ random spin-exchange attempts ($L$ being the size of the system) are made using the stored spin states, before the whole lattice is updated. All of this together is counted as one Monte Carlo time step (MCS) of updating (throughout the paper time is measured in MCS). Simulations =========== Time-dependent simulations were performed on systems of size $L=2\times 10^4 - 10^5$ with periodic boundary conditions. The runs were started from the fully ordered kink state ($n_i(0)=1$), i.e. an alternating up-down spin configuration ($s_i \in (-1,1)$). I followed the quench towards the critical state and measured the kink (order parameter) density $$<n_i(t)> \propto t^{-\alpha} \ ,$$ the kink-kink autocorrelation $$C(t,s) = <n_i(t)n_i(s)> \ ,$$ and the auto-response function, by flipping a spin at a random site $l$ at time $s$, generating a kink pair out of the vacuum $$R(t,s) = <n_i(t)n_{i+1}(t)> - <n'_i(t)n'_{i+1}(t)> |_{s'_l(s) := -s_l(s)}$$ The simulations were run for several values of the waiting time $s=256, 512, 1024, 2048, 4096$, and the scaled autocorrelation $C(t,s) s^{2\alpha}$ is plotted on Fig. \[LSIC\] with the assumption of the form Eq.(\[CRforms\]). Good data collapse (within the error margin of the simulations) could be achieved for the whole region for systems of size $L=2\times 10^4$; however, for larger $s$ and $t$ values small deviations from the collapse could also be observed. By investigating larger systems this proved to be a finite-size effect. The curve on Fig. \[LSIC\] for $s=4096$ shows the result of $L=10^5$ simulations.
In the asymptotic $t/s\to\infty$ limit it can be fitted by a $t^{-0.285}$ power-law, corresponding to the density decay of the PC class [@cpccikk]. This suggests the scaling exponents $b=0.570(4)$ and $\lambda_C / Z = 0.285(2)$. By inserting the value of the dynamical exponent of the PC class, $Z=1.75(1)$ [@dok], one obtains $\lambda_C = 0.498(2)$. This exponent agrees with the spin autocorrelation exponent $\lambda = 1.50(2)$ [@MeOd97]: for $(t/s)\to\infty$ $$\begin{aligned} A(t,s) & = & < s_i(s) s_i(t) > \nonumber \\ & = & f(t/s) \propto (t/s)^{-(\lambda - d + 1 -\eta/2)/Z} \ ,\end{aligned}$$ given $\eta=1.01(1)$ and the hyperscaling law connecting time-dependent spin and kink exponents at the PC transition point derived in [@MeOd96]. Fitting for the connected autocorrelation function, defined as $\Gamma(t,s)=C(t,s)-N(t)N(s)$, resulted in $\lambda_G/Z=1.9(1)$. The auto-response function has been found to exhibit a similarly nice data collapse when plotting $R(t,s) s^{0.57}$ as a function of $y = t/s$ (Fig.\[LSIR\]). However, in [@HinLSICP] Hinrichsen discovered that in the case of the CP model deviations from the LSI scaling form of $R(t,s)$ (\[Rfullform\]) may occur for $y\to 1$. To see this I plotted $R(t,s) s^{0.57} y^A (y-1)^B$, as suggested in [@HinLSICP], as a function of $y-1$. One sees agreement with Eq.(\[Rfullform\]) if the curves, fitted with these parameters, collapse and are horizontal for all $y$ values. The best agreement could be achieved with $A=-1.3(1)$ and $B=-0.57(1)$, plotted in the inset of Fig.\[LSIR\]. By increasing $s$, the observed deviations from LSI scaling for $y \to 1$ occur at smaller $y$ values, suggesting corrections due to the microscopic reference time $s$. This is different from the case of the CP, where all such curves collapsed. Assuming the general form Eq.(\[Rfullform\]), the fit results in the dynamical exponents $a = a' = -0.430(2)$ and $\lambda_R/Z = 1.9(1)$, with a validity of more than three decades.
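As a numerical illustration of the collapse just described, Eq.(\[Rfullform\]) can be evaluated directly. The sketch below (plain Python, with the fitted exponents quoted above used only as inputs) demonstrates that $s^{1+a}R(t,s)$ depends on $t$ and $s$ only through $y=t/s$, and decays as $y^{-\lambda_R/Z}$ for $y\gg 1$.

```python
import math

def R_LSI(t, s, a, a_prime, lam_R_over_Z):
    """LSI ansatz R(t,s) = s^(-1-a) (t/s)^(1+a'-lambda_R/Z) (t/s - 1)^(-1-a')."""
    y = t / s
    return (s ** (-1.0 - a)
            * y ** (1.0 + a_prime - lam_R_over_Z)
            * (y - 1.0) ** (-1.0 - a_prime))

a = a_prime = -0.430   # fitted ageing exponents quoted in the text
lam = 1.9              # fitted lambda_R / Z

# collapse: s^(1+a) R(t,s) at fixed y = t/s is independent of s
y = 3.0
collapsed = [s ** (1.0 + a) * R_LSI(y * s, s, a, a_prime, lam)
             for s in (256.0, 1024.0, 4096.0)]

# asymptotics: the scaling function decays like y^(-lambda_R/Z) for y >> 1
f = lambda yy: R_LSI(yy, 1.0, a, a_prime, lam)
slope = math.log(f(1.0e6) / f(2.0e6)) / math.log(2.0)
```

The exact collapse is a tautology of the ansatz, of course; in the simulations the nontrivial content is that the measured $R(t,s)$ follows this form over the quoted range of $y$.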
Conclusions =========== In conclusion, numerical simulations of the parity conserving NEKIM model in 1d support local scale invariance at the critical point. In contrast to the contact process (belonging to the directed percolation class [@Dick-Mar]), corrections to scaling due to the microscopic reference time vanish in the $s\to\infty$ limit. Both the autocorrelation and the auto-response functions can be well described by the functional forms of LSI; only a negligible dependence on the system size has been detected, within the error margin of the numerical simulations. Further sources of deviations may come from the value of $\alpha$ and the location of the critical point. The same analysis done for $-\delta_c = 0.3925, 0.392, 0.391$ and using $\alpha = 0.286, 0.287$ did not result in visible deviations in the figures and the fitting parameters. The auto-response function scales in such a way that $a = a'$. Numerical estimates for the $\lambda_{C,G,R}$ exponents are determined, with $\lambda_G=\lambda_R=Z(1+\alpha)+d$, in agreement with the scaling hypothesis of [@CGK06]. The results support the conjecture of Henkel [@HPGL] that LSI can be extended to other nonequilibrium critical systems besides diffusive models ($Z=2$) and models above the upper critical dimension. [**Acknowledgements:**]{}\ I thank N. Menyhárd, H. Hinrichsen and M. Henkel for useful discussions and A. Gambassi for useful comments. Support from the Hungarian research fund OTKA (Grant No. T046129) is acknowledged. The author is grateful for access to the NIIFI Cluster-GRID, LCG-GRID and the Supercomputer Center of Hungary. References {#references .unnumbered} ========== [10]{} For references see : J. Marro and R. Dickman, , Cambridge University Press, Cambridge, 1999. For a review see : H. Hinrichsen, Adv. Phys. [**49**]{}, 815 (2000). For a recent review see : G. Ódor, [*Phase transition universality classes of classical, nonequilibrium systems*]{}, Rev. Mod. Phys. [**76**]{}, 663 (2004).
Vlad Elgart, Alex Kamenev, Phys. Rev. E [**74**]{} (2006) 041101. D. Friedan, Z. Qiu and S. Shenker, Phys. Rev. Lett. [**52**]{}, (1984) 1575. J. L. Cardy, [*Phase transitions and critical Phenomena*]{}, vol 11, eds. C. Domb and J.L. lebowitz (Academic Press). (1987). M. Henkel,[*Conformal Invariance and Critical Phenomena*]{}, Springer 1999. M. Henkel, Nucl. Phys. B [**641**]{}, 405 (2002); A. Picone and M. Henkel, Nucl. Phys. B [**688**]{}, 217 (2004). M. Henkel and M. Pleimling, J. Phys. Cond. Matt. [**17**]{}, S1899 (2005). J. P. Bouchaud and A. J. Bray, [*Soft and Fragile Matter*]{}, eds. M. E. Cates and M. R. Evans (Bristol: IOP). (2000); A. J. Bray, Adv. Phys. [**43**]{} (1994) 357. C. de Dominics and L. Peliti, Phys. Rev. B [**18**]{}, (1978) 353. H. K. Janssen, in G. Györgyi et al. (eds), [*From phase transitions to chaos*]{}, World Scientific, Singapour 1992, p.68. D. S. Fisher and D. A. Huse, Phys. Rev. B [**38**]{} (1988) 373; A. D. Huse, Phys. Rev. B [**40**]{} (1989) 304. A. Picone and M. Henkel, J. Phys. A [**35**]{}, (2002) 5575. M. Henkel, M. Pleimling, C. Godreche and J-M Luck, Phys. Rev. Lett. [**87**]{}, (2001) 265701. M. Henkel, T. Enss and M. Pleimling, J. Phys. A [**39**]{} (2006) L589-L598. M. Henkel, cond-mat/0609672. L. Berthier, J.-L. Barrat and J. Kurchan, Eur. Phys. J. [**B11**]{}, 635 (1999). G.F. Mazenko, Phys. Rev. [**E69**]{}, 016114 (2004). C. Godrèche and J.-M. Luck, J. Phys. A: Math. Gen. [**33**]{}, 9141 (2000); E. Lippiello and M. Zannetti, Phys. Rev. [**E61**]{}, 3369 (2000); M. Henkel and G.M. Schütz, J. Phys. A: Math. Gen. [**37**]{}, 591 (2004). P. Mayer, S. Léonard, L. Berthier, J.P. Garrahan and P. Sollich, Phys. Rev. Lett. [**96**]{}, 030602 (2006). P. Mayer, PhD thesis, King’s college London (2004). M. Pleimling and A. Gambassi, Phys. Rev. [**B71**]{}, 180401(R) (2005). P. Calabrese and A. Gambassi, Phys. Rev. E [**66**]{}, 066101 (2002) P. Calabrese and A. Gambassi, J. Phys. A: Math. Gen. [**38**]{} (2005) R133-R193. 
H. Hinrichsen, J. Stat. Mech.(2006) L06001. N. Menyhárd, J.Phys.A [**27**]{}, 6139 (1994). R. J. Glauber, J. Math. Phys. [**4**]{}, 191 (1963). P. Grassberger, F. Krause and T.  von der Twer, J. Phys. A:Math.Gen., [**17**]{}, L105 (1984). H. Takayasu and A. Yu. Tretyakov, Phys. Rev. Lett. [**68**]{}, 3060, (1992). J. L. Cardy and U. C. Täuber, Phys. Rev. Lett. [**77**]{}, 4780 (1996). L. Canet, H. Chaté, B. Delamotte, I. Dornic, M. A. Muñoz, Phys. Rev. Lett. 95 (2005) 100601. N. Menyhárd and G. Ódor, J. Phys. A. [**28**]{}, 4505 (1995). N. Menyhárd and G. Ódor, J.Phys.A [**29**]{}, 7739 (1996). N. Menyhárd and G. Ódor, J.Phys.A [**30**]{}, 8515 (1997). N. Menyhárd and G. Odor, J. Phys. A [**31**]{}, 6771 (1998). P. Calabrese, A. Gambassi and F. Krzakala, J. Stat. Mech. [**0606**]{} (2006) P016.
--- abstract: 'This paper proposes a high-throughput energy-efficient Successive Cancellation (SC) decoder architecture for polar codes based on combinational logic. The proposed combinational architecture operates at relatively low clock frequencies compared to sequential circuits, but takes advantage of the high degree of parallelism inherent in such architectures to provide a favorable tradeoff between throughput and energy efficiency at short to medium block lengths. At longer block lengths, the paper proposes a hybrid-logic SC decoder that combines the advantageous aspects of the combinational decoder with the low-complexity nature of sequential-logic decoders. Performance characteristics on ASIC and FPGA are presented with a detailed power consumption analysis for combinational decoders. Finally, the paper presents an analysis of the complexity and delay of combinational decoders, and of the throughput gains obtained by hybrid-logic decoders with respect to purely synchronous architectures.' author: - 'Onur Dizdar,  and Erdal Arikan, [^1][^2]' title: 'A High-Throughput Energy-Efficient Implementation of Successive Cancellation Decoder for Polar Codes Using Combinational Logic' --- Polar codes, successive cancellation decoder, error correcting codes, VLSI, energy efficiency. Introduction ============ Polar codes were proposed in [@arikan] as a low-complexity channel coding method that can provably achieve Shannon’s channel capacity for any binary-input symmetric discrete memoryless channel. Apart from the intense theoretical interest in the subject, polar codes have attracted attention for their potential applications. There have been several proposals on hardware implementations of polar codes, which mainly focus on maximizing throughput or minimizing hardware complexity. In this work, we propose an architecture for SC decoding using combinational logic in an effort to obtain a high-throughput decoder with low power consumption.
We begin with a survey of the relevant literature. The basic decoding algorithm for polar codes is the SC decoding algorithm, which is a non-iterative sequential algorithm with complexity $O(N\log N)$ for a code of length $N$. Many of the SC decoding steps can be carried out in parallel, and the latency of the SC decoder can be reduced to roughly $2N$ in a fully-parallel implementation, as pointed out in [@arikan] and [@Arikan2010]. This means that the throughput of any synchronous SC decoder is limited to $\frac{f_{c}}{2}$ in terms of the clock frequency $f_{c}$, as pointed out in [@hardwarearchitectures]. The throughput is reduced further in semi-parallel architectures, such as [@scasic] and [@anefficientpartsumnet], which accept a larger decoding latency in exchange for reduced hardware complexity. This throughput bottleneck is inherent in the logic of SC decoding and stems from the fact that the decoder makes its final decisions one at a time in a sequential manner. Some algorithmic and hardware implementation methods have been proposed to overcome the throughput bottleneck problem in polar decoding. One method that has been tried is Belief Propagation (BP) decoding, starting with [@arikanbp]. In BP decoding, the decoder has the capability of making multiple bit decisions in parallel. Indeed, BP throughputs of $2$ Gb/s (with clock frequency $500$ MHz) and $4.6$ Gb/s (with clock frequency $300$ MHz) are reported in [@architecturesforpolarbp] and [@bpasicthesis], respectively. Generally speaking, the throughput advantage of BP decoding is observed at high SNR values, where correct decoding can be achieved after a small number of iterations; this advantage of BP decoders over SC decoders diminishes as the SNR decreases. A second algorithmic approach to breaking the throughput bottleneck is to exploit the fact that polar codes are a class of generalized concatenated codes (GCC).
More precisely, a polar code $\mathcal{C}$ of length $N$ is constructed from two length-$N/2$ codes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$, using the well-known Plotkin $|\mathbf{u}|\mathbf{u}+\mathbf{v}|$ code combining technique [@Plotkin]. The recursive nature of the polar code construction ensures that the constituent codes $\mathcal{C}_1$ and $\mathcal{C}_2$ are polar codes in their own right and each can be further decomposed into two polar codes of length $N/4$, and so on, until the block length is reduced to one. In order to improve the throughput of a polar code, one may introduce specific measures to speed up the decoding of the constituent polar codes encountered in the course of such recursive decomposition. For example, when a constituent code $\mathcal{C}_{i}$ of rate $0$ or $1$ is encountered, the decoding becomes a trivial operation and can be completed in one clock cycle. Similarly, decoding is trivial when the constituent code is a repetition code or a single parity-check code. Such techniques have been applied earlier in the context of Reed-Muller codes by [@schnabl_bossert] and [@dumer_shabunov]. They have also been used to speed up SC decoders for polar codes by [@kschis]. Such techniques have been reported to achieve a throughput of $1$ Gb/s using designs tailored for specific codes [@fastpolardecoders]. On the other hand, decoders utilizing such shortcuts require reconfiguration when the code is changed, which makes their use difficult in systems using adaptive coding methods. Implementation methods such as precomputation, pipelining, and unrolling have also been proposed to improve the throughput of SC decoders. These methods trade hardware complexity for gains in throughput. For example, it has been shown that the decoding latency may be reduced to $N$ by doubling the number of adders in an SC decoder circuit [@lowlatencysequential].
A similar approach has been used in a first ASIC implementation of an SC decoder to reduce the latency of the decision-level LLR calculations by $N/2$ clock cycles and provide a throughput of $49$ Mb/s with a $150$ MHz clock frequency for a rate-$1/2$ code [@scasic]. In contrast, pipelined and unrolled designs do not affect the latency of the decoder; the increase in throughput is obtained by decoding multiple codewords simultaneously without resource sharing. A recent study [@unrolled_final] exhibits an SC decoder achieving $254$ Gb/s throughput with a fully-unrolled and deeply-pipelined architecture using component code properties for a rate-$1/2$ code. Pipelining in the context of polar decoders was used earlier in various forms and in a more limited manner in [@Arikan2010], [@hardwarearchitectures], [@Pamuk2011], [@lowlatencysequential], and [@interleavedsc]. SC decoders, while being simple, are suboptimal. In [@tal_list], SC [*list-of-$L$*]{} decoding was proposed for decoding polar codes, following similar ideas developed earlier by [@DumerList] for Reed-Muller codes. Ordinary SC decoding is a special case of SC list decoding with list size $L=1$. SC list decoders show markedly better performance compared to SC decoders at the expense of complexity, and are subject to the same throughput bottleneck problems as ordinary SC decoding. Parallel decision-making techniques, as discussed above, can be applied to improve the throughput of SC list decoding. For instance, it was shown in [@parhi_list] that by using $4$-bit parallel decisions, a list-of-$2$ SC decoder can achieve a throughput of around $500$ Mb/s with a clock frequency of $500$ MHz. The present work is motivated by the desire to obtain high-throughput SC decoders with low power consumption, which has not been a main concern in the literature so far.
These desired properties are attained by designing completely combinational decoder architectures, which is possible thanks to the recursive and feed-forward (non-iterative) structure of the SC algorithm. Combinational decoders operate at lower clock frequencies compared to ordinary synchronous (sequential-logic) decoders. However, in a combinational decoder an entire codeword is decoded in one clock cycle. This allows combinational decoders to operate with less power while maintaining a high throughput, as we demonstrate in the remaining sections of this work. Pipelining can be applied to combinational decoders at any depth to adjust their throughput, hardware usage, and power consumption characteristics. Therefore, we also investigate the performance of pipelined combinational decoders. We do not use any of the multi-bit decision shortcuts in the architectures we propose. Thus, for a given block length, the combinational decoders that we propose retain the inherent flexibility of polar coding to operate at any desired code rate between zero and one. Retaining such flexibility is important since one of the main motivations behind the combinational decoder is to use it as an “accelerator” module as part of a hybrid decoder that combines a synchronous SC decoder with a combinational decoder to take advantage of the best characteristics of the two types of decoders. We give an analytical discussion of the throughput of hybrid-logic decoders to quantify the advantages of the hybrid decoder. The rest of this paper is organized as follows. Section \[sec:background\] gives a brief discussion of polar coding and defines the SC decoding algorithm. Section \[sec:combinational\] introduces the main decoder architectures considered in this paper, namely, combinational decoders, pipelined combinational decoders, and hybrid-logic decoders. Also included in that section is an analysis of the hardware complexity and latency of the proposed decoders.
Implementation results of combinational decoders and pipelined combinational decoders are presented in Section \[sec:implementation\], with a detailed power consumption analysis for combinational decoders. Also presented in the same section is an analysis of the throughput improvement obtained by hybrid-logic decoders relative to synchronous decoders. Section \[sec:conclusion\] concludes the paper. Throughout the paper, vectors are denoted by boldface lowercase letters. All matrix and vector operations are over vector spaces over the binary field $\mathbb{F}_{2}$. Addition over $\mathbb{F}_{2}$ is represented by the $\oplus$ operator. For any set $\mathcal{S} \subseteq \left\{0,1,\ldots, N-1\right\}$, $\mathcal{S}^{\mathrm{c}}$ denotes its complement. For any vector $\mathbf{u}=\left(u_{0}, u_{1},\ldots , u_{N-1}\right)$ of length $N$ and set $\mathcal{S} \subseteq \left\{0,1,\ldots, N-1\right\}$, $\mathbf{u}_{\mathcal{S}}$ denotes the subvector $\left(u_{i} : i \in \mathcal{S}\right)$. We define a binary sign function $\mathrm{s}(\ell)$ as $$\begin{aligned} \mathrm{s}(\ell)= \left\{ \begin{array}{ll} 0, & \mbox{if} \ \ell \geq 0 \\ 1, & \mbox{otherwise}. \end{array} \right. \label{eq:sign}\end{aligned}$$ Background on Polar Coding {#sec:background} ========================== We briefly describe the basics of polar coding in this section, including the SC decoding algorithm. Consider the system given in Fig. \[fig:commn\], in which a polar code is used for channel coding. All input/output signals in the system are vectors of length $N$, where $N$ is the length of the polar code that is being used. ![Communication scheme with polar coding[]{data-label="fig:commn"}](system.eps){width="6.3in" height="6.3in"} The encoder input vector consists of a [*data*]{} part $\mathbf{u}_{\mathcal{A}}$ and a [*frozen*]{} part $\mathbf{u}_{\mathcal{A}^{\mathrm{c}}}$, where $\mathcal{A}$ is chosen in accordance with polar code design rules as explained in [@arikan]. We fix the frozen part $\mathbf{u}_{\mathcal{A}^{\mathrm{c}}}$ to zero in this study.
We define a [*frozen-bit indicator vector*]{} $\boldsymbol{a}$ so that $\boldsymbol{a}$ is a 0-1 vector of length $N$ with $$\begin{aligned} a_{i}= \left\{ \begin{array}{ll} 0, & \mbox{if} \ i \in \mathcal{A}^{\mathrm{c}} \\ \nonumber 1, & \mbox{if} \ i \in \mathcal{A} . \nonumber \end{array} \right. \nonumber\end{aligned}$$ The frozen-bit indicator vector is made available to the decoder in the system. The channel $W$ in the system is an arbitrary discrete memoryless channel with input alphabet ${\cal X}=\{0,1\}$, output alphabet ${\cal Y}$ and transition probabilities $\{W(y|x):x\in {\cal X},y\in {\cal Y}\}$. In each use of the system, a codeword $\mathbf{x} \in \mathbb{F}_{2}^{N}$ is transmitted, and a channel output vector $\mathbf{y} \in {\cal Y}^{N}$ is received. The receiver calculates a log-likelihood ratio (LLR) vector $\boldsymbol{\ell}=(\ell_{0},\ldots,\ell_{N-1})$ with $$\ell_i = \ln \left(\frac{W\left(y_{i} | x_{i}=0\right)}{W\left(y_{i} | x_{i}=1\right)}\right),$$ and feeds it into the SC decoder. The decoder in the system is an SC decoder as described in [@arikan], which takes as input the channel LLRs and the frozen-bit indicator vector and calculates an estimate of the data vector $\mathbf{u}$. The SC algorithm outputs bit decisions sequentially, one at a time in natural index order, with each bit decision depending on prior bit decisions.
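As a concrete illustration of the LLR calculation (our own example, not from the implementations discussed later): for a binary symmetric channel with crossover probability $p$, the LLR above reduces to $\pm\ln\frac{1-p}{p}$. A minimal Python helper might look like this:

```python
import math

def bsc_llr(y, p):
    """Channel LLR ln(W(y|0) / W(y|1)) for a binary symmetric channel
    with crossover probability p (illustrative special case)."""
    w0 = 1 - p if y == 0 else p  # W(y | x = 0)
    w1 = 1 - p if y == 1 else p  # W(y | x = 1)
    return math.log(w0 / w1)
```

For $p < 1/2$, a received $0$ yields a positive LLR (bit $0$ more likely) and a received $1$ a negative one, matching the sign convention of \[eq:sign\].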
A precise statement of the SC algorithm is given in Algorithm \[alg:2NbyN\_dec\], where the functions $f_{N/2}$ and $g_{N/2}$ are defined as $$\begin{gathered} f_{N/2}(\boldsymbol{\ell})=\left(f(\ell_{0},\ell_{1}), \ldots, f(\ell_{N-2},\ell_{N-1})\right)\\ g_{N/2}(\boldsymbol{\ell}, \mathbf{v})=\left(g(\ell_{0},\ell_{1},v_{0}), \ldots, g(\ell_{N-2},\ell_{N-1},v_{N/2-1})\right)\end{gathered}$$ with $$\begin{gathered} f(\ell_{1},\ell_{2})=2\tanh^{-1}\left(\tanh\left(\ell_1/2\right) \,\tanh\left(\ell_2/2\right) \right) \\ g(\ell_{1},\ell_{2},v)=\ell_{1}(-1)^{v} + \ell_{2}.\end{gathered}$$ In actual implementations discussed in this paper, the function $f$ is approximated using the [*min-sum*]{} formula $$f(\ell_{1},\ell_{2})\approx(1-2\mathrm{s}(\ell_{1}))\cdot (1-2\mathrm{s}(\ell_{2}))\cdot \min\left\{\left|\ell_{1}\right|,\left|\ell_{2}\right|\right\}, \label{eq:f_minsum}$$ and $g$ is realized in the alternative (exact) form $$g(\ell_{1},\ell_{2},v)=\ell_{2}+(1-2v)\cdot \ell_{1}. \label{eq:g_minsum}$$ A key property of the SC decoding algorithm that makes low-complexity implementations possible is its recursive nature, where a decoding instance of block length $N$ is broken in the decoder into two decoding instances of length $N/2$ each. SC Decoder Using Combinational Logic {#sec:combinational} ==================================== The pseudocode in Algorithm \[alg:2NbyN\_dec\] shows that the logic of SC decoding contains no loops, hence it can be implemented using only combinational logic. The potential benefits of a combinational implementation are high throughput and low power consumption, which we show are feasible goals. In this section, we first describe a combinational SC decoder for length $N=4$ to explain the basic idea. Then, we describe the three architectures that we propose. Finally, we give an analysis of complexity and latency characteristics of the proposed architectures.
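The recursion of Algorithm \[alg:2NbyN\_dec\] can be sketched as a behavioral Python model (our own naming, not the hardware description; it uses the min-sum $f$ of \[eq:f\_minsum\] and the exact $g$ of \[eq:g\_minsum\], and an encoder recursion ordered to match the pairing of LLRs in the decoder):

```python
def s(l):
    """Binary sign function: 0 if l >= 0, else 1."""
    return 0 if l >= 0 else 1

def f(l1, l2):
    """Min-sum approximation of f."""
    return (1 - 2 * s(l1)) * (1 - 2 * s(l2)) * min(abs(l1), abs(l2))

def g(l1, l2, v):
    """Exact form of g with partial-sum bit v."""
    return l2 + (1 - 2 * v) * l1

def encode(u):
    """Polar transform, ordered so that partial-sum bit v_i pairs with
    the LLR pair (l_{2i}, l_{2i+1}) in the decoder below."""
    if len(u) == 1:
        return list(u)
    n = len(u) // 2
    c, d = encode(u[:n]), encode(u[n:])
    return [bit for ci, di in zip(c, d) for bit in (ci ^ di, di)]

def decode(llr, a):
    """Recursive SC decoder; a is the frozen-bit indicator vector
    (a_i = 0 forces the decision for bit i to 0)."""
    if len(llr) == 1:
        return [s(llr[0]) * a[0]]
    n = len(llr) // 2
    u1 = decode([f(llr[2 * i], llr[2 * i + 1]) for i in range(n)], a[:n])
    v = encode(u1)  # partial-sums from the first-half decisions
    u2 = decode([g(llr[2 * i], llr[2 * i + 1], v[i]) for i in range(n)], a[n:])
    return u1 + u2
```

On noiseless LLRs this model inverts `encode`, which is a useful sanity check for the recursion.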
Combinational Logic for SC Decoding ----------------------------------- In a combinational SC decoder the decoder outputs are expressed directly in terms of decoder inputs, without any registers or memory elements in between the input and output stages. Below we give the combinational logic expressions for a decoder of size $N=4$, for which the signal flow graph (trellis) is depicted in Fig. \[fig:dec\_graph4\]. ![SC decoding trellis for $N=4$[]{data-label="fig:dec_graph4"}](decodertrellis4.eps){width="2.5in" height="2.5in"} At Stage 0 we have the LLR relations $$\begin{gathered} \ell_{0}^{\prime}=f(\ell_{0},\ell_{1}), \quad \ell_{1}^{\prime}=f(\ell_{2},\ell_{3}),\\ \ell_{0}^{\prime\prime}=g(\ell_{0},\ell_{1},\hat{u}_{0} \oplus \hat{u}_{1}),\quad \ell_{1}^{\prime\prime}=g(\ell_{2},\ell_{3},\hat{u}_{1}).\end{gathered}$$ At Stage 1, the decisions are extracted as follows. $$\begin{gathered} \hat{u}_{0}=\mathrm{s}\left[f\left(f(\ell_{0},\ell_{1}), f(\ell_{2},\ell_{3})\right)\right]\cdot a_{0}, \\ \hat{u}_{1}=\mathrm{s}\left[g\left(f(\ell_{0},\ell_{1}), f(\ell_{2},\ell_{3}),\hat{u}_{0}\right)\right]\cdot a_{1},\\ \hat{u}_{2}=\mathrm{s}\left[f\left(g(\ell_{0},\ell_{1},\hat{u}_{0} \oplus \hat{u}_{1}),g(\ell_{2},\ell_{3},\hat{u}_{1})\right)\right]\cdot a_{2},\\ \hat{u}_{3}=\mathrm{s}\left[g\left(g(\ell_{0},\ell_{1},\hat{u}_{0} \oplus \hat{u}_{1}),g(\ell_{2},\ell_{3},\hat{u}_{1}),\hat{u}_{2}\right)\right]\cdot a_{3},\end{gathered}$$ where the decisions $\hat{u}_0$ and $\hat{u}_2$ may be simplified as $$\begin{gathered} \hat{u}_{0}=\left[\mathrm{s}(\ell_{0}) \oplus \mathrm{s}(\ell_{1}) \oplus \mathrm{s}(\ell_{2}) \oplus \mathrm{s}(\ell_{3})\right]\cdot a_{0}, \\ \hat{u}_{2}=\left[\mathrm{s}\left(g(\ell_{0},\ell_{1},\hat{u}_{0} \oplus \hat{u}_{1})\right) \oplus \mathrm{s}\left(g(\ell_{2},\ell_{3},\hat{u}_{1})\right)\right]\cdot a_{2}.\end{gathered}$$ ![Combinational decoder for $N=4$[]{data-label="fig:N4decoder"}](decoder4.eps){width="3.5in" height="3.5in"} \[sec:combdec\] 
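The $N=4$ expressions above, and the stated simplifications of $\hat{u}_{0}$ and $\hat{u}_{2}$, can be checked numerically with a small behavioral model (our own naming, under the min-sum approximation \[eq:f\_minsum\]; ties at exactly zero LLR are ignored):

```python
def s(l):
    # binary sign function: 0 if l >= 0, else 1
    return 0 if l >= 0 else 1

def f(l1, l2):
    # min-sum f
    return (1 - 2 * s(l1)) * (1 - 2 * s(l2)) * min(abs(l1), abs(l2))

def g(l1, l2, v):
    # exact g with partial-sum bit v
    return l2 + (1 - 2 * v) * l1

def decode4(l, a):
    """Direct combinational expressions for N = 4 (Stage 0 / Stage 1)."""
    u0 = s(f(f(l[0], l[1]), f(l[2], l[3]))) * a[0]
    u1 = s(g(f(l[0], l[1]), f(l[2], l[3]), u0)) * a[1]
    u2 = s(f(g(l[0], l[1], u0 ^ u1), g(l[2], l[3], u1))) * a[2]
    u3 = s(g(g(l[0], l[1], u0 ^ u1), g(l[2], l[3], u1), u2)) * a[3]
    return [u0, u1, u2, u3]
```

For nonzero LLRs, the sign of a min-sum $f$ output is the XOR of the input signs, which is exactly why the simplified forms of $\hat{u}_{0}$ and $\hat{u}_{2}$ hold.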
![image](recursivearchitecture.eps){width="8.0in" height="8.0in"} Fig. \[fig:N4decoder\] shows a combinational logic implementation of the above decoder using only comparators and adders. We use sign-magnitude representation, as in [@asemiparallel], to avoid an excessive number of conversions between different representations. Channel observation LLRs and calculations throughout the decoder are represented by $Q$ bits. The function $g$ of \[eq:g\_minsum\] is implemented using the precomputation method suggested in [@lowlatencysequential] to reduce latency. In order to reduce latency and complexity further, we implement the decision logic for odd-indexed bits as $$\begin{aligned} \hat{u}_{2i+1}= \left\{ \begin{array}{ll} 0\hfill,&\ \mbox{if} \ a_{2i+1} = 0 \\ \mathrm{s}(\lambda_{2})\hfill,& \ \mbox{if} \ a_{2i+1} = 1 \ \mbox{and} \ \left|\lambda_{2}\right| \geq \left|\lambda_{1}\right| \\ \mathrm{s}(\lambda_{1}) \oplus \hat{u}_{2i},& \ \mbox{otherwise}. \end{array} \right. \label{eq:compinsteadofadd}\end{aligned}$$ Architectures ------------- In this section, we propose three SC decoder architectures for polar codes: combinational, pipelined combinational, and hybrid-logic decoders. Thanks to the recursive structure of the SC decoder, the above combinational decoder of size $N=4$ will serve as a basic building block for the larger decoders that we discuss in the next subsection. ### Combinational Decoder A combinational decoder architecture for any block length $N$ using the recursive algorithm in Algorithm \[alg:2NbyN\_dec\] is shown in Fig. \[fig:2NbyN\_fig\]. This architecture uses two combinational decoders of size $N/2$, with glue logic consisting of one $f_{N/2}$ block, one $g_{N/2}$ block, and one size-$N/2$ encoder block. ![RTL schematic for combinational decoder ($N=8$)[]{data-label="fig:N8decoder"}](decoder8.eps){width="5.75in" height="5.75in"} The RTL schematic for a combinational decoder of this type is shown in Fig. \[fig:N8decoder\] for $N=8$.
The decoder submodules of size 4 are the same as in Fig. \[fig:N4decoder\]. The size-4 encoder is implemented using a combinational circuit consisting of XOR gates. The logic blocks in a combinational decoder are directly connected without any synchronous logic elements in between, which lets the decoder save time and power by avoiding memory read/write operations. Avoiding the use of memory also reduces hardware complexity. In each clock period, a new channel observation LLR vector is read from the input registers and a decision vector is written to the output registers. The clock period is equal to the overall combinational delay of the circuit, which determines the throughput of the decoder. The decoder differentiates between frozen bits and data bits by AND gates and the frozen bit indicators $a_{i}$, as shown in Fig. \[fig:N4decoder\]. The frozen-bit indicator vector can be changed at the start of each decoding operation, making it possible to change the code configuration in real time. Advantages and disadvantages of combinational decoders will be discussed in more detail in Section \[sec:implementation\]. ### Pipelined Combinational Decoder {#sec:pipelined} Unlike sequential circuits, the combinational architecture explained above has no need for any internal storage elements. The longest path delay determines the clock period in such a circuit. This saves hardware by avoiding the usage of memory, but slows down the decoder. In this subsection, we introduce pipelining in order to increase the throughput at the expense of some extra hardware utilization. It is seen in Fig. \[fig:2NbyN\_fig\] that the outputs of the first decoder block (DECODE($\boldsymbol{\ell}^{\prime}, \boldsymbol{a}^{\prime}$)) are used by the encoder to calculate partial-sums. Therefore, this decoder needs to preserve its outputs after they settle to their final values.
However, this particular decoder can start the decoding operation for another codeword if these partial-sums are stored with the corresponding channel observation LLRs for the second decoder (DECODE($\boldsymbol{\ell}^{\prime\prime}, \boldsymbol{a}^{\prime\prime}$)). Therefore, adding register blocks at certain locations in the decoder enables a pipelined decoding process. Early examples of pipelining in the context of synchronous polar decoders are [@Arikan2010], [@hardwarearchitectures], and [@Pamuk2011]. In synchronous design with pipelining, shared resources at certain stages of decoding have to be duplicated in order to prevent conflicts in calculations when multiple codewords are processed in the decoder. The number of duplications and their stages depend on the number of codewords to be processed in parallel. Since pipelined combinational decoders are derived from combinational decoders, they do not use resource sharing; therefore, resource duplications are not needed. Instead, pipelined combinational decoders aim to reuse the existing resources. This resource reuse is achieved by using storage elements to save the outputs of smaller combinational decoder components and re-employ them in the decoding of another codeword. ![image](pipelinedarchitecture.eps){width="7.0in" height="7.0in"} A single-stage pipelined combinational decoder is shown in Fig. \[fig:2NbyN\_pipe\_fig\]. The channel observation LLR vectors $\boldsymbol{\ell}_{1}$ and $\boldsymbol{\ell}_{2}$ in this architecture correspond to different codewords. The partial-sum vector $\mathbf{v}_{1}$ is calculated from the first half of the decoded vector for $\boldsymbol{\ell}_{1}$. Output vectors $\mathbf{\hat{u}}_{2}^{\prime}$ and $\mathbf{\hat{u}}_{1}^{\prime \prime}$ are the first and second halves of the decoded vectors for $\boldsymbol{\ell}_{2}$ and $\boldsymbol{\ell}_{1}$, respectively. The schedule for this pipelined combinational decoder is given in Table \[table:pipelinedschedule\].
[> m[0.8in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{} | > m[0.1in]{}]{} Clock Cycle & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\ \[0.5ex\] Input of DECODE($\boldsymbol{\ell}, \boldsymbol{a}$) & $\boldsymbol{\ell}_{1}$ & $\boldsymbol{\ell}_{2}$ & $\boldsymbol{\ell}_{3}$ & $\boldsymbol{\ell}_{4}$ & $\boldsymbol{\ell}_{5}$ & $\boldsymbol{\ell}_{6}$ &\ Output of DECODE($\boldsymbol{\ell}^{\prime}, \boldsymbol{a}^{\prime}$) & & $\mathbf{\hat{u}_{1}}^{\prime}$ & $\mathbf{\hat{u}_{2}}^{\prime}$ & $\mathbf{\hat{u}_{3}}^{\prime}$ & $\mathbf{\hat{u}_{4}}^{\prime}$ & $\mathbf{\hat{u}_{5}}^{\prime}$ & $\mathbf{\hat{u}_{6}}^{\prime}$ &\ Output of DECODE($\boldsymbol{\ell}^{\prime \prime}, \boldsymbol{a}^{\prime \prime}$) & & & $\mathbf{\hat{u}_{1}}^{\prime \prime}$ & $\mathbf{\hat{u}_{2}}^{\prime \prime}$ & $\mathbf{\hat{u}_{3}}^{\prime \prime}$ & $\mathbf{\hat{u}_{4}}^{\prime \prime}$ & $\mathbf{\hat{u}_{5}}^{\prime \prime}$ & $\mathbf{\hat{u}_{6}}^{\prime \prime}$\ Output of DECODE($\boldsymbol{\ell}, \boldsymbol{a}$) & & & $\mathbf{\hat{u}_{1}}$ & $\mathbf{\hat{u}_{2}}$ & $\mathbf{\hat{u}_{3}}$ & $\mathbf{\hat{u}_{4}}$ & $\mathbf{\hat{u}_{5}}$ & $\mathbf{\hat{u}_{6}}$\ \[table:pipelinedschedule\] As seen from Table \[table:pipelinedschedule\], pipelined combinational decoders, like combinational decoders, decode one codeword per clock cycle. However, the maximum path delay of a pipelined combinational decoder for block length $N$ is approximately equal to the delay of a combinational decoder for block length $N/2$. Therefore, the single-stage pipelined combinational decoder in Fig. \[fig:2NbyN\_pipe\_fig\] provides approximately twice the throughput of a combinational decoder for the same block length. On the other hand, power consumption and hardware usage increase due to the added storage elements and increased operating frequency.
Pipelining stages can be increased by making the two combinational decoders for block length $N/2$ in Fig. \[fig:2NbyN\_pipe\_fig\] also pipelined in a similar way to increase the throughput further. Comparisons between combinational decoders and pipelined combinational decoders are given in more detail in Section \[sec:implementation\]. ### Hybrid-Logic Decoder {#sec:hybrid} In this part, we give an architecture that combines synchronous decoders with combinational decoders to carry out the decoding operations for component codes. In sequential SC decoding of polar codes, the decoder slows down every time it approaches the decision level (where decisions are made sequentially and the number of parallel calculations decreases). In a hybrid-logic SC decoder, the combinational decoder is used near the decision level to speed up the SC decoder by taking advantage of the GCC structure of the polar code. The GCC structure is illustrated in Fig. \[fig:hybrid\_enc\], which shows that a polar code $\mathcal{C}$ of length $N=8$ can be seen as the concatenation of two polar codes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$, each of length $N^{\prime}=N/2=4$. ![Encoding circuit of $\mathcal{C}$ with component codes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ ($N=8$ and $N^{\prime}=4$)[]{data-label="fig:hybrid_enc"}](encodertrellis.eps){width="2.6in" height="2.6in"} The dashed boxes in Fig. \[fig:hybrid\_enc\] represent the component codes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$. The input bits of the component codes are $\mathbf{\hat{u}}^{(1)}=\left(\hat{u}_{0},\ldots,\hat{u}_{3}\right)$ and $\mathbf{\hat{u}}^{(2)}=\left(\hat{u}_{4},\ldots,\hat{u}_{7}\right)$. For a polar code of block length $8$ and $R=1/2$, the frozen bits are $\hat{u}_{0}$, $\hat{u}_{1}$, $\hat{u}_{2}$, and $\hat{u}_{4}$. This makes $3$ input bits of $\mathcal{C}_{1}$ and $1$ input bit of $\mathcal{C}_{2}$ frozen bits; thus, $\mathcal{C}_{1}$ is a $R=1/4$ code with $\hat{u}^{(1)}_{0}$, $\hat{u}^{(1)}_{1}$, $\hat{u}^{(1)}_{2}$ frozen, and $\mathcal{C}_{2}$ is a $R=3/4$ code with $\hat{u}^{(2)}_{0}$ frozen.
Encoding of $\mathcal{C}$ is done by first encoding $\mathbf{\hat{u}}^{(1)}$ and $\mathbf{\hat{u}}^{(2)}$ separately using encoders for block length $4$ to obtain the coded outputs $\mathbf{\hat{x}}^{(1)}$ and $\mathbf{\hat{x}}^{(2)}$. Then, each pair of coded bits $\left(\mathbf{\hat{x}}_{i}^{(1)}, \mathbf{\hat{x}}_{i}^{(2)}\right)$, $i=0,\ldots,3$, is encoded again using encoders for block length $2$ to obtain the coded bits of $\mathcal{C}$. ![Decoding trellis for hybrid-logic decoder ($N=8$ and $N^{\prime}=4$)[]{data-label="fig:hybrid"}](gcc_and_hybrid.eps){width="3.0in" height="3.0in"} Decoding of $\mathcal{C}$ proceeds in the reverse manner relative to the encoding explained above. Fig. \[fig:hybrid\] shows the decoding trellis for the given example. Two separate decoding sessions for block length $4$ are required to decode the component codes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$. We denote the input LLRs for the component codes as $\boldsymbol{\lambda}^{(1)}$ and $\boldsymbol{\lambda}^{(2)}$, as shown in Fig. \[fig:hybrid\]. These inputs are calculated by the operations at stage 0. The frozen bit indicator vector of $\mathcal{C}$ is $\boldsymbol{a}=\left(0,0,0,1,0,1,1,1\right)$ and the frozen bit indicator vectors of the component codes are $\boldsymbol{a}^{(1)}=\left(0,0,0,1\right)$ and $\boldsymbol{a}^{(2)}=\left(0,1,1,1\right)$. It is seen that $\boldsymbol{\lambda}^{(2)}$ depends on the decoded outputs of $\mathcal{C}_{1}$, since $g$ functions are used to calculate $\boldsymbol{\lambda}^{(2)}$ from the input LLRs. This implies that the component codes cannot be decoded in parallel. The dashed boxes in Fig. \[fig:hybrid\] show the operations performed by a combinational decoder for $N^{\prime}=4$. The operations outside the boxes are performed by a synchronous decoder. The sequence of decoding operations in this hybrid-logic decoder is as follows: a synchronous decoder takes the channel observation LLRs and uses them to calculate the intermediate LLRs that require no partial-sums at stage $0$. When the synchronous decoder completes its calculations at stage $0$, the resulting intermediate LLRs are passed to a combinational decoder for block length $4$.
The combinational decoder outputs $\hat{u}_{0}, \ldots, \hat{u}_{3}$ (uncoded bits of the first component code) while the synchronous decoder waits for a period equal to the maximum path delay of the combinational decoder. The decoded bits are passed to the synchronous decoder to be used in partial-sums ($\hat{u}_{0} \oplus \hat{u}_{1} \oplus \hat{u}_{2} \oplus \hat{u}_{3}$, $\hat{u}_{1} \oplus \hat{u}_{3}$, $\hat{u}_{2} \oplus \hat{u}_{3}$, and $\hat{u}_{3}$). The synchronous decoder calculates the intermediate LLRs using these partial-sums with the channel observation LLRs and passes the calculated LLRs to the combinational decoder, where they are used for decoding of $\hat{u}_{4}, \ldots, \hat{u}_{7}$ (uncoded bits of the second component code). Since the combinational decoder architecture proposed in this work can adapt to operate on any code set using the frozen bit indicator vector input, a single combinational decoder is sufficient for decoding all bits. During the decoding of a codeword, each decoder (combinational and sequential) is activated $2$ times. Algorithm \[alg:alghybrid\] shows the algorithm for hybrid-logic polar decoding for general $N$ and $N^{\prime}$. For the $i^{th}$ activation of the combinational and sequential decoders, $1 \leq i \leq N/N^{\prime}$, the LLR vector that is passed from the synchronous to the combinational decoder, the frozen bit indicator vector for the $i^{th}$ component code, and the output bit vector are denoted by $\boldsymbol{\lambda}^{(i)}$, $\boldsymbol{a}^{(i)}$, and $\mathbf{\hat{u}}^{(i)}$, respectively. The function DECODE\_SYNCH represents the synchronous decoder that calculates the intermediate LLR values at stage $(\log _{2}(N/N^{\prime})-1)$, using the channel observations and partial-sums at each repetition.
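The hybrid flow described above can be modeled behaviorally in Python (a self-contained sketch with our own naming, repeating the elementary $f$/$g$/encode helpers so it runs on its own; the "combinational" decoder is functionally an SC decoder for block length $N^{\prime}$, so the hybrid schedule changes timing, not input-output behavior):

```python
def s(l):
    return 0 if l >= 0 else 1

def f(l1, l2):
    # min-sum f
    return (1 - 2 * s(l1)) * (1 - 2 * s(l2)) * min(abs(l1), abs(l2))

def g(l1, l2, v):
    # exact g with partial-sum bit v
    return l2 + (1 - 2 * v) * l1

def encode(u):
    # polar transform, ordered to match the decoder's LLR pairing
    if len(u) == 1:
        return list(u)
    n = len(u) // 2
    c, d = encode(u[:n]), encode(u[n:])
    return [bit for ci, di in zip(c, d) for bit in (ci ^ di, di)]

def sc_decode(llr, a):
    """Plain recursive SC decoder (stands in for the combinational block)."""
    if len(llr) == 1:
        return [s(llr[0]) * a[0]]
    n = len(llr) // 2
    u1 = sc_decode([f(llr[2 * i], llr[2 * i + 1]) for i in range(n)], a[:n])
    v = encode(u1)
    u2 = sc_decode([g(llr[2 * i], llr[2 * i + 1], v[i]) for i in range(n)], a[n:])
    return u1 + u2

def hybrid_decode(llr, a, n_prime):
    """'Synchronous' recursion down to block length n_prime, where the whole
    length-n_prime subproblem is handed to the combinational decoder at once."""
    if len(llr) == n_prime:
        return sc_decode(llr, a)  # one combinational activation
    n = len(llr) // 2
    u1 = hybrid_decode([f(llr[2 * i], llr[2 * i + 1]) for i in range(n)],
                       a[:n], n_prime)
    v = encode(u1)  # partial-sums from the already-decoded bits
    u2 = hybrid_decode([g(llr[2 * i], llr[2 * i + 1], v[i]) for i in range(n)],
                       a[n:], n_prime)
    return u1 + u2
```

By construction, `hybrid_decode` produces exactly the same decisions as `sc_decode`; only the division of work between synchronous and combinational logic differs.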
During the time period in which the combinational decoder operates, the synchronous decoder waits for $\left\lceil D_{N^{\prime}}\cdot f_{c}\right\rceil$ clock cycles, where $f_{c}$ is the operating frequency of the synchronous decoder and $D_{N^{\prime}}$ is the delay of a combinational decoder for block length $N^{\prime}$. We can calculate the approximate latency gain obtained by a hybrid-logic decoder with respect to the corresponding synchronous decoder as follows: let $\mathrm{L}_{\mathrm{S}}\left(N\right)$ denote the latency of a synchronous decoder for block length $N$. The latency reduction obtained using a combinational decoder for a component code of length $N^{\prime}$ in a single repetition is $\mathrm{L}_{\mathrm{r}}\left(N^{\prime}\right)=\mathrm{L}_{\mathrm{S}}\left(N^{\prime}\right)-\left\lceil D_{N^{\prime}}\cdot f_{c}\right\rceil$. In this formulation, it is assumed that no numerical representation conversions are needed when LLRs are passed from the synchronous to the combinational decoder. Furthermore, we assume that the maximum path delays of the combinational and synchronous decoders do not change significantly when they are implemented together. Then, the latency gain factor can be approximated as $$\begin{aligned} \mathrm{g}(N,N^{\prime})\approx\frac{\mathrm{L}_{\mathrm{S}}\left(N\right)}{\mathrm{L}_{\mathrm{S}}\left(N\right)-\left(N/N^{\prime}\right)\mathrm{L}_{\mathrm{r}}\left(N^{\prime}\right)}. \label{eq:gain}\end{aligned}$$ The approximation is due to the additional latency from partial-sum updates at the end of each repetition using the $N^{\prime}$ decoded bits. Efficient methods for updating partial sums can be found in [@anefficientpartsumnet] and [@ascalablesc]. This latency gain multiplies the throughput of the synchronous decoder, so that $$\begin{aligned} \mathrm{TP}_{\mathrm{HL}}(N,N^{\prime})=\mathrm{g}(N,N^{\prime})\: \mathrm{TP}_{\mathrm{S}}(N), \nonumber\end{aligned}$$ where $\mathrm{TP}_{\mathrm{S}}(N)$ and $\mathrm{TP}_{\mathrm{HL}}(N,N^{\prime})$ are the throughputs of the synchronous and hybrid-logic decoders, respectively.
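The gain formula \[eq:gain\] and the resulting throughput can be evaluated with a small helper (illustrative only; the latency and frequency numbers used in any example call are hypothetical, not measured values from this paper):

```python
def latency_gain(L_S, L_r, N, N_prime):
    """Gain factor g(N, N') of [eq:gain]; L_S is the synchronous-decoder
    latency for block length N (in cycles), L_r the per-repetition latency
    reduction from the length-N' combinational decoder."""
    return L_S / (L_S - (N // N_prime) * L_r)

def hybrid_throughput(tp_sync, L_S, L_r, N, N_prime):
    """TP_HL(N, N') = g(N, N') * TP_S(N)."""
    return latency_gain(L_S, L_r, N, N_prime) * tp_sync
```

For instance, with hypothetical values $\mathrm{L}_{\mathrm{S}}(N)=2048$ cycles, $\mathrm{L}_{\mathrm{r}}(N^{\prime})=20$ cycles, $N=1024$, and $N^{\prime}=16$, the gain is $2048/768\approx 2.67$.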
An example of the analytical calculations for throughputs of hybrid-logic decoders is given in Section \[sec:implementation\]. Analysis {#sec:analysis} -------- In this section, we analyze the complexity and delay of combinational architectures. We exploit the recursive structure of polar decoders (Algorithm \[alg:2NbyN\_dec\]) in these analyses. ### Complexity Combinational decoder complexity can be expressed in terms of the total number of comparators, adders, and subtractors in the design, as they are the basic building blocks of the architecture and have similar complexities. First, we estimate the number of comparators. Comparators are used in two different places in the combinational decoder, as explained in Section \[sec:combdec\]: in implementing the function $f$ of \[eq:f\_minsum\], and as part of the decision logic for odd-indexed bits. Let $c_{N}$ denote the number of comparators used for implementing the function $f$ for a decoder of block length $N$. From Algorithm \[alg:2NbyN\_dec\], we see that the initial value of $c_N$ may be taken as $c_{4}=2$. From Fig. \[fig:N4decoder\], we observe that there is the recursive relationship $$c_{N}=2c_{N/2}+\frac{N}{2}=2\left(2c_{N/4}+\frac{N}{4}\right)+\frac{N}{2}=\ldots.$$ This recursion has the following (exact) solution $$c_N = \frac{N}{2}\log_2\frac{N}{2}$$ as can be verified easily. Let $s_{N}$ denote the number of comparators used for the decision logic in a combinational decoder of block length $N$. We observe that $s_{4}=2$ and more generally $s_N = 2s_{N/2}$; hence, $$\begin{aligned} s_{N}=\frac{N}{2}. \nonumber \end{aligned}$$ Next, we estimate the number of adders and subtractors. The function $g$ of \[eq:g\_minsum\] is implemented using an adder and a subtractor, as explained in Section \[sec:combdec\]. We define $r_{N}$ as the total number of adders and subtractors in a combinational decoder for block length $N$.
Observing that $r_N = 2c_N$, we obtain $$r_{N}=N\log _{2}\left(N/2\right).$$ Thus, the total number of basic logic blocks with similar complexities is given by $$\begin{aligned} c_{N}+s_{N}+r_{N}=N\left(\frac{3}{2}\log _{2}\left(N\right)-1\right), \label{eq:complexity}\end{aligned}$$ which shows that the complexity of the combinational decoder is roughly $N\log _{2}\left(N\right)$. ### Combinational Delay {#sec:delay_analysis} We approximately calculate the delay of combinational decoders using Fig. \[fig:2NbyN\_fig\]. The combinational logic delays, excluding interconnect delays, of each component forming the DECODE($\boldsymbol{\ell}, \boldsymbol{a}$) block are listed in Table \[table:componentdelays\]. [&gt; m[1.0in]{} | &gt; m[0.5in]{}]{} Block & Delay\ \[0.5ex\] $f_{N/2}(\boldsymbol{\ell})$ & $\delta_{c}+\delta_{m}$\ DECODE($\boldsymbol{\ell}^{\prime}, \boldsymbol{a}^{\prime}$) & $D^{\prime}_{N/2}$\ ENCODE($\mathbf{v}$) & $E_{N/2}$\ $g_{N/2}(\boldsymbol{\ell}, \mathbf{v})$ & $\delta_{m}$\ DECODE($\boldsymbol{\ell}^{\prime \prime}, \boldsymbol{a}^{\prime \prime}$) & $D^{\prime \prime}_{N/2}$\ \[table:componentdelays\] The parallel comparator block $f_{N/2}(\boldsymbol{\ell})$ in Fig. \[fig:2NbyN\_fig\] has a combinational delay of $\delta_{c}+\delta_{m}$, where $\delta_{c}$ is the delay of a comparator and $\delta_{m}$ is the delay of a multiplexer. The delay of the parallel adder and subtractor block $g_{N/2}(\boldsymbol{\ell}, \mathbf{v})$ appears as $\delta_{m}$ due to the precomputation method, as explained in Section \[sec:combdec\]. The maximum path delay of the encoder can be approximated as $E_{N/2}\approx\left[\log _{2}\left(\frac{N}{2}\right)\right]\delta_{x}$, where $\delta_{x}$ denotes the propagation delay of a $2$-input XOR gate. We model $D^{\prime}_{N/2}\approx D^{\prime \prime}_{N/2}$, although it is seen from Fig. 
\[fig:2NbyN\_fig\] that DECODE($\boldsymbol{\ell}^{\prime}, \boldsymbol{a}^{\prime}$) has a larger load capacitance than DECODE($\boldsymbol{\ell}^{\prime \prime}, \boldsymbol{a}^{\prime \prime}$) due to the ENCODE($\mathbf{v}$) block it drives. However, this assumption is reasonable since the circuits that are driving the encoder block at the output of DECODE($\boldsymbol{\ell}^{\prime}, \boldsymbol{a}^{\prime}$) are bit-decision blocks and they compose a small portion of the overall decoder block. Therefore, we can express $D_{N}$ as $$\begin{aligned} D_{N}=2D^{\prime}_{N/2}+\delta_{c}+2\delta_{m}+E_{N/2}. \label{eq:delayeq}\end{aligned}$$ We use the combinational decoder for $N=4$ as the base decoder to obtain combinational decoders for larger block lengths in Section \[sec:combdec\]. Therefore, we can write $D_{N}$ in terms of $D^{\prime}_{4}$ and substitute the expression for $D^{\prime}_{4}$ to obtain the final expression for combinational delay. Using the recursive structure of combinational decoders, we can write $$\begin{aligned} D_{N}=\frac{N}{4}D^{\prime}_{4}&+\left(\frac{N}{4}-1\right)(\delta_{c}+2\delta_{m}) \nonumber \\ &+\left(\frac{3N}{4}-\log _{2}\left(N\right)-1\right)\delta_{x}+\mathrm{T}_{N}. \label{eq:delayeq_finalpre} \end{aligned}$$ Next, we obtain an expression for $D^{\prime}_{4}$ using Fig. \[fig:N4decoder\]. Assuming $\delta_{c} \geq 3\delta_{x}+\delta_{a}$, we can write $$\begin{aligned} D^{\prime}_{4}=3\delta_{c}+4\delta_{m}+\delta_{x}+2\delta_{a}, \label{eq:delayeq_N4} \end{aligned}$$ where $\delta_{a}$ represents the delay of an AND gate. Finally, substituting in , we get $$\begin{aligned} D_{N}=N&\left(\frac{3\delta_{m}}{2}+\delta_{c}+\delta_{x}+\frac{\delta_{a}}{2}\right) \nonumber \\ &-\left\{\delta_{c}+2\delta_{m}+\left[\log _{2}\left(N\right)+1\right]\delta_{x}\right\}+\mathrm{T}_{N}, \label{eq:delayeq_final} \end{aligned}$$ for $N>4$. 
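The consistency of the closed-form delay expression above with the recursion can be verified numerically for arbitrary gate delays. The unit delays below are hypothetical values chosen only to satisfy the assumption $\delta_{c} \geq 3\delta_{x}+\delta_{a}$, with the interconnect term $\mathrm{T}_{N}$ set to zero:

```python
import math

# Hypothetical gate delays (arbitrary units), with d_c >= 3*d_x + d_a
d_c, d_m, d_x, d_a = 4.0, 1.0, 1.0, 1.0  # comparator, mux, XOR, AND

def D_rec(N):
    """Recursion D_N = 2*D_{N/2} + d_c + 2*d_m + E_{N/2},
    E_{N/2} = log2(N/2)*d_x, base case D_4 = 3*d_c + 4*d_m + d_x + 2*d_a."""
    if N == 4:
        return 3*d_c + 4*d_m + d_x + 2*d_a
    return 2*D_rec(N // 2) + d_c + 2*d_m + math.log2(N // 2)*d_x

def D_closed(N):
    """Closed form: N*(1.5*d_m + d_c + d_x + 0.5*d_a)
    - (d_c + 2*d_m + (log2(N) + 1)*d_x), again with T_N = 0."""
    return (N*(1.5*d_m + d_c + d_x + 0.5*d_a)
            - (d_c + 2*d_m + (math.log2(N) + 1)*d_x))

for N in (8, 16, 64, 1024):
    assert abs(D_rec(N) - D_closed(N)) < 1e-9
```

Since the recursion and the closed form coincide for all $N>4$, the linear-in-$N$ leading term dominates the maximum path delay for large block lengths, as stated in the text.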
The interconnect delay of the overall design, $\mathrm{T}_{N}$, cannot be formulated since the routing process is not deterministic. We mentioned in Section \[sec:combdec\] that the delay reduction obtained by precomputation in adders increases linearly with $N$. This can be seen by observing the expressions and . Recalling that we model the delay of an adder with precomputation by $\delta_{m}$, the first and second terms of contain the delays of adder block stages, both of which are multiplied by a factor of roughly $N/4$. This implies that the overall delay gain obtained by precomputation is approximately equal to the difference between the delay of an adder and a multiplexer, multiplied by $N/2$. The expression shows the relation between the basic logic element delays and the maximum path delay of combinational decoders. As $N$ grows, the second term in becomes negligible with respect to the first term, making the maximum path delay linearly proportional to $\left(\frac{3\delta_{m}}{2}+\delta_{c}+\delta_{x}+\frac{\delta_{a}}{2}\right)$ with the additive interconnect delay term $\mathrm{T}_{N}$. The combinational architecture involves heavy routing, and the interconnect delay is expected to be a non-negligible component of the maximum path delay. The analytical results obtained here will be compared with implementation results in the next section. Performance Results {#sec:implementation} =================== In this section, implementation results of combinational and pipelined combinational decoders are presented. Throughput and hardware usage are studied both in ASIC and FPGA, and a detailed discussion of the power consumption characteristics is given for the ASIC design. 
The metrics we use to evaluate ASIC implementations are throughput, energy-per-bit, and hardware efficiency, which are defined as $$\begin{aligned} \mathrm{Throughput} &\mathrm{[b/s]} = \frac{N \mathrm{[bit]}}{D_{N}\mathrm{[sec]}}, \nonumber \\ \mathrm{Energy}\mathrm{-}\mathrm{per}\mathrm{-}\mathrm{bit}& \mathrm{[J/b]} = \frac{\mathrm{Power} \mathrm{[W]}}{\mathrm{Throughput} \mathrm{[b/s]}}, \nonumber \\ \mathrm{Hardware} \ \mathrm{Efficiency}& \mathrm{[b/s/m^{2}]} = \frac{\mathrm{Throughput} \mathrm{[b/s]}}{\mathrm{Area} \mathrm{[m^{2}]}}, \nonumber \\ \label{eqn:metrics} \end{aligned}$$ respectively. These metrics of combinational decoders are also compared with state-of-the-art decoders. The number of look-up tables (LUTs) and flip-flops (FFs) in the design are studied in addition to throughput in FPGA implementations. Formulas for achievable throughputs in hybrid-logic decoders are also given in this section. ASIC Synthesis Results {#sec:asiciplementation} ---------------------- ### Post-Synthesis Results {#sec:asic_postsynth} Table \[table:asic\] gives the post-synthesis results of combinational decoders using Cadence Encounter RTL Compiler for block lengths $2^{6}$ - $2^{10}$ with Faraday’s UMC $90$ nm $1.3$ V FSD0K-A library. Combinational decoders of such sizes can be used as standalone decoders, *e.g.*, wireless transmission of voice and data; or as parts of a hybrid-logic decoder of much larger size, as discussed in Section \[sec:hybrid\]. We use $Q=5$ bits for quantization in the implementation. As shown in Fig. \[fig:quant\_fer\], the performance loss with $5$-bit quantization is negligible at $N=1024$ (this is true also at lower block lengths, although not shown here). 
[&gt; m[1.08in]{} | &gt; m[0.27in]{} | &gt; m[0.27in]{} | &gt; m[0.27in]{} | &gt; m[0.27in]{} | &gt; m[0.27in]{}]{} N & $2^{6}$ & $2^{7}$ & $2^{8}$ & $2^{9}$ & $2^{10}$\ \[0.5ex\] Technology &\ Area \[$\mathrm{m}\mathrm{m}^{2}$\] & 0.153 & 0.338 & 0.759 & 1.514 & 3.213\ Number of Cells & 24.3K & 57.2K & 127.5K & 260.8K & 554.3K\ Dec. Power \[mW\] & 99.8 & 138.8 & 158.7 & 181.4 & 190.7\ Frequency \[MHz\] & 45.5 & 22.2 & 11.0 & 5.2 & 2.5\ Throughput \[Gb/s\] & 2.92 & 2.83 & 2.81 & 2.69 & 2.56\ \[pJ/b\] & 34.1 & 49.0 & 56.4 & 67.4 & 74.5\ \[Mb/s/$\mathrm{m}\mathrm{m}^{2}$\] & 19084 & 8372 & 3700 & 1776 & 796\ \[table:asic\] ![FER performance with different numbers of quantization bits ($N=1024$, $R=1/2$)[]{data-label="fig:quant_fer"}](quant_fer.eps){width="3.5in" height="3.5in"} The results given in Table \[table:asic\] verify the analytical results for complexity and delay. It is expected from that the ratio of decoder complexities for block lengths $N$ and $N/2$ should be approximately $2$. This can be verified by observing the number of cells and the area of the decoders in Table \[table:asic\]. As studied in Section \[sec:delay\_analysis\], implies that the maximum path delay is approximately doubled due to the basic logic elements, and there is also a non-deterministic additive delay due to the interconnects, which is also expected to at least double when the block length is doubled. The maximum delay results in Table \[table:asic\] show that this analytical derivation also holds for the given block lengths. It is seen from Table \[table:asic\] that the removal of registers and RAM blocks from the design keeps the hardware usage at moderate levels despite the high number of basic logic blocks in the architecture. Moreover, the delays due to register read and write operations and clock setup/hold times, which accumulate to significant amounts as $N$ increases, are eliminated. 
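As a quick consistency check, the metrics defined above can be recomputed from the $N=2^{10}$ column of Table \[table:asic\]:

```python
# Figures for N = 2^10 from Table [table:asic]
throughput = 2.56e9       # [b/s]
power = 190.7e-3          # decoding power [W]
area_mm2 = 3.213          # [mm^2]

energy_per_bit = power / throughput    # [J/b]
hw_efficiency = throughput / area_mm2  # [b/s/mm^2]

print(round(energy_per_bit * 1e12, 1))  # 74.5 pJ/b, as in the table
print(round(hw_efficiency / 1e6))       # ~797 Mb/s/mm^2 (table: 796)
```

The small discrepancy in the last digit of the hardware efficiency is due to rounding in the quoted throughput.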
### Power Analysis {#sec:power_analysis} Table \[table:asic\] shows that the power consumption of combinational decoders tends to saturate as $N$ increases. In order to fully understand this behavior, a detailed report of the power characteristics of combinational decoders is given in Table \[table:asic\_power\]. [ &gt; m[0.52in]{} | &gt; m[0.25in]{} | &gt; m[0.25in]{} | &gt; m[0.25in]{} | &gt; m[0.25in]{} | &gt; m[0.25in]{}]{} $N$ & $2^{6}$ & $2^{7}$ & $2^{8}$ & $2^{9}$ & $2^{10}$\ \[0.5ex\] Stat. \[nW\] & 701.8 & 1198.7 & 2772.8 & 6131.2 & 14846.7\ Dyn. \[mW\] & 99.8 & 138.8 & 158.7 & 181.3 & 190.5\ \[table:asic\_power\] Table \[table:asic\_power\] shows the power consumption in combinational decoders in two parts: static and dynamic power. Static power is due to the leakage currents in transistors when there is no voltage change in the circuit. Therefore, it is proportional to the number of transistors and the capacitance in the circuit ([@vlsibook]). By observing the number of cells given in Table \[table:asic\], we can verify that the static power consumption in Table \[table:asic\_power\] approximately doubles when $N$ is doubled. On the other hand, dynamic power consumption is related to the total charging and discharging capacitance in the circuit and is given by $$\begin{aligned} P_{\mathrm{dynamic}}=\alpha C V_{DD}^{2} f_{c}, \label{eq:dynamic_power}\end{aligned}$$ where $\alpha$ represents the average percentage of the circuit that switches with the switching voltage, $C$ is the total load capacitance, $V_{DD}$ is the drain voltage, and $f_{c}$ is the operating frequency of the circuit ([@vlsibook]). The behavior of the dynamic power consumption given in Table \[table:asic\_power\] can be explained as follows: the total load capacitance of the circuit is approximately doubled when $N$ is doubled, since the load capacitance is proportional to the number of cells in the decoder. 
On the other hand, the operating frequency of the circuit is approximately halved when $N$ is doubled, as discussed above. The activity factor represents the switching percentage of the load capacitance and is therefore not affected by changes in $N$. Hence, the product of these parameters yields approximately the same dynamic power consumption for decoders of different block lengths. The decoding period of a combinational decoder is almost equally shared by the two combinational decoders for half the code length. During the first half of this period, the bit-estimate voltage levels at the output of the first decoder may vary until they stabilize. These variations cause the input LLR values of the second decoder to change, as they depend on the partial-sums calculated from the outputs of the first decoder. Therefore, the second decoder may consume undesired power during the first half of the decoding period. In order to prevent this, the partial-sums are fed to the $g_{N/2}$ block through $2$-input AND gates, whose second input is held low during the first half of the delay period and high during the second half. This method can be applied recursively inside the decoders for half code lengths in order to reduce the power consumption further. We have observed that small variations in timing constraints may lead to significant changes in power consumption. More precise figures on power consumption will be provided in the future when an implementation of this design becomes available. ### Comparison With Other Polar Decoders {#sec:comp_other_polar} In order to have a better understanding of decoder performance, we compare the combinational decoder for $N=1024$ with three state-of-the-art decoders in Table \[table:asiccomparison\]. We use standard conversion formulas in [@turbo3gpp] and [@690mWLDPC] to convert all designs to $65$ nm, $1.0$ V for a fair (subject to the limitations of any such study) comparison. 
[&gt; m[0.65in]{}|&gt; m[0.45in]{} |&gt; m[0.43in]{} |&gt; m[0.43in]{} |&gt; m[0.19in]{} |&gt; m[0.19in]{}]{} & Comb. & [@scasic] & [@anefficientpartsumnet] &\ Decoder Type & SC & SC & SC &\ Block Length & 1024 & 1024 & 1024 &\ Technology & 90 nm & $180$ nm & 65 nm &\ Area \[mm$^{2}$\] & 3.213 & 1.71 & 0.68 &\ Voltage \[V\] & 1.3 & 1.3 & 1.2 & 1.0 & 0.475\ Freq. \[MHz\] & 2.5 & 150 & 1010 & 300 & 50\ Power \[mW\] & 190.7 & 67 & - & 477.5 & 18.6\ TP \[Mb/s\] & 2560 & 49 & 497 & 4676 & 779.3\ \[pJ/b\] & 74.5 & 1370 & - & 102.1 & 23.8\ \[Mb/s/mm$^{2}$\] & 796 & 29^\*^ & 730^\*^ & 3168 & 528\ \ Area \[mm$^{2}$\] & 1.676 & 0.223 & 0.68 &\ Power \[mW\] & 81.5 & 14.3 & - & 477.5 & 82.4\ TP \[Mb/s\] & 3544 & 136 & 497 & 4676 & 779.3\ \[pJ/b\] & 23.0 & 105.2 & - & 102.1 & 105.8\ \[Mb/s/mm$^{2}$\] & 2114 & 610 & 730 & 3168 & 528\ ^\*^ Not presented in the paper, calculated from the presented results ^\*\*^ Results are given for $(1024,512)$ code at $4$dB SNR Information bit throughput for $(1024,512)$ code \[table:asiccomparison\] As seen from the technology-converted results in Table \[table:asiccomparison\], combinational decoder provides the highest throughput among the state-of-the-art SC decoders. Combinational decoders are composed of simple basic logic blocks with no storage elements or control circuits. This helps to reduce the maximum path delay of the decoder by removing delays from read/write operations, setup/hold times, complex processing elements and their management. Another factor that reduces the delay is assigning a separate logic element to each decoding operation, which allows simplifications such as the use of comparators instead of adders for odd-indexes bit decisions. Furthermore, the precomputation method reduces the delays of addition/subtraction operations to that of multiplexers. 
These elements give combinational decoders a throughput advantage over even fully-parallel SC decoders, and hence over [@scasic] and [@anefficientpartsumnet], which are semi-parallel decoders with slightly higher latencies than fully-parallel decoders. The reduced operating frequency, combined with simple basic logic blocks and the absence of read, write, and control operations, gives the combinational decoders a low power consumption. The use of separate logic blocks for each computation in the decoding algorithm, together with the precomputation method, increases the hardware consumption of combinational decoders. This can be observed by comparing the areas occupied by the three SC decoders, and is an expected result of the trade-off between throughput, area, and power in digital circuits. However, the high throughput of combinational decoders makes them hardware-efficient architectures, as seen in Table \[table:asiccomparison\]. Implementation results for the BP decoder in [@bpasicthesis] are given for operating characteristics at $4$ dB SNR, at which the decoder requires an average of $6.57$ iterations per codeword for low error rates. The number of required iterations for BP decoders increases at lower SNR values. Therefore, the throughput of the BP decoder in [@bpasicthesis] is expected to decrease, and its power consumption to increase, with respect to the results in Table \[table:asiccomparison\]. On the other hand, SC decoders operate with the same performance metrics at all SNR values, since the total number of calculations in the conventional SC decoding algorithm is constant ($N \log_{2} N$) and independent of the number of errors in the received codeword. The performance metrics for the decoder in [@bpasicthesis] are given for low-power/low-throughput and high-power/high-throughput modes. The power reduction in this decoder is obtained by reducing the operating frequency and supply voltage for the same architecture, which also reduces the throughput. 
Table \[table:asiccomparison\] shows that the throughput of the combinational decoder is lower than that of [@bpasicthesis] only when the latter is operated in high-power mode. In this mode, [@bpasicthesis] provides a throughput approximately $1.3$ times larger than that of the combinational decoder, while consuming $5.8$ times more power. The advantage of combinational decoders in power consumption can be seen from the energy-per-bit characteristics in Table \[table:asiccomparison\]. The combinational decoder consumes the lowest energy per decoded bit among the decoders compared. ### Comparison With LDPC Decoders {#sec:comp_ldpc_dec} A comparison of combinational SC polar decoders with state-of-the-art LDPC decoders is given in Table \[table:comb\_vs\_ldpc\]. The LDPC decoder presented in [@ldpc] is a multirate decoder capable of operating with $4$ different code rates. The LDPC decoder in [@parkthesis] is a high-throughput LDPC decoder. It is seen from Table \[table:comb\_vs\_ldpc\] that the throughputs of the LDPC decoders are higher than that of the combinational decoder for $5$ and $10$ iterations without early termination. The throughput is expected to increase at higher and decrease at lower SNR values, as explained above. The power consumption and area of the LDPC decoders are seen to be higher than those of the combinational decoder. 
[&gt; m[1.05in]{}|&gt; m[0.55in]{} |&gt; m[0.55in]{} |&gt; m[0.55in]{}]{} & Comb.^\*\*^ & [@ldpc]^\*^ & [@parkthesis]\ Code/Decoder Type & Polar/SC & LDPC/BP & LDPC/BP\ Block Length & 512 & 672 & 672\ Code Rate & Any & 1/2, 5/8, 3/4, 7/8 & 1/2\ Area \[mm$^{2}$\] & 0.79 & 1.56 & 1.60\ Power \[mW\] & 77.5 & 361 & 782.9\ TP \[Gb/s\] & 3.72 & 5.79 & 9.0\ \[pJ/b\] & 20.8 & 62.4 & 89.5^\*\*^\ \[Gb/s/mm$^{2}$\] & 4.70 & 3.7 & 5.63^\*\*^\ ^\*^ Technology=$65$ nm, $1.0$ V ^\*\*^ Technology converted to $65$ nm, $1.0$ V Results are given for $(672,588)$ code and $5$ iterations without early termination Results are given for $(672,336)$ code and $10$ iterations without early termination \[table:comb\_vs\_ldpc\] An advantage of the combinational architecture is the flexibility its pipelined version provides in terms of throughput, power consumption, and area. One can increase the throughput of a combinational decoder by adding any number of pipelining stages. This increases the operating frequency and the number of registers in the circuit, both of which increase the dynamic power consumption in the decoder core and the storage parts of the circuit. The changes in throughput and power consumption with the added registers can be estimated using the characteristics of the combinational decoder. Therefore, combinational architectures present an easy way to control the trade-off between throughput, area, and power. FPGA implementation results for pipelined combinational decoders are given in the next section. FPGA Implementation Results {#sec:fpga_implementations} --------------------------- The combinational architecture involves heavy routing due to the large number of connected logic blocks. This increases hardware resource usage and maximum path delay in FPGA implementations, since routing is done through pre-fabricated routing resources, as opposed to ASIC. In this section, we present FPGA implementations of the proposed decoders and study the effects of this phenomenon. 
Table \[table:virtex6\] shows the place-and-route results of combinational and pipelined combinational decoders on a Xilinx Virtex-6-XC6VLX550T ($40$ nm) FPGA core. The implementation strategy is adjusted to increase the speed of the designs. We use RAM blocks to store the input LLRs, frozen bit indicators, and output bits in the decoders. FFs in combinational decoders are used for small logic circuits and for fetching the RAM outputs, whereas in the pipelined decoder they are also used to store the input LLRs and partial-sums for the second decoding function (Fig. \[fig:2NbyN\_fig\]). It is seen that the throughputs of combinational decoders in FPGA drop significantly with respect to their ASIC implementations. This is due to the high routing delays in FPGA implementations of combinational decoders, which constitute up to $90\%$ of the overall delay. Pipelined combinational decoders are able to obtain throughputs on the order of Gb/s with an increase in the number of FFs used. The number of pipelining stages can be increased further to raise the throughput, at the cost of increased FF usage. The results in Table \[table:virtex6\] show that, as expected, one stage of pipelining doubles the throughput of the combinational decoder for every $N$. The error rate performance of combinational decoders is given in Fig. \[fig:floors\] for different block lengths and rates. The investigated code rates are commonly used in various wireless communication standards (*e.g.,* WiMAX, IEEE 802.11n). It is seen from Fig. \[fig:floors\] that the decoders can achieve very low error rates without any error floors. 
![FER performance of combinational decoders for different block lengths and rates[]{data-label="fig:floors"}](floors.eps){width="3.5in" height="3.5in"} Throughput Analysis for Hybrid-Logic Decoders {#sec:hybridresults} --------------------------------------------- As explained in Section \[sec:hybrid\], a combinational decoder can be combined with a synchronous decoder to increase its throughput by a factor $g(N,N')$ as in . In this section, we present analytical calculations for the throughput of a hybrid-logic decoder. We consider the semi-parallel architecture in [@asemiparallel] as the synchronous decoder part and use the implementation results given in that paper for the calculations. A semi-parallel SC decoder employs $P$ processing elements, each of which is capable of performing the operations and , and performs one of them in one clock cycle. The architecture is called semi-parallel since $P$ can be chosen smaller than the number of possible parallel calculations in the early stages of decoding. The latency of a semi-parallel architecture is given by $$\begin{aligned} \mathrm{L}_{\mathrm{SP}}\left(N,P\right)=2N+\frac{N}{P}\log _{2}\left(\frac{N}{4P}\right). \label{eqn:splatency}\end{aligned}$$ The minimum latency that can be obtained with the semi-parallel architecture by increasing hardware usage is $2N-2$, the latency of the conventional SC algorithm, attained when $P=N/2$. The throughput of a semi-parallel architecture is proportional to its maximum operating frequency divided by its latency. Therefore, using $N/2$ processing elements does not provide a significant multiplicative gain in the throughput of the decoder. We can approximately calculate the throughput of a hybrid-logic decoder with a semi-parallel synchronous part using the implementation results given in [@asemiparallel]. The implementations in [@asemiparallel] use a Stratix IV FPGA, whose technology is similar to that of the Virtex-6 FPGA used in this work. 
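The semi-parallel latency formula above can be checked against the figures quoted in Table \[table:latencygain1\]; the snippet below reproduces the $N=2^{10}$, $P=64$ semi-parallel throughput (the $173$ MHz clock frequency is the implementation figure from [@asemiparallel]):

```python
import math

def L_SP(N, P):
    """Semi-parallel SC decoder latency: 2N + (N/P) * log2(N/(4P))."""
    return 2*N + (N / P) * math.log2(N / (4 * P))

N, P, f = 1024, 64, 173e6
cycles = L_SP(N, P)       # 2080 clock cycles
tp = N * f / cycles       # coded throughput in b/s
print(round(tp / 1e6))    # 85 Mb/s, matching Table [table:latencygain1]

# Sanity check: P = N/2 recovers the conventional SC latency 2N - 2
assert L_SP(N, N // 2) == 2*N - 2
```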
Table \[table:latencygain1\] gives these calculations and a comparison with the performance of the semi-parallel decoder. [&gt; m[0.3in]{} |&gt; m[0.55in]{}|&gt; m[0.55in]{}|&gt; m[0.55in]{}|&gt; m[0.55in]{} |&gt; m[0.55in]{}|&gt; m[0.55in]{}|&gt; m[0.55in]{} |&gt; m[0.55in]{} |&gt; m[0.55in]{}]{} & &\ \[0.5ex\] & LUT & FF & RAM (bits) & TP \[Gb/s\] & LUT & FF & RAM (bits) & TP \[Gb/s\] & TP Gain\ \[0.5ex\] $2^{4}$ & 1479 & 169 & 112 & 1.05 & 777 & 424 & 208 & 2.34 & 2.23\ $2^{5}$ & 1918 & 206 & 224 & 0.88 & 2266 & 568 & 416 & 1.92 & 2.18\ $2^{6}$ & 5126 & 392 & 448 & 0.85 & 5724 & 1166 & 832 & 1.80 & 2.11\ $2^{7}$ & 14517 & 783 & 896 & 0.82 & 13882 & 2211 & 1664 & 1.62 & 1.97\ $2^{8}$ & 35152 & 1561 & 1792 & 0.75 & 31678 & 5144 & 3328 & 1.58 & 2.10\ $2^{9}$ & 77154 & 3090 & 3584 & 0.73 & 77948 & 9367 & 6656 & 1.49 & 2.04\ $2^{10}$ & 193456 & 6151 & 7168 & 0.60 & 190127 & 22928 & 13312 & 1.24 & 2.06\ \[table:virtex6\] Table \[table:latencygain1\] shows that the throughput of a hybrid-logic decoder is significantly better than that of a semi-parallel decoder. It is also seen that the multiplicative gain increases as the size of the combinational decoder increases. This increase depends on $P$, as $P$ determines the decoding stage after which the number of parallel calculations becomes smaller than the available hardware resources, causing the throughput bottleneck. It should be noted that the gain will be smaller for decoders that spend fewer clock cycles in the final stages of the decoding trellis, such as [@twophase] and [@lowlatencysc]. The same method can be used in ASIC to obtain a large increase in throughput. Hybrid-logic decoders are especially useful for decoding large codewords, for which the hardware usage is high for the combinational architecture and the latency is high for synchronous decoders. 
[&gt; m[0.25in]{}|&gt; m[0.3in]{}|&gt; m[0.3in]{}|&gt; m[0.3in]{}|&gt; m[0.25in]{}|&gt; m[0.3in]{}|&gt; m[0.35in]{}]{} & & f & TP$ _{\mathrm{SP}}$ & & & TP$ _{\mathrm{HLSP}}$\ & & \[Mhz\] & \[Mb/s\] & & & \[Mb/s\]\ \[0.5ex\] $2^{10}$ & 64 & 173 & 85 & $2^{4}$ & 5.90 & 501\ $2^{10}$ & 64 & 173 & 85 & $2^{5}$ & 6.50 & 552\ $2^{10}$ & 64 & 173 & 85 & $2^{6}$ & 7.22 & 613\ $2^{11}$ & 64 & 171 & 83 & $2^{4}$ & 5.70 & 473\ $2^{11}$ & 64 & 171 & 83 & $2^{5}$ & 6.23 & 517\ $2^{11}$ & 64 & 171 & 83 & $2^{6}$ & 7.27 & 603\ \[table:latencygain1\] Conclusion {#sec:conclusion} ========== In this paper, we proposed a combinational architecture for SC polar decoders with high throughput and low power consumption. The proposed combinational SC decoder operates at much lower clock frequencies compared to typical synchronous SC decoders and decodes a codeword in one long clock cycle. Due to the low operating frequency, the combinational decoder consumes less dynamic power, which reduces the overall power consumption. Post-synthesis results showed that the proposed combinational architectures are capable of providing a throughput of approximately $2.5$ Gb/s with a power consumption of $190$ mW for a $90$ nm $1.3$ V technology. These figures are independent of the SNR level at the decoder input. We gave analytical formulas for the complexity and delay of the proposed combinational decoders that verify the implementation results, and provided a detailed power analysis for the ASIC design. We also showed that one can add pipelining stages at any desired depth to this architecture in order to increase its throughput at the expense of increased power consumption and hardware complexity. We also proposed a hybrid-logic SC decoder architecture that combined the combinational SC decoder with a synchronous SC decoder so as to extend the range of applicability of the purely combinational design to larger block lengths. 
In the hybrid structure, the combinational part acts as an [*accelerator*]{} for the synchronous decoder in improving the throughput while keeping complexity under control. The conclusion we draw is that the proposed combinational SC decoders offer a fast, energy-efficient, and flexible alternative for implementing polar codes. Acknowledgment {#acknowledgment .unnumbered} ============== This work was supported by the FP7 Network of Excellence NEWCOM\# under grant agreement 318306. The authors acknowledge O. Arikan, A. Z. Alkar, and A. Atalar for useful discussions and support during the course of this work. The authors are also grateful to the reviewers for their constructive suggestions and comments. E. Arıkan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” *IEEE Trans. Inform. Theory*, vol. 55, no. 7, pp. 3051–3073, July 2009. E. Arıkan, “Polar codes: A pipelined implementation,” in Proc. Int. Symp. Broadband Communication (ISBC2010), Melaka, Malaysia, 2010. C. Leroux, I. Tal, A. Vardy, and W. J. Gross, “Hardware architectures for successive cancellation decoding of polar codes,” 2010. \[Online\]. Available: <http://arxiv.org/abs/1011.2919> A. Pamuk, “An FPGA implementation architecture for decoding of polar codes,” in Proc. 8th Int. Symp. Wireless Comm. (ISWCS), pp. 437–441, 2011. A. Mishra, A. Raymond, L. Amaru, G. Sarkis, C. Leroux, P. Meinerzhagen, A. Burg, and W. Gross, “A successive cancellation decoder ASIC for a 1024-bit polar code in 180 nm CMOS,” in *IEEE Asian Solid State Circuits Conf. (A-SSCC)*, Nov. 2012, pp. 205–208. Y. Fan and C.-Y. Tsui, “An efficient partial-sum network architecture for semi-parallel polar codes decoder implementation,” *IEEE Trans. Signal Process.*, vol. 62, no. 12, pp. 3165–3179, June 2014. E. Arikan, “A performance comparison of polar codes and Reed-Muller codes,” *IEEE Commun. Letters*, vol. 12, no. 6, pp. 
447–449, June 2008. B. Yuan and K. Parhi, “Architectures for polar BP decoders using folding,” in *IEEE Int. Symp. Circuits Syst. (ISCAS)*, June 2014, pp. 205–208. Y. S. Park, Y. Tao, S. Sun, and Z. Zhang, “A 4.68 Gb/s belief propagation polar decoder with bit-splitting register file,” in *Symp. VLSI Circuits Dig. of Tech. Papers*, June 2014, pp. 1–2. M. Plotkin, “Binary codes with specified minimum distance,” *IRE Trans. Inform. Theory*, vol. 6, no. 4, pp. 445–450, Sept. 1960. G. Schnabl and M. Bossert, “Soft-decision decoding of Reed-Muller codes as generalized multiple concatenated codes,” *IEEE Trans. Inform. Theory*, vol. 41, no. 1, pp. 304–308, Jan. 1995. I. Dumer and K. Shabunov, “Recursive decoding of Reed-Muller codes,” in *Proc. IEEE Int. Symp. Inform. Theory (ISIT)*, 2000, pp. 63–. A. Alamdar-Yazdi and F. Kschischang, “A simplified successive-cancellation decoder for polar codes,” *IEEE Commun. Letters*, vol. 15, no. 12, pp. 1378–1380, Dec. 2011. G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. Gross, “Fast polar decoders: algorithm and implementation,” *IEEE J. Sel. Areas Commun.*, vol. 32, no. 5, pp. 946–957, May 2014. I. Tal and A. Vardy, “List decoding of polar codes,” in *Proc. IEEE Int. Symp. Inform. Theory (ISIT)*, July 2011, pp. 1–5. I. Dumer and K. Shabunov, “Soft-decision decoding of Reed-Muller codes: recursive lists,” *IEEE Trans. Inform. Theory*, vol. 52, no. 3, pp. 1260–1266, Mar. 2006. B. Yuan and K. Parhi, “Low-latency successive-cancellation list decoders for polar codes with multibit decision,” *IEEE Trans. Very Large Scale Integration (VLSI) Syst.*, vol. 23, no. 10, pp. 2268–2280, Oct. 2015. C. Zhang and K. Parhi, “Low-latency sequential and overlapped architectures for successive cancellation polar decoder,” *IEEE Trans. Signal Process.*, vol. 61, no. 10, pp. 2429–2441, May 2013. P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, “Unrolled polar decoders, part I: hardware architectures,” 2015. \[Online\]. 
Available: <http://arxiv.org/abs/1505.01459> C. Zhang and K. Parhi, “Interleaved successive cancellation polar decoders,” in *Proc. IEEE Int. Symp. Circuits and Syst. (ISCAS)*, June 2014, pp. 401–404. C. Leroux, A. Raymond, G. Sarkis, and W. Gross, “A semi-parallel successive-cancellation decoder for polar codes,” *IEEE Trans. Signal Process.*, vol. 61, no. 2, pp. 289–299, Jan. 2013. A. Raymond and W. Gross, “A scalable successive-cancellation decoder for polar codes,” *IEEE Trans. Signal Process.*, vol. 62, no. 20, pp. 5339–5347, Oct. 2014. N. Weste and D. Harris, *Integrated Circuit Design*. Pearson, 2011. C.-C. Wong and H.-C. Chang, “Reconfigurable turbo decoder with parallel architecture for 3GPP LTE system,” *IEEE Trans. Circuits and Syst. II, Exp. Briefs*, vol. 57, no. 7, pp. 566–570, July 2010. A. Blanksby and C. Howland, “A 690-mW 1-Gb/s 1024-b, rate-1/2 low-density parity-check code decoder,” *IEEE J. Solid-State Circuits*, vol. 37, no. 3, pp. 404–412, Mar. 2002. S.-W. Yen, S.-Y. Hung, C.-L. Chen, H.-C. Chang, S.-J. Jou, and C.-Y. Lee, “A 5.79-Gb/s energy-efficient multirate LDPC codec chip for IEEE 802.15.3c applications,” *IEEE J. Solid-State Circuits*, vol. 47, no. 9, pp. 2246–2257, Sep. 2012. Y. S. Park, “Energy-efficient decoders of near-capacity channel codes,” Ph.D. dissertation, Univ. of Michigan, Ann Arbor, 2014. A. Pamuk and E. Arikan, “A two phase successive cancellation decoder architecture for polar codes,” in *Proc. IEEE Int. Symp. Inform. Theory (ISIT)*, July 2013, pp. 957–961. B. Yuan and K. Parhi, “Low-latency successive-cancellation polar decoder architectures using 2-bit decoding,” *IEEE Trans. Circuits Syst. I, Reg. Papers*, vol. 61, no. 4, pp. 1241–1254, Apr. 2014. [Onur Dizdar]{} (S’10) was born in Ankara, Turkey, in 1986. He received the B.S. and M.S. degrees in electrical and electronics engineering from the Middle East Technical University, Ankara, Turkey, in 2008 and 2011. 
He is currently a Ph.D. candidate in the Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey. He also works as a Senior Design Engineer in ASELSAN, Turkey. [Erdal Arikan]{} (S'84–M'79–SM'94–F'11) was born in Ankara, Turkey, in 1958. He received the B.S. degree from the California Institute of Technology, Pasadena, CA, in 1981, and the S.M. and Ph.D. degrees from the Massachusetts Institute of Technology, Cambridge, MA, in 1982 and 1985, respectively, all in Electrical Engineering. Since 1987 he has been with the Electrical-Electronics Engineering Department of Bilkent University, Ankara, Turkey, where he is a professor. He is the recipient of the [*2010 IEEE Information Theory Society Paper Award*]{} and the [*2013 IEEE W.R.G. Baker Award*]{}, both for his work on polar coding. [^1]: Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to pubs-permissions@ieee.org. [^2]: The authors are with the Department of Electrical-Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey (e-mail: , arikan@ee.bilkent.edu.tr).
--- abstract: 'Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.' author: - | Lucas Theis\ University of Tübingen\ 72076 Tübingen, Germany\ `lucas@bethgelab.org`\ Matthias Bethge\ University of Tübingen\ 72076 Tübingen, Germany\ `matthias@bethgelab.org`\ bibliography: - 'references.bib' title: Generative Image Modeling Using Spatial LSTMs --- Introduction ============ The last few years have seen tremendous progress in learning useful image representations [@Donahue:2014]. While early successes were often achieved through the use of generative models [e.g., @Hinton:2006; @Lee:2009; @Ranzato:2011], recent breakthroughs were mainly driven by improvements in supervised techniques [e.g., @Krizhevsky:2012; @Simonyan:2015]. Yet unsupervised learning has the potential to tap into the much larger source of unlabeled data, which may be important for training bigger systems capable of a more general scene understanding. For example, multimodal data is abundant but often unlabeled, yet can still greatly benefit unsupervised approaches [@Srivastava:2014]. Generative models provide a principled approach to unsupervised learning. 
A perfect model of natural images would be able to optimally predict parts of an image given other parts of an image and thereby clearly demonstrate a form of scene understanding. When extended by labels, the Bayesian framework can be used to perform semi-supervised learning in the generative model [@Ngiam:2011; @Kingma:2014b] while it is less clear how to combine other unsupervised approaches with discriminative learning. Generative image models are also useful in more traditional applications such as image reconstruction [@Roth:2009; @Zoran:2011; @Sohl-Dickstein:2015] or compression [@VanDenOord:2014b]. Recently there has been a renewed strong interest in the development of generative image models [e.g., @VanDenOord:2014b; @Kingma:2014a; @Uria:2014; @Gregor:2014; @Goodfellow:2014; @Ranzato:2014; @Gregor:2015; @Sohl-Dickstein:2015; @Li:2015; @Denton:2015]. Most of this work has tried to bring to bear the flexibility of deep neural networks on the problem of modeling the distribution of natural images. One challenge in this endeavor is to find the right balance between tractability and flexibility. The present article contributes to this line of research by introducing a fully tractable yet highly flexible image model. Our model combines multi-dimensional recurrent neural networks [@Graves:2009] with mixtures of experts. More specifically, the backbone of our model is formed by a spatial variant of *long short-term memory* (LSTM) [@Hochreiter:1997]. One-dimensional LSTMs have been particularly successful in modeling text and speech [e.g., @Sundermeyer:2010; @Sutskever:2014], but have also been used to model the progression of frames in video [@Srivastava:2014] and very recently to model single images [@Gregor:2015]. In contrast to earlier work on modeling images, here we use *multi-dimensional LSTMs* [@Graves:2009] which naturally lend themselves to the task of generative image modeling due to their spatial structure and ability to capture long-range correlations. 
To model the distribution of pixels conditioned on the hidden states of the neural network, we use *mixtures of conditional Gaussian scale mixtures* (MCGSMs) [@Theis:2012a]. This class of models can be viewed as a generalization of Gaussian mixture models, but their parametrization makes them much more suitable for natural images. By treating images as instances of a stationary stochastic process, this model allows us to sample and capture the correlations of arbitrarily large images. A recurrent model of natural images {#sec:ride} =================================== In the following, we first review and extend the MCGSM [@Theis:2012a] and multi-dimensional LSTMs [@Graves:2009] before explaining how to combine them into a recurrent image model. Section \[sec:experiments\] will demonstrate the validity of our approach by evaluating and comparing the model on a number of image datasets. Factorized mixtures of conditional Gaussian scale mixtures {#sec:mcgsm} ---------------------------------------------------------- One successful approach to building flexible yet tractable generative models has been to use fully-visible belief networks [@Neal:1992; @Larochelle:2011]. To apply such a model to images, we have to give the pixels an ordering and specify the distribution of each pixel conditioned on its parent pixels. Several parametrizations have been suggested for the conditional distributions in the context of natural images [@Domke:2008; @Hosseini:2010; @Theis:2012a; @Uria:2013; @Uria:2014]. We here review and extend the work of Theis et al. [@Theis:2012a] who proposed to use *mixtures of conditional Gaussian scale mixtures* (MCGSMs). Let $\mathbf{x}$ be a grayscale image patch and $x_{ij}$ be the intensity of the pixel at location $ij$. Further, let $\mathbf{x}_{<ij}$ designate the set of pixels $x_{mn}$ such that $m < i$ or $m = i$ and $n < j$ (Figure \[fig:rim\]A). 
Then $$\begin{aligned} \textstyle p(\mathbf{x}; \bm{\theta}) = \prod_{i,j} p(x_{ij} \mid \mathbf{x}_{<ij}; \bm{\theta}) \label{eq:factorization} \end{aligned}$$ holds for the distribution of any parametric model with parameters $\bm{\theta}$. Note that this factorization does not make any independence assumptions but is simply an application of the probability chain rule. Further note that the conditional distributions all share the same set of parameters. One way to improve the representational power of a model is thus to endow each conditional distribution with its own set of parameters, $$\begin{aligned} \textstyle p(\mathbf{x}; \left\{ \bm{\theta}_{ij} \right\}) = \prod_{i,j} p(x_{ij} \mid \mathbf{x}_{<ij}; \bm{\theta}_{ij}). \label{eq:factorization2} \end{aligned}$$ Applying this trick to mixtures of Gaussian scale mixtures (MoGSMs) yields the MCGSM [@Theis:2011]. Untying shared parameters can drastically increase the number of parameters. For images, this number can easily be reduced again by adding assumptions. For example, we can limit $\mathbf{x}_{<ij}$ to a smaller neighborhood surrounding the pixel by making a Markov assumption. We will refer to the resulting set of parents as the pixel’s *causal neighborhood* (Figure \[fig:rim\]B). Another reasonable assumption is stationarity or shift invariance, in which case we only have to learn one set of parameters $\bm{\theta}_{ij}$ which can then be used at every pixel location. Similar to convolutions in neural networks, this allows the model to easily scale to images of arbitrary size. While this assumption reintroduces parameter sharing constraints into the model, the constraints are different from the ones induced by the joint mixture model.
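The chain-rule factorization of Equation \[eq:factorization\] translates directly into code: any conditional model yields an image log-likelihood by summing over pixels in raster order. A minimal sketch, where the `log_cond` callback is our own placeholder for a conditional model:

```python
import numpy as np

def image_loglik(img, log_cond):
    """Chain-rule log-likelihood of a grayscale image (Eq. factorization).
    `log_cond(pixel, parents)` returns log p(x_ij | x_<ij) for any model;
    parents are all pixels in rows above plus pixels to the left in row i."""
    total = 0.0
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            # x_<ij: pixels with m < i, or m = i and n < j
            parents = np.concatenate([img[:i].ravel(), img[i, :j]])
            total += log_cond(img[i, j], parents)
    return total
```

No independence assumption is made here; exactness follows from the chain rule alone, which is what makes fully-visible belief networks tractable.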
The conditional distribution in an MCGSM takes the form of a mixture of experts, $$\begin{aligned} p(x_{ij} \mid \mathbf{x}_{<ij}, \bm{\theta}_{ij}) &= \sum_{c,s} \underbrace{p(c, s \mid \mathbf{x}_{<ij}, \bm{\theta}_{ij})}_\text{gate} \underbrace{p(x_{ij} \mid \mathbf{x}_{<ij}, c, s, \bm{\theta}_{ij})}_\text{expert}, \end{aligned}$$ where the sum is over mixture component indices $c$ corresponding to different covariances and scales $s$ corresponding to different variances. The gates and experts in an MCGSM are given by $$\begin{aligned} p(c, s \mid \mathbf{x}_{<ij}) &\propto \textstyle \exp\left( \eta_{cs} - \frac{1}{2} e^{\alpha_{cs}} \mathbf{x}_{<ij}^\top \mathbf{K}_c \mathbf{x}_{<ij} \right), \label{eq:gates} \\ p(x_{ij} \mid \mathbf{x}_{<ij}, c, s) &= \mathcal{N}(x_{ij}; \mathbf{a}_c^\top \mathbf{x}_{<ij}, e^{-\alpha_{cs}}), \label{eq:experts} \end{aligned}$$ where $\mathbf{K}_c$ is positive definite. The number of parameters of an MCGSM still grows quadratically with the dimensionality of the causal neighborhood. To further reduce the number of parameters, we introduce a factorized form of the MCGSM with additional parameter sharing by replacing $\mathbf{K}_c$ with $\sum_n \beta_{cn}^2 \mathbf{b}_n\mathbf{b}_n^\top$. This *factorized MCGSM* allows us to use larger neighborhoods and more mixture components. A detailed derivation of a more general version which also allows for multivariate pixels is given in Supplementary Section 1. Spatial long short-term memory ------------------------------ In the following we briefly describe the *spatial LSTM* (SLSTM), a special case of the multi-dimensional LSTM first described by Graves & Schmidhuber [@Graves:2009]. At the core of the model are memory units $\mathbf{c}_{ij}$ and hidden units $\mathbf{h}_{ij}$. 
For each location $ij$ on a two-dimensional grid, the operations performed by the spatial LSTM are given by\ $$\begin{aligned} \mathbf{c}_{ij} &= \mathbf{g}_{ij} \odot \mathbf{i}_{ij} + \mathbf{c}_{i,j - 1} \odot \mathbf{f}_{ij}^c + \mathbf{c}_{i - 1,j} \odot \mathbf{f}_{ij}^r, \\ \mathbf{h}_{ij} &= \tanh\left( \mathbf{c}_{ij} \odot \mathbf{o}_{ij} \right), \end{aligned}$$ $$\begin{aligned} \begin{pmatrix} \mathbf{g}_{ij} \\ \mathbf{o}_{ij} \\ \mathbf{i}_{ij} \\ \mathbf{f}_{ij}^r \\ \mathbf{f}_{ij}^c \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} T_{\mathbf{A},\mathbf{b}} \begin{pmatrix} \mathbf{x}_{<ij} \\ \mathbf{h}_{i,j - 1} \\ \mathbf{h}_{i - 1,j} \end{pmatrix}, \label{eq:slstm} \end{aligned}$$ \ where $\sigma$ is the logistic sigmoid function, $\odot$ indicates a pointwise product, and $T_{\mathbf{A},\mathbf{b}}$ is an affine transformation which depends on the only parameters of the network $\mathbf{A}$ and $\mathbf{b}$. The gating units $\mathbf{i}_{ij}$ and $\mathbf{o}_{ij}$ determine which memory units are affected by the inputs through $\mathbf{g}_{ij}$, and which memory states are written to the hidden units $\mathbf{h}_{ij}$. In contrast to a regular LSTM defined over time, each memory unit of a spatial LSTM has two preceding states $\mathbf{c}_{i,j-1}$ and $\mathbf{c}_{i-1,j}$ and two corresponding forget gates $\mathbf{f}_{ij}^c$ and $\mathbf{f}_{ij}^r$. Recurrent image density estimator --------------------------------- We use a grid of SLSTM units to sequentially read relatively small neighborhoods of pixels from the image, producing a hidden vector at every pixel. The hidden states are then fed into a factorized MCGSM to predict the state of the corresponding pixel, that is, $p(x_{ij} \mid \mathbf{x}_{<ij}) = p(x_{ij} \mid \mathbf{h}_{ij})$. 
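Each unit in this grid performs the update of Equation \[eq:slstm\]. In NumPy a single step can be sketched as follows; the function name and array shapes are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slstm_step(x, h_left, h_up, c_left, c_up, A, b):
    """One spatial-LSTM update at location ij (Eq. slstm).
    x: (D,) causal neighborhood; h_*/c_*: (H,) hidden/memory states of
    the two preceding locations; A: (5H, D + 2H), b: (5H,) define T_{A,b}."""
    z = A @ np.concatenate([x, h_left, h_up]) + b
    g, o, i, f_r, f_c = np.split(z, 5)
    # c_left is c_{i,j-1} (gate f^c); c_up is c_{i-1,j} (gate f^r)
    c = np.tanh(g) * sigmoid(i) + c_left * sigmoid(f_c) + c_up * sigmoid(f_r)
    h = np.tanh(c * sigmoid(o))
    return c, h
```

The two forget gates and two predecessor memory states are the only difference from a temporal LSTM cell; sweeping this step over the grid in raster order produces the hidden vectors fed into the MCGSM.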
Importantly, the state of the hidden vector only depends on pixels in $\mathbf{x}_{<ij}$ and does not violate the factorization given in Equation \[eq:factorization\]. Nevertheless, the recurrent network allows this *recurrent image density estimator* (RIDE) to use pixels of a much larger region for prediction, and to nonlinearly transform the pixels before applying the MCGSM. We can further increase the representational power of the model by stacking spatial LSTMs to obtain a deep yet still completely tractable recurrent image model (Figure \[fig:rim\]C). Related work ------------ Larochelle & Murray [@Larochelle:2011] derived a tractable density estimator (NADE) in a manner similar to how the MCGSM was derived [@Theis:2012a], but using restricted Boltzmann machines (RBM) instead of mixture models as a starting point. In contrast to the MCGSM, NADE tries to keep the weight sharing constraints induced by the RBM (Equation \[eq:factorization\]). Uria et al. extended NADE to real values [@Uria:2013] and introduced hidden layers to the model [@Uria:2014]. Gregor et al. [@Gregor:2014] describe a related autoregressive network for binary data which additionally allows for stochastic hidden units. Gregor et al. [@Gregor:2015] used one-dimensional LSTMs to generate images in a sequential manner (DRAW). Because the model was defined over Bernoulli variables, normalized RGB values had to be treated as probabilities, making a direct comparison with other image models difficult. In contrast to our model, the presence of stochastic latent variables in DRAW means that its likelihood cannot be evaluated but has to be approximated. Ranzato et al. [@Ranzato:2014] and Srivastava et al. [@Srivastava:2015] use one-dimensional recurrent neural networks to model videos, but recurrency is not used to describe the distribution over individual frames. Srivastava et al. [@Srivastava:2015] optimize a squared error corresponding to a Gaussian assumption, while Ranzato et al. 
[@Ranzato:2014] try to side-step having to model pixel intensities by quantizing image patches. In contrast, here we also try to solve the problem of modeling pixel intensities by using an MCGSM, which is equipped to model heavy-tailed as well as multi-modal distributions. Experiments {#sec:experiments} =========== RIDE was trained using stochastic gradient descent with a batch size of 50, momentum of 0.9, and a decreasing learning rate varying between 1 and $10^{-4}$. After each pass through the training set, the MCGSM of RIDE was finetuned using L-BFGS for up to 500 iterations before decreasing the learning rate. No regularization was used except for early stopping based on a validation set. Except where indicated otherwise, the recurrent model used a 5 pixel wide neighborhood and an MCGSM with 32 components and 32 quadratic features ($\mathbf{b}_n$ in Section \[sec:mcgsm\]). Spatial LSTMs were implemented using the Caffe framework [@jia:2014]. Where appropriate, we augmented the data by horizontal or vertical flipping of images. We found that conditionally whitening the data greatly sped up the training process of both models. Letting $\mathbf{y}$ represent a pixel and $\mathbf{x}$ its causal neighborhood, conditional whitening replaces these with $$\begin{aligned} \mathbf{\hat x} &= \textstyle\mathbf{C}_\mathbf{xx}^{-\frac{1}{2}} \left( \mathbf{x} - \mathbf{m}_\mathbf{x} \right), & \mathbf{\hat y} &= \textstyle \mathbf{W} (\mathbf{y} - \mathbf{C}_{\mathbf{yx}} \mathbf{C}_\mathbf{xx}^{-\frac{1}{2}} \mathbf{\hat x} - \mathbf{m}_\mathbf{y}), & \mathbf{W} &= (\mathbf{C}_\mathbf{yy} - \mathbf{C}_{\mathbf{yx}} \mathbf{C}_\mathbf{xx}^{-1} \mathbf{C}_{\mathbf{yx}}^\top)^{-\frac{1}{2}}, \end{aligned}$$ where $\mathbf{C}_{\mathbf{yx}}$ is the covariance of $\mathbf{y}$ and $\mathbf{x}$, and $\mathbf{m}_\mathbf{x}$ is the mean of $\mathbf{x}$. 
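The conditional whitening step can be sketched directly from these formulas; naming is our own, and covariances are estimated empirically from the data:

```python
import numpy as np

def inv_sqrtm(C):
    """Symmetric inverse matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * w ** -0.5) @ V.T

def conditional_whiten(Y, X):
    """Whiten neighborhoods X (N, Dx) and residual-whiten pixels Y (N, Dy).
    Returns whitened data and W, whose log-determinant compensates the
    change of variables when evaluating the conditional log-likelihood."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    N = len(X)
    Cxx, Cyx, Cyy = Xc.T @ Xc / N, Yc.T @ Xc / N, Yc.T @ Yc / N
    Cxx_is = inv_sqrtm(Cxx)
    W = inv_sqrtm(Cyy - Cyx @ np.linalg.solve(Cxx, Cyx.T))
    Xw = Xc @ Cxx_is                      # \hat{x}
    Yw = (Yc - Xw @ Cxx_is @ Cyx.T) @ W   # \hat{y}
    return Xw, Yw, W
```

After this transformation the neighborhoods have identity covariance and the pixels are decorrelated from them, which is what makes the learning rates less sensitive to the data.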
In addition to speeding up training, this variance normalization step helps to make the learning rates less dependent on the training data. When evaluating the conditional log-likelihood, we compensate for the change in variance by adding the log-Jacobian $\log |\det \mathbf{W}|$. Note that this preconditioning introduces a shortcut connection from the pixel neighborhood to the predicted pixel which is not shown in Figure \[fig:rim\]C. Ensembles --------- Uria et al. [@Uria:2014] found that forming ensembles of their autoregressive model over different pixel orderings significantly improved performance. We here consider a simple trick to produce an ensemble without the need for training different models or to change training procedures. If $\mathbf{T}_k$ are linear transformations leaving the targeted image distribution invariant (or approximately invariant) and if $p$ is the distribution of a pretrained model, then we form the ensemble $\frac{1}{K} \sum_k p(\mathbf{T}_k \mathbf{x}) |\det \mathbf{T}_k|$. Note that this is simply a mixture model over images $\mathbf{x}$. We considered rotating as well as flipping images along the horizontal and vertical axes (yielding an ensemble over 8 transformations). While it could be argued that most of these transformations do not leave the distribution over natural images invariant, we nevertheless observed a noticeable boost in performance. Natural images -------------- Several recent image models have been evaluated on small image patches sampled from the Berkeley segmentation dataset (BSDS300) [@Martin:2001]. Although our model’s strength lies in its ability to scale to large images and to capture long-range correlations, we include results on BSDS300 to make a connection to this part of the literature. We followed the protocol of Uria et al. [@Uria:2013]. The RGB images were turned to grayscale, uniform noise was added to account for the integer discretization, and the resulting values were divided by 256. 
The training set of 200 images was split into 180 images for training and 20 images for validation, while the test set contained 100 images. We extracted 8 by 8 image patches from each set and subtracted the average pixel intensity such that each patch’s DC component was zero. Because the resulting image patches live on a 63 dimensional subspace, the bottom-right pixel was discarded. We used $1.6 \cdot 10^6$ patches for training, $1.8 \cdot 10^5$ patches for validation, and $10^6$ test patches for evaluation. MCGSMs have not been evaluated on this dataset and so we first tested MCGSMs by training a single factorized MCGSM for each pixel conditioned on all previous pixels in a fixed ordering. We find that already an MCGSM (with 128 components and 48 quadratic features) outperforms all single models including a deep Gaussian mixture model [@VanDenOord:2014a] (Table \[tbl:bsds300\]). Our ensemble of MCGSMs[^1] outperforms an ensemble of RNADEs with 6 hidden layers, which to our knowledge is currently the best result reported on this dataset. Training the recurrent image density estimator (RIDE) on the 63 dimensional dataset is more cumbersome. We tried padding image patches with zeros, which was necessary to be able to compute a hidden state at every pixel. The bottom-right pixel was ignored during training and evaluation. This simple approach led to a reduction in performance relative to the MCGSM (Table \[tbl:bsds300\]). A possible explanation is that the model cannot distinguish between pixel intensities which are zero and zeros in the padded region. Supplying the model with additional binary indicators as inputs (one for each neighborhood pixel) did not solve the problem. However, we found that RIDE outperforms the MCGSM by a large margin when images were treated as instances of a stochastic process (that is, using infinitely large images). 
MCGSMs were trained for up to 3000 iterations of L-BFGS on $10^6$ pixels and corresponding causal neighborhoods extracted from the training images. Causal neighborhoods were 9 pixels wide and 5 pixels high. RIDE was trained for 8 epochs on image patches of increasing size ranging from 8 by 8 to 22 by 22 pixels (that is, gradients were approximated as in backpropagation through time [@Robinson:1987]). The right column in Table \[tbl:bsds300\] shows average log-likelihood rates for both models. Analogously to the entropy rate [@Cover:2006], we have for the expected log-likelihood rate: $$\begin{aligned} \lim_{N \rightarrow \infty} \mathbb{E}\left[ \log p(\mathbf{x})/N^2 \right] &= \mathbb{E}[\log p(x_{ij} \mid \mathbf{x}_{<ij})], \end{aligned}$$ where $\mathbf{x}$ is an $N$ by $N$ image patch. An average log-likelihood rate can be directly computed for the MCGSM, while for RIDE and ensembles we approximated it by splitting the test images into 64 by 64 patches and evaluating on those. To make the two sets of numbers more comparable, we transformed nats as commonly reported on the 63 dimensional data, $\ell_{1:63}$, into a bit per pixel log-likelihood rate using the formula $(\ell_{1:63} + \ell_{DC} + \ln |\det \mathbf{A}|) / 64 / \ln(2)$. This takes into account a log-likelihood for the missing DC component, $\ell_{DC} = 0.5020$, and the Jacobian of the transformations applied during preprocessing, $\ln |\det \mathbf{A}| = -4.1589$ (see Supplementary Section 2.2 for details). The two rates in Table \[tbl:bsds300\] are comparable in the sense that their differences express how much better one model would be at losslessly compressing BSDS300 test images than another, where patch-based models would compress patches of an image independently. We highlighted the best result achieved with each model in gray. 
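The nats-to-bits conversion used above amounts to a one-line helper (a sketch; the default constants are the values quoted in the text):

```python
import math

def patch_nats_to_bit_rate(ll_nats, ll_dc=0.5020, log_det_A=-4.1589, n_pixels=64):
    """Convert a 63-dim BSDS300 patch log-likelihood (in nats) into a
    bit-per-pixel rate, adding back the DC component's log-likelihood
    and the Jacobian of the preprocessing transformation."""
    return (ll_nats + ll_dc + log_det_A) / n_pixels / math.log(2)
```

Differences between such rates across models directly express bits saved per pixel under lossless compression.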
Note that most models in this list do not scale as well to large images as the MCGSM or RIDE (GMMs in particular) and are therefore unlikely to benefit as much from increasing the patch size. A comparison of the log-likelihood rates reveals that an MCGSM with 16 components applied to large images already captures more correlations than any model applied to small image patches. The difference is particularly striking given that the factorized MCGSM has approximately 3,000 parameters while a GMM with 200 components has approximately 400,000 parameters. Using an ensemble of RIDEs, we are able to further improve this number significantly (Table \[tbl:bsds300\]). Another dataset frequently used to test generative image models is the dataset published by van Hateren and van der Schaaf [@vanHateren:1998]. Details of the preprocessing used in this paper are given in Supplementary Section 3. We reevaluated several models for which the likelihood has been reported on this dataset [@Gerhard:2015; @Theis:2011; @Theis:2012a; @Theis:2012b]. Likelihood rates as well as results on 16 by 16 patches are given in Table \[tbl:vanhateren\]. Because of the larger patch size, RIDE here already outperforms the MCGSM on patches. Dead leaves ----------- Dead leaf images are generated by superimposing disks of random intensity and size on top of each other [@Matheron:1968; @Lee:2001]. This simple procedure leads to images which already share many of the statistical properties and challenges of natural images, such as occlusions and long-range correlations, while leaving out others such as non-stationary statistics. They therefore provide an interesting test case for natural image models. We used a set of 1,000 images, where each image is 256 by 256 pixels in size. We compare the performance of RIDE to the MCGSM and a very recently introduced deep multiscale model based on a diffusion process [@Sohl-Dickstein:2015]. 
The same 100 images as in previous literature [@Theis:2012a; @Sohl-Dickstein:2015] were used for evaluation and we used the remaining images for training. We find that the introduction of an SLSTM with 64 hidden units greatly improves the performance of the MCGSM. We also tried an extended version of the SLSTM which included memory units as additional inputs (right-hand side of Equation \[eq:slstm\]). This yielded a small improvement in performance (5th row in Table \[tbl:deadleaves\]) while adding layers or using more hidden units led to more drastic improvements. Using 3 layers with 128 hidden units in each layer, we find that our recurrent image model is on a par with the deep diffusion model. By using ensembles, we are able to beat all previously published results for this dataset (Table \[tbl:deadleaves\]). Figure \[fig:nb\_size\] shows that the improved performance of RIDE is not simply due to an effectively larger causal neighborhood but that the nonlinear transformations performed by the SLSTM units matter. Simply increasing the neighborhood size of an MCGSM does not yield the same improvement. Instead, the performance quickly saturates. We also find that the performance of RIDE slightly deteriorates with larger neighborhoods, which is likely caused by optimization difficulties. Texture synthesis and inpainting -------------------------------- To get an intuition for the kinds of correlations which RIDE can capture or fails to capture, we tried to use it to synthesize textures. We used several 640 by 640 pixel textures published by Brodatz [@Brodatz:1966]. The textures were split into sixteen 160 by 160 pixel regions of which 15 were used for training and one randomly selected region was kept for testing purposes. RIDE was trained for up to 6 epochs on patches of increasing size ranging from 20 by 20 to 40 by 40 pixels. Samples generated by an MCGSM and RIDE are shown in Figure \[fig:textures\]. 
Both models are able to capture a wide range of correlation structures. However, the MCGSM seems to struggle with textures having bimodal marginal distributions and periodic patterns (D104, D34, and D110). RIDE clearly improves on these textures, although it also struggles to faithfully reproduce periodic structure. Possible explanations include that LSTMs are not well suited to capture periodicities, or that these failures are not penalized strong enough by the likelihood. For some textures, RIDE produces samples which are nearly indistinguishable from the real textures (D106 and D110). One application of generative image models is inpainting [e.g., @Roth:2009; @Heess:2009; @Sohl-Dickstein:2015]. As a proof of concept, we used our model to inpaint a large (here, 71 by 71 pixels) region in textures (Figure \[fig:inpainting\]). Missing pixels were replaced by sampling from the posterior of RIDE. Unlike the joint distribution, the posterior distribution cannot be sampled directly and we had to resort to Markov chain Monte Carlo methods. We found the following *Metropolis within Gibbs* [@Tierney:1994] procedure to be efficient enough. The missing pixels were initialized via ancestral sampling. Since ancestral sampling is cheap, we generated 5 candidates and used the one with the largest posterior density. Following initialization, we sequentially updated overlapping 5 by 5 pixel regions via Metropolis sampling. Proposals were generated via ancestral sampling and accepted using the acceptance probability $$\begin{aligned} \textstyle\alpha = \min\left\{ 1, \frac{p(\mathbf{x}')}{p(\mathbf{x})} \frac{p(\mathbf{x}_{ij} \mid \mathbf{x}_{<ij})}{p(\mathbf{x}_{ij}' \mid \mathbf{x}_{<ij})} \right\}, \end{aligned}$$ where here $\mathbf{x}_{ij}$ represents a 5 by 5 pixel patch and $\mathbf{x}_{ij}'$ its proposed replacement. 
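The acceptance rule above is that of an independence Metropolis step whose proposal is the model's own ancestral conditional. A generic sketch, where `log_p` and the proposal are our stand-ins for RIDE's densities:

```python
import numpy as np

def independence_mh_step(x, log_p, propose, log_q, rng):
    """One independence Metropolis update: draw x' from proposal q and
    accept with probability min{1, p(x') q(x) / (p(x) q(x'))} -- the rule
    in the text, with q(x) = p(x_ij | x_<ij) from ancestral sampling."""
    x_new = propose()
    log_alpha = (log_p(x_new) - log_p(x)) + (log_q(x) - log_q(x_new))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return x_new
    return x
```

Since the ancestral proposal conditions only on causal pixels, the correction factor in the acceptance ratio is exactly what accounts for the pixels below and to the right of the updated patch.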
Since evaluating the joint and conditional densities on the entire image is costly, we approximated $p$ using RIDE applied to a 19 by 19 pixel patch surrounding $ij$. Randomly flipping images vertically or horizontally between sampling sweeps further helped. Figure \[fig:inpainting\] shows results after 100 Gibbs sampling sweeps. Conclusion ========== We have introduced RIDE, a deep but tractable recurrent image model based on spatial LSTMs. The model exemplifies how recent insights in deep learning can be exploited for generative image modeling and shows superior performance in quantitative comparisons. RIDE is able to capture many different statistical patterns, as demonstrated through its application to textures. This is an important property considering that on an intermediate level of abstraction natural images can be viewed as collections of textures. We have furthermore introduced a factorized version of the MCGSM which allowed us to use more experts and larger causal neighborhoods. This model has few parameters, is easy to train, and already performs very well on its own as an image model. It is therefore an ideal building block and may be used to extend other models such as DRAW [@Gregor:2015] or video models [@Ranzato:2014; @Srivastava:2015]. Deep generative image models have come a long way since deep belief networks were first applied to natural images [@Osindero:2008]. Unlike convolutional neural networks in object recognition, however, no approach has as yet proven to be a likely solution to the problem of generative image modeling. Further conceptual work will be necessary to come up with a model which can handle both the more abstract high-level as well as the low-level statistics of natural images. ### Acknowledgments {#acknowledgments .unnumbered} The authors would like to thank Aäron van den Oord for insightful discussions and Wieland Brendel, Christian Behrens, and Matthias Kümmerer for helpful input on this paper.
This study was financially supported by the German Research Foundation (DFG; priority program 1527, BE 3848/2-1). [^1]: Details on how the ensemble of transformations can be applied despite the missing bottom-right pixel are given in Supplementary Section 2.1.
--- abstract: | The Landau problem in noncommutative quantum mechanics (NCQM) is studied. First, by solving the Schrödinger equation on noncommutative (NC) space, we obtain the Landau energy levels and the energy correction caused by space-space noncommutativity. Then we discuss the noncommutative phase space case, namely the case in which both space-space and momentum-momentum noncommutativity are present, and obtain the explicit expression of the Hamiltonian as well as the corresponding eigenfunctions and eigenvalues. PACS number(s): 11.10.Nx, 03.65.-w --- [**Landau Problem in Noncommutative Quantum Mechanics**]{} Sayipjamal Dulat$^{a,c,}$[^1] and Kang Li$^{b,c,}$[^2]\ [*$^a$ School of Physics Science and Technology, Xinjiang University, Urumqi, 830046, China\ $^b$ Department of Physics, Hangzhou Teachers College, Hangzhou, 310036, China\ $^c$ The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy*]{} Introduction ============ Recently, there has been much interest in the study of physics on noncommutative (NC) space [@SW]-[@Scho], not only because NC space is necessary when one studies the low energy effective theory of D-branes in a B-field background, but also because at the string scale, or at very high energies, the effects of noncommutativity of both space-space and momentum-momentum may appear. Many papers have been devoted to the study of various aspects of quantum mechanics on noncommutative space with the usual (commutative) time coordinate. On NC space the coordinate and momentum operators satisfy the following commutation relations $$\label{eq1} ~[\hat{x}_{i},\hat{x}_{j}]=i\Theta_{ij},~~~ [\hat{p}_{i},\hat{p}_{j}]=0,~~~[\hat{x}_{i},\hat{p}_{j}]=i \hbar\delta_{ij},$$ where $\hat{x}_i$ and $\hat{p}_i$ are the coordinate and momentum operators on NC space.
Refs. [@zhang; @Likang] showed that $\hat{p}_i=p_i$, while $\hat{x}_i$ has the representation $$\label{eq2} \hat{x}_{i}= x_{i}-\frac{1}{2\hbar}\Theta_{ij}p_{j}, \hspace{1cm} i,j = 1,2,...,n.$$ The case in which both space-space and momentum-momentum are noncommuting [@zhang; @Likang] differs from the case of only space-space noncommuting. Thus on noncommutative (NC) phase space the momentum operators in Eq. (\[eq1\]) instead satisfy the commutation relations $$\label{eq3} [\hat{p}_{i},\hat{p}_{j}]=i\bar{\Theta}_{ij},\hspace{2cm} i,j = 1,2,...,n.$$ Here $\{\Theta_{ij}\}$ and $\{\bar{\Theta}_{ij}\}$ are totally antisymmetric matrices which represent the noncommutativity of the coordinates and momenta on noncommutative space and phase space, respectively, and play a role analogous to that of $\hbar$ in the usual quantum mechanics. On NC phase space the representations of $\hat{x}$ and $\hat{p}$ in terms of $x$ and $p$ are given in Ref. [@Likang] as follows $$\label{eq4} \begin{array}{ll} \hat{x}_{i}&= \alpha x_{i}-\frac{1}{2\hbar\alpha}\Theta_{ij}p_{j},\\ ~&~\\ \hat{p}_{i}&=\alpha p_{i}+\frac{1}{2\hbar\alpha}\bar{\Theta}_{ij}x_{j}, \hspace{1cm} i,j = 1,2,...,n. \end{array}$$ Here $\alpha$ is a scaling constant related to the noncommutativity of phase space. When $\bar{\Theta}=0$, this forces $\alpha =1$ [@Likang], and the NC phase space reduces to the NC space extensively studied in the literature, where space-space coordinates are noncommuting while momenta commute. Given NC space or NC phase space, one should study their physical consequences, and the most natural places to search for noncommutativity effects are simple quantum mechanics (QM) systems.
So far many interesting topics in NCQM, such as the hydrogen atom spectrum in an external magnetic field [@nair; @CST2], the Aharonov-Bohm (AB) effect [@CST1] in the presence of a magnetic field, the Aharonov-Casher effect [@MZ], the Landau problem [@landau], as well as the Van der Waals interactions and the photoelectric effect in noncommutative quantum mechanics [@chamoun], have been studied extensively. The purpose of this paper is to study the Landau problem on NC space and on NC phase space, where both space-space and momentum-momentum noncommutativity could give additional contributions. This paper is organized as follows: In Section 2, we study the Landau problem on NC space. By solving the Schrödinger equation in the presence of a magnetic field we obtain all the energy levels. In Section 3, we investigate the Landau problem on NC phase space. By solving the Schrödinger equation in the presence of a magnetic field, the additional terms related to momentum-momentum noncommutativity are obtained explicitly. Conclusions are given in Section 4. The Landau problem on NC space =============================== In this section we consider the two dimensional Landau problem in the symmetric gauge on noncommutative space. Let us consider a charged particle, with electric charge $q$ and mass $\mu$, moving in two dimensions (say the $x$-$y$ plane), in a uniform magnetic field $B$ perpendicular to the plane (along the $z$ direction).
The magnetic vector potential has the form $$A_x=-\frac{1}{2}By,~~~A_y=\frac{1}{2}Bx,~~~A_z=0,$$ and the Hamiltonian of the system then reads $$\begin{aligned} \label{H-classical} H&=&\frac{1}{2\mu}[\big(p_x+\frac{qB}{2c}y\big)^2+\big(p_y-\frac{qB}{2c}x\big)^2+p_z^2]\nonumber\\ ~&=&\frac{1}{2\mu}(p_x^2+p_y^2)+\frac{1}{2}\mu\omega_L^2 (x^2+y^2)-\omega_L l_z +\frac{1}{2\mu}p_z^2,\end{aligned}$$ where $\omega_L=\frac{qB}{2\mu c}$, and $l_z = xp_y - yp_x$ is the $z$ component of the orbital angular momentum. The static Schrödinger equation on NC space is usually written as $$\label{eq6} H(x,p)\ast\psi = E\psi,$$ where the Moyal-Weyl (or star) product between two functions is defined by $$\label{eq7} (f \ast g)(x) = e^{ \frac{i}{2} \Theta_{ij} \partial^{x}_{i} \partial^{y}_{j} }f(x)g(y)\Big|_{y=x} = f(x)g(x) + \frac{i}{2}\Theta_{ij} \partial_i f(x)\, \partial_j g(x) + \mathcal{O}(\Theta^2),$$ where $f(x)$ and $g(x)$ are two arbitrary functions. On NC space the star product can be replaced by a Bopp shift [@CFZ], i.e. the star product can be changed into the ordinary product by replacing $H(x,p)$ with $H(\hat{x},p)$.
Thus the Schrödinger equation (\[eq6\]) can be written as $$\label{eq8} H(\hat{x}_i,p_i)\psi=H( x_{i}-\frac{1}{2\hbar}\Theta_{ij}p_{j},p_{i})\psi = E\psi.$$ In our case, $H(\hat x,p)$ has the following form $$\begin{aligned} \label{eq9} H(\hat x,p)&=&\frac{1}{2\mu}[\big(p_x+\frac{qB}{2c}\hat{y}\big)^2+\big(p_y-\frac{qB}{2c}\hat{x}\big)^2+p_z^2]\nonumber\\ ~&=&\frac{1}{2\mu}\{[(1+\frac{qB}{4\hbar c}\theta)p_x+\frac{qB}{2c}y]^2+[(1+\frac{qB}{4\hbar c}\theta)p_y-\frac{qB}{2c}x]^2 +p_z^2\}\nonumber\\ ~&=&\frac{1}{2\tilde{\mu}}(p_x^2+p_y^2)+\frac{1}{2}\tilde{\mu} \tilde{\omega}_L^2 (x^2+y^2)-\tilde{\omega}_L l_z +\frac{1}{2\mu}p_z^2\nonumber\\ ~&=& H_{xy}+ H_{l_z} + H_\parallel\;,\end{aligned}$$ where $$H_{xy} =\frac{1}{2\tilde{\mu}}(p_x^2+p_y^2)+\frac{1}{2}\tilde{\mu} \tilde{\omega}_L^2 (x^2+y^2), \hspace{0.5cm}H_{l_z}=-\tilde{\omega}_L l_z, \hspace{0.5cm}H_\parallel=\frac{1}{2\mu}p_z^2\;,$$ $$\tilde{\mu}=\frac{\mu} {(1+\frac{q B}{4\hbar c}\theta)^2}\;\;, \hspace{2cm} \tilde{\omega}_L=\frac{q B}{2\tilde{\mu } c (1+\frac{q B}{4\hbar c}\theta)},$$ and $H_{xy}$ is the Hamiltonian of a two dimensional harmonic oscillator with mass $\tilde{\mu}$ and angular frequency $\tilde{\omega}_L$. We now look for a basis of eigenvectors common to $H_{xy}$ (eigenvalues $E_{xy}$), $H_{l_z}$ (eigenvalues $E_{l_z}$), and $H_{\parallel}$ (eigenvalues $E_{\parallel}$). It is easy to show that $H_{xy}$, $H_{l_z}$, and $H_{\parallel}$ commute with each other. Therefore the common eigenvectors of $\{H_{xy}, H_{l_z}, H_{\parallel}\}$ will automatically be eigenvectors of $H$ with eigenvalues $$E = E_{xy} + E_{l_z} + E_{\parallel}.$$ The eigenvectors $\psi_k(z) \sim e^{ik z}$ of the momentum operator $p_z$ are also eigenvectors of $H_{\parallel}$.
Thus the eigenvalues of $H_{\parallel}$ are of the form $$E_\parallel = \frac{\hbar^2 k^2}{2\mu}, \hspace{1cm} -\infty < k<+\infty.$$ We see that the spectrum of $H_{\parallel}$ is continuous: the energy $E_\parallel$ can take any positive value or zero. This means that $H_{\parallel}$ describes the kinetic energy of a free particle moving along the $z$ axis (the direction of the magnetic field). The eigenfunctions $\psi_m (\varphi)\sim e^{im\varphi}$, $m = 0,\pm 1, \pm 2, ...$ of $l_z$ are also eigenfunctions of $H_{l_z}$. Therefore the eigenvalues of $H_{l_z}$ are $$E_{l_z} = - m \hbar \tilde \omega_L\;.$$ We shall now concentrate on solving the eigenvalue equation of $H_{xy}$, the two-dimensional harmonic oscillator; note that the wave functions considered here depend on $x$ and $y$, and not on $z$. The solution to Eq. (\[eq8\]) can be written as a product of the radial solution of a static harmonic oscillator with phase factors carrying the linear momentum and the orbital angular momentum, $$\begin{aligned} \label{eq10} \psi_{n_\rho m k}(\rho,\varphi,z) = R(\rho) e^{i m\varphi}e^{i k z},\hspace{1cm} m = 0,\pm 1, \pm 2, ...,\hspace{1cm} -\infty < k<+\infty.\end{aligned}$$ Inserting Eq.(\[eq10\]) into Eq.
(\[eq8\]), and using cylindrical coordinates, we obtain the following radial equation of the two dimensional isotropic harmonic oscillator $$\label{eq11} \Big[-\frac{\hbar^2}{2\tilde{\mu}}\Big(\frac{\partial^2}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial}{\partial\rho} - \frac{m^2}{\rho^2}\Big) + \frac{1}{2}\tilde{\mu}\tilde{\omega}^2_L \rho^2\Big]R(\rho) = E_{xy} R(\rho)\;.$$ Solving Eq.(\[eq11\]), the eigenvalues of the Hamiltonian $H_{xy}$ are $$\label{eq12} E_{xy}= (N + 1)\hbar \tilde\omega_L,$$ with $ N=2n_\rho +|m|$, $n_\rho=0,1,2,\cdots$; and the corresponding eigenfunctions are $$\label{eq13} R(\rho) = \rho^{|m|} F(-n_\rho, |m| + 1,\zeta^2\rho^2) e^{-{\zeta^2\rho^2}/2},\hspace{1cm} \zeta^2 =\frac{\tilde{\mu}\tilde\omega_L}{\hbar}.$$ Therefore the energy eigenfunctions are $$\label{eq14} \psi_{n_\rho m k}(\rho,\varphi,z) =\rho^{|m|} F(-n_\rho, |m| + 1,\zeta^2\rho^2) e^{-{\zeta^2\rho^2}/2} e^{i m\varphi + i k z}\;.$$ The eigenvalues of the total Hamiltonian $H$ are of the form $$\label{eq15} E = (N+1)\hbar\tilde{\omega}_L - m\hbar \tilde{\omega}_L + \frac{\hbar^2 k^2}{2\mu}.$$ The corresponding levels are called Landau levels. Obviously, when $\theta =0$, $\tilde{\mu} \rightarrow \mu$ and $\tilde{\omega}_L\rightarrow\omega_L$, and our results reduce to the space-space commuting case. The Landau problem on NC phase space ===================================== Bose-Einstein statistics in NCQM requires both space-space and momentum-momentum noncommutativity. Thus we should also consider momentum-momentum noncommutativity when we deal with physical problems. The star product in Eq.
(\[eq7\]) on NC phase space is now defined as $$\begin{aligned} (f \ast g)(x,p) &=& e^{ \frac{i}{2\alpha^2} \Theta_{ij} \partial_i^x \partial_j^{x'}+\frac{i}{2\alpha^2}\bar{\Theta}_{ij} \partial_i^p \partial_j^{p'}} f(x,p)g(x',p')\Big|_{x'=x,\,p'=p} \nonumber\\ &=& f(x,p)g(x,p) + \frac{i}{2\alpha^2}\Theta_{ij} \partial_i^x f\, \partial_j^x g + \frac{i}{2\alpha^2}\bar{\Theta}_{ij} \partial_i^p f\, \partial_j^p g + \cdots,\end{aligned}$$ which can be replaced by a generalized Bopp shift $x_i\rightarrow \hat{x}_i, p_i\rightarrow\hat{p}_i$ with $\hat{x}_i$ and $\hat{p}_i$ defined in Eq.(\[eq4\]). Thus on noncommutative phase space the Schrödinger equation (\[eq8\]) can be written as $$\label{eq18} H(\hat{x}_i,\hat{p}_i)\psi=H( \alpha x_{i}-\frac{1}{2\hbar\alpha}\Theta_{ij}p_{j},\alpha p_{i}+\frac{1}{2\hbar\alpha}\bar{\Theta}_{ij}x_{j})\psi = E\psi.$$ In two dimensions we have $$\begin{aligned} \label{eq19} \hat{x}=\alpha x -\frac{\theta}{2\hbar\alpha}p_y,~~~\hat{y}=\alpha y +\frac{\theta}{2\hbar\alpha}p_x,\nonumber\\ \hat{p}_x=\alpha p_x +\frac{\bar{\theta}}{2\hbar\alpha}y,~~~\hat{p}_y=\alpha p_y -\frac{\bar{\theta}}{2\hbar\alpha}x.\end{aligned}$$ The three parameters $\theta, \bar{\theta}$ and $\alpha$ represent the noncommutativity of the phase space; they are related by $$\label{constrain} \bar{\theta}=4\hbar^2\alpha^2 (1-\alpha^2)/\theta,$$ so only two of them are free in the theory, and they may depend on the space and energy scales.
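The constraint (\[constrain\]) can be checked symbolically (our own sketch, not part of the original text): demanding that the representation (\[eq19\]) give $[\hat{x},\hat{y}]=i\theta$ and $[\hat{p}_x,\hat{p}_y]=i\bar{\theta}$ while preserving $[\hat{x},\hat{p}_x]=i\hbar$ fixes $\bar\theta$ in terms of $\theta$ and $\alpha$:

```python
import sympy as sp

hbar, alpha, theta, tbar = sp.symbols('hbar alpha theta thetabar', positive=True)

# canonical commutators of the commuting-space operators x, y, px, py
canon = {('x', 'px'): sp.I*hbar, ('y', 'py'): sp.I*hbar}

def comm_basis(a, b):
    # [a, b] for basis operators; antisymmetric, zero if not listed
    return canon.get((a, b), -canon.get((b, a), 0))

def comm(A, B):
    """Commutator of operators written as {basis symbol: coefficient} dicts."""
    return sp.expand(sum(ca*cb*comm_basis(a, b)
                         for a, ca in A.items() for b, cb in B.items()))

# generalized Bopp shift, Eq. (19)
X  = {'x': alpha, 'py': -theta/(2*hbar*alpha)}
Y  = {'y': alpha, 'px':  theta/(2*hbar*alpha)}
Px = {'px': alpha, 'y':  tbar/(2*hbar*alpha)}
Py = {'py': alpha, 'x': -tbar/(2*hbar*alpha)}

assert comm(X, Y) == sp.I*theta     # [x^, y^] = i*theta
assert comm(Px, Py) == sp.I*tbar    # [px^, py^] = i*thetabar
# [x^, px^] = i*hbar forces the constraint thetabar = 4 hbar^2 alpha^2 (1-alpha^2)/theta
sol = sp.solve(sp.Eq(comm(X, Px), sp.I*hbar), tbar)[0]
assert sp.simplify(sol - 4*hbar**2*alpha**2*(1 - alpha**2)/theta) == 0
```

In particular the solver reproduces why $\bar\theta=0$ forces $\alpha=1$, the pure NC-space limit mentioned in the introduction.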
The Hamiltonian of the two dimensional Landau problem on noncommutative phase space in the symmetric gauge is $$\begin{aligned} \label{eq20} H(\hat x,\hat{p})&=&\frac{1}{2\mu}[\big(\hat{p}_x+\frac{qB}{2c}\hat{y}\big)^2+\big(\hat{p}_y-\frac{qB}{2c}\hat{x}\big)^2+p_z^2]\nonumber\\ ~&=&\frac{1}{2\mu}\{[(\alpha+\frac{qB}{4\hbar\alpha c}\theta)p_x+(\frac{qB}{2c}\alpha+\frac{\bar{\theta}}{2\hbar\alpha})y]^2+ [(\alpha+\frac{qB}{4\hbar\alpha c}\theta)p_y-(\frac{qB}{2c}\alpha+\frac{\bar{\theta}}{2\hbar\alpha})x]^2 +p_z^2\}\nonumber\\ ~&=&\frac{1}{2\tilde{\mu}'}(p_x^2+p_y^2)+\frac{1}{2}\tilde{\mu}'\tilde{\omega'}_L^2 (x^2+y^2)-\tilde{\omega'}_L l_z +\frac{1}{2\mu}p_z^2\nonumber\\ ~&=& H'_{xy}-\tilde{\omega'}_L l_z +\frac{1}{2\mu}p_z^2,\end{aligned}$$ where $$\tilde{\mu}'=\frac{\mu} {(\alpha+\frac{qB}{4\hbar\alpha c}\theta)^2} \;\;,\hspace{2cm} \tilde{\omega'}_L=\frac{\frac{qB}{c}\alpha+\frac{\bar{\theta}}{\hbar\alpha}}{2\tilde{\mu}' (\alpha+\frac{qB}{4\hbar\alpha c}\theta)},$$ and $H'_{xy}$ is the Hamiltonian of a two dimensional harmonic oscillator with mass $\tilde{\mu}'$ and angular frequency $\tilde{\omega'}_L$. In the same way as on NC space, the solution to Eq.(\[eq18\]) can be written as a product of the radial solution of a static harmonic oscillator with phase factors carrying the linear momentum and the orbital angular momentum, $$\begin{aligned} \label{eq22} \psi_{n_\rho m k}(\rho,\varphi,z) = R(\rho) e^{i m\varphi}e^{i k z},\hspace{1cm} m = 0,\pm 1, \pm 2, ...,\hspace{1cm} -\infty < k<+\infty.\end{aligned}$$ The eigenvalues of $l_z$ and $p_z$ are $m \hbar$ and $\hbar k$, respectively.
Choosing cylindrical coordinates, and inserting Eq.(\[eq22\]) into Eq.(\[eq18\]), we obtain the following radial equation of the two dimensional isotropic harmonic oscillator $$\label{eq23} \Big[-\frac{\hbar^2}{2\tilde{\mu}'}\Big(\frac{\partial^2}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial}{\partial\rho} - \frac{m^2}{\rho^2}\Big) + \frac{1}{2}\tilde{\mu}'\tilde{\omega'}^2_L \rho^2\Big]R(\rho) = E'_{xy} R(\rho)\;.$$ This eigenvalue equation of $H'_{xy}$ leads to the wave functions $$\label{eq24} R(\rho) = \rho^{|m|} F(-n_\rho, |m| + 1,\zeta'^2\rho^2) e^{-{\zeta'^2\rho^2}/2},\hspace{1cm} \zeta'^2 =\frac{\tilde{\mu}'\tilde\omega'_L}{\hbar},$$ with eigenvalues $$\label{eq25} E'_{xy}= (N + 1)\hbar \tilde\omega'_L,$$ where $ N=2n_\rho +|m|$, $n_\rho=0,1,2,\cdots$. Therefore the total eigenfunctions of the Hamiltonian $H$ are of the form $$\label{eq26} \psi_{n_\rho m k}(\rho,\varphi,z) =\rho^{|m|} F(-n_\rho, |m| + 1,\zeta'^2\rho^2) e^{-{\zeta'^2\rho^2}/2} e^{i m\varphi + i k z},$$ where the factor $e^{i k z}$ describes a free particle moving along the magnetic field, whose energy is continuous. In the $x$-$y$ plane the particle is confined by a harmonic potential, and the energy is discrete. The eigenvalues of the total Hamiltonian $H$ are of the form $$\label{eq27} E = (N+1)\hbar\tilde{\omega'}_L - m\hbar \tilde{\omega'}_L + \frac{\hbar^2 k^2}{2\mu}.$$ The corresponding levels are called Landau levels on NC phase space. Obviously, when $\alpha=1$, Eq.(\[constrain\]) gives $\bar{\theta}=0$, so that $ \tilde{\mu}'\rightarrow \tilde{\mu}$, $\tilde\omega'_L\rightarrow \tilde\omega_L$, and we recover the space-space noncommuting case. When both $\theta =0$ and $\bar\theta =0$, $\tilde{\mu}'\rightarrow \mu$ and $\tilde{\omega'}_L\rightarrow\omega_L$, and our results reduce to those of usual quantum mechanics. Conclusion =========== In this letter we studied the Landau problem in NCQM. The consideration of NC space and NC phase space produces additional terms in the energy levels.
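As an illustration (our own check, not part of the original text), the reductions just described, together with the leading NC-space correction to the Landau levels, can be verified with a few lines of symbolic algebra:

```python
import sympy as sp

q, B, c, hbar, mu, theta, tbar, alpha, k = sp.symbols(
    'q B c hbar mu theta thetabar alpha k', positive=True)
N, m = sp.symbols('N m', integer=True)

kappa   = q*B*theta/(4*hbar*c)
omega_L = q*B/(2*mu*c)

# NC-space effective mass and frequency (Section 2)
mu_t = mu/(1 + kappa)**2
w_t  = q*B/(2*mu_t*c*(1 + kappa))

# NC-phase-space effective mass and frequency (Section 3)
mu_tp = mu/(alpha + q*B*theta/(4*hbar*alpha*c))**2
w_tp  = (q*B*alpha/c + tbar/(hbar*alpha)) / \
        (2*mu_tp*(alpha + q*B*theta/(4*hbar*alpha*c)))

# alpha = 1, thetabar = 0: phase-space results reduce to the NC-space ones
assert sp.simplify(mu_tp.subs(alpha, 1) - mu_t) == 0
assert sp.simplify(w_tp.subs({alpha: 1, tbar: 0}) - w_t) == 0
# theta = thetabar = 0, alpha = 1: ordinary Landau frequency
assert sp.simplify(w_tp.subs({alpha: 1, tbar: 0, theta: 0}) - omega_L) == 0

# NC-space Landau levels and their first-order correction in theta
E  = (N + 1 - m)*hbar*w_t + hbar**2*k**2/(2*mu)
dE = sp.diff(E, theta).subs(theta, 0)
assert sp.simplify(dE - (N + 1 - m)*q**2*B**2/(8*mu*c**2)) == 0
```

The last assertion makes the energy correction explicit: to first order in $\theta$ each level is shifted by $(N+1-m)\,q^2B^2\theta/8\mu c^2$, since $\tilde\omega_L$ simplifies to $\omega_L(1+\frac{qB\theta}{4\hbar c})$.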
In order to obtain the NC space correction to the usual Landau energy levels, in Section 2 we first gave the Schrödinger equation in the presence of a uniform magnetic field, and then by solving the equation we derived all the energy levels. In order to obtain the NC phase space correction to the usual Landau problem, in Section 3 we solved the Schrödinger equation in the presence of a uniform magnetic field and obtained the new terms which come from momentum-momentum noncommutativity. Acknowledgments =============== This work is supported by the National Natural Science Foundation of China (10465004, 10665001 and 10575026). The authors are also grateful for the support from the Abdus Salam ICTP, Trieste, Italy. [99]{} Seiberg N and Witten E. JHEP, 1999, [**032**]{}:9909 Connes A, Douglas M R, Schwarz A. JHEP, 1998, [**003**]{}:9802 Douglas M R, Hull C M. JHEP, 1998, [**008**]{}:9802 Ardalan F, Arfaei H, Sheikh-Jabbari M M. JHEP, 1999, [**016**]{}:9902 Curtright T, Fairlie D and Zachos C. Phys. Rev., 1998, [**D58**]{}:25002 Chu S-C, Ho P-M. Nucl. Phys., 1999, [**B550**]{}:151; Nucl. Phys., 2000, [**B568**]{}:447 Schomerus V. JHEP, 1999, [**030**]{}:9906 Demetrian M and Kochan D. hep-th/0102050; ZHANG Jian-zu. Phys. Lett., 2004, [**B584**]{}:204 KANG Li, WANG Jianhua, CHEN Chiyi. Mod. Phys. Lett., 2005, [**A20**]{}(28):2165 Nair V P. Phys. Lett., 2001, [**B505**]{}:249; Nair V P, Polychronakos A P. Phys. Lett., 2001, [**B505**]{}:267 Chaichian M, Sheikh-Jabbari M M, Tureanu A. Phys. Rev. Lett., 2001, [**86**]{}:2716; Chaichian M, Demichev A, Presnajder P et al. Nucl. Phys., 2001, [**B611**]{}:383 Chaichian M, Presnajder P, Sheikh-Jabbari M M, et al. Phys. Lett., 2002, [**B527**]{}:149; Falomir H, Gamboa J, Loeve M et al. Phys. Rev., 2002, [**D66**]{}:045018; Dayi O F and Jellal A. J. Math. Phys., 2002, [**43**]{}:4592; KANG Li, DULAT Sayipjamal. Eur. Phys. J., 2006, [**C46**]{}:825; Mirza B, Zarei M. Eur. Phys. J.
2004, [**C32**]{}:583; KANG Li and WANG Jian-Hua. hep-th/0608100; Iengo R, Ramachandran R. JHEP, 2002, [**017**]{}:0202 ; Gamboa J, Loewe M, Mendez F et al. hep-th/0104224; Dayi O F, Kelleyane L T, hep-th/0202062; Horvathy P A. hep-th/0201007; Magro G. quant-ph/0302001; Jonke L, Meljanac S. Eur. Phys. J., 2003, [**C29**]{}:433 Chamoun N. hep-ph/0509282. [^1]: sdulat@xju.edu.cn [^2]: kangli@hztc.edu.cn
--- author: - Xun Xie bibliography: - 'reff.bib' title: | The based ring of the lowest generalized two-sided cell\ of an extended affine Weyl group --- Introduction ============ We are concerned with the based ring (also called the asymptotic ring) of the lowest generalized two-sided cell of an extended affine Weyl group $ W $. The based ring of a two-sided cell of a certain Coxeter group was introduced by Lusztig in [@lusztig1987cellsII], and is useful in studying representations of Hecke algebras (see for example [@lusztig1987cellsIII; @lusztig1989cells; @Xi1990based; @xi1994representations; @xi2007representations]). Generalized cells of Coxeter groups were defined by Lusztig using Hecke algebras of Coxeter groups with unequal parameters in [@lusztig1983left]. For a certain Coxeter group, the based ring of a generalized two-sided cell is defined in [@lusztig2003hecke §18], assuming 15 conjectural properties of the generic Hecke algebras (or generalized cells) of the Coxeter group with unequal parameters, which are called properties P1-P15 (see [@lusztig2003hecke §14]). In this paper, we show that the properties P1-P15 are valid for the lowest generalized two-sided cell of an extended affine Weyl group. This implies that the based ring of the lowest generalized two-sided cell is well defined. We then determine the structure of the based ring. As an application, we use these results to study representations of affine Hecke algebras with unequal parameters attached to the lowest generalized two-sided cell. The contents of the paper are as follows. In section 2 we recall some basic facts about extended affine Weyl groups, their Hecke algebras, and the lowest generalized two-sided cell of an extended affine Weyl group. In section 3 we study a formula of Xi, which gives a decomposition of the (generalized) Kazhdan-Lusztig element $C_w$ for $w$ in the lowest generalized two-sided cell.
This formula is the key to determining the structure of the based ring of the lowest generalized two-sided cell. In section 4 we show that the properties P1-P15 are valid for the lowest two-sided cell of an extended affine Weyl group. In section 5 we determine the based ring of the lowest generalized two-sided cell. In section 6 we consider the based ring of the lowest generalized two-sided cell of an affine Weyl group of type $\tilde C_n$. In section 7 we use the ideas in [@xi1994representations; @xi2007representations] to study simple representations of affine Hecke algebras with unequal parameters attached to the lowest generalized two-sided cell. In section 8 we study the reduced affine Hecke algebra $k_\mathfrak{p}\mathcal{H}=\mathcal{H}\otimes_\mathcal{Z} k_\mathfrak{p}$, where $\mathcal{H}$ is the generic affine Hecke algebra, $\mathcal{Z}$ is the center of $\mathcal{H}$, $\mathfrak{p}$ is a prime ideal of $\mathcal{Z}$, and $k_\mathfrak{p}=\operatorname{Frac}(\mathcal{Z}/\mathfrak{p})$ is the residue field of $\mathcal{Z}$ at $\mathfrak{p}$. Preliminaries {#pre} ============= In this section, we recall some basic definitions and facts about affine Weyl groups, their Hecke algebras, and the lowest generalized two-sided cell of an extended affine Weyl group. Extended affine Weyl groups {#extended aff} --------------------------- Let $R$ be an irreducible root system. Its Weyl group, root lattice and weight lattice are denoted by $W_0$, $Q$ and $P$ respectively. Then $W'=W_0\ltimes Q$ is an affine Weyl group and $W=W_0\ltimes P= \Omega\ltimes W'$ is an extended affine Weyl group, where $\Omega$ is a finite subgroup of $W$ isomorphic to $P/Q$. The length function $l:W'\to \mathbb{N}$ can be extended to a length function $l:W\to \mathbb{N}$ by defining $l(\omega w)=l(w)$ for $\omega\in\Omega$ and $w\in W'$. Denote by $e$ the neutral element of the group $W$. Let $E=\mathbb{R}\otimes Q$ and let $(~,~)$ be a $W_0$-invariant inner product on $E$.
Let $\mathcal{F}$ be the set of hyperplanes in $E$ consisting of all hyperplanes $$H_{\alpha,n}=\{x\in E\,|\, \frac{(x,2\alpha)}{(\alpha,\alpha)}=n\},\qquad \alpha\in R,\ \ n\in\mathbb{Z}.$$ Let $X$ be the set of alcoves, i.e. the set of connected components of the set $E-\bigcup_{H\in \mathcal{F}}H$. Then $W'$ is isomorphic to the group generated by all reflections of $E$ with respect to the hyperplanes in $\mathcal{F}$. Thus $W'$ acts on $E$ and $X$ naturally. We regard this action as a right action. Another action of $W'$ on $X$, regarded as a left action, was introduced in [@Lusztig1980Janzten 1.1]. The left action and the right action commute. For every simple reflection $s\in W'$ and every alcove $A\in X$, $A$ and $sA$ have a common face, and the common face of $A$ and $sA$ is called a face (of $A$) of type $s$. Let $\Gamma$ be an abelian group written additively. A *weight function* of the [extended]{} affine Weyl group, $L:W\longrightarrow\Gamma$, is defined by the conditions: $L(ww')=L(w)+L(w')$ if $l(ww')=l(w)+l(w')$, and $L(\pi)=0\in \Gamma$ if $\pi\in \Omega$. Denote by $\mathbb{Z}[\Gamma]$ the group ring of the group $\Gamma$. Let $q$ be a symbol. Following [@bonnafe2006tw], an element of $\mathbb{Z}[\Gamma]$ is written in the form $\sum_{\gamma\in\Gamma}a_\gamma q^\gamma$ with $a_\gamma\in \mathbb{Z}$. We often write $q_w$ for $q^{L(w)}$. In particular, $q_\pi=q^0=1\in q^{\Gamma}$ for each $\pi\in \Omega$. Given a weight function $L$ on $W$, we can associate with each hyperplane $H\in \mathcal{F}$ an element $L(H)=L(s)$ of $\Gamma$, where $s$ is a simple reflection such that $H$ supports a face of type $s$. This is well defined by [@bremke1997generalized Lemma 2.1]. Furthermore, by [@bremke1997generalized Lemma 2.2], we have - $L(H)=L(H')$ when $H$ and $H'$ are parallel. A weight function on $W'$ may not extend to a weight function on $W$.
The property (a) does not hold for some weight functions of $W'$ of type $\tilde C_r$, $r\geq1$, see Section \[nonext\]. Affine Hecke algebras {#aff hec alg} --------------------- Let $W$ be an extended affine Weyl group and let $L:W\longrightarrow\Gamma$ be a weight function of $W$. The affine Hecke algebra $\mathcal{H}$ with respect to the weight function $L$ is defined to be the algebra over $\mathbb{Z}[\Gamma]$ with generators $\{T_w\mid w\in W\}$ and relations - $T_wT_{w'}=T_{ww'}$ if $l(ww')=l(w)+l(w')$, - $(T_s+1)(T_s-q_s ^2)=0$ if $s\in S$. We often use the notation ${\widetilde{T}_{w}}:=q_w^{-1}T_w$, $w\in W$. The subalgebra generated by $T_w$, $w\in W'$, is denoted by $\mathcal{H'}$. Assume that $\Gamma$ is a totally ordered abelian group, i.e. it carries a total order $\leq$ such that $\gamma_1+\gamma<\gamma_2+\gamma$ for any $\gamma\in \Gamma$ whenever $\gamma_1<\gamma_2$. Now the (generalized) Kazhdan-Lusztig basis (KL-basis) $\{C_w \mid w\in W \}$ of $\mathcal{H}$ with respect to the pair $(L,\leq)$ is defined by the following properties: - - $\overline{C}_w=C_w$, where $\overline{\,\cdot\,}$ is the algebra involution of $\mathcal{H}$ such that $\overline{q^\gamma}=q^{-\gamma}$, $\overline{T_s}=T_s^{-1}$ and $\overline{T_\pi}=T_\pi$, for $\gamma\in\Gamma$, $s\in S$, $\pi\in \Omega$. - $C_w\equiv {\widetilde{T}_{w}}\mod{\mathbb{Z}[\Gamma^{<0}]\{{\widetilde{T}_{w}}|w\in W\}}$, where $\mathbb{Z}[\Gamma^{<0}]$ is the set of $\mathbb{Z}$-linear combinations of the elements $q^\gamma$ with $\gamma\in\Gamma$, $\gamma<0$. The element $C_w$ can be expressed as $C_w=\sum_{y\leq w}{P^*_{y,w}}{\widetilde{T}_{y}}=q_w^{-1}\sum_{y\leq w}{P_{y,w}}T_y$, where ${P_{y,w}}$ is a polynomial in $q_s^2$, $s\in S$, and ${P^*_{y,w}} $ is a polynomial in $q_s^{-1}$, $s\in S$. As in [@lusztig2003hecke], using the KL-basis, one can define preorders $\leq_L$, $\leq_R$, $\leq_{LR}$ on $W$. These preorders naturally induce equivalence relations $\sim_L$, $\sim_R$, $\sim_{LR}$ on $W$.
The corresponding equivalence classes are called respectively generalized left cells, generalized right cells, and generalized two-sided cells of $W$ (with respect to the pair $(L,\leq)$). - Since $\Gamma$ is totally ordered, we can define the degree map $$\deg:\mathbb{Z}[\Gamma]\longrightarrow\Gamma$$ by setting $\deg(\sum_{\gamma\in \Gamma}a_\gamma q^{\gamma})=\max\{\gamma\,|\,a_\gamma\neq0\}$. The elements $h_{x,y,z}$, $m_{x,y,z}$ of $\mathbb{Z}[\Gamma]$ are defined respectively by the formulas $$C_xC_y=\sum_{z}h_{x,y,z}C_z,\quad {\widetilde{T}_{x}}{\widetilde{T}_{y}}=\sum_{z}m_{x,y,z}{\widetilde{T}_{z}}.$$Then - Lusztig’s **a**-function is defined to be $$\textbf{a}(z)=\sup\{\deg(h_{x,y,z})~|~x,y\in W \}\in\Gamma\cup\{\infty\}.$$ If $\textbf{a}(z)\neq\infty$, then $h_{x,y,z}$ can be written in the form $$h_{x,y,z}=\gamma_{x,y,z^{-1}}q^{\textbf{a}(z)} \text{+ lower degree terms},$$ where $\gamma_{x,y,z^{-1}}\in \mathbb{Z}$. In this case there exist some $x,y\in W$ such that $\gamma_{x,y,z^{-1}}$ is nonzero. \[assum1\] Hereafter, we will always assume that $\Gamma$ is a totally ordered abelian group and that $L$ is a positive weight function, i.e. $L(s)>0$ for all $s\in S$. Lowest two-sided cells {#low_c_0} ---------------------- In this subsection we recall some known results on the lowest generalized two-sided cell of the extended affine Weyl group $W$. The lowest two-sided cell of $W$ is described in [@shi1987two; @shi1988two]. The lowest generalized two-sided cell of $W$ is described in [@xi1994representations Theorem 3.22]. It turns out that the lowest generalized two-sided cell and the lowest two-sided cell of $W$ coincide. It is $$\mathbf{c}_0=\{ww_0u\in W\ |\ l(ww_0u)=l(w)+l(w_0)+l(u)\},$$ where $w_0$ is the longest element of $W_0$. We shall keep using the term lowest generalized two-sided cell to emphasize its connection with affine Hecke algebras with unequal parameters. Following [@shi1988two] we give a more detailed description of $\mathbf{c}_0$.
Keep the notations in \[extended aff\]. Let $v$ be a special point in $E$ (that is, there are $|R|/2$ hyperplanes in $\mathcal{F}$ passing through $v$) and let $W_v$ be the stabilizer in $W'$ (under the left action) of the set of alcoves containing $v$ in their closure. Let $w_v$ be the longest element of $W_v$. A connected component of $E-\cup_{v\in H}H$ is called a quarter with vertex $v$. For every special point $v$, we fix a quarter ${\mathcal{C}_{v}^{+}}$ such that for any two special points $v, v'$, ${\mathcal{C}_{v'}^{+}}$ is a translate of ${\mathcal{C}_{v}^{+}}$. Denote by $A_v^+$ the unique alcove contained in ${\mathcal{C}_{v}^{+}}$ and containing $v$ in its closure. Let ${\mathcal{C}_{v}^{-}}=-{\mathcal{C}_{v}^{+}}$ and $A_v^-=w_vA_v^+\subset{\mathcal{C}_{v}^{-}}$. Let $\mathcal{F}^*$ be the set of hyperplanes $H\in \mathcal{F}$ such that $H$ is parallel to a wall of ${\mathcal{C}_{v}^{+}}$. A connected component of $E-\cup_{H\in \mathcal{F}^*}H$ is called a box. The box containing $A_v^+$ is unique and is denoted by $\Pi_v^+$. For $H\in\mathcal{F}$, let $E_H^+$ be the connected component of $E-H$ that meets ${\mathcal{C}_{v}^{+}}$ for any special point $v$. Let $$B_v=\{\pi w\in W\vert w\in W', wA_v^+\subset \Pi_v^+,\pi\in \Omega\},$$ $$U_v=\{\pi w\in W\vert w\in W', wA_v^+\subset {\mathcal{C}_{v}^{+}},\pi\in \Omega\}.$$ The lowest two-sided cell of $W$ with respect to $(L,\leq)$ can be described as follows (see [@shi1988two] and [@bremke1997generalized Thm. 6.13]): - $\mathbf{c}_0=\{ww_vw'^{-1}~|~w\in B_v,w'\in U_v\}=\{z\in W~|~\textbf{a}(z)=L(w_0) \}$. The cell $\mathbf{c}_0$ is independent of the choice of the special point $v$, since if $v,v'$ are two special points, then there always exists $\pi\in \Omega$ such that $\pi w_v\pi^{-1}=w_{v'}$. Recall that $\Omega$ is the finite subgroup of $W$ consisting of elements of length 0 and that $\Omega$ normalizes $W'$.
The lowest two-sided cell $\mathbf{c}_0$ can also be described as $\mathbf{c}_0=\{ww_vw'^{-1}~|~w\in U_v,w'\in B_v\}$. Note that we always have $l(ww_vw'^{-1})=l(w)+l(w_0)+l(w')$ for $w\in B_v$, $w'\in U_{v}$ or $w\in U_v,w'\in B_v$. It is known that the set $$\mathcal{H}^{\mathbf{c}_0}=\mathbb{Z}[\Gamma]\{C_w \mid w\in\mathbf{c}_0\}$$ is a two-sided ideal of the affine Hecke algebra $\mathcal{H}$. For $x\in P$, we denote by $p_x$ the corresponding element of $W=W_0\ltimes P$. The set of dominant weights is defined to be $P^+=\{x\in P~|~l(w_0p_x)=l(w_0)+l(p_x) \} $. We have - Each $z\in \mathbf{c}_0$ can be written uniquely as $w_1w_0p_xw_2^{-1}$ with $w_1,w_2\in B_0$, $x\in P^+$, see [@Xi1990based 3.2]. For every $x\in P$, we can find $y,z\in P^+$ such that $x=y-z$ and set $\theta_x={\widetilde{T}_{p_y}}({\widetilde{T}_{p_z}})^{-1}$. It is known that $ \theta_x $ is well defined, i.e. independent of the choice of $ y,z $. For $x\in P^+$, we define $S_x=\sum_{x'\in P}d(x',x)\theta_{x'}$, where $d(x',x)$ is the dimension of the weight space $V(x)_{x'}$, and $V(x)$ is the irreducible module of highest weight $x$ of the simply connected complex algebraic group $G_\mathbb{C}$ with Weyl group $W_0$. Then - The elements $S_x$, $x\in P^+$, form a $\mathbb{Z}[\Gamma]$-basis of the center $\mathcal{Z}$ of $\mathcal H$, and $S_{x}S_{x'}=\sum_{x''\in P^+}m(x,x',x'')S_{x''}$, where $m(x,x',x'')$ is the multiplicity of the irreducible module $V(x'')$ in the tensor product $V(x)\otimes V(x')$ (see [@xi1994representations 2.9]). - $C_{w_0p_x}=C_{w_0}S_x$ for $x\in P^+$. Hence $C_{w_0p_x}C_{w_0p_y}=\sum_{z\in P^+}m(x,y,z)h_{w_0,w_0,w_0}C_{w_0p_z}$ for $x,y\in P^+$ (see [@xi1994representations 3.21]). \[center form\] Define $\mathcal{Z}_{\mathbb{Z}}$ to be the $\mathbb{Z}$-submodule of $\mathcal{Z}$ generated by $\{S_x\,|\,x\in P^+ \}$.
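As a concrete illustration of (d) (our own sketch, not part of the original text), take $G_\mathbb{C}=SL_2(\mathbb{C})$, so that $P^+\simeq\mathbb{Z}_{\geq 0}$ and the multiplicities $m(x,x',x'')$ are given by the Clebsch-Gordan rule $V(x)\otimes V(x')=\bigoplus_{i=0}^{\min(x,x')}V(x+x'-2i)$. A few lines of code (the function names `mult` and `ring_mult` are ours) check that these structure constants make the lattice spanned by $\{S_x\}$ an associative commutative ring with identity $S_0$, as (d) asserts:

```python
from itertools import product

def mult(x, xp):
    """Clebsch-Gordan for sl2: V(x) (x) V(x') = (+) V(x+x'-2i), 0 <= i <= min(x, x')."""
    return {x + xp - 2*i: 1 for i in range(min(x, xp) + 1)}

def ring_mult(a, b):
    """Product in the lattice spanned by {S_x}: a, b are dicts {x: coefficient}."""
    out = {}
    for (x, cx), (y, cy) in product(a.items(), b.items()):
        for z, m in mult(x, y).items():
            out[z] = out.get(z, 0) + cx * cy * m
    return out

# S_0 is the identity and the product is associative (checked on a small range)
for x in range(5):
    assert ring_mult({0: 1}, {x: 1}) == {x: 1}
for x, y, z in product(range(4), repeat=3):
    lhs = ring_mult(ring_mult({x: 1}, {y: 1}), {z: 1})
    rhs = ring_mult({x: 1}, ring_mult({y: 1}, {z: 1}))
    assert lhs == rhs
# dimension count: dim V(x) * dim V(y) = sum_z m(x,y,z) * dim V(z), dim V(x) = x+1
for x, y in product(range(5), repeat=2):
    assert (x + 1)*(y + 1) == sum(m*(z + 1) for z, m in mult(x, y).items())
```

For other root systems the multiplicities $m(x,x',x'')$ are the usual tensor-product multiplicities of $G_\mathbb{C}$, and the same check goes through with the appropriate branching rule.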
By (d), $\mathcal{Z}_{\mathbb{Z}}$ is a $\mathbb{Z}$-lattice of $\mathcal{Z}$ (that is, $\mathcal{Z}=\mathbb{Z}[\Gamma]\otimes_\mathbb{Z}\mathcal{Z}_{\mathbb{Z}}$), and $\mathcal{Z}_{\mathbb{Z}}$ is isomorphic to the representation ring of the algebraic group $G_\mathbb{C}$. The following facts are well known or easy to check, and will be used later. \[predecomp\] Let $\mathcal{H}$ be the Hecke algebra of $W$ with respect to a positive weight function $L:W\longrightarrow \Gamma$. Then - ${P^*_{u,w}}=q_s^{-1}{P^*_{us,w}}$ if $u<us\leq w$, $ws<w$. In particular, ${P^*_{y,w_v}}=q_{yw_v}^{-1}=q^{L(y)-L(w_v)}$, for $y\leq w_v$. Hence $C_{w_v}=\sum_{y\leq w_v}q_{yw_v}^{-1}{\widetilde{T}_{y}}=q_{w_v}^{-1}\sum_{y\leq w_v} T_y$. - $h_{w_v,w_v,w_v}$ is equal to ${q_{w_v}^{-1}}\sum_{y\in W_v} q_y^2$ and is nonzero in $\mathbb{Z}[\Gamma]$ since $\Gamma$ is totally ordered. - $C_{ww_v}=E_wC_{w_v}$ for $w\in U_v$, where $E_w=\sum_{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)}}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u}}$. - $C_{w_vw^{-1}}=C_{w_v}F_w$ for $w\in U_v$, where $F_w=\sum_{{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)}}}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u^{-1}}}$. - $\{{ww_v}\vert w\in U_v\}$ is a left cell of $W$. *Proof.* (i) and (ii) are well known, see for example [@xi1994representations 1.14.13 and 1.17(i)(ii)]. (iii) can be verified by a straightforward calculation, as follows.
$$\begin{aligned} C_{ww_v}&=\sum_{z\leq ww_v}{P^*_{z,ww_v}}{\widetilde{T}_{z}}\\ &=\sum_{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)\\y\leq w_v}}{P^*_{uy,ww_v}}{\widetilde{T}_{u}}{\widetilde{T}_{y}}\\ &=\sum_{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)\\y\leq w_v}}q_{yw_v}^{-1}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u}}{\widetilde{T}_{y}}& \text{(by (i))}\\ &=\left(\sum_{{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)}}}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u}}\right)\left(\sum_{y\leq w_v}q_{yw_v}^{-1}{\widetilde{T}_{y}}\right)\\&=E_wC_{w_v}.& \text{(by (i))}\end{aligned}$$ \(iv) Let ${\cdot\,}^\flat:\mathcal{H}\longrightarrow \mathcal{H}$ be the anti-isomorphism of algebras over $\mathbb{Z}[\Gamma]$ such that $({\widetilde{T}_{u}})^\flat={\widetilde{T}_{u^{-1}}}$ for $u\in W$. Then ${\cdot\,}^{\flat}$ commutes with the bar involution $\overline{\,\cdot\,}$ in \[aff hec alg\](a), and hence $(C_w)^\flat=C_{w^{-1}}$. Therefore $C_{w_vw^{-1}}=(C_{ww_v})^\flat=(E_wC_{w_v})^\flat=C_{w_v}F_w$, where we use the fact that $w_v^2=1$ and $(E_w)^\flat=F_w$. \(v) follows from the proof of Theorem 3.22 in [@xi1994representations]. A formula of Xi {#formula} =============== In [@xi1994based Theorem 2.9], Xi established a decomposition formula for $C_w, \ w\in \mathbf{c}_0$ when the weight function is constant. The formula is reproved in [@Blasiak2009fac]. In this section we will show that Xi’s formula remains valid for general (positive) weight functions. For two alcoves $A, B\in X$, the distance function $d(A,B)$ is defined to be the number of hyperplanes separating $A$ from $B$, counted with signs (see [@Lusztig1980Janzten 1.4]), and there is a partial order $<$ on $X$ generated by pairs of alcoves at distance 1, [@Lusztig1980Janzten 1.5]. The following technical lemma is useful, see [@Lusztig1980Janzten Lemma 4.3] and [@bremke1997generalized Lemma 4.3]. \[lus\_lemma\]Let $W$ be an extended affine Weyl group with a positive weight function $L$.
Let $v$ be a special point, $A$ be an alcove containing $v$ in its closure and $y\in W_v$ such that $A=yA_v^+$. Let $s_1$,$\dots$,$s_k\in S$ be such that $d(A_v^+,s_k\cdots s_1A_v^+)=k$ and let $1\leq i_1<\dots< i_p\leq k$ be such that $$\begin{gathered} s_{i_t}\cdots\hat{s}_{i_{t-1}}\cdots\hat{s}_{i_1}\cdots s_1(A)<\hat{s}_{i_t}\cdots\hat{s}_{i_{t-1}}\cdots\hat{s}_{i_1}\cdots s_1(A)\end{gathered}$$ for $t=1,\cdots, p$. Here $\hat{s}_{i_t}$ means deleting $s_{i_t}$. Then we have - $L({s_{i_1}})+\cdots+ L({s_{i_p}})\leq L(y)$. - If moreover $s_k\cdots s_1(A_v^+)\subset \Pi_v^+$ and $A\neq A_v^+$ (i.e. $y\neq1$), then the strict inequality holds, i.e., $L({s_{i_1}})+\cdots+ L({s_{i_p}})< L(y)$. Let $\mathcal{M}$ be the $\mathbb{Z}[\Gamma]$-module with basis $X$. Define $\mathcal{M}$ to be an $\mathcal{H'}$-module via $$\label{tma} {\widetilde{T}_{s}}\,{A}= \begin{cases} {sA} & \text{if } sA>A,\\ {sA}+(q_{s}-q_s^{-1}){A} & \text{if } sA<A, \end{cases}$$ for $s\in S$. For $w\in W'$, $A\in X$, if we write $${\widetilde{T}_{w}}{A}=\sum_{B\in X}\pi_{w,A,B}{B},\quad \pi_{w,A,B}\in\mathbb{Z}[\Gamma],$$ then the $\pi_{w,A,B}$ are in fact polynomials in $\xi_s=q_s-q_s^{-1}$, $s\in S$, with positive integral coefficients. The following corollary of the above lemma originates from [@Lusztig1980Janzten 4.2]. \[estimate\] Let $v$ be a special point, $u\in W'$ be such that $uA_v^+\subset\Pi_v^+$ and $y\in W_v $ be such that $A=yA_v^+\neq A_v^+$. Recall that the degree map is defined in \[aff hec alg\](b). Then we have $$\deg(\pi_{u,A,B})<L({y})$$ for any $B\in X$, where $\pi_{u,A,B}$ is the coefficient of ${B}$ in ${\widetilde{T}_{u}}{A}$, i.e. ${\widetilde{T}_{u}}{A}=\sum_{B\in X}\pi_{u,A,B}{B}$. *Proof.* Assume that $u$ has a reduced expression $u=s_k\cdots s_1$ with $s_i\in S$, $1\leq i\leq k$. Then $uA_v^+\subset\Pi_v^+$ implies that $d(A_v^+,s_k\cdots s_1A_v^+)=k$.
By definition, we have $${\widetilde{T}_{u}}{A}={\widetilde{T}_{s_k}}\cdots {\widetilde{T}_{s_1}}(A)=\sum_{I\in\mathcal{I}}\prod_{j=1}^{p_I}(q_{s_{i_j}}-q_{s_{i_j}}^{-1}){s_k\cdots \hat{s}_{i_{p_I}}\cdots \hat{s}_{i_2}\cdots \hat{s}_{i_1}\cdots s_1(A)}$$ where $\mathcal{I}$ is the set of all the sequences $(i_1<i_2<\cdots <i_{p_I})$ such that $$s_{i_t}\cdots \hat{s}_{i_{t-1}}\cdots \hat{s}_{i_2}\cdots \hat{s}_{i_1}\cdots s_1(A)<\hat{s}_{i_t}\cdots \hat{s}_{i_{t-1}}\cdots \hat{s}_{i_2}\cdots \hat{s}_{i_1}\cdots s_1(A)$$ for $t=1,\dots,p_I$. Now using Lemma \[lus\_lemma\](ii), we obtain the conclusion. \[est2\] Let $v$ be a special point, $u\in W'$ be such that $uA_v^+\subset\Pi_v^+$, $u'\in W'$ be such that $u' A_v^+\subset {\mathcal{C}_{v}^{+}}$ and $y\in W_v$ be such that $y<w_v$. Write the product ${\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{{u'}^{-1}}}$ as $\sum_{z}a_{u,y,u'}^z{\widetilde{T}_{z}}$ with $a_{u,y,u'}^z\in\mathbb{Z}[\Gamma]$. Then we have $\deg(a_{u,y,u'}^z)<L({yw_v})$ in $\Gamma$. *Proof.* The idea of the proof is the same as that in [@Lusztig1985cells 7.9], whose aim is to prove the boundedness of the $\textbf{a}$-function for an affine Weyl group. Let $A=u'A_v^-$. Then $A\subset \mathcal{C}_v^-$. Hence ${\widetilde{T}_{{u'}^{-1}}}{A}={A_v^-}$. This implies that ${\widetilde{T}_{y}}{\widetilde{T}_{{u'}^{-1}}}{A}={yA_v^{-}}={yw_vA_v^+}$.
On one hand, $${\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{{u'}^{-1}}}{A}={\widetilde{T}_{u}}{yw_vA_v^+}=\sum_B\pi_{u,yw_vA_v^+,B}{B}.$$ On the other hand, $${\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{{u'}^{-1}}}{A}=\sum_z a_{u,y,u'}^z{\widetilde{T}_{z}}{A}=\sum_{z,B}a_{u,y,u'}^z\pi_{z,A,B}{B}.$$ Thus we have $$\sum_{z}a_{u,y,u'}^z\pi_{z,A,B}=\pi_{u,yw_vA_v^+,B}.$$ Since $a_{u,y,u'}^z$, $\pi_{z,A,B}$, and $\pi_{u,yw_vA_v^+,B}$ are all polynomials in $\xi_s$ with positive integral coefficients, we have $$\deg(a_{u,y,u'}^z\pi_{z,A,B})\leq \deg(\pi_{u,yw_vA_v^+,B}).$$ By formula (\[tma\]), we know that $\pi_{z,A,zA} $ has constant term 1. Therefore, for $B=zA$, $$\deg(a_{u,y,u'}^z)\leq\deg(\sum_{z}a_{u,y,u'}^z\pi_{z,A,B})\leq\deg(\pi_{u,yw_vA_v^+,B})<L({yw_v}).$$ The last strict inequality is deduced from Corollary \[estimate\]. \[J0\] Let $v$ be a special point, $w\in W'$ be such that $wA_v^+\subset\Pi_v^+$, $w'\in W'$ be such that $w'A_v^+\subset{\mathcal{C}_{v}^{+}}$. Then $$\begin{gathered} C_{ww_v{w'}^{-1}}=E_wC_{w_v}F_{w'}\end{gathered}$$ where $E_w=\sum_{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)}}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u}}$, $F_w=\sum_{{\substack{u\leq ww_v\\l(uw_v)=l(u)+l(w_v)}}}{P^*_{uw_v,ww_v}}{\widetilde{T}_{u^{-1}}}$. *Proof.* On one hand, $$\begin{aligned} E_wC_{w_v}F_{w'}&=\sum_{u,y,u'}{P^*_{uw_v,ww_v}}q_{yw_v}^{-1}{P^*_{u'w_v,w'w_v}}{\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{{u'}^{-1}}}\\ &=\sum_{u,y,u',z}{P^*_{uw_v,ww_v}}{P^*_{u'w_v,w'w_v}}(q_{yw_v}^{-1}a_{u,y,u'}^z){\widetilde{T}_{z}}, \end{aligned}$$ where $u$ runs over $u\leq ww_v$, $l(uw_v)=l(u)+l(w_v)$, $u'$ runs over $u'\leq w'w_v$, $l(u'w_v)=l(u')+l(w_v)$, and $y$ runs over $y\leq w_v$. The element ${P^*_{uw_v,ww_v}}{P^*_{u'w_v,w'w_v}}$ has negative degree unless $u=w$ and $u'=w'$. And by Corollary \[est2\], $q_{yw_v}^{-1}a_{u,y,u'}^z$ has negative degree unless $y=w_v$.
Thus $$E_wC_{w_v}F_{w'}\equiv {\widetilde{T}_{w}}{\widetilde{T}_{w_v}}{\widetilde{T}_{w'^{-1}}}\mod{\mathbb{Z}[\Gamma^{<0}]}.$$ On the other hand, $$E_wC_{w_v}F_{w'}=\frac1{h_{w_v,w_v,w_v}}C_{ww_v}C_{w_vw'^{-1}},\quad \overline{{h}_{w_v,w_v,w_v}}=h_{w_v,w_v,w_v}$$ (see Lemma \[predecomp\](iii)(iv)), hence $E_wC_{w_v}F_{w'}$ is $\overline{\,\cdot\,}$-invariant. The theorem then follows from the definition of KL-basis (\[aff hec alg\](a)). \[uXi\] Let $v,v'$ be two special points. Let $u\in W'$ be such that $uA_v^+=A_{v'}^+\subset{\mathcal{C}_{v}^{+}}$, let $w\in W'$ be such that $wA_{v'}^+\subset \Pi_{v'}^+$, and let $w'\in W'$ be such that $w'A_{v}^+\subset \Pi_{v}^+$. Then $$C_{wuw_vw'^{-1}}=E_wC_{uw_v}F_{w'}.$$ *Proof.* Since $u$ is a translation such that $uA_v^+=A_{v'}^+$, we can write $uw_v=w_{v'}u'^{-1}$ for some $u'$. Then $u'=w_vu^{-1}w_{v'}$ and $u'A_{v'}^-=w_vu^{-1}A_{v'}^+=w_vA_v^+=A_v^-\subset\mathcal{C}_{v'}^-$. This implies that $u'A_{v'}^+\subset \mathcal{C}_{v'}^+$. Using the theorem above for the special point $v'$, we get $$E_wC_{uw_v}=E_wC_{w_{v'}u'^{-1}}=E_wC_{w_{v'}}F_{u'}=C_{ww_{v'}u'^{-1}}=C_{wuw_v}.$$ Using the theorem above for the special point $v$ we see $C_{wuw_v}=E_{wu}C_{w_v}$. Again using the theorem above for the special point $v$, we get $C_{wuw_vw'^{-1}}=E_{wu}C_{w_v}F_{w'}=E_wC_{uw_v}F_{w'}$. Recall from \[low\_c\_0\](b) that each $z\in \mathbf{c}_0$ has the form $w_1w_0p_xw_2^{-1}$ with $w_1,w_2\in B_0$, $x\in P^+$. The following decomposition formula for the element $C_{w_1w_0p_xw_2^{-1}}$ is useful. \[0xi\] $C_{w_1w_0p_xw_2^{-1}}=E_{w_1}C_{w_0}S_xF_{w_2}=C_{w_1w_0w_2^{-1}}S_x$ for $w_1,w_2\in B_0,x\in P^+$. *Proof.* It is easy to see $C_{z\pi}=C_z{\widetilde{T}_{\pi}}$ for any $z\in W'$, $\pi\in \Omega$. Using this fact, we can multiply $w_1w_0p_xw_2^{-1}$ by suitable elements of $\Omega$ to make Theorem \[uXi\] applicable. Then we get $C_{w_1w_0p_xw_2^{-1}}=E_{w_1}C_{w_0p_x}F_{w_2}$.
By \[low\_c\_0\](e), $C_{w_0p_x}=C_{w_0}S_x$, and hence the first equality follows. Since $S_x$ is in the center of $\mathcal{H}$, the second equality follows. The following result is due to Guilhot (see [@Guilhot2008lowest]). \[left\] The left cells in the lowest generalized two-sided cell $\mathbf{c}_0$ of an extended affine Weyl group with positive weight function have the form $\Theta_w=\{w'w_vw^{-1}\vert w'\in U_v\}$, $w\in B_v$. In particular, the number of the left cells in $\mathbf{c}_0$ is equal to $|B_v|=|W_0|$. *Proof.* Since we already know that $\Theta_1=\{w'w_v\vert w'\in U_v\}$ is a left cell (see Lemma \[predecomp\](v)), the corollary can be deduced directly from Theorem \[J0\]. The two-sided ideal $\mathcal{H}^{\mathbf{c}_0}=\mathbb{Z}[\Gamma]\{C_w \mid w\in\mathbf{c}_0\}$ is independent of the choice of the total order $\leq$ on $\Gamma$ and the positive weight function $L$. *Proof.* By Theorem \[0xi\], $\mathcal{H}^{\mathbf{c}_0}$ is the (two-sided) ideal generated by $C_{w_0}$. Then the corollary follows from the formula $C_{w_0}=q_{w_0}^{-1}\sum_{w\in W_0}T_w$, see Lemma \[predecomp\](i). The two-sided ideal $\mathcal{H}^{\mathbf{c}_0}$ has affine cellularity [@Guilhot2014Cellularity Theorem 4.9], in the sense of [@koenig2012affine]. Indeed, the commutative ring in the definition of affine cell ideal is taken to be the center $\mathcal{Z}$, and the anti-involution is taken to be ${\cdot}^\flat$ (see the proof of Lemma \[predecomp\](iv)), noting that $(C_{w_1w_0p_xw_2^{-1}})^\flat=C_{w_2p_{-x}w_0w_1^{-1}}=C_{w_2w_0p_{x^{*}}w_1^{-1}}$ where $x^*=-w_0(x)$ and hence $(S_{x})^\flat=S_{x^*}$. Therefore, by Theorem \[0xi\] one can conclude that $(C_{w_1w_0p_xw_2^{-1}},\flat)$ is a cellular basis of the lowest two-sided ideal.
The cellular basis in [@Guilhot2014Cellularity] is actually taken to be $(C_{w_1w_0w_2^{-1}}P_x,\flat)$ where $P_x=S_{\omega_1}^{x_1}S_{\omega_2}^{x_2}\cdots S_{\omega_r}^{x_r}$ with $x=x_1\omega_1+\cdots+x_r\omega_r$, the $\omega_i$ being the fundamental dominant weights. Therefore, the decomposition number studied in [@Guilhot2014Cellularity Sect. 6] is related to the decomposition of tensor products of fundamental representations. Lusztig’s properties P1-P15 {#conj} =========================== Recall that $W$ is an extended affine Weyl group, $\Gamma$ is a totally ordered abelian group, and $L$ is a positive weight function of $W$. An element $\alpha\in\mathbb{Z}[\Gamma]$ can be written in the form $\sum_{i}a_iq^{\gamma_i}$ with $a_i\in \mathbb{Z}$, $\gamma_i\in\Gamma$. The degree map $\deg:\mathbb{Z}[\Gamma]\longrightarrow\Gamma $ is defined by $\deg(\sum_{i}a_iq^{\gamma_i})=\max\{\gamma_i~|~a_i\neq0 \}$. Lusztig’s $\mathbf{a}$-function is defined by $\textbf{a}(z):=\sup\{\deg(h_{x,y,z})~|~x,y\in W \}$. Write $h_{x,y,z}=\gamma_{x,y,z^{-1}}q^{\textbf{a}(z)}+$ terms of degree less than $\textbf{a}(z)$, if $\textbf{a}(z)\neq \infty$. Let $e$ be the neutral element of $W$. Define $\Delta(z):=-\deg({P^*_{e,z}})$. Then ${P^*_{e,z}}$ has the form $${P^*_{e,z}}=q^{-L(z)}+\cdots+n_zq^{-\Delta(z)}$$ where $L(z)\geq\Delta(z)$ and $n_z$ is a nonzero integer. The $\mathbb{Z}[\Gamma]$-linear map $\tau:\mathcal{H}\longrightarrow\mathbb{Z}[\Gamma]$ is defined by $\tau({\widetilde{T}_{w}})=\delta_{w,e}$. It is well known that $$\label{tau} \tau({\widetilde{T}_{x}}{\widetilde{T}_{y}})=\delta_{xy,e}.$$ The subset $\d$ of $\mathbf{c}_0$ is defined to be $$\d=\{z\in \mathbf{c}_0\mid\textbf{a}(z)=\Delta(z) \}.$$ The following proposition says that Lusztig’s properties P1-P15 (formulated in [@lusztig2003hecke 14.2]) hold for the lowest generalized two-sided cell $\mathbf{c}_0$. \[lus conj\]Keep the assumptions above. - P$1'$. For any $z\in \mathbf{c}_0$, we have $\Delta(z)\geq \textbf{a}(z)$.
- P$2'$. If $d\in \d$ and $x,y\in \mathbf{c}_0$ satisfy $\gamma_{x,y,d}\neq 0$, then $x=y^{-1}$. - P$3'$. For any $y\in \mathbf{c}_0$, there exists a unique $d\in \d$ such that $\gamma_{y^{-1},y,d}\neq0$. - P$4'$. $\mathbf{c}_0=\{z\in W~|~\textbf{a}(z)=L(w_0) \}$. - P$5'$. If $d\in\d$, $y\in \mathbf{c}_0$ and $\gamma_{y^{-1},y,d}\neq 0$, then $\gamma_{y^{-1},y,d}=n_d=1$. - P$6'$. If $d\in\d$, then $d^2=e$. - P$7'$. If $x,y,z\in \mathbf{c}_0$, then $\gamma_{x,y,z}=\gamma_{y,z,x}$. - P$8'$. If $x,y,z\in \mathbf{c}_0$ are such that $\gamma_{x,y,z}\neq0$, then $x\sim_L y^{-1}$, $y\sim_L z^{-1}$, $z\sim_L x^{-1}$. - P$13'$. For any left cell $\Theta$ in $\mathbf{c}_0$, $\Theta$ contains a unique $d\in\d$, and $\gamma_{x^{-1},x,d}\neq0$ for any $x\in \Theta$. [^1] - P$15'$. In $\mathbb{Z}[\Gamma]\otimes_\mathbb{Z}\mathbb{Z}[\Gamma]$, we have $\sum_{y'\in \mathbf{c}_0}h_{x,y',y}\otimes h_{w,x',y'}=\sum_{y'\in \mathbf{c}_0}h_{x,w,y'}\otimes h_{y',x',y}$ for $w\in \mathbf{c}_0$ and $x,x'\in W$. *Proof.* P$4'$ is just the description in \[low\_c\_0\](a), see [@bremke1997generalized 6.13]. Now, for any $w\in B_0,w'\in U_0$ consider $$\begin{aligned} \tau(C_{ww_0w'^{-1}})&=\tau(E_wC_{w_0}F_{w'})\\ &=\sum_{u,u',y}{P^*_{uw_0,ww_0}}{P^*_{y,w_0}}{P^*_{u'w_0,w'w_0}}\tau({\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{u'^{-1}}})\end{aligned}$$ where $u$ runs over $u\leq ww_0$, $l(uw_0)=l(u)+l(w_0)$, $u'$ runs over $u'\leq w'w_0$, $l(u'w_0)=l(u')+l(w_0)$, and $y$ runs over $y\leq w_0$. If $\tau({\widetilde{T}_{u}}{\widetilde{T}_{y}}{\widetilde{T}_{u'^{-1}}})\neq0$, then $\tau({\widetilde{T}_{u}}{\widetilde{T}_{yu'^{-1}}})\neq0$, which implies that $u^{-1}=yu'^{-1}$ (see (\[tau\])) and hence $y=e$, $u=u'$. Therefore $$\begin{aligned} {P^*_{e,ww_0w'^{-1}}}&=\tau(C_{ww_0w'^{-1}})\\ &=\sum_u{P^*_{uw_0,ww_0}}q^{-L(w_0)}{P^*_{uw_0,w'w_0}}\\ &=\begin{cases} q^{-L(w_0)} \text{ + lower degree terms} &\text{if }w=w',\\ \text{an element of $\mathbb{Z}[\Gamma]$ of degree } <-L(w_0) &\text{if } w\neq w'.
\end{cases}\end{aligned}$$ Therefore $\Delta(z)\geq L(w_0)=\textbf{a}(z)$ for any $z\in \mathbf{c}_0$, and the set $\d$ of $z$ such that $\textbf{a}(z)=\Delta(z)$ is $\{ww_0w^{-1}\mid w\in B_0 \}$. Thus P$1'$ and P$6'$ are proved. The elements in $\d$ will be called distinguished involutions. We also see that $n_d=1$ for each $d\in\d$. By Corollary \[left\], - (a) $\Theta_w:=\{w'w_0w^{-1}~|~w'\in U_0\}$, $w\in B_0$, are all the left cells in $\mathbf{c}_0$, and hence every left cell contains a unique element in $\d$. For $y\in \mathbf{c}_0$, denote by $\Theta_y$ the left cell containing $y$. The unique distinguished involution in $\Theta_y$ is denoted by $d_y$. Now for $x,y\in \mathbf{c}_0$ consider the equation $$\tau(C_xC_y)=\sum_{z\in\Theta_y}h_{x,y,z}{P^*_{e,z}}.$$ By (P1') and (a), for any $z\in \Theta_y$ such that $z\neq d_y$, we have $\Delta(z)>\textbf{a}(z)$, and hence, for such $z$, $h_{x,y,z}{P^*_{e,z}}$ has negative degree. Thus, $$\tau(C_xC_y)=\sum_{z\in\Theta_y}h_{x,y,z}{P^*_{e,z}}\equiv h_{x,y,d_y}{P^*_{e,d_y}}\equiv \gamma_{x,y,d_{y}}\mod{\mathbb{Z}[\Gamma^{<0}]}.$$ On the other hand, by the definition of KL-basis, we see $\tau(C_xC_y)\equiv \delta_{xy,e}\mod{\mathbb{Z}[\Gamma^{<0}]}$. Hence, $\gamma_{x,y,d_{y}}= \delta_{xy,e}$. Obviously, $\gamma_{x,y,d}=0$ if $d\neq d_y$ since $h_{x,y,d}=0$. Parts P$2'$, P$3'$, P$5'$ and P$13'$ are proved. P$7'$ follows from the two facts below. - (e) For $z\in \mathbf{c}_0$, $\gamma_{x,y,z}$ equals the coefficient of $q_{w_0}$ in $\tau({\widetilde{T}_{x}}{\widetilde{T}_{y}}{\widetilde{T}_{z}})$, see [@bremke1997generalized Cor.4.5] and the references therein. - (f) $\tau(hh')=\tau(h'h)$ for any $h,h'\in \mathcal{H}$. P$8'$ follows from P$7'$, see [@lusztig2003hecke 14.8]. P$15'$ follows from Lemma \[comm\] below. Let $\mathcal{E}$ be a free $\mathbb{Z}[\Gamma]\otimes_{\mathbb{Z}} \mathbb{Z}[\Gamma]$-module with basis $\{\mathcal{E}_w\vert w\in \mathbf{c}_0\}$.
We define a left module structure of $\mathcal{H}$ on $\mathcal{E}$ by $$C_x\mathcal{E}_w=\sum_{z\in W}( h_{x,w,z}\otimes 1)\mathcal{E}_z\quad \text{ for } x\in W \text{ and } w\in \mathbf{c}_0$$ and define a right module structure of $\mathcal{H}$ on $\mathcal{E}$ by $$\mathcal{E}_w C_y=\sum_{z\in W} (1\otimes h_{w,y,z})\mathcal{E}_z\quad \text{ for } y\in W \text{ and } w\in \mathbf{c}_0.$$ We will write $h_{x,y,z}=h_{x,y,z}\otimes1$ and $h'_{x,y,z}=1\otimes h_{x,y,z}$. \[comm\] $\mathcal{E}$ is an $\mathcal{H}$-bimodule with the above actions, i.e. the left and right module structures commute. In particular, P$15'$ holds. *Proof.* The following claim is needed. \(g) Let $u$, $u'$ both be in $U_v$. Then $(C_{uw_v}\mathcal{E}_{w_v})C_{w_vu'^{-1}}=C_{uw_v}(\mathcal{E}_{w_v}C_{w_vu'^{-1}})$. Since $C_{uw_v}C_{w_v}=E_uC_{w_v}C_{w_v}=h_{w_v,w_v,w_v}E_uC_{w_v}=h_{w_v,w_v,w_v}C_{uw_v}$, we have $C_{uw_v}\mathcal{E}_{w_v}=h_{w_v,w_v,w_v}\mathcal{E}_{uw_v}$. Similarly $C_{uw_v}C_{w_vu'^{-1}}=h_{w_v,w_v,w_v}E_uC_{w_v}F_{u'}$. Thus, $E_uC_{w_v}F_{u'}$ is $\overline{\,\cdot\,}$-invariant, hence we can write $E_uC_{w_v}F_{u'}=\sum_{z\in \mathbf{c}_0}b_zC_z$ with $b_{z}$ in $\mathbb{Z}[\Gamma]$ and $\overline{\,\cdot\,}$-invariant. Thus $h_{uw_v,w_vu'^{-1},z}=b_zh_{w_v,w_v,w_v}$, $z\in \mathbf{c}_0$. Since $\mathbf{a}(z)=L(w_0)=\deg(h_{w_v,w_v,w_v})$, $b_z=\gamma_{uw_v,w_vu'^{-1},z^{-1}}\in \mathbb{Z}$. Hence $\mathcal{E}_{uw_v}C_{w_vu'^{-1}}=h'_{w_v,w_v,w_v} \sum_{z\in \mathbf{c}_0}\gamma_{uw_v,w_vu'^{-1},z}\mathcal{E}_z$. Then $$\begin{aligned} (C_{uw_v}\mathcal{E}_{w_v})C_{w_vu'^{-1}}&=h_{w_v,w_v,w_v}\mathcal{E}_{uw_v}C_{w_vu'^{-1}}\\ &=h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v} \sum_{z\in \mathbf{c}_0}\gamma_{uw_v,w_vu'^{-1},z}\mathcal{E}_z\end{aligned}$$ Similar computations show that $C_{uw_v}(\mathcal{E}_{w_v}C_{w_vu'^{-1}})=h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v} \sum_{z\in \mathbf{c}_0}\gamma_{uw_v,w_vu'^{-1},z}\mathcal{E}_z$, and thus the claim (g) is proved. Now we prove the lemma.
For any $x,y\in W$, $w\in B_v$, $w'\in U_v$, we have $$\begin{aligned} (C_x\mathcal{E}_{ww_vw'^{-1}})C_y&=\frac1{h_{w_v,w_v,w_v}}(C_x(C_{ww_v}\mathcal{E}_{w_vw'^{-1}}))C_y\\ &=\frac1{h_{w_v,w_v,w_v}}((C_xC_{ww_v})\mathcal{E}_{w_vw'^{-1}})C_y\\ \intertext{(We can write $C_xC_{ww_v}=\sum_{u\in U_v}h_{x,ww_v,uw_v}C_{uw_v}$ since $\{uw_v|u\in U_v \}$ is a left cell.)} &=\frac1{h_{w_v,w_v,w_v}}(\sum_{u\in U_v}h_{x,ww_v,uw_v}C_{uw_v}\mathcal{E}_{w_vw'^{-1}})C_y \\ &=\frac1{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} (\sum_{u\in U_v}h_{x,ww_v,uw_v}C_{uw_v}(\mathcal{E}_{w_v}C_{w_vw'^{-1}}))C_y\\ \intertext{(by claim (g) we have)} &=\frac1{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} (\sum_{u\in U_v}h_{x,ww_v,uw_v}(C_{uw_v}\mathcal{E}_{w_v})C_{w_vw'^{-1}})C_y\\ &=\frac{h_{w_v,w_v,w_v}}{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} (\sum_{u\in U_v}h_{x,ww_v,uw_v}(\mathcal{E}_{uw_v}C_{w_vw'^{-1}}))C_y\\ &=\frac{h_{w_v,w_v,w_v}}{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} \sum_{u\in U_v}h_{x,ww_v,uw_v}\mathcal{E}_{uw_v}(\sum_{u'\in U_v}h'_{w_vw'^{-1},y,w_vu'^{-1}}C_{w_vu'^{-1}})\\ &=\frac1{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} \sum_{\substack{u\in U_v\\u'\in U_v}}h_{x,ww_v,uw_v}h'_{w_vw'^{-1},y,w_vu'^{-1}}(C_{uw_v}\mathcal{E}_{w_v})C_{w_vu'^{-1}}\end{aligned}$$ Similarly we can get $$C_x(\mathcal{E}_{ww_vw'^{-1}}C_y)=\frac1{h_{w_v,w_v,w_v}h'_{w_v,w_v,w_v}} \sum_{\substack{u\in U_v\\u'\in U_v}}h_{x,ww_v,uw_v}h'_{w_vw'^{-1},y,w_vu'^{-1}}C_{uw_v}(\mathcal{E}_{w_v}C_{w_vu'^{-1}}).$$ Using claim (g) again, we see that $(C_x\mathcal{E}_{ww_vw'^{-1}})C_{y}=C_x(\mathcal{E}_{ww_vw'^{-1}}C_y)$. Now the lemma follows since each $z\in \mathbf{c}_0$ has the form $z=ww_vw'^{-1}$ for some $w\in B_v$, $w'\in U_v$.
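The compatibility asserted in Lemma \[comm\] is the cell-theoretic analogue of an elementary fact: in any associative ring, left-multiplication operators commute with right-multiplication operators, since $(a\cdot m)\cdot b=a\cdot(m\cdot b)$. The following toy sketch (ours, purely illustrative — it is not the module $\mathcal{E}$) checks this on integer matrices, the prototype of the matrix ring $\operatorname{Mat}_{B_0}(\z)$ that $J_0$ will later be identified with.

```python
def matmul(a, b):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy check: the left action m -> a*m and the right action m -> m*b
# commute, because matrix multiplication is associative.
a = [[1, 2], [0, 1]]
m = [[3, -1], [4, 2]]
b = [[0, 1], [1, 5]]
assert matmul(matmul(a, m), b) == matmul(a, matmul(m, b))
```

The point of the lemma is that the bimodule structure on $\mathcal{E}$, defined via the structure constants $h_{x,y,z}$ rather than via an ambient ring, still enjoys this associativity.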
Now taking the coefficient of $1\otimes q_{w_{0}}$ in the equality in P$15'$, one obtains the following Corollary: \[cor\_comm\] For any $x,z\in W$, $y\in \mathbf{c}_0$, we have $$\sum_{w\in \mathbf{c}_0,v\in \mathbf{c}_0}h_{x,y,w}\gamma_{w,z,v^{-1}}=\sum_{v\in \mathbf{c}_0,w\in \mathbf{c}_0}h_{w,z,v^{-1}}\gamma_{y,z,w^{-1}}.$$ Based rings of $\mathbf{c}_0$ {#base} ============================= Let $J_0$ be the free $\mathbb{Z}$-module with basis $\{t_x\mid x\in \mathbf{c}_0 \}$. Define multiplication in $J_0$ by$$\label{mul} t_xt_y=\sum_{z\in\textbf{c}_0}\gamma_{x,y,z^{-1}}t_z,$$ where the integer $\gamma_{x,y,z^{-1}}$ is the coefficient of $q^{\textbf{a}(z)}$ in $h_{x,y,z}$. According to [@lusztig2003hecke §18] we see that Prop. \[lus conj\] implies the following result. \[defJ\] With the multiplication (\[mul\]), the $\mathbb{Z}$-module $J_0$ becomes an associative ring with identity $\sum_{d\in \d}t_d$. The ring $J_0$ is called the [*based ring*]{} of the lowest generalized two-sided cell $\textbf{c}_0$. Moreover, according to [@lusztig2003hecke Thm. 18.9], we have \[maptoJ\] The $\mathbb{Z}[\Gamma]$-linear map $ \phi :\mathcal{H}\longrightarrow \mathbb{Z}[\Gamma]\otimes_\mathbb{Z}J_0$ defined by $$\quad C_x\mapsto \sum_{d\in\d,z\in \mathbf{c}_0}h_{x,d,z}t_z$$is a homomorphism of $\mathbb{Z}[\Gamma]$-algebras with 1. \[based ring\] If $u=w_1w_0p_xw_2^{-1}$, $u'=w_3w_0p_{x'}w_4^{-1}$, $u''=w_5w_0p_{x''}w_6^{-1}$, where $w_i\in B_0$, $i=1,\dots,6$ and $x,x',x''\in P^+$, then $\gamma_{u,u',u''^{-1}}\neq0$ implies that $w_2=w_3$, $w_4=w_6$, $w_5=w_1$, and $\gamma_{u,u',u''^{-1}}=m(x,x',x'')$ where $m(x,x',x'')$ is the multiplicity of the simple module $V(x'')$ in the tensor product $V(x)\otimes V(x')$. *Proof.* We follow the approach of [@Xi1990based Cor.4.3]. By P$8'$ in Proposition \[lus conj\], $\gamma_{u,u',u''^{-1}}\neq0$ implies that $u \sim_L u'^{-1}$, $u'\sim_L u''$, and $u''^{-1}\sim_L u^{-1}$.
Then by Corollary \[left\] about the description of the left cells in $\textbf{c}_0$, we have $w_2=w_3$, $w_4=w_6$ and $w_5=w_1$. By Lemma \[predecomp\] and property \[low\_c\_0\] (e), $$\begin{aligned} C_{u}C_{u'}&=C_{w_1w_0p_xw_2^{-1}}C_{w_2w_0p_{x'}w_4^{-1}}\notag\\&=E_{w_1p_{w_0x}}C_{w_0w_2^{-1}}C_{w_2w_0}F_{(p_{x'}w_4^{-1})^{-1}}\\ &=\sum_{y\in P^+}h_{w_0w_2^{-1},w_2w_0,w_0p_y}E_{w_1p_{w_0x}}C_{w_0p_y}F_{(p_{x'}w_4^{-1})^{-1}}\\ &=\frac1{h_{w_0,w_0,w_0}^2}\sum_{y\in P^+}h_{w_0w_2^{-1},w_2w_0,w_0p_y}C_{w_1w_0p_{x}}C_{w_0p_y}C_{w_0p_{x'}w_4^{-1}}\\ &=\sum_{x'',y\in P^+}h_{w_0w_2^{-1},w_2w_0,w_0p_y}a_{y,x''}C_{w_1w_0p_{x''}w_4^{-1}}\end{aligned}$$ for some integers $a_{y,x''}$. Furthermore, we have $$\label{g1} a_{0,x''}=m(x,x',x'').$$ Thus we have $h_{u,u',w_1w_0p_{x''}w_4^{-1}}=\sum_{y\in P^+}a_{y,x''}h_{w_0w_2^{-1},w_2w_0,w_0p_y}$. Hence, we have $$\label{g2} \gamma_{u,u',(w_1w_0p_{x''}w_4^{-1})^{-1}}=\sum_{y\in P^+}a_{y,x''}\gamma_{w_0w_2^{-1},w_2w_0,(w_0p_y)^{-1}}.$$ The theorem then follows from (\[g1\]), (\[g2\]) and the following claim: $$\label{conf} \text{If }y\neq0,\text{ then }\gamma_{w_0w_2^{-1},w_2w_0,(w_0p_y)^{-1}}=0.$$ By P$7'$, we have $\gamma_{w_0w_2^{-1},w_2w_0,(w_0p_y)^{-1}}=\gamma_{(w_0p_y)^{-1},w_0w_2^{-1},w_2w_0}$. Thus, to prove (\[conf\]), we only need to prove that $h_{(w_0p_y)^{-1},w_0w_2^{-1},w_0w_2^{-1}}$ has degree $<L(w_0)$ if $y\neq0$. But this follows immediately from the equality $C_{p_{-y}w_0}C_{w_0w_2^{-1}}=C_{p_{-y}w_0w_2^{-1}}$ (see Theorem \[0xi\]). This completes the proof. \[iso\] Let $\operatorname{Mat}_{B_0}(\z)$ be the matrix ring over $\z$ with rows and columns indexed by $B_0$; see Definition \[center form\] for the definition of $\z$. Then there is an isomorphism of rings between the based ring $J_0$ and the matrix ring $\operatorname{Mat}_{B_0}(\z)$ given by $$J_0\longrightarrow \operatorname{Mat}_{B_0}(\z), t_{w_1w_0p_xw_2^{-1}}\mapsto S_xI_{w_1,w_2}$$ where $I_{w_1,w_2}$ is the matrix whose $(w_1,w_2)$-entry is 1 and other entries are 0.
*Proof.* This follows from the last theorem and the formula $$S_{x}S_{x'}=\sum_{x''\in P^+}m(x,x',x'')S_{x''}$$ (see Section \[low\_c\_0\] (d)). From Theorem \[based ring\], we see that $\gamma_{x,y,z}$, $x,y,z\in \mathbf{c}_0$ is actually independent of the choice of the positive weight function, and so is the based ring $J_0$. There is a way to see this directly. Since $\gamma_{x,y,z}$ is the coefficient of $q_{w_0}$ in $m_{x,y,z^{-1}}:=\tau({\widetilde{T}_{x}}{\widetilde{T}_{y}}{\widetilde{T}_{z}})$ (see Section \[conj\](e)), it is independent of the choice of the total order on $\Gamma$. Assume that $q_1=q_s$, $q_2=q_t$, where $s,t\in S$ are not conjugate in $W$ (if this is possible). Write $\xi_1=q_1-q_1^{-1}$, $\xi_2=q_2-q_2^{-1}$ and denote by $\nu_1$ (resp. $\nu_2$) the number of simple reflections conjugate to $s$ (resp. $t$) in a reduced expression of $w_0$. Obviously, $\nu_1+\nu_2=l(w_0)$. Then we can write $m_{x,y,z}$ in the form $$\begin{gathered} m_{x,y,z}=\sum_{\substack{0\leq i\leq \nu_1\\0\leq j\leq\nu_2}}a_{i,j}\xi_1^i\xi_2^j\end{gathered}$$with $a_{i,j}\in\mathbb{Z}$; the proof is similar to that of [@Lusztig1985cells Thm. 7.2]. Then we can conclude that $\gamma_{x,y,z}=a_{\nu_1,\nu_2}$ is independent of the choice of the positive weight function. In this way, we also see that the based ring $J_0$ is well defined for any positive weight function, since it is well known that $J_0$ is well defined when the weight function is constant. But to construct the homomorphism $\phi$ in Theorem \[maptoJ\], it seems that Proposition \[lus conj\] is unavoidable. Type $\tilde C_n$ case {#nonext} ====================== In this section the affine Weyl group $W'$ is of type $\tilde C_r$ $(r\ge 1)$. Here we identify $\tilde{A}_1$ with $\tilde{C}_1$, and $\tilde{B}_2$ with $\tilde{C}_2$. We will discuss the based ring of the lowest generalized two-sided cell in this case for completeness. Note that in this case, some weight functions on $W'$ cannot be extended to $W$.
![Dynkin diagram of type $\tilde{C}_r,r\geq1$.[]{data-label="fig:diag-eps-converted-to"}](diag-eps-converted-to){width="0.6\linewidth"} We number the simple reflections of $W'$ as usual, see Fig. \[fig:diag-eps-converted-to\]. Let $L$ be a positive weight function on $W'$ such that $L(s_0)\neq L(s_r)$. We may assume that $L(s_0)< L(s_r)$. Now the claim in \[extended aff\](a) does not hold in this case. We have to modify the definition of special points. A point $\lambda\in E$ is called special if $m(\lambda):=\sum_{\lambda\in H}L(H)$ is maximal, see [@bremke1997generalized; @bremke1996Phd_thesis]. It turns out that the set of special points is $Q=\mathbb{Z}R$. Define the quarters $\mathcal{C}_\lambda^+,\lambda\in Q$, as in subsection 2.3. The boxes are defined to be the connected components of the complement in $E$ of the set of hyperplanes supporting the walls of the quarters $\mathcal{C}_\lambda^+,\lambda\in Q$. The box containing $\lambda(\in Q)$ in its closure and contained in $\mathcal{C}_\lambda^+$ is still denoted by $\Pi_\lambda^+$. Now each box contains $|W_0|$ alcoves (while in the case $L(s_0)=L(s_r)$ each box contains $|W_0|/|\Omega|=|W_0|/2$ alcoves). Then the lowest generalized two-sided cell of $(W',L)$ is $$\mathbf{c}'_0=\{w_1w_0p_xw_2^{-1}\,|\,w_1,w_2\in B_0,x\in Q^+ \}$$ where $B_0=\{w\in W'\,|\,wA_0^+\subset\Pi_0^+ \}$, $w_0$ is the longest element of $W_0$, and $Q^+=Q\cap P^+$. It can also be characterized by the $\textbf{a}$-value$$\mathbf{c}'_0=\{z\in W'\,|\,\textbf{a}(z)=L(w_0) \}.$$ See [@bremke1997generalized; @bremke1996Phd_thesis] for these results. The discussions in Sections \[formula\]–\[base\] also apply in the current situation. We state the results here without proof.
Since Lemma \[lus\_lemma\] still holds (see [@bremke1997generalized Lemma 4.3]), we have the decomposition (compare Theorems \[J0\] and \[uXi\])$$C_{w_1w_0p_xw_2^{-1}}=E_{w_1}C_{w_0p_x}F_{w_2}=E_{w_1}C_{w_0}F_{(p_xw_2^{-1})^{-1}}$$ for any $w_1,w_2\in B_0$, $x\in Q^+$. Then the Lusztig conjectures on cells (P1-P15) hold for $\mathbf{c}'_0$. The proof is entirely similar to that of Proposition \[lus conj\]. Then we have the based ring $J'_0=\mathbb{Z}\{t_w\,|\,w\in \mathbf{c}'_0 \}$ with structure constants $\gamma_{x,y,z^{-1}}$ and the homomorphism from $\mathcal{H'}$ to $\mathbb{Z}[\Gamma]\otimes_\mathbb{Z}J'_0$, see Theorem \[maptoJ\]. For any $x\in Q$, write $x=y-z$ with some $y,z\in Q^+$ and define $\theta_x$ to be ${\widetilde{T}_{p_y}}({\widetilde{T}_{p_z}})^{-1}$. Also define, for $x\in Q^+$, $$S_x=\sum_{x'\in Q}d(x',x)\theta_{x'}$$ where $d(x',x)$ is the dimension of the weight space $V(x)_{x'}$ as in Section \[low\_c\_0\]. Then $\{S_x\mid x\in Q^+\}$ is a $\mathbb{Z}[\Gamma]$-basis of the center of $\mathcal{H}'$. Actually this holds for any affine Hecke algebra $\mathcal{H'}$. It is easy to see that if $x\in Q^+$, then $d(x',x)\neq0$ only if $x'\in Q$. Then one can see, for $x,x'\in Q^+$, $$S_{x}S_{x'}=\sum_{x''\in Q^+}m(x,x',x'')S_{x''},$$ where $m(x,x',x'')$ is the multiplicity of $V(x'')$ in the tensor product $V(x)\otimes V(x')$. Following the same lines as [@lusztig1983singularities] or [@xi1994representations 2.9], we have $$C_{w_0p_x}=C_{w_0}S_{x},\text{ for } x\in Q^+.$$ Hence $C_{w_0p_x}C_{w_0p_y}=\sum_{z\in Q^+}m(x,y,z)h_{w_0,w_0,w_0}C_{w_0p_z}$ for $x,y\in Q^+$. Then the structure of the based ring $J'_0$ can be explicitly described as $$t_{w_1w_0p_xw_2^{-1}}t_{w_3w_0p_yw_4^{-1}}=\sum_{z\in Q^+}\delta_{w_2,w_3}m({x,y,z})t_{w_1w_0p_zw_4^{-1}}$$ where $w_1,w_2,w_3,w_4\in B_0$, $x,y\in Q^+$.
In other words, we have [**Proposition 6.2.**]{} The based ring $J'_0$ of the lowest generalized two-sided cell $\mathbf{c}'_0$ of $W'$ is isomorphic to a matrix ring of order $|W_0|$ over the commutative ring $\mathcal{Z'}_{\mathbb{Z}}$, where $\mathcal{Z'}_{\mathbb{Z}}=\mathbb{Z}\langle S_x\,|\,x\in Q^+ \rangle$ (which is isomorphic to a subring of the representation ring of the algebraic group $G_\mathbb{C}$). [**Remark 6.3.**]{} If $L$ is a positive weight function of $W'$ such that $L(s_0)=L(s_r)$, then $L$ can be extended to a weight function on $W$. In this case the lowest generalized two-sided cell $\mathbf{c}''_0$ of $W'$ equals $W'\cap \mathbf{c}_0$, where $\mathbf{c}_0$ is the lowest generalized two-sided cell of $W$. Note that $\mathbf{c}''_0$ is different from $\mathbf{c}'_0$ (see subsection 6.1 for the definition of $\mathbf{c}'_0$) and the based ring of $\mathbf{c}''_0$ is a subring of $J_0$, but it is not isomorphic to the based ring $J'_0$ of $\mathbf{c}'_0$. Irreducible representations attached to $\mathbf{c}_0$ {#rep1} ====================================================== Let $k$ be a field. Assume that there is a ring homomorphism $\mathbb{Z}[\Gamma]\longrightarrow k$. Set $\mathcal{H}_k:=\mathcal{H}\otimes_{\mathbb{Z}[\Gamma]}k$, $J_{0,k}:=J_{0}\otimes_\mathbb{Z}k$. The homomorphism $\mathcal{H}_k\longrightarrow J_{0,k}$ induced by $\phi$ in Theorem \[maptoJ\] is denoted again by $\phi$. Denote by $\mathcal{H}_k^{\mathbf{c}_0}$ the two-sided ideal of $\mathcal{H}_k$ spanned by all $C_w\otimes 1$, $w\in\textbf{c}_0$. We shall simply write $C_w$ and $t_w$ instead of $C_w\otimes 1$ and $t_w\otimes 1$ respectively. In this section we use the explicit structure of $J_0$ to study representations of $\mathcal{H}_k$ attached to $\mathbf{c}_0$. We shall follow the approaches in [@lusztig1987cellsIII; @xi2007representations].
Given a $J_{0,k}$-module $E$, through the homomorphism $\phi:\mathcal{H}_k\longrightarrow J_{0,k}$, we get an $\mathcal{H}_k$-module structure on $E$. This $\mathcal{H}_k$-module will be denoted by $E_\phi$ or $\phi_*(E)$. We will see that often $E_{\phi}$ has a unique simple quotient when $E$ is irreducible, see Lemma \[unique\_max\_submod\] below. We say that a representation $M$ of $\mathcal{H}_k$ is attached to $\mathbf{c}_0$ if $\mathcal{H}_k^{\mathbf{c}_0}M\neq0$. The set of all irreducible representations of $\mathcal{H}_k$ attached to $\mathbf{c}_0$ is denoted by $\irr(\mathcal{H}_k,\mathbf{c}_0)$. The set of all irreducible representations of $\mathcal{H}_k$ is denoted by $\irr(\mathcal{H}_k)$. Since $\mathcal{H}_k^{\mathbf{c}_0}$ is a two-sided ideal generated by $C_{w_0}$ (see Theorem \[0xi\]), we have - (a) $\irr(\mathcal{H}_k, \mathbf{c}_0)=\{M\in \irr \mathcal{H}_k~|~C_{w_0}M\neq0 \}$. Note that $J_{0,k}$ is naturally a left $J_{0,k}$-module by multiplication in $J_{0,k}$. Define a right $\mathcal{H}_{k}$-module structure on $J_{0,k}$ by $$t_wC_x=\sum_{v\in \mathbf{c}_0 }h_{w,x,v}t_v\quad \text{ for }w\in \mathbf{c}_0, x\in W.$$ Then by Corollary \[cor\_comm\] - (b) $J_{0,k}$ is a $J_{0,k}$-$\mathcal{H}_{k}$-bimodule. Via the homomorphism $\phi :\mathcal{H}_{k}\longrightarrow J_{0,k}$, $J_{0,k}$ becomes an $\mathcal{H}_k$-bimodule. We have - (c) $\mathcal{H}_k$ acts on $J_{0,k}$ by $C_xt_w=\sum_{v\in \mathbf{c}_0}h_{x,w,v}t_v$ for $x\in W$, $w\in \mathbf{c}_0$. In fact, $C_xt_w=\phi(C_x)t_w=\sum_{d\in\d,u\in \mathbf{c}_0}h_{x,d,u}t_ut_w=\sum_{\substack{d\in\d,\\u,v\in \mathbf{c}_0}}h_{x,d,u}\gamma_{u,w,v^{-1}}t_v$, which is $\sum_{v\in \mathbf{c}_0}h_{x,w,v}t_v$ by $ P15' $, $P2'$, $P3'$, $P5'$, $P7'$. Let $M$ be an $\mathcal{H}_k$-module. Then $\hat {M}:={J_{0,k}}\otimes_{\mathcal{H}_k} M$ is a $J_{0,k}$-module. Define $\tilde M=\phi_*(\hat M)$.
It is easy to verify that - (d) The map $\tilde M\longrightarrow M$, $t_w\otimes m\mapsto C_w m$, is a homomorphism of $\mathcal{H}_k$-modules. If $M$ is a simple $\mathcal H_k$-module with $C_{w_0}M\neq0$, then $M$ is the unique composition factor $M'$ of $\tilde M$ such that $C_{w_0}M'\neq0$. Let $E$ be a $J_{0,k}$-module and $N$ be an $\mathcal{H}_k$-submodule of $E_\phi$. Using Corollary \[cor\_comm\] we can see the following. - (e) $\widehat{N}\longrightarrow E$, $t_u\otimes n\mapsto \phi(C_u).n$, is a homomorphism of $J_{0,k}$-modules, where $n\in N$ is viewed as an element of $E$. The image of this map is $\mathcal{H}_k^{\mathbf{c}_0}N$. The following lemma is crucial for the main results in this section; see [@xi2007representations Lemma 2.4] for the original version. \[unique\_max\_submod\] Assume that $E$ is a simple $J_{0,k}$-module such that $C_{w_0}E_\phi\neq 0$. Then the subspace $K=\{b\in E_\phi |C_u.b=0 \text{ for all }u\in \mathbf{c}_0\}$ is the unique maximal submodule of $E_\phi$. In particular, $E_\phi$ has only one composition factor $M'$ such that $C_{w_0}M'\neq0$, and $M'$ is the unique simple quotient of $E_\phi$. *Proof*. It is easy to see that $K$ is an $\mathcal{H}_k$-submodule of $E_\phi$. Let $v\in E_\phi$ be an element not in $K$ and $N=\mathcal{H}_k. v$. Then the image $\mathcal{H}_k^{\mathbf{c}_0}N$ of the homomorphism $\hat{N}\longrightarrow E$ is nonzero. Since $E$ is a simple $J_{0,k}$-module, we have $E=\mathcal{H}_k^{\mathbf{c}_0}N$. Thus $N=E_\phi$. Therefore $K$ is the unique maximal submodule of $E_\phi$. The other statements follow immediately. Denote by $\irr(J_{0,k},C_{w_0})$ the set of all irreducible representations $E$ of $ J_{0,k}$ such that $C_{w_0}E_{\phi}\neq0$. The above lemma says that there is a well-defined map $$\rho:\irr(J_{0,k},C_{w_0})\longrightarrow\irr(\mathcal{H}_k,\mathbf{c}_0)$$ such that $\rho(E)$ is the unique simple quotient of $E_\phi$. \[irr\_bijection\] $\rho$ is bijective.
*Proof.* First we prove that $\rho$ is surjective. Let $M\in \irr(\mathcal{H}_k,\mathbf{c}_0)$. Then the map $\tilde M\longrightarrow M$ has nonzero image $\mathcal{H}_k^{\mathbf{c}_0}M\neq0$ since $C_{w_0}M\neq0$, and hence is surjective. Since $\tilde M=\phi_*(\hat M)$, $\hat M$ must have a composition factor $E$ such that $E_\phi$ has $M$ as a composition factor. Since $C_{w_0}M\neq0$, by Lemma \[unique\_max\_submod\], $M$ is the unique simple quotient of $E_\phi$, i.e. $\rho(E)=M$. Therefore $\rho$ is surjective. Now we prove that $\rho$ is injective. Let $E\in\irr(J_{0,k},C_{w_0})$ and $\pi:E_\phi\longrightarrow M=\rho(E)$ be the quotient map. Let $p':\hat{E}\longrightarrow \hat M$, $t_u\otimes x\mapsto t_u\otimes \pi(x)$ be the homomorphism of $J_{0,k}$-modules induced from $\pi$. Then we have the commutative diagram $$\xymatrix{\hat{E}\ar[r]^{p'}\ar[d]_{\theta} & \hat M\ar[d]^p\\E_\phi\ar[r]^\pi& M}$$ where $\theta$ is the homomorphism of $J_{0,k}$-modules defined in (e) and $p$ is essentially the map defined in (d). The map $p'$ induces a surjective homomorphism $\overline{p'}:\hat{E}/\ker\theta\longrightarrow \hat M/p'(\ker\theta)$ of $J_{0,k}$-modules. On one hand, $C_{w_0}M\neq0$ implies that $p$ is surjective and hence $\ker p\neq \hat M$. On the other hand, the commutative diagram implies that $p'(\ker \theta)\subset \ker p$. Then we have $\hat M/p'(\ker\theta)\neq0$. Since $\widehat{E}/\ker\theta\simeq E$ is simple, $\overline{p'}$ is an isomorphism. Therefore $E$ is a composition factor of $\hat M$. We claim that $\hat M$ admits one and only one composition factor $E'$ such that $C_{w_0}E'_\phi\neq0$. Otherwise, by Lemma \[unique\_max\_submod\], $\tilde M$ has more than one composition factor $M'$ such that $C_{w_0}M'\neq0$, which contradicts (d). Since $C_{w_0}E_\phi\neq0$, $E$ is uniquely characterized as the composition factor of $\hat M$ on which the action of $C_{w_0}$ is nonzero. Thus $\rho$ is injective. This completes the proof.
By the proposition above, to parametrize the irreducible representations of $\mathcal{H}_k$ attached to $\mathbf{c}_0$, it suffices to parametrize the irreducible representations $E$ of $J_{0,k}$ such that $C_{w_0}E_{\phi}\neq0$. The set of simple modules (up to isomorphism) of $\mathcal{Z}_k=\mathcal Z_{\mathbb Z}\otimes k$ is in one-to-one correspondence with the set $\mspec$ of maximal ideals of $\mathcal{Z}_k$. For any $\m\in\mspec$, the field $k_\m:=\mathcal{Z}_k/\m$ is a simple $\mathcal{Z}_k$-module. Denote by $\lambda_\m$ the quotient map $\mathcal{Z}_k\rightarrow k_\m$. Recall that $J_0$ is isomorphic to $\operatorname{Mat}_{B_0}(\z)$ via $t_{w_1w_0p_xw_2^{-1}}\mapsto S_xI_{w_1,w_2}$, where $I_{w_1,w_2}$ is the matrix whose $(w_1,w_2)$-entry is 1 and whose other entries are 0. Thus, given a maximal ideal $\m\in\mspec$, the homomorphism $J_{0,k}\longrightarrow \operatorname{Mat}_{B_0}(k_\m)$, $t_{w_1w_0p_xw_2^{-1}}\mapsto \lambda_\m(S_x)I_{w_1,w_2}$ gives a representation of $J_{0,k}$. The space affording it will be denoted by $E_\m$. Then - The map $\m\mapsto E_\m$ gives a bijection between the set $\mspec$ and the set of simple representations of $J_{0,k}$ over $k$. Moreover, $\dim_{k} E_\m=(\dim_{k_\m} E_\m)[k_\m:k]=|W_0|[k_\m:k]$. We denote $(E_\m)_\phi$ simply by $E_{\m,\phi}$. \[condition1\] $C_{w_0}E_{\m,\phi}=0$ if and only if $\sum_{x\in P^+}h_{w_0,ww_0,w_0p_x}S_x\in\m$ for all $w\in B_0$. In particular, $C_{w_0}E_{\m,\phi}\neq0$ for any maximal ideal $\m\in\mspec$ if $h_{w_0,w_0,w_0}$ is nonzero in $k$. *Proof.* Note that $$\phi(C_{w_0})=\sum_{w\in B_0,x\in P^+}h_{w_0,ww_0w^{-1},w_0p_xw^{-1}}t_{w_0p_xw^{-1}}=\sum_{w\in B_0,x\in P^+}h_{w_0,ww_0,w_0p_x}t_{w_0p_xw^{-1}}.$$ Thus $C_{w_0}$ acts on $E_{\m,\phi}$ by the matrix $$\sum_{w\in B_0,x\in P^{+}}h_{w_0,ww_0,w_0p_x}\lambda_\m(S_x)I_{e,w}.$$ Hence, $C_{w_0}E_{\m,\phi}=0$ if and only if, for any $w\in B_0$, $\sum_{x\in P^+}h_{w_0,ww_0,w_0p_x}\lambda_\m(S_x)=0$, i.e. $\sum_{x\in P^+}h_{w_0,ww_0,w_0p_x}S_x\in\m$. 
The second statement follows from the fact that when $w=e$, $\sum_{x\in P^+}h_{w_0,ww_0,w_0p_x}S_x=h_{w_0,w_0,w_0}T_e$, where $T_e$ is the identity of $\mathcal{H}$. We fix some notation. Set $I_0=S\cap W_0$. For any subset $I\subset I_0$, the dominant weight $x_I$ is defined to be $\sum_{i\in I}\omega_i$, where $\omega_i$ is the fundamental dominant weight corresponding to $i$. Let $w_I$ denote the longest element in the parabolic subgroup $W_I$ of $W'$, and denote the complement of $I$ in $I_0$ by $I'$. Then (see the proof in [@xi1994based 3.7]) - $x_Iw_{I'}=ww_0$ for some $w\in B_0$. Define $$\begin{aligned} \label{ad} \alpha_I:=&\sum_{x\in P^+}h_{w_0,x_Iw_{I'},w_0p_x}S_x\in\mathcal{Z}_k:=\mathcal{Z}\otimes_{\mathbb{Z}[\Gamma]}k,\\ \zeta_I:=& h_{w_I,w_I,w_I}\in k,\\ \Delta_k:=&\{I\subset I_0~|~\zeta_{I'}\neq0,\text{ but }\zeta_{I'\cup\{i \}}=0 \text{ for any } i \in I \}.\label{ad3}\end{aligned}$$ The following proposition is proved in [@xi1994based 3.7] in the one-parameter case. One can easily generalize it to the case of a positive weight function in view of the preparation in Section \[formula\] and Section \[conj\]. \[condition2\] If $\m\in\mspec$, then $\sum_{x\in P^+}h_{w_0,ww_0,w_0p_x}S_x\in\m$ for all $w\in B_0$ if and only if $\alpha_I\in\m$ for all $I\in \Delta_k$. Now, combining Proposition \[irr\_bijection\] and Proposition \[condition1\] with Proposition \[condition2\], we obtain \[basic set\] Fix a field $k$ and a specialization $\mathbb{Z}[\Gamma]\longrightarrow k$. The simple modules of $\mathcal{H}_k$ attached to $\mathbf{c}_0$ are in bijection with the set $\{\m\in\mspec\,|\,\alpha_I\notin\m \text{ for some }I\in \Delta_k\}$, where $\alpha_I$ and $\Delta_k$ are defined by (\[ad\]) and (\[ad3\]). Note that if $h_{w_0,w_0,w_0}\neq0$, then $\emptyset\in \Delta_k$ and $\alpha_{\emptyset}=h_{w_0,w_0,w_0}\notin\m$ for any $\m\in \mspec$. So in this case the simple $\mathcal{H}_k$-modules attached to $\mathbf{c}_0$ are in bijection with the set $\mspec$. 
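The identification $J_0\simeq\operatorname{Mat}_{B_0}(\z)$ recalled above rests on the usual matrix-unit calculus, $I_{w_1,w_2}I_{w_3,w_4}=\delta_{w_2,w_3}I_{w_1,w_4}$. A minimal numerical sanity check of this multiplication rule (purely illustrative; a toy three-element index set stands in for $B_0$):

```python
import numpy as np

n = 3  # toy index set standing in for B_0

def unit(i, j, size=n):
    """Matrix unit I_{i,j}: 1 in entry (i,j), 0 elsewhere."""
    m = np.zeros((size, size))
    m[i, j] = 1.0
    return m

# multiplication rule: I_{ab} I_{cd} = delta_{bc} I_{ad}
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                expected = unit(a, d) if b == c else np.zeros((n, n))
                assert np.array_equal(unit(a, b) @ unit(c, d), expected)
print("matrix-unit relations verified")
```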
If $k$ is algebraically closed, then $\mspec$ is in bijection with the set of semisimple conjugacy classes in the simply connected simple algebraic group $G_k$ over $k$ with root system $R$. The correspondence is as follows. If $s\in G_k$ is a semisimple element, then we can define a homomorphism $\lambda_s:\mathcal{Z}_k\ra k$, $\lambda_s(S_x)=tr(s,V(x))$ for $x\in P^+$, where $V(x)$ is a simple $G_k$-module with highest weight $x$. Then $\ker \lambda_s\in\mspec$. When $W'$ is of type $\tilde C_n$, using the results in Section 6 we can obtain similar results for representations of the Hecke algebra $\mathcal H'_k$ of $W'$. We conclude this section with a dimension formula. For any $\mathcal{H}_k$-module $M$, denote by $\operatorname{Ann}_M\mathbf{c}_0$ the largest submodule of $M$ annihilated by the lowest two-sided ideal $\mathcal{H}_k^{\mathbf{c}_0}$. For any $J_{0,k}$-module $E$ we denote $$\rho(E)=E_\phi/\operatorname{Ann}_{E_\phi}\mathbf{c}_0.$$ \[kdim\] Let $\m\in\mspec$. Then the dimension of $\rho(E_\m)$ over $k_\m$ is the rank of the matrix $(\lambda_\m(m_{w,w'}))_{w,w'\in B_0}$, where $$m_{w,w'}=\sum_{x\in P^+}h_{w_0w^{-1},w'w_0,w_0p_x}S_x\in \mathcal{Z}_k.$$ *Proof*. By Theorem \[J0\], we have $$\label{l1} \operatorname{Ann}_{E_\m}\mathbf{c}_0=\{v\in E_{\m,\phi}\,\vert\, C_{w_0w^{-1}}v=0 \text{ for all } w\in B_0\}.$$ One can see that $\phi(C_{w_0w^{-1}})= \sum_{\substack{w'\in B_0\\x\in P^+}}h_{w_0w^{-1},w'w_0w'^{-1},w_0p_xw'^{-1}}t_{w_0p_xw'^{-1}}$ acts on $E_\m$ by the matrix $$\begin{aligned} &\sum_{\substack{w'\in B_0\\x\in P^+}}h_{w_0w^{-1},w'w_0w'^{-1},w_0p_xw'^{-1}}\lambda_\m(S_x)I_{e,w'}\\ &=\sum_{\substack{w'\in B_0\\x\in P^+}}h_{w_0w^{-1},w'w_0,w_0p_x}\lambda_\m(S_x)I_{e,w'}\\ &=\sum_{w'\in B_0}\lambda_\m(m_{w,w'})I_{e,w'}.\end{aligned}$$Thus, by (\[l1\]), we have $\dim_{k_\m}\operatorname{Ann}_{E_\m}\mathbf{c}_0=\# B_0-\operatorname{rank}(\lambda_\m(m_{w,w'}))_{w,w'\in B_0}$. 
Therefore $$\dim_{k_\m}(\rho(E_\m))=\operatorname{rank}(\lambda_\m(m_{w,w'}))_{w,w'\in B_0}.$$ Representations over the residue field of the center {#rep2} ==================================================== In this section, we provide another perspective on the representations of affine Hecke algebras. Recall that $\h$ is an affine Hecke algebra over $\mathbb{Z}[\Gamma]$, $\mathcal{Z}$ is the center of $\h$, and $J_0$ is the based ring of the lowest generalized two-sided cell $\mathbf{c}_0$. Denote $\jg=J_0\otimes_\mathbb{Z}\mathbb{Z}[\Gamma]$. We will see that the map $\phi:\h\ra\jg$ is injective, and we will determine the set of prime ideals $\pr$ of $\mathcal{Z}$ such that $\phi_\pr:k_\pr\h\ra k_\pr\jg$ is an isomorphism, where $k_\pr$ is the residue field of the center $\mathcal{Z}$ at $\pr$. In particular, $\pr=0$ is such an ideal. Denote $\Delta=\h C_{w_0}$. - $\Delta$ is an $\h$-$\mathcal{Z}$-bimodule. - $\Delta$ is a faithful left $\h$-module. - $\Delta$ is a free $\mathcal{Z}$-module with basis $\{C_{ww_0}~|~w\in B_0 \}$. *Proof.* (i) is obvious. The proof of (ii) is similar to [@lusztig1987cellsIII 1.7]. First, as claimed in [@lusztig1987cellsIII 1.1(c)], we have - For any $y\in W$ we can find $s_1,\cdots, s_p\in S$ such that $l(ys_1s_2\cdots s_p)=l(y)+p$ and $l(ys_1s_2\cdots s_pt)>l(ys_1s_2\cdots s_p)$ for any $t\in S_0:=S\cap W_0$. Now assume $0\neq h\in\h$. Write $h=\sum a_w{\widetilde{T}_{w}}$. Let $y$ be maximal (under the Bruhat order) among the $w$ such that $a_w\neq0$. Then one can find $s_1,s_2,\cdots,s_p$ as in (a). Let $h'=h{\widetilde{T}_{s_1}}\cdots{\widetilde{T}_{s_p}}$. It is easy to see that the coefficient of ${\widetilde{T}_{ys_1\cdots s_p}}$ in $h'C_{w_0}$ is $a_y$, which is nonzero by hypothesis. Thus $h'C_{w_0}\neq0$. Hence we have proved that there exists $u\in \Delta=\h C_{w_0}$ such that $h.u\neq0$. Therefore, $\mathcal{H}C_{w_0}$ is a faithful $\h$-module. (iii). 
First, by Lemma \[predecomp\](v) or by Theorem \[J0\], $\Delta$ is a $\mathbb{Z}[\Gamma]$-module generated by $\{C_{ww_0}|w\in U_0 \}$. By Theorem \[0xi\], for each $w\in U_0$ there exist $w'\in B_0$ and $x\in P^+$ such that $C_{ww_0}=C_{w'w_0p_x}=C_{w'w_0}S_x$. Thus $\Delta$ is generated by $\{C_{w'w_0}|w'\in B_0 \}$ over $\mathcal{Z}$. Second, $\{C_{w'w_0}|w'\in B_0 \}$ is free over $\mathcal{Z}$ since $\{C_{ww_0}|w\in U_0 \}$ is free over $\mathbb{Z}[\Gamma]$. The $\h$-$\mathcal{Z}$-bimodule structure on $\Delta$ naturally induces a $\zg$-algebra homomorphism $$\label{map1} \varphi:\h\ra\operatorname{End}_\mathcal{Z}(\Delta).$$And by Corollary \[iso\], there is an isomorphism of $\zg$-algebras $$\begin{aligned} \eta : &\jg\ra \operatorname{End}_\mathcal{Z}(\Delta),\label{map2}\\& t_{w_1w_0p_xw_2^{-1}}\mapsto (C_{w_3w_0}\mapsto \delta_{w_2,w_3}S_xC_{w_1w_0}).\end{aligned}$$ \[commutative\] The following diagram is commutative: $$\xymatrix{\h\ar[r]^{\phi}\ar[rd]_{\varphi} & \jg\ar[d]^\eta_\simeq\\{}& \ezd}$$ where the map $\phi$ is defined in Theorem \[maptoJ\], the map $\varphi$ is defined in (\[map1\]), and the map $\eta$ is defined in (\[map2\]). *Proof.* Since $\{C_{ww_0}\,|\,w\in B_0\}$ is a $\mathcal{Z}$-basis of $\Delta$, it suffices to prove that for any $w\in W$ and $w_1\in B_0$, $(\eta\circ\phi)(C_w)(C_{w_1w_0})=\varphi(C_w)(C_{w_1w_0})=C_w C_{w_1w_0}$. $$\begin{aligned} LHS&=\eta(\sum_{w_3,w_2\in B_0,x\in P^{+}}h_{w,w_3w_0w_3^{-1},w_2w_0p_xw_3^{-1}}t_{w_2w_0p_xw_3^{-1}})(C_{w_1w_0})\\ &=\sum_{w_2,x}h_{w,w_1w_0w_1^{-1},w_2w_0p_xw_1^{-1}}C_{w_2w_0p_x}\\ &=\sum_{w_2,x}h_{w,w_1w_0,w_2w_0p_x}C_{w_2w_0p_x}=RHS\end{aligned}$$ From this theorem, one can see that $\phi$ and $\varphi$ are the same if one identifies $\jg$ with $\ezd$. \[y1\] - $\phi:\h\ra \jg$ is injective. - $\phi(\mathcal{Z})$ is the center of $\jg$. *Proof.* Since $\Delta$ is a faithful $\h$-module, $\varphi$ is injective. Thus $\phi$ is injective and (i) follows. 
Since the center of $\operatorname{End}_\mathcal{Z}(\Delta)$ is $\varphi(\mathcal{Z})$, the center of $\jg$ is $\phi(\mathcal{Z})$, and (ii) follows. For any prime ideal $\mathfrak{p}\in\operatorname{Spec}\mathcal{Z}$, the residue field of $\mathcal{Z}$ at $\pr$ is denoted by $k_\pr$, which by definition is the field of fractions of the quotient $\mathcal{Z}/\pr$. The natural morphism $\mathcal{Z}\ra k_\pr$ is denoted by $\lambda_\pr$. Set $$\begin{aligned} k_\pr\mathcal{H}:=k_\pr\otimes_\mathcal{Z}\mathcal{H}, \quad k_\pr\jg&:=k_\pr\otimes_\mathcal{Z}\jg,\quad k_\pr\Delta:=k_\pr\otimes_\mathcal{Z}\Delta\\ \phi_\pr&:k_\pr\mathcal{H}\ra k_\pr\jg\\ m_{w_1,w_2}&:=\sum_{x\in P^+}h_{w_0w_1^{-1},w_2w_0,w_0p_x}S_x\in\mathcal{Z}\\ \det&:=\det(m_{w_1,w_2})_{w_1,w_2\in B_0}\in\mathcal{Z}.\end{aligned}$$ Both $\mathcal{H}$ and $\jg$ are free $\mathcal{Z}$-modules of rank $|W_0|^2$, see [@xi1994representations 2.10]. Thus $\dim_{k_\pr}k_\pr\mathcal{H}=\dim_{k_\pr}k_\pr\jg=|W_0|^2$. \[y4\] $\phi_\pr$ is an isomorphism if and only if $k_\pr\Delta$ is a simple module of $k_\pr\mathcal{H}$. *Proof.* By Theorem \[commutative\], $k_\pr\mathcal{H}$ acts on $k_\pr\Delta$ via the homomorphism $\phi_{\pr}:k_\pr\mathcal{H}\ra k_\pr\jg\simeq \operatorname{End}_{k_\pr}(k_\pr \Delta)$. If $k_\pr\Delta$ is a simple $k_\pr\mathcal{H}$-module (which must be of dimension $|W_0|$), then by the Artin-Wedderburn theorem, $\phi_\pr(k_\pr\mathcal{H})$ must be of dimension $\geq|W_0|^2$. Thus $\phi_\pr$ is surjective, and hence bijective. The other direction is easy. \[y2\] $\phi_\pr$ is an isomorphism if and only if $\det\notin\pr$. *Proof.* First assume that $\phi_\pr$ is an isomorphism. Then $k_\pr\mathcal{H}$ is a simple algebra over $k_\pr$, and $k_\pr\Delta$ is its only simple module up to isomorphism. Let $h$ be the image of $C_{w_0}$ in the algebra $k_\pr\mathcal{H}$. Since $k_\pr\Delta=(k_\pr\mathcal{H})h$ is a simple module, we have $h\neq 0$. 
Moreover, $h^2\neq 0$; otherwise, $h$ would be a nonzero nilpotent element in $k_\pr\mathcal{H}$, which contradicts the fact that $k_\pr\mathcal{H}$ is a simple algebra. Hence $h(k_\pr\Delta)\neq0$. Then the quotient of $k_\pr\Delta$ by the submodule consisting of the elements annihilated by the ideal $(k_\pr\mathcal{H})h(k_\pr\mathcal{H})$ is nonzero (see Lemma \[unique\_max\_submod\]) and has dimension $\operatorname{rank}(\lambda_\pr(m_{w_1,w_2}))_{w_1,w_2\in B_0}$ (see Proposition \[kdim\]). But $k_\pr\Delta$ is a simple module. We see that the matrix $(\lambda_\pr(m_{w_1,w_2}))_{w_1,w_2\in B_0}$ is of full rank and hence $\lambda_\pr(\det)\neq0$, i.e. $\det\notin\pr$. Conversely, if $\lambda_\pr(\det)\neq0$, then $k_\pr\Delta$ is a simple module (see Proposition \[kdim\]), and hence $\phi_\pr$ is an isomorphism by Lemma \[y4\]. \[last\] Let $K$ be the field of fractions of $\mathcal{Z}$, and let $\mathcal{Z}[\frac1\det]$ be the subring of $K$ generated by $\mathcal{Z}$ and $\frac1\det$. Then $\phi:\mathcal{H}[\frac1\det]\ra\jg[\frac1\det]$ is an isomorphism of $\mathcal{Z}[\frac{1}{\det}]$-algebras. In particular, $K\mathcal{H}\simeq K\jg$ is a split simple algebra over $K$. *Proof.* By Corollary \[y1\], $\phi:\mathcal{H}[\frac1\det]\ra\jg[\frac1\det]$ is injective. Note that the prime ideals $\mathfrak{q}$ of $\mathcal{Z}[\frac1\det]$ are in bijection with the prime ideals $\pr$ of $\mathcal{Z}$ with $\det\notin \pr$; precisely, $\mathfrak{q}=\mathfrak{p}[\frac1\det]$. Since $(\mathcal{Z}[\frac1\det])_\mathfrak{q}=\mathcal{Z}_\pr$, we know that the localization $(\mathcal{H}[\frac1\det])_\mathfrak{q}\ra(\jg[\frac1\det])_\mathfrak{q}$ is an isomorphism for any prime ideal $\mathfrak{q}$ of $\mathcal{Z}[\frac1\det]$ (see Proposition \[y2\]). 
Then we conclude that $\phi:\mathcal{H}[\frac1\det]\ra\jg[\frac1\det]$ is an isomorphism by the following well-known fact about commutative algebras (applied to $A=\mathcal{Z}[\frac1\det]$, $M=\mathcal{H}[\frac1\det]$, and $N=\jg[\frac1\det]$): - Let $A$ be a noetherian commutative ring, and let $f:M\ra N$ be a morphism of finitely generated $A$-modules. If $f_\mathfrak{q}:k_\mathfrak{q} M\ra k_\mathfrak{q} N$ is surjective for any prime ideal $\mathfrak{q}$ of $A$, then $f$ is surjective. **Acknowledgment:** This paper is part of the author's PhD thesis, written under the supervision of Professor Nanhua Xi. The author thanks Professor Nanhua Xi for his guidance and helpful comments. This work is partly supported by the National Natural Science Foundation of China (No. 11321101). [^1]: The notation here is taken from [@geck2011rep].
--- abstract: | In all local realistic theories worked out till now, locality is considered a basic assumption. Most people in the field consider the inconsistency between local realistic theories and quantum mechanics to be a result of the non-local nature of quantum mechanics. In this Paper, we derive the Greenberger-Horne-Zeilinger type theorem for particles with instantaneous (non-local) interactions at the hidden-variable level. Then, we show that the previous contradiction still exists between quantum mechanics and non-local hidden variable models.\ PACS number : 03.67.Dd, 03.65.Ud\ Keywords: Non-locality, GHZ Theorem, Hidden variables. author: - | A. Fahmi$^{1}$[^1] and M. Golshani$^{1,2}$[^2]\ [$^{1}$Institute for Studies in Theoretical Physics and Mathematics (IPM)]{}, [P. O. Box 19395-5531, Tehran, Iran.]{}\ [$^{2}$Department of Physics, Sharif University of Technology,]{} [P. O. Box 11365-9161, Tehran, Iran.]{} title: '**Locality and the Greenberger-Horne-Zeilinger Theorem** ' --- Introduction ============ One of the main problems in physics that has attracted physicists’ attention in recent years is locality. This notion has different meanings, interpretations and applications in different fields of study. Physicists consider the locality principle as a physical constraint which should be satisfied by any new theory. Quantum mechanics (QM) has been challenging this principle for a long time. Non-locality in QM, however, enters into calculations as a consequence of the entanglement between some appropriate degrees of freedom of two separated particles, which makes them show correlated behavior. Entanglement has emerged as one of the most striking features of quantum mechanics. However, an exact quantitative relation between non-locality and entanglement is not yet known. The non-local property of QM was first brought to attention by Einstein, Podolsky and Rosen (EPR) [@EPR], who explicitly suggested that any physical theory must be both local and realistic. 
The manifestation of these conditions then appeared in the so-called Bell inequality [@Bell], where locality is a crucial assumption, violated by quantum mechanical predictions. From Bell’s inequalities one can infer Bell’s theorem, which states that: “ *In a certain experiment all realistic local theories with hidden variables are incompatible with quantum mechanics* ”. In Bell’s theorem, the locality assumption was involved quantitatively for the first time. In other words, if the result of a measurement on particle $1$ $(2)$ is called $A$ $(B)$, then, in a local hidden variable (LHV) model, the locality condition is defined as: $$\begin{aligned} A(a,b,\lambda)=A(a,\lambda), \hspace{.5cm} B(a,b,\lambda)=B(b,\lambda).\end{aligned}$$ In the above expression the result of a measurement on particle $1$ $(2)$ is independent of the parameters and the result of the measurement performed on particle $2$ $(1)$. Various experiments have been performed to distinguish between QM and local realistic theories [@As]. The existing contradiction between QM and LHV models, and also the violation of Bell’s inequality in experiments, lead us to doubt one of the initial assumptions of Bell’s inequality, i.e., locality or reality. Of course, there are some people who believe that other assumptions and loopholes might be involved, instead of the locality and reality assumptions [@Barre; @As1]. For example, in a recent paper, Barrett *et al.* [@Barre] used the two-sided memory loophole, in which the hidden variables of the $n$th pair can depend on the previous measurement choices and outcomes in both wings of the experiment. They have shown that the two-sided memory loophole allows a systematic violation of the CHSH inequality. Bell’s inequality has been derived in different ways [@CHSH; @CH; @KS]. 
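As a quick illustration of what the locality condition (1) enforces at the deterministic level, one can recover the CHSH bound $|S|\leq2$ by simply enumerating all local assignments. The sketch below is a standard textbook computation, not taken from the paper itself:

```python
import itertools

# Deterministic local model: each side pre-assigns outcomes +/-1
# to its two settings, A(a), A(a'), B(b), B(b').
best = 0
for A0, A1, B0, B1 in itertools.product([1, -1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1  # CHSH combination
    best = max(best, abs(S))
print(best)  # 2: the local bound, versus 2*sqrt(2) attainable in QM
```

The bound follows from $A_0(B_0+B_1)+A_1(B_0-B_1)$: one of the two brackets is $\pm2$ and the other vanishes.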
Greenberger *et al.* [@GHZ], Hardy [@Hardy] and Cabello [@Cab] have shown that it is possible to demonstrate the existence of non-locality in the case of more than two particles without using any inequality. In another model, Scarani and Gisin [@Gisin] considered some superluminal hidden communication or influences to reproduce the non-local correlations. There exist other attempts to clarify QM properties. For example, Zeilinger *et al.* [@Zei] suggested an information explanation of QM. Other works on this subject can be summarized as follows: the extension of Bell’s inequality and the Greenberger-Horne-Zeilinger (GHZ) theorem to continuous variables [@Wenger]; the Bell-type inequality that involves all possible settings of the local measurement apparatus [@Zuk]; the extension of local hidden variable (LHV) models to multiparticle and multilevel systems [@Cab1]; the violation of Bell’s inequality beyond Cirelson’s bound [@Cab2]. On the other hand, as is known, Bell showed that it is impossible to reproduce the Bell correlation function by using a local realistic model. Some authors extended Bell’s approach by considering a realistic interpretation of QM and showed that exact simulation of the Bell correlation (singlet state) is possible by using local hidden variables augmented by just one bit of classical communication [@Brass; @Bacon]. More recently, it has been shown that all causal correlations between two parties which respectively give one-bit outputs $a$ and $b$ upon receiving one-bit inputs $x$ and $y$ can be expressed as convex combinations of local correlations (i.e., correlations that can be simulated with local random variables) and non-local correlations of the form $a+b=x\cdot y$ *mod 2*. It was shown that a single instance of the latter elementary non-local correlation suffices to simulate exactly all possible projective measurements that can be performed on the singlet state of two qubits, with no communication needed at all [@Cerf]. 
Although these theories explain some parts of QM, they do not present a complete description of it. For example, the Popescu-Rohrlich non-local machine [@Cerf] does not exhibit all QM properties: it has been shown that entanglement swapping cannot be simulated by a non-local machine [@Short], and that quantum multiparty correlations arising from measurements on a cluster state cannot be simulated by $n$ non-local boxes, for any $n$ [@BP]. In this Paper, we derive the GHZ-type [@GHZ] theorem for particles with instantaneous (non-local) interactions at the hidden-variable level. Then, *we show that the previous contradiction still exists between QM and non-local hidden variable models.* Locality Condition in the Greenberger-Horne-Zeilinger Theorem ============================================================= Greenberger, Horne and Zeilinger (GHZ) showed the consequence of Bell’s theorem in a different way [@GHZ], using a system which consists of three or more correlated spin-$\frac{1}{2}$ particles. GHZ argued that if quantum mechanical predictions hold true for the entangled three-particle or four-particle states, then local hidden variable theories cannot reproduce quantum mechanical results. The GHZ theorem is, in fact, a synthesis of Bell’s theorem [@Bell] and the Kochen-Specker theorem [@KS], and it indicates that we cannot attribute values to the results of simultaneous measurements of three or more correlated particles without encountering a mathematical inconsistency. This theorem provides a new test for the evaluation of concepts like locality and non-contextuality on the basis of complete quantum correlations (for multi-particle entangled states, the assumption of non-contextuality is usually taken to be equivalent to locality). GHZ considered a system of four spin-$\frac{1}{2}$ particles such that particles $1$ and $2$ move freely in the positive $z$-direction and particles $3$ and $4$ in the negative $z$-direction. 
The Stern-Gerlach orientation analyzers are $\widehat{n}_{1},\widehat{n}_{2},\widehat{n}_{3}$ and $\widehat{n}_{4}$ for the beams of particles $1,2,3$ and $4$, respectively. If these four particles result from the decay of a single spin-1 particle into a pair of spin-1 particles, each of which decays into a pair of spin-$\frac{1}{2}$ particles, then, with the $z$ component of spin initially zero and remaining so throughout the decay process, the quantum mechanical spin state of the four particles is: $$\begin{aligned} |\psi\rangle=\frac{1}{\sqrt{2}}[|+\rangle_{1}|+\rangle_{2}|-\rangle_{3}|-\rangle_{4}- |-\rangle_{1}|-\rangle_{2}|+\rangle_{3}|+\rangle_{4}].\end{aligned}$$ The expectation value $E(\widehat{n}_{1},\widehat{n}_{2},\widehat{n}_{3},\widehat{n}_{4})$ of the product of outcomes, when the orientations are as indicated, is: $$\begin{aligned} E(\widehat{n}_{1},\widehat{n}_{2},\widehat{n}_{3},\widehat{n}_{4})&=&\langle\psi|(\overrightarrow{\sigma}_{1}.\widehat{n}_{1}) (\overrightarrow{\sigma}_{2}.\widehat{n}_{2}) (\overrightarrow{\sigma}_{3}.\widehat{n}_{3}) (\overrightarrow{\sigma}_{4}.\widehat{n}_{4}) |\psi\rangle =\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}\cos\theta_{4}\\\nonumber &&-\sin\theta_{1}\sin\theta_{2}\sin\theta_{3}\sin\theta_{4}\times \cos(\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4})\end{aligned}$$ where $(\theta_{i},\phi_{i})$ are the polar and azimuthal angles of $\widehat{n}_{i}$. 
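The four-particle expectation value above can be checked numerically by building $|\psi\rangle$ and the operators $\overrightarrow{\sigma}\cdot\widehat{n}$ explicitly; a minimal sketch with numpy (illustrative only, with random angles):

```python
import numpy as np
from functools import reduce

def sigma_n(theta, phi):
    # sigma . n for n = (sin t cos p, sin t sin p, cos t)
    return np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                     [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])

kron = lambda *ms: reduce(np.kron, ms)
plus, minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# |psi> = (|++--> - |--++>)/sqrt(2)
psi = (kron(plus, plus, minus, minus) - kron(minus, minus, plus, plus)) / np.sqrt(2)

rng = np.random.default_rng(0)
th, ph = rng.uniform(0, np.pi, 4), rng.uniform(0, 2 * np.pi, 4)
O = kron(*(sigma_n(t, p) for t, p in zip(th, ph)))
E = (psi.conj() @ O @ psi).real
formula = (np.prod(np.cos(th))
           - np.prod(np.sin(th)) * np.cos(ph[0] + ph[1] - ph[2] - ph[3]))
assert abs(E - formula) < 1e-12  # matches the closed-form expectation value
```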
For simplicity, we shall restrict our attention to $\widehat{n}$’s in the $x-y$ plane, so that: $$\begin{aligned} \label{3} E(\widehat{n}_{1},\widehat{n}_{2},\widehat{n}_{3},\widehat{n}_{4})=E(\phi_{1},\phi_{2},\phi_{3},\phi_{4})=- \cos(\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4})\end{aligned}$$ EPR’s assumptions in the GHZ argument can be adapted to the four-particle situation as follows:\ *(i) Perfect correlation*: Knowledge of the outcomes for any three particles enables a prediction with certainty for the outcome of the fourth.\ *(ii) Locality*: Since the four particles are arbitrarily far apart at the time of measurement, and are assumed not to interact, no real change can take place in any one of them as a consequence of what is done on the other three.\ *(iii) Reality*: If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.\ *(iv) Completeness*: Every element of the physical reality must have a counterpart in the physical theory.\ Noting the above premises, GHZ defined four functions $A(\phi_{1},\lambda),B(\phi_{2},\lambda),C(\phi_{3},\lambda), D(\phi_{4},\lambda)$ with values $+1$ and $-1$, these functions being the outcomes of spin measurements along the respective directions when the complete state of the four particles is specified by $\lambda$. Using the above premises and some algebra, GHZ could derive the following relations [@GHZ]: $$\begin{aligned} A(2\phi,\lambda)=A( 0 ,\lambda)= Const., \hspace{1cm}\forall \hspace{.2cm} \phi\end{aligned}$$ $$\begin{aligned} A(\phi'+\pi,\lambda)=-A( 0 ,\lambda)=Const., \hspace{1cm}\forall \hspace{.2cm} \phi'\end{aligned}$$ GHZ showed that the above relations are inconsistent with each other for $\phi=\frac{\pi}{2}, \phi'=0$. Thus, they came to the conclusion that a hidden inconsistency is present in premises *(i)-(iv)*. 
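This inconsistency can also be exhibited by exhaustive search: already for measurement settings drawn from $\{0,\pi/2\}$, no deterministic assignment of $\pm1$ outcomes reproduces every perfect correlation predicted by Eq. (3). A brute-force sketch (illustrative; this particular finite angle set is our choice, not GHZ's original one):

```python
import itertools
import numpy as np

angles = [0.0, np.pi / 2]

# Settings where QM predicts a perfect correlation:
# the product of outcomes is fixed to -cos(phi1+phi2-phi3-phi4) = +/-1.
constraints = []
for t in itertools.product(range(2), repeat=4):
    s = angles[t[0]] + angles[t[1]] - angles[t[2]] - angles[t[3]]
    c = -np.cos(s)
    if abs(abs(c) - 1) < 1e-9:
        constraints.append((t, int(round(c))))

# Each party pre-assigns +/-1 outcomes to its two settings:
# v = (A(0), A(pi/2), B(0), B(pi/2), C(0), C(pi/2), D(0), D(pi/2)).
found = any(
    all(v[0 + t[0]] * v[2 + t[1]] * v[4 + t[2]] * v[6 + t[3]] == c
        for t, c in constraints)
    for v in itertools.product([1, -1], repeat=8)
)
print(found)  # False: no deterministic assignment satisfies all constraints
```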
In our approach to the GHZ theorem, we keep the hypotheses *(i), (iii), (iv)* as before and replace *(ii)* by the following statement [@Fa]:\ *(ii’) Non-locality:* at the time of measurement, when the four particles interact, a real change can take place in a particle as a consequence of anything that may be done on the other three particles.\ The above condition changes our view of locality and the GHZ theorem. In [@Fa], we defined the non-locality condition at the hidden variable level as: $$\begin{aligned} \label{5} A(\phi_{1},\phi_{2},\phi_{3},\phi_{4},\lambda)=f_{A}(\phi_{1},\lambda) g_{A}(\phi_{2},\lambda)h_{A}(\phi_{3},\lambda)k_{A}(\phi_{4},\lambda)\nonumber\\\end{aligned}$$ Similar relations hold for $B, C$ and $D$. Using the above form of non-locality, we have: $$\begin{aligned} A(2\phi_{1},2\phi_{2},2\phi_{3},2\phi_{4},\lambda)= A(0,0,0,0,\lambda)\end{aligned}$$ $$\begin{aligned} A(\theta_{1}+\pi,\theta_{2}+\pi,\theta_{3}+\pi,\theta_{4},\lambda)= -A(\theta_{1},\theta_{2},\theta_{3},\theta_{4},\lambda)\end{aligned}$$ For $\overrightarrow{\phi}=(\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2},0)$ and $\overrightarrow{\theta}=(0,0,0,0)$ one gets: $$\begin{aligned} A(\pi,\pi,\pi,0,\lambda)= A(0,0,0,0,\lambda)\end{aligned}$$ $$\begin{aligned} A(\pi,\pi,\pi,0,\lambda)= -A(0,0,0,0,\lambda)\end{aligned}$$ which leads to the usual GHZ result. Extension of Non-locality Condition to the Greenberger-Horne-Zeilinger Theorem ============================================================================== In this section we would like to extend our non-locality condition to a more general form. This form can be applied to a very large class of non-local hidden variable theories. One generalization of eq. 
(\[5\]) is the following: $$\begin{aligned} \label{6} A(\phi_{1},\phi_{2},\phi_{3},\phi_{4},\lambda) %\\\nonumber =\sum_{j_{A}}f_{A}^{j_{A}}(\phi_{1},\lambda) g_{A}^{j_{A}}(\phi_{2},\lambda)h_{A}^{j_{A}}(\phi_{3},\lambda)k_{A}^{j_{A}}(\phi_{4},\lambda)\end{aligned}$$ where $f, g, h$ and $k$ are arbitrary functions of their arguments, with: $$\begin{aligned} -1\leq A(\phi_{1},\phi_{2},\phi_{3},\phi_{4},\lambda)\leq1\end{aligned}$$ and similar relations hold for $B,C,D$. This type of generalization is not ad hoc: for any realistic variable we can make such an assumption. Moreover, these variables should not be regarded as a mere mathematical generalization, but as something that could convey physical meaning [@Bohm1].\ The product of these physical variables is: $$\begin{aligned} \label{7} ABCD=\prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi_{1},\lambda) g_{i}^{j_{i}}(\phi_{2},\lambda)h_{i}^{j_{i}}(\phi_{3},\lambda)k_{i}^{j_{i}}(\phi_{4},\lambda)\end{aligned}$$ We define the correlation function $E(\phi_{1}, \phi_{2}, \phi_{3}, \phi_{4})$ in the form: $$\begin{aligned} \label{77} E(\phi_{1}, \phi_{2}, \phi_{3}, \phi_{4})=\int \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi_{1},\lambda) g_{i}^{j_{i}}(\phi_{2},\lambda)h_{i}^{j_{i}}(\phi_{3},\lambda)k_{i}^{j_{i}}(\phi_{4},\lambda) \rho(\lambda)d\lambda\end{aligned}$$ where the probability distribution function for the uncontrollable hidden variable $\lambda$ is represented by $\rho(\lambda)$, with: $$\begin{aligned} \int \rho(\lambda)d\lambda=1, \hspace{1cm} \rho(\lambda)\geq0\end{aligned}$$ Using the relation (\[3\]) for the expectation value of the spin correlation, one can show that for $\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4}=0$: $$\begin{aligned} \label{8} \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi,\lambda) g_{i}^{j_{i}}(\phi,\lambda)h_{i}^{j_{i}}(\phi,\lambda)k_{i}^{j_{i}}(\phi,\lambda)=-1\end{aligned}$$ $$\begin{aligned} \label{9} \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi,\lambda) 
g_{i}^{j_{i}}(\phi',\lambda)h_{i}^{j_{i}}(\phi,\lambda)k_{i}^{j_{i}}(\phi',\lambda)=-1\end{aligned}$$ $$\begin{aligned} \label{10} \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi,\lambda) g_{i}^{j_{i}}(\phi',\lambda)h_{i}^{j_{i}}(\phi',\lambda)k_{i}^{j_{i}}(\phi,\lambda)=-1\end{aligned}$$ $$\begin{aligned} \label{11} \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi',\lambda) g_{i}^{j_{i}}(\phi,\lambda)h_{i}^{j_{i}}(\phi/2+\phi'/2,\lambda)k_{i}^{j_{i}}(\phi/2+\phi'/2,\lambda)=-1\end{aligned}$$ By changing $\phi'$ in Eq. (\[9\]), we get a set of equations (with constant $\phi$) which can be written as a matrix equation:\ $$\begin{aligned} \left( \begin{array}{cccccccc} \prod_{i=A}^{D} g_{i}^{1}(\phi_{1}')k_{i}^{1}(\phi_{1}') & . & . & & \prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{1}')k_{i}^{j_{i}}(\phi_{1}') & . & . &\prod_{i=A}^{D} g_{i}^{n}(\phi_{1}')k_{i}^{n}(\phi_{1}') \\ \prod_{i=A}^{D} g_{i}^{1}(\phi_{2}')k_{i}^{1}(\phi_{2}') & . & . & & \prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{2}')k_{i}^{j_{i}}(\phi_{2}') & . & . &\prod_{i=A}^{D} g_{i}^{n}(\phi_{2}')k_{i}^{n}(\phi_{2}') \\ . & . & . & & . & . & . & . \\ . & . & . & & . & . & . & . \\ \prod_{i=A}^{D} g_{i}^{1}(\phi_{k}')k_{i}^{1}(\phi_{k}') & . & . & & \prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{k}')k_{i}^{j_{i}}(\phi_{k}') & . & . &\prod_{i=A}^{D} g_{i}^{n}(\phi_{k}')k_{i}^{n}(\phi_{k}') \\ . & . & . & & . & . & . & . \\ . & . & . & & . & . & . & . \\ \prod_{i=A}^{D} g_{i}^{1}(\phi_{n}')k_{i}^{1}(\phi_{n}') & . & . & & \prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{n}')k_{i}^{j_{i}}(\phi_{n}') & . & . &\prod_{i=A}^{D} g_{i}^{n}(\phi_{n}')k_{i}^{n}(\phi_{n}') \\ \end{array} \right)\end{aligned}$$ $$\begin{aligned} \label{12} \times \left( \begin{array}{c} \prod_{i=A}^{D} f_{i}^{1}(\phi_{1})h_{i}^{1}(\phi_{1}) \\ \prod_{i=A}^{D} f_{i}^{2}(\phi_{1})h_{i}^{2}(\phi_{1}) \\ . \\ . \\ \prod_{i=A}^{D} f_{i}^{j_{i}}(\phi_{1})h_{i}^{j_{i}}(\phi_{1}) \\ . \\ . 
\\ \prod_{i=A}^{D} f_{i}^{n}(\phi_{1})h_{i}^{n}(\phi_{1}) \\ \end{array} \right)=-\left( \begin{array}{c} 1 \\ 1 \\ . \\ . \\ 1 \\ . \\ . \\ 1 \\ \end{array} \right)\end{aligned}$$ In the above matrix equation we have dropped some unnecessary indices for simplicity, and the $\phi'$’s are arbitrary angles (see Appendix B). Also, we can get similar matrix equations by changing $\phi$ and holding $\phi'$ constant. By calculating $\prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{l},\lambda) h_{i}^{j_{i}}(\phi_{l},\lambda) $ from each of these equations, we have: $$\begin{aligned} \label{13} \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{l},\lambda) h_{i}^{j_{i}}(\phi_{l},\lambda)= \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{m},\lambda) h_{i}^{j_{i}}(\phi_{m},\lambda)\nonumber\\\end{aligned}$$ In the above relation, $f_{i}^{j_{i}}$ is the $j_{i}$th term of Eq. (\[6\]) related to the $i$th party. (To distinguish between the functions of different parties we have tagged them, for example, by $j_{i}$.) Similarly, by using Eq. (\[10\]), we have: $$\begin{aligned} \label{14} \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{l},\lambda) k_{i}^{j_{i}}(\phi_{l},\lambda)= \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{m},\lambda) k_{i}^{j_{i}}(\phi_{m},\lambda)\nonumber\\\end{aligned}$$ A consequence of these is that: $$\begin{aligned} \label{15} &&\prod_{i=A}^{D} h_{i}^{j_{i}}(\phi_{l},\lambda)(k_{i}^{j_{i}}(\phi_{l},\lambda))^{-1}= \prod_{i=A}^{D} h_{i}^{j_{i}}(\phi_{m},\lambda)(k_{i}^{j_{i}}(\phi_{m},\lambda))^{-1}\end{aligned}$$ Now, we can use two approaches. First, by assuming that each term in Eq. (\[6\]) satisfies $f_{i}^{j_{i}}, g_{i}^{j_{i}}, h_{i}^{j_{i}}, k_{i}^{j_{i}}=\pm1$, Eq. (\[15\]) can be written in the following form (cf. [@Fa]): $$\begin{aligned} \label{16} \prod_{i=A}^{D} h_{i}^{j_{i}}(\phi_{l},\lambda)k_{i}^{j_{i}}(\phi_{l},\lambda)= \prod_{i=A}^{D} h_{i}^{j_{i}}(\phi_{m},\lambda)k_{i}^{j_{i}}(\phi_{m},\lambda)\end{aligned}$$ Second, by considering that $A^{2}(\phi_{1}, \phi_{2}, \phi_{3},\phi_{4}, \lambda)= 1$ and using Eq. 
(\[6\]), we can construct matrix equations similar to (\[12\]), which finally lead to: $$\begin{aligned} \label{17} [f_{A}^{j_{A}}(\phi_{1},\lambda)]^{2}=X(\Phi, \lambda)\end{aligned}$$ where $\Phi$ does not contain $\phi_{1}$ (see appendix A). Although the above equation is surprising, if $f_{i}^{j_{i}}=\pm 1$ we can convince ourselves of this result. The above relation holds for any $\phi_{1}$, and similar relations hold for $g,h,k$. If we consider the $k_{i}^{j_{i}}$ analogue of Eq. (\[17\]) and apply it to Eq. (\[15\]), we get Eq. (\[16\]) again. Now, by using Eqs. (\[8\]), (\[11\]) and (\[16\]), and after some algebra, we have: $$\begin{aligned} \label{18} \prod_{i=A}^{D} f_{i}^{j_{i}}(\phi_{l},\lambda) =\prod_{i=A}^{D} f_{i}^{j_{i}}(\phi_{m},\lambda)\end{aligned}$$ Similar relations can be derived for $g, h$ and $k$. This is quite a surprising result: we would expect a change in the arguments of $f, g, h$ and $k$, whose combination represents an intrinsic spin quantity, to have a different outcome. We can repeat the whole calculation by assuming that $\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4}=\pi$ in Eq. (\[77\]). Then we have: $$\begin{aligned} \label{19} \prod_{i=A}^{D}\sum_{j_{i}}f_{i}^{j_{i}}(\phi'+\pi,\lambda) g_{i}^{j_{i}}(\phi,\lambda)h_{i}^{j_{i}}(\phi/2+\phi'/2,\lambda) k_{i}^{j_{i}}(\phi/2+\phi'/2,\lambda)=1\end{aligned}$$ From Eqs. (\[8\]), (\[16\]), and (\[19\]), we have: $$\begin{aligned} \label{20} \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{s},\lambda)=- \prod_{i=A}^{D}f_{i}^{j_{i}}(\phi_{k}+\pi,\lambda)\end{aligned}$$ Now, considering Eqs.
(\[18\]) and (\[20\]), it is easily seen that for $\phi_{l}=\phi_{s}=\phi_{k}=0$ and $\phi_{m}=\pi$ we get: $$\begin{aligned} \label{21} \prod_{i=A}^{D} f_{i}^{j_{i}}(0,\lambda)&=&\prod_{i=A}^{D} f_{i}^{j_{i}}(\pi,\lambda)\nonumber\\ \prod_{i=A}^{D}f_{i}^{j_{i}}(0,\lambda)&=&- \prod_{i=A}^{D}f_{i}^{j_{i}}(\pi,\lambda)\end{aligned}$$ We have thus brought to the surface an inconsistency hidden in the premises ($i,ii', iii, iv$). We can also consider equations (\[18\]) and (\[20\]) for $g, h, k$. One gets: $$\begin{aligned} \label{22} A(\phi_{1},\phi_{2},\phi_{3},\phi_{4},\lambda)= A(\phi_{1}',\phi_{2}',\phi_{3}',\phi_{4}',\lambda)\end{aligned}$$ $$\begin{aligned} \label{23} A(\varphi_{1}+\pi,\varphi_{2},\varphi_{3},\varphi_{4},\lambda) =-A(\varphi_{1}',\varphi_{2}',\varphi_{3}',\varphi_{4}',\lambda)\end{aligned}$$ By using suitable angles, a discrepancy in one of the $A$’s would result. So we reach the same result as the GHZ theorem, even though we are using a special case of non-locality. Our approach rules out a broader class of hidden variable theories. Conclusion ========== Bell’s theorem [@Bell] states that any local realistic view of the world is incompatible with QM. This is often interpreted as demonstrating the existence of non-locality in QM [@Bohm]. In this paper, we have replaced Bell’s locality condition by a more general condition to obtain the GHZ theorem, and have shown that the same incompatibility exists for the case of non-local realistic models. Thus, we can conclude that the disagreement in the GHZ theorem is not necessarily due to the violation of Bell’s locality condition. Consequently, we should focus on other GHZ assumptions to find the origin of the inconsistency. Also, it is worth noting that one could go even further and extend the above non-local approach to other relevant theorems, such as the Kochen-Specker [@KS], CH [@CH] and Hardy [@Hardy1] theorems.
An important question remains: do there exist non-local hidden variable theories to which our approach can be applied? One can expect more general classes of non-local theories which are incompatible with quantum mechanics. However, it is an open problem whether one can construct a hidden variable model, satisfying the above requirements, to show the consistency of our argument in a more concrete manner. Furthermore, in another paper we showed that it is not possible to reproduce a QM correlation using only local measurements (done by two space-like separated parties) augmented with classical communication of one bit of information [@Bacon] or by a single use of a non-local box [@Cerf]. Others, however, have shown that not all properties of QM can be simulated by a non-local box [@Short; @BP]. Therefore, it can be concluded that some other alternative viewpoints might be involved. On the other hand, there are people who still believe that QM is a local theory [@Peres], and some others consider information as the root of the interpretation of QM [@Zei]. Hence, G. Brassard and co-workers suggested the field of communication complexity, which concerns the amount of information that must be transmitted between two parties to compute some function of the private inputs that they hold [@Brass1]. In any case, the above arguments indicate that we must have a deeper understanding of the notions of locality, reality and entanglement.\ \ [**Acknowledgment**]{}: We would like to thank P. H. Eberhard for his comments and A. T. Rezakhani for a critical reading of the manuscript. (This work was supported under the project: *AZAD*). Appendix A: Measurement Results Matrix Equation =============================================== In this appendix, we derive equation (\[17\]).
As we mentioned, by using equations $A^{2}(\phi_{1}, \phi_{2}, \phi_{3},\phi_{4}, \lambda)= 1$ and (\[6\]), we construct matrix equation as follows: $$\begin{aligned} \left( \begin{array}{cccccccc} [g_{A}^{1}(\phi_{2}^{1})h_{A}^{1}(\phi_{3}^{1})k_{A}^{1}(\phi_{4}^{1})]^{2} & . & . & & g_{A}^{j_{A}}(\phi_{2}^{1})h_{A}^{j_{A}}(\phi_{3}^{1})k_{A}^{j_{A}}(\phi_{4}^{1})g_{A}^{s_{A}}(\phi_{2}^{1})h_{A}^{s_{A}}(\phi_{3}^{1})k_{A}^{s_{A}}(\phi_{4}^{1}) & . & . &[g_{A}^{n}(\phi_{2}^{1})h_{A}^{n}(\phi_{3}^{1})k_{A}^{n}(\phi_{4}^{1})]^{2} \\\nonumber [g_{A}^{1}(\phi_{2}^{2})h_{A}^{1}(\phi_{3}^{2})k_{A}^{1}(\phi_{4}^{2})]^{2} & . & . & & g_{A}^{j_{A}}(\phi_{2}^{2})h_{A}^{j_{A}}(\phi_{3}^{2})k_{A}^{j_{A}}(\phi_{4}^{2})g_{A}^{s_{A}}(\phi_{2}^{2})h_{A}^{s_{A}}(\phi_{3}^{2})k_{A}^{s_{A}}(\phi_{4}^{2}) & . & . & [g_{A}^{n}(\phi_{2}^{2})h_{A}^{n}(\phi_{3}^{2})k_{A}^{n}(\phi_{4}^{2})]^{2} \\ . & . & . & & . & . & . & . \\\nonumber . & . & . & & . & . & . & . \\\nonumber [g_{A}^{1}(\phi_{2}^{k})h_{A}^{1}(\phi_{3}^{k})k_{A}^{1}(\phi_{4}^{k})]^{2} & . & . & & g_{A}^{j_{A}}(\phi_{2}^{k})h_{A}^{j_{A}}(\phi_{3}^{k})k_{A}^{j_{A}}(\phi_{4}^{k})g_{A}^{s_{A}}(\phi_{2}^{k})h_{A}^{s_{A}}(\phi_{3}^{k})k_{A}^{s_{A}}(\phi_{4}^{k}) & . & . &[g_{A}^{n}(\phi_{2}^{k})h_{A}^{n}(\phi_{3}^{k})k_{A}^{n}(\phi_{4}^{k})]^{2} \\\nonumber . & . & . & & . & . & . & . \\\nonumber . & . & . & & . & . & . & . \\\nonumber [g_{A}^{1}(\phi_{2}^{n})h_{A}^{1}(\phi_{3}^{n})k_{A}^{1}(\phi_{4}^{n})]^{2} & . & . & & g_{A}^{j_{A}}(\phi_{2}^{n})h_{A}^{j_{A}}(\phi_{3}^{n})k_{A}^{j_{A}}(\phi_{4}^{n})g_{A}^{s_{A}}(\phi_{2}^{n})h_{A}^{s_{A}}(\phi_{3}^{n})k_{A}^{s_{A}}(\phi_{4}^{n}) & . & . & [g_{A}^{n}(\phi_{2}^{n})h_{A}^{n}(\phi_{3}^{n})k_{A}^{n}(\phi_{4}^{n})]^{2} \\\nonumber \end{array} \right)\end{aligned}$$ $$\begin{aligned} \label{23} \times\left( \begin{array}{c} [f_{A}^{1}(\phi_{1})]^{2} \\ f_{A}^{1}(\phi_{1})f_{A}^{2}(\phi_{1}) \\ . \\ . \\ f_{A}^{j_{A}}(\phi_{1})f_{A}^{s_{A}}(\phi_{1}) \\ . \\ . 
\\\nonumber [f_{A}^{n}(\phi_{1})]^{2} \\ \end{array} \right)=\left( \begin{array}{c} 1 \\ 1 \\ . \\ . \\ 1 \\ . \\ . \\ 1 \\ \end{array} \right)\\\end{aligned}$$ In the above matrix equation we have dropped some unnecessary indices for simplicity, and the $\phi_{l}^{k}$’s are arbitrary angles. We would like to solve the above matrix equation to derive the column-matrix element functions $f_{A}^{j_{A}}(\phi_{1})$ with $j_{A}=1,...,n $. It is not complicated to show that the above matrix equation is solvable if the matrix equations constructed from $A(\phi_{1},\phi_{k}, \phi_{l},\phi_{m}, \lambda)= \pm 1$ (similar to the matrix equation with fixed $\phi_{1}$ and arbitrary $\phi^{k}_{2},\phi^{l}_{3}$ and $\phi^{m}_{4}$, with $k=1,...,n$) are invertible. Such a matrix equation has the following form: $$\begin{aligned} \label{24} (ghk)_{n\times n}(f)_{n\times 1}=(\pm1)_{n\times1}\end{aligned}$$ If one of the rows or columns of $(ghk)$ were a linear combination of the other rows or columns, respectively, then the matrix $(ghk)$ would have no inverse. In the first case (one of the rows equals a linear combination of the other rows), we obtain the relations: $$\begin{aligned} \label{25} g_{A}^{j_{A}}(\phi^{l}_{2},\lambda)=g_{A}^{j_{A}}(\phi^{q}_{2},\lambda)\nonumber\\ h_{A}^{j_{A}}(\phi^{l}_{3},\lambda)=h_{A}^{j_{A}}(\phi^{q}_{3},\lambda)\nonumber\\ k_{A}^{j_{A}}(\phi^{l}_{4},\lambda)=k_{A}^{j_{A}}(\phi^{q}_{4},\lambda)\end{aligned}$$ In other words, $g,h,k$ are independent of their arguments. In the other case (one of the columns equals a linear combination of the other columns), we would have, for example: $$\begin{aligned} g_{A}^{1}(\phi_{l},\lambda)h_{A}^{1}(\phi_{q},\lambda)k_{A}^{1}(\phi_{s},\lambda)=g_{A}^{2}(\phi_{l},\lambda)h_{A}^{2}(\phi_{q},\lambda)k_{A}^{2}(\phi_{s},\lambda)\end{aligned}$$ That is, at least two terms in the expansion (\[6\]) of $A$ are equal to each other.
It is not complicated to show that after a rearrangement of the terms of $A$, we get the same matrix equation as (\[24\]). After solving the matrix equation (\[23\]), we obtain the appropriate result: $$\begin{aligned} \label{26} [f_{A}^{j_{A}}(\phi_{1},\lambda)]^{2}=X(\Phi, \lambda)\hspace{1 cm} f_{A}^{j_{A}}(\phi_{1},\lambda)f_{A}^{s_{A}}(\phi_{1},\lambda)=Y(\Phi, \lambda)\end{aligned}$$ where $\Phi$ does not contain $\phi_{1}$. If we repeat equation (\[23\]) with $\phi_{1}$ changed (the arguments of $g,h$ and $k$ do not change), we derive equations similar to (\[26\]); thus: $$\begin{aligned} \label{27} [f_{A}^{k}(\phi^{l}_{1})]^{2}=[f_{A}^{k}(\phi^{m}_{1})]^{2}\hspace{1cm} f_{A}^{j_{A}}(\phi^{l}_{1})f_{A}^{s_{A}}(\phi^{l}_{1})=f_{A}^{j_{A}}(\phi^{m}_{1})f_{A}^{s_{A}}(\phi^{m}_{1})\end{aligned}$$ Appendix B: Correlation Function’s Matrix Equation ================================================== Matrix equation (\[12\]) has the following form: $$\begin{aligned} (GK)_{n\times n}(FH)_{n\times 1}=-(1)_{n\times1}\end{aligned}$$ We would like to solve the above matrix equation to derive the column-matrix element functions $(FH)_{i,1}$. If one of the rows or columns of $(GK)$ were a linear combination of the other rows or columns, respectively, then the matrix $(GK)$ would have no inverse. In the first case (one of the rows equals a linear combination of the other rows), we obtain the relation $$\begin{aligned} \prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{l}',\lambda)k_{i}^{j_{i}}(\phi_{l}',\lambda)=\prod_{i=A}^{D} g_{i}^{j_{i}}(\phi_{k}',\lambda)k_{i}^{j_{i}}(\phi_{k}',\lambda),\end{aligned}$$ This says that these combinations of $g$ and $k$ are independent of their arguments ($\phi_{l}'$).
In the other case (one of the columns equals a linear combination of the other columns) we would have, for example, $g_{A}^{1}(\phi_{1}',\lambda)k_{A}^{1}(\phi_{1}',\lambda)=g_{A}^{2}(\phi_{1}',\lambda)k_{A}^{2}(\phi_{1}',\lambda)$ (at least two terms in the expansion (\[6\]) of $A$ are equal to each other). It is not complicated to show that, after a rearrangement of the terms of $A$ and solving a matrix equation similar to Eq. (\[12\]), we get: $$\begin{aligned} f_{A}^{1}(\phi_{l},\lambda) h_{A}^{1}(\phi_{l},\lambda)+f_{A}^{2}(\phi_{l},\lambda) h_{A}^{2}(\phi_{l},\lambda)=f_{A}^{1}(\phi_{k},\lambda) h_{A}^{1}(\phi_{k},\lambda)+f_{A}^{2}(\phi_{k},\lambda) h_{A}^{2}(\phi_{k},\lambda)\end{aligned}$$ After using equation (\[27\]) and similar relations for $h_{A}^{j_{A}}$, we get the following relations: $$\begin{aligned} f_{A}^{1}(\phi_{l},\lambda) h_{A}^{1}(\phi_{l},\lambda)=f_{A}^{1}(\phi_{k},\lambda) h_{A}^{1}(\phi_{k},\lambda)\\\nonumber f_{A}^{2}(\phi_{l},\lambda) h_{A}^{2}(\phi_{l},\lambda)=f_{A}^{2}(\phi_{k},\lambda) h_{A}^{2}(\phi_{k},\lambda)\end{aligned}$$ This is the appropriate result for the derivation of Eq. (\[13\]) again. [99]{} A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. $\mathbf{47}$, 777 (1935). J. S. Bell, Physics (Long Island City, N.Y.) $\mathbf{1}$, 195 (1964); reprinted in J. S. Bell, *Speakable and Unspeakable in Quantum Mechanics* (Cambridge Univ. Press, Cambridge, 1987). A. Aspect, P. Grangier, and G. Roger, *Phys. Rev. Lett.* $\mathbf{49}$, 91-94 (1982); A. Aspect, J. Dalibard, and G. Roger, *Phys. Rev. Lett.* $\mathbf{49}$, 1804-1807 (1982); W. Tittel, *et al.*, *Phys. Rev. Lett.* $\mathbf{81}$, 3563-3566 (1998); J. Pan, *et al.*, *Nature* $\mathbf{403}$, 515 (2000); M. A. Rowe, *et al.*, *Nature* $\mathbf{409}$, 791 (2001); A. Vaziri, G. Weihs, and A. Zeilinger, *Phys. Rev. Lett.* $\mathbf{89}$, 240401 (2002); T. B. Pittman, and J. D. Franson, *Phys. Rev. Lett.* $\mathbf{90}$, 240401 (2003); Y. Huang, *et al.* *Phys. Rev. 
Lett.* $\mathbf{90}$, 250401 (2003); Z. Zhao, *et al.*, *Phys. Rev. Lett.* $\mathbf{91}$, 180401 (2003); W. Tittel, and G. Weihs, *Q. Inf. Comp.* $\mathbf{1}$, 3 (2001); Z. Chen, *et al.*, Phys. Rev. Lett. $\mathbf{90}$, 160408 (2003); C. Simon, C. Brukner, and A. Zeilinger, Phys. Rev. Lett. $\mathbf{86}$, 4427 (2001); C. A. Sackett, *et al.*, Nature $\mathbf{404}$, 256 (2000). J. Barrett, D. Collins, L. Hardy, A. Kent, and S. Popescu, Phys. Rev. A $\mathbf{66}$, 42111 (2002). A. Aspect, *Nature* $\mathbf{398}$, 189-190 (1999); N. Gisin, and H. Zbinden, *Phys. Lett. A* $\mathbf{264}$, 103-107 (1999); P. G. Kwiat, *et al.*, *Phys. Rev. A* $\mathbf{49}$, 3209-3220 (1994); M. Freyberger, *et al.* *Phys. Rev. A* $\mathbf{53}$, 1232-1244 (1996); A. Beige, W. J. Munro, and P. L. Knight, *Phys. Rev. A* $\mathbf{62}$, 0521021 (2000). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. $ \mathbf{23}$, 880 (1969). J. F. Clauser, and M.A. Horne, Phys. Rev.D $\mathbf{10}$, 526 (1974); S. Kochen, E. P. Specker, J. Math. $\mathbf{17}$, 59 (1967). D. M. Greenberger, M. A. Horne, A. Shimony, A. Zeilinger, Am. J. Phys. $\mathbf{58}$, 1131 (1990). L. Hardy, Phys. Rev. Lett. $\mathbf{71}$, 1665 (1993). A. Cabello, Phys. Rev. Lett. $\mathbf{86}$, 1911 (2001). V. Scarani, and N. Gisin, Phys. Lett. A $\mathbf{295}$, 167 (2002). A. Zeilinger, *Nature* $\mathbf{408}$, 639-641 (2000); A. Zeilinger, *Nature* $\mathbf{438}$, 743 (2005); Brukner and A. Zeilinger, Phys. Rev. Lett. $\mathbf{83}$, 3354 (1999); Brukner and A. Zeilinger, Phys. Rev. A $\mathbf{63}$, 022113 (2001); Brukner and A. Zeilinger, e-print arXiv:quantph/0212084; Brukner and A. Zeilinger, Phil. Trans. R. Soc. Lond. A $\mathbf{360}$, 1061 (2002), e-print arXiv:quant-ph/0201026; A. Shafiee, F. Safinejad, F. Naqsh Foundations of Physics Letters, $\mathbf{19}$, No. 1, (2006). J. Wenger, *et al.*, Phys. Rev. A $\mathbf{67}$, 12105 (2003); H. Jeong, *et al.*, Phys. Rev. A $\mathbf{67}$, 12106 (2003); Z. 
Chen and Y. Zhang, quant-ph/0103082; S. Massar, and S. Pironio, Phys. Rev. A $\mathbf{64}$, 62108 (2001); P. van Loock, and S. L. Braunstein, Phys. Rev. A $\mathbf{63}$, 22106 (2001); for a complete review see S. L. Braunstein and P. van Loock, Rev. Mod. Phys. $\mathbf{77}$, 513 (2005). D. Kaszlikowski, and M. Zukowski, Phys. Rev. A $\mathbf{61}$, 22114 (2000). A. Cabello, Phys. Rev. A $\mathbf{63}$, 22104 (2001); D. Kaszlikowski, *et al.*, Phys. Rev. Lett. $\mathbf{85}$, 4418 (2000). A. Cabello, Phys. Rev. Lett. $\mathbf{88}$, 60403 (2002). G. Brassard, R. Cleve and A. Tapp, Phys. Rev. Lett. $\mathbf{83}$, 1874 (1999); G. Brassard, quant-ph/0101005. B. F. Toner and D. Bacon, Phys. Rev. Lett. $\mathbf{91}$, 187904 (2003); D. Bacon and B. F. Toner , Phys. Rev. Lett. $\mathbf{90}$, 157904 (2003); K. Svozil, Phys. Rev. A **72** 050302 (R) (2005). S. Popescu and D. Rohrlich, Found. Phys. $\mathbf{24}$, 379 (1994); N. J. Cerf, N. Gisin, S. Massar, and S. Popescu, Phys. Rev. Lett. $\mathbf{94}$, 220403 (2005); J. Barrett and S. Pironio, Phys. Rev. Lett. $\mathbf{95}$, 140401 (2005); N. Brunner, N. Gisin and V. Scarani, New Journal of Physics $\mathbf{7}$, 88 (2005). A. J. Short *et al.*, Phys. Rev. A $\mathbf{73}$, 012101 (2006). J. Barrett and S. Pironio, Phys. Rev. Lett. $\mathbf{95}$, 140401 (2005). A. Fahmi, and M. Golshani, Phys. Lett. A, $\mathbf{306}$, 259, (2003). For example in ref [@Bohm], p. 140, the authors have defined some sets of hidden variables on which the results of measurement depend (hidden variables $\mu_{a}$ and $\mu_{b}$ associated with the corresponding pieces of measuring apparatus of the first and second particles, respectively; in addition, there will be some sets of variables $\lambda_{A}$ and $\lambda_{B}$, belonging to the particles $A$ and $B$ themselves). D. Bohm, and B. J. Hiley, *The Undivided Universe: an ontological interpretation of quantum theory* (Routledge 1993). L. Hardy, Phys. Lett. A $\mathbf{161}$, 21 (1991). A. 
Peres, private communication. A. Peres and D. R. Terno, *Rev. Mod. Phys.* $\mathbf{76}$, (2004). G. Brassard, *Nature Physics* $\mathbf{1}$, 2-4 (2005); G. Brassard, *et al.*, Phys. Rev. Lett. $\mathbf{96}$, 250401 (2006). [^1]: fahmi@theory.ipm.ac.ir [^2]: golshani@ihcs.ac.ir
--- abstract: 'We consider a compact star-shaped mean convex hypersurface $\Sigma^2\subset \mathbb{R}^3$. We prove that in some cases the flow exists until it shrinks to a point. We also prove that in the case of a surface of revolution which is star-shaped and mean convex, a smooth solution always exists up to some finite time $T < \infty$ at which the flow shrinks to a point asymptotically spherically.' address: - 'Department of Mathematics, Columbia University, New York, USA' - 'Department of Mathematics, Columbia University, New York, USA' author: - 'Panagiota Daskalopoulos$^*$' - 'Natasa Sesum$^*$' title: 'The harmonic mean curvature flow of nonconvex surfaces in $\mathbb{R}^3$' --- [^1] Introduction ============ We will consider in this work the deformation of a compact hyper-surface $\Sigma_t$ in ${\mathbb R}^3$ with no boundary under the [*harmonic mean curvature flow*]{} (HMCF), namely the flow $$\label{hmcf0} \frac{\partial P}{\partial t} = - \frac{G}{H} \, \nu$$ which evolves each point $P$ of the surface in the direction of its normal unit vector with speed equal to the [*harmonic mean curvature*]{} of the surface $G/H$, with $G$ denoting the Gaussian curvature of $\Sigma_t$ and $H$ its mean curvature. Here $\nu$ denotes the outer unit normal to the surface at $P$. This flow remains weakly parabolic without the condition that $\Sigma_t$ is strictly convex. However, it becomes degenerate at points where the Gaussian curvature $G$ vanishes. The existence of solutions to the HMCF with strictly convex smooth initial data was first shown by Andrews in [@An3], who also showed that under the HMCF strictly convex smooth surfaces converge to round points in finite time. In [@Di], Diëter established the short time existence of solutions to the HMCF with weakly convex smooth initial data. 
More precisely, Diëter showed that if at time $t=0$ the surface $\Sigma_0$ satisfies $ G \geq 0$ and $H>0$, then there exists a unique strictly convex smooth solution $\Sigma_t$ of the HMCF defined on $0<t< \tau$, for some $\tau>0$. By the results of Andrews, the solution will exist up to the time where its enclosed volume becomes zero. In [@CD] Caputo and the first author considered the highly degenerate case where the initial surface is weakly convex with flat sides, where the parabolic equation describing the motion of the surface becomes highly degenerate at points where both curvatures $G$ and $H$ become zero. The solvability and optimal regularity of the surface $\Sigma_t$, for $t >0$, was addressed and studied by viewing the flow as a [*free-boundary*]{} problem. It was shown that a surface $\Sigma_0$ of class $C^{k,\gamma}$ with $k \geq 1$ and $0<\gamma \leq 1$ at $t=0$ will remain in the same class for $t >0$. In addition, the strictly convex parts of the surface become instantly $C^\infty$ smooth up to the flat sides for $t >0$, and the boundaries of the flat sides evolve by the curve shortening flow. The case $G <0$ was recently studied by the first author and R. Hamilton in [@DH1], under the assumption that the initial surface is a surface of revolution with boundary, and has $G <0$ and $H <0$ everywhere. It was shown in [@DH1] that under certain boundary conditions, there exists a time $T_0 >0$ for which the HMCF admits a unique solution $\Sigma_t$ up to $T_0$, such that $H< 0$ for all $t< T_0$ and $H(\cdot,T_0) \equiv 0$ on some set of sufficiently large measure. In addition, the boundary of the surface evolves by the curve shortening flow. In this work we address the questions of short time and long time existence and regularity of the HMCF under the assumption that $\Sigma_0$ is star-shaped with $H >0$ but with $G$ changing sign. Let $M^2$ be a smooth, compact surface without boundary and $F_0: M^2 \to {\mathbb R}^3$ be a smooth immersion of $M^2$. 
Let us consider a smooth family of immersions $F(\cdot,t): M^2 \to {\mathbb R}^3$ satisfying $$\label{equation-HMCF} \frac{\partial F(p,t)}{\partial t} = -\kappa(p,t) \cdot \nu (p,t) \tag{HMCF}$$ where $\kappa = G/H$ denotes the harmonic mean curvature of $\Sigma_t := F(M^2,t)$ and $\nu$ its outer unit normal at every point. This is an equivalent formulation of the HMCF. For any compact two-dimensional surface $M^2$ which is smoothly embedded in ${\mathbb R}^3$ by $F: M^2 \to {\mathbb R}^3$, let us denote by $g=(g_{ij})$ the induced metric, and by $\nabla$ the induced Levi-Civita connection. The second fundamental form $A = \{h_{ij}\}$ is a symmetric bilinear form $A(p): T_p M \times T_p M \to {\mathbb R}$, defined by $A(u,v) = \langle \nabla_u\nu, v\rangle$. The Weingarten map $W(p): T_p M \to T_p M$ of the immersion $F$ with respect to the normal $\nu$ can be computed as $ h_j^i = g^{ik}h_{kj}$. The eigenvalues of $W(p)$ are called the principal curvatures of $F$ at $p$ and are denoted by $\lambda_1=\lambda_1(p)$ and $\lambda_2=\lambda_2(p)$. The mean curvature $H:= {\mathrm{trace}}(W) = \lambda_1 + \lambda_2$, the total curvature $|A|^2 := {\mathrm{trace}}(W^t\, W) = \lambda_1^2 + \lambda_2^2$ and the Gauss curvature $G = \det W = \lambda_1\, \lambda_2$. \[rem-andrews\] We will recall some standard facts about homogeneous of degree one functions of matrices that can be found in [@An1]. The speed $\kappa$ of the interface evolving by the HMCF can be viewed as a function of the Weingarten map $W$ and therefore, more generally, as a function $\kappa: S \to {\mathbb R}$, where $S$ denotes the set of all symmetric, positive transformations of $T\Sigma^2$ with strictly positive trace. Let $\lambda_1, \lambda_2$ be the eigenvalues of $A \in S$. We can then define the symmetric function $f(\lambda_1,\lambda_2) := \kappa(A)$. 
We have: - If $f$ is concave (convex) and $\lambda_i > \lambda_j$, then $\frac{\partial f}{\partial \lambda_i} - \frac{\partial f}{\partial \lambda_j}$ is negative (positive). - Let $\ddot{\kappa} \in T\Sigma \otimes T^*\Sigma \otimes T\Sigma \otimes T^*\Sigma$ denote the second derivative of $\kappa$ at the point $A \in S$. If $A$ is diagonal, then $$\label{eq-ddot} \ddot{\kappa}(\xi,\eta) = \sum_{p,q}\frac{\partial^2\kappa}{\partial\lambda_p\partial\lambda_q} \xi_p^p\eta_q^q + \sum_{p\neq q} \frac{\frac{\partial\kappa}{\partial\lambda_p} - \frac{\partial\kappa}{\partial\lambda_q}}{\lambda_p - \lambda_q}\xi_p^q\eta_p^q.$$ The outline of the paper is as follows: i. In section \[section-ste\] we will establish the short time existence of (HMCF), under the assumption that the initial surface $\Sigma_0$ is compact, of class $C^{2,1}$, and has $H >0$. To do so we will have to bound $H$ from below away from zero independently of ${\epsilon}$. This does not follow naturally from the evolution of $H$. To obtain such a bound we need to combine the evolution of $H$ with the evolution of the gradient of the second fundamental form. This explains our assumption that $\Sigma_0 \in C^{2,1}$. ii. In section \[section-ltee\] we will study the long time existence of the regularized flow (HMCF$_{\epsilon}$) (defined in the next section). We will show that there exists a maximal time of existence $T_{\epsilon}$ of a smooth solution $\Sigma^{{\epsilon}}_t$ of (HMCF$_{\epsilon}$) such that either $H(P_t,t) \to 0$, as $t \to T_{\epsilon}$, at some points $P_t \in \Sigma_t^{\epsilon}$, or $\Sigma^{{\epsilon}}_t$ shrinks to a point as $t \to T_{{\epsilon}}$. In addition, we will establish uniform in ${\epsilon}$ curvature bounds and curvature pinching estimates. In the special case where the initial data is a surface of revolution, we will show that the flow always exists up to the time when the surface shrinks to a point. iii. 
In section \[section-lte\] we will pass to the limit, ${\epsilon}\to 0$, to establish the long time existence of (HMCF). Short time Existence {#section-ste} ==================== Our goal in this section is to show the following short time existence result for the HMCF. \[thm-STE\] Let $\Sigma_0$ be a compact hyper-surface in ${\mathbb R}^3$ which is of class $C^{2,1}$ and has strictly positive mean curvature $H > 0$. Then, there exists $T > 0$ for which the harmonic mean curvature flow (\[equation-HMCF\]) admits a unique $C^{2,1}$ solution $\Sigma_t$, such that $H > 0$ on $t\in [0,T)$. Because the harmonic mean curvature flow becomes degenerate when the Gauss curvature of the surface $\Sigma_t$ changes sign, we will show the short time existence for equation (\[equation-HMCF\]) by considering the $\epsilon$-regularization of the flow defined by $$\label{equation-reg}\tag{HMCF$_{\epsilon}$} \frac{\partial F_{\epsilon}}{\partial t} = -(\frac{G}{H} + \epsilon \, H)\cdot \nu$$ and starting at $\Sigma_0$. We will denote by $\Sigma_t^{\epsilon}$ the surfaces obtained by evolving the initial surface $\Sigma_0$ along the flow (\[equation-reg\]). 
Since the right hand side of (\[equation-reg\]) can be viewed as a function of the second fundamental form matrix $A$, a direct computation shows that its linearization is given by $$\mathcal{L}_{\epsilon}(u) = \frac{\partial}{\partial h_k^i} \left (\frac{G}{H} + \epsilon H \right )\nabla_i \, \nabla_k u = a_{\epsilon}^{ik}\, \nabla_i \, \nabla_k u$$ with $$\label{eq-coeff} a_{\epsilon}^{ik} = \frac{\partial}{\partial h_k^i}\left (\frac{G}{H} + \epsilon H \right).$$ Notice that if we compute $a^{ik}_{\epsilon}$ in geodesic coordinates around the point (at which the matrix $A$ is diagonal) we get $$\label{eq-coeff1} a^{ik}_\epsilon = \left( \begin{array}{cc} \frac{\lambda_2^2}{(\lambda_1 + \lambda_2)^2} + \epsilon & 0 \\ 0 & \frac{\lambda_1^2}{(\lambda_1 + \lambda_2)^2} + \epsilon \\ \end{array} \right)$$ which is strictly positive definite, no matter what the principal curvatures are. The following short time existence result for the regularized flow follows from the standard theory on the existence of solutions to strictly parabolic equations. \[prop-ste-e\]Let $\Sigma_0$ be a compact hyper-surface in ${\mathbb R}^3$ which is of class $C^{1,1}$ and has strictly positive mean curvature $H > 0$. Then, there exists $T_{\epsilon}> 0$ for which the harmonic mean curvature flow (\[equation-reg\]) admits a smooth solution $\Sigma_t^{\epsilon}$, such that $H > 0$ on $t\in [0,T_{\epsilon})$. Our goal is to show that if the initial surface $\Sigma_0$ is of class $C^{2,1}$, then there is a $T_0> 0$ so that $T_{\epsilon} \ge T_0$ for every $\epsilon$, and that we have uniform estimates on $F_{\epsilon}$, independent of $\epsilon$, so that we can take a limit of $F_{\epsilon}$ as $\epsilon\to 0$ and obtain a solution of (\[equation-HMCF\]) that is of class $C^{2,1}$. The main obstacle here is to exclude the possibility that $H_{\epsilon}(P_{\epsilon},t_{\epsilon}) \to 0$, as ${\epsilon}\to 0$, for some points $P_{\epsilon}\in \Sigma_{t_{\epsilon}}^{\epsilon}$ and times $t_{\epsilon}\to 0$. 
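As a side illustration (not part of the argument above), the uniform ellipticity of the linearization can be checked numerically from (\[eq-coeff1\]): the coefficient matrix stays positive definite whenever $H = \lambda_1 + \lambda_2 > 0$, even at points where the Gauss curvature $\lambda_1\lambda_2$ is negative. The function name below is ours; the sketch assumes geodesic coordinates in which the Weingarten map is diagonal, as in (\[eq-coeff1\]).

```python
# Illustration only: the coefficient matrix a_eps of (eq-coeff1),
#   a_eps = diag(l2^2/H^2 + eps, l1^2/H^2 + eps),
# has eigenvalues bounded below by eps whenever H = l1 + l2 > 0.
import numpy as np

def a_eps(l1, l2, eps):
    """Coefficient matrix of L_eps at a point with principal curvatures l1, l2."""
    H = l1 + l2
    return np.diag([l2**2 / H**2 + eps, l1**2 / H**2 + eps])

# sample principal curvatures with H > 0, including points where G = l1*l2 <= 0
samples = [(2.0, 1.0), (2.0, -1.0), (0.5, 0.0), (3.0, -2.9)]
eps = 1e-2
min_eigs = [np.linalg.eigvalsh(a_eps(l1, l2, eps)).min() for l1, l2 in samples]
assert all(m >= eps - 1e-12 for m in min_eigs)  # uniformly elliptic, lower bound eps
```

Note how the ellipticity constant degenerates to $\epsilon$ exactly where one principal curvature vanishes, which is why the estimates of the following sections must be uniform in $\epsilon$.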
Notice that our flow cannot be defined at points where $H=0$. [*Notation.*]{} 1. When there is no possibility of confusion, we will use the letters $c$, $C$ and $T_0$ for various constants which are independent of ${\epsilon}$ but change from line to line. 2. Throughout this section we will denote by $\lambda_1, \lambda_2$ the two principal curvatures of the surface $\Sigma_t^{\epsilon}$ at a point $P$ and will assume that $\lambda_1 \ge \lambda_2.$ 3. When there is no possibility of confusion we will drop the index ${\epsilon}$ from $H, G, A, g_{ij}, h_{ij}$ etc. The next lemma follows directly from the computations of B. Andrews in [@An1] (Chapter 3). \[lemma-evolution\] If $\Sigma^{\epsilon}_t$ moves by (\[equation-reg\]), with speed $\kappa_\epsilon := \frac{G}{H} + \epsilon H$, the computation in [@An1] gives us the evolution equations i. $\frac{\partial}{\partial t}H = \mathcal{L}_{\epsilon} H + \frac{\partial^2 \kappa_\epsilon}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p\nabla_i h_m^l + \frac{\partial \kappa_\epsilon}{\partial h_m^l}h_p^lh_m^p\, H$ ii. $\frac{\partial}{\partial t} \kappa_\epsilon = \mathcal{L}_{\epsilon} \kappa_{\epsilon}+ \frac{\partial \kappa_\epsilon}{\partial h_j^i}h_{il}h_{lj}\, \kappa_\epsilon$. Note that if $\kappa := \frac{G}{H}$, we have $$\label{eq-useful0} \frac{\partial\kappa}{\partial h_p^q}h_m^qh_p^m = \sum_{i=1}^2\frac{\partial\kappa} {\partial\lambda_i}\lambda_i^2 = 2\, \kappa^2$$ hence $$\label{eq-useful00} \frac{\partial\kappa_{\epsilon}}{\partial h_p^q}h_m^qh_p^m = \sum_{i=1}^2\frac{\partial\kappa_{\epsilon}} {\partial\lambda_i}\lambda_i^2 = 2\, \kappa^2 + {\epsilon}\, |A|^2$$ with $|A|^2 = \lambda_1^2+\lambda_2^2$. 
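The algebraic identities (\[eq-useful0\]) and (\[eq-useful00\]) can be verified symbolically. The following sketch (our own illustration, not from the paper) checks them with sympy, using $\partial\kappa/\partial\lambda_1 = \lambda_2^2/H^2$:

```python
# Symbolic check of the zero-order coefficients:
#   sum_i (d kappa / d lambda_i) * lambda_i^2 = 2 kappa^2                 (eq-useful0)
#   sum_i (d kappa_eps / d lambda_i) * lambda_i^2 = 2 kappa^2 + eps |A|^2 (eq-useful00)
import sympy as sp

l1, l2, eps = sp.symbols('lambda1 lambda2 epsilon', positive=True)
kappa = l1 * l2 / (l1 + l2)          # harmonic mean curvature G/H
kappa_eps = kappa + eps * (l1 + l2)  # regularized speed kappa + eps*H

lhs = sp.diff(kappa, l1) * l1**2 + sp.diff(kappa, l2) * l2**2
lhs_eps = sp.diff(kappa_eps, l1) * l1**2 + sp.diff(kappa_eps, l2) * l2**2

assert sp.simplify(lhs - 2 * kappa**2) == 0
assert sp.simplify(lhs_eps - (2 * kappa**2 + eps * (l1**2 + l2**2))) == 0
```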
We then conclude from the above lemma that $H$ and $\kappa_{\epsilon}$ satisfy the evolution equations $$\label{eqn-H} \frac{\partial}{\partial t}H = \mathcal{L}_{\epsilon} H + \frac{\partial^2 \kappa_\epsilon}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p\nabla_i h_m^l + (2\, \kappa^2 + {\epsilon}\, |A|^2)\, H$$ and $$\label{eqn-k} \frac{\partial \kappa_\epsilon}{\partial t} = \mathcal{L}_{\epsilon} \kappa_{\epsilon}+ (2\, \kappa^2 + {\epsilon}\, |A|^2)\, \kappa_\epsilon.$$ We will now combine the above evolution equations to establish the following uniform bound on the second fundamental form. \[prop-H\] There exist uniform constants $C$ and $T_0$ so that $$\label{eqn-A} \max_{\Sigma_t^{{\epsilon}}}|A| \le C, \qquad \forall t\in [0,\min (T_{\epsilon}, T_0)\, ).$$ Recall that $H$ satisfies the equation (\[eqn-H\]). If we multiply this equation by $H$, we get $$\frac{\partial H^2}{\partial t} = \mathcal{L}_{\epsilon}(H^2) - 2a_{\epsilon}^{ik}\nabla_i H\nabla_k H + \frac{\partial^2 \kappa_{\epsilon}}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p\nabla_i h_m^l\cdot H + 2 (2 \kappa^2 + {\epsilon}\, |A|^2)\, H^2$$ with $\kappa=G/H$. Notice that the definiteness of the matrix $D^2 \kappa_{\epsilon}= [\frac{\partial^2 \kappa_{\epsilon}} {\partial h_p^q\partial h_m^l}]$ depends on the sign of $H$. It is easier to check this in geodesic coordinates around a point at which the Weingarten map is diagonalized. 
In those coordinates, by (\[eq-ddot\]), we have $$\sum_{p,q,m,l}\frac{\partial^2 \kappa_{\epsilon}}{\partial h_q^p\partial h_m^l}\nabla^i h_q^p\nabla_i h_m^l = \sum_{p,q}\frac{\partial^2\kappa_{{\epsilon}}}{\partial\lambda_p\partial\lambda_q}\nabla^i h_p^p\nabla_i h_q^q + \sum_{p\neq q}\frac{\frac{\partial\kappa_{{\epsilon}}}{\partial\lambda_p} - \frac{\partial\kappa_{{\epsilon}}}{\partial\lambda_q}}{\lambda_p - \lambda_q}(\nabla_i h_p^q)^2$$ where the matrix $D^2\kappa_{{\epsilon}} := [\frac{\partial^2\kappa_{{\epsilon}}}{\partial\lambda_p\partial\lambda_q}]$ is given by $$\label{eq-concavity} D^2\kappa_{\epsilon}= \left(\begin{array}{cc} -\frac{2\lambda_2^2}{(\lambda_1 + \lambda_2)^3} & \frac{2\lambda_1\lambda_2}{(\lambda_1+\lambda_2)^3} \\ \frac{2\lambda_1\lambda_2}{(\lambda_1+\lambda_2)^3} & -\frac{2\lambda_1^2}{(\lambda_1 + \lambda_2)^3} \\ \end{array} \right) = -\frac{2}{H^3}\left(\begin{array}{cc} \lambda_2^2 & -\lambda_1\lambda_2 \\ -\lambda_1\lambda_2 & \lambda_1^2\\ \end{array} \right)$$ and for $p\neq q$, $$\frac{\frac{\partial\kappa_{{\epsilon}}}{\partial\lambda_p} - \frac{\partial\kappa_{{\epsilon}}}{\partial\lambda_q}}{\lambda_p - \lambda_q} = \frac{\lambda_q^2 - \lambda_p^2}{H^2\, (\lambda_p - \lambda_q)} = -\frac{1}{H}.$$ It is now easy to see that $$\label{eq-definite-H} \frac{\partial^2 \kappa_{\epsilon}}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p\nabla_i h_m^l\cdot H \le 0$$ hence $$\label{eqn-H2} \frac{\partial H^2}{\partial t} \le \mathcal{L}_{\epsilon}(H^2) + 2\, (2 \kappa^2 + {\epsilon}\, |A|^2)\, H^2.$$ Similarly, from the evolution of $\kappa_{\epsilon}$, namely (\[eqn-k\]), we obtain $$\label{eqn-k2} \frac{\partial \kappa_\epsilon^2}{\partial t} \leq \mathcal{L}_{\epsilon} (\kappa_{\epsilon}^2) + 2\, (2 \kappa^2 + {\epsilon}\, |A|^2)\, \kappa_\epsilon^2.$$ We observe that because of the appearance of the second fundamental form $|A|^2$ in the zero order term of the equations (\[eqn-H2\]) and (\[eqn-k2\]), we cannot estimate the maximum of $H^2$ and $\kappa_{\epsilon}^2$ directly from each
equation using the maximum principle. This is because the surface is not convex. However, it is possible to estimate the maximum of $H^2+\kappa_{\epsilon}^2$ by combining the two evolution equations. To this end, we set $M=H^2 + \kappa_{\epsilon}^2$ and compute, by adding the last two equations, that $$\label{eqn-M} \frac{\partial M}{\partial t} \leq \mathcal{L}_{\epsilon} M + 2\, (2 \kappa^2 + {\epsilon}\, |A|^2) \, M.$$ We will show the bound $$\label{eqn-bound} 2 \kappa^2 + {\epsilon}\, |A|^2 \leq C\, (H^2 + \kappa_{\epsilon}^2)$$ for some uniform in ${\epsilon}$ constant $C$, where $\kappa_{\epsilon}= \kappa+ {\epsilon}\, H$ and $\kappa = G/H$. Since $\kappa \leq \kappa_{\epsilon}$ it will be sufficient to show that $$\label{eqn-AA} |A|^2 \leq C\, (H^2 + \kappa^2).$$ Expressing everything in terms of the principal curvatures $\lambda_1$ and $\lambda_2$, the above reduces to the estimate $${\lambda}_1^2 + {\lambda}_2^2 \leq C\, \left ( ({\lambda}_1+{\lambda}_2)^2 + \left ( \frac{{\lambda}_1\, {\lambda}_2}{{\lambda}_1+{\lambda}_2} \right )^2 \right ).$$ If ${\lambda}_2=0$ the above inequality is clearly satisfied. Assume that ${\lambda}_2 \leq {\lambda}_1$ with ${\lambda}_2 \neq 0$ and set $\mu = {{\lambda}_1}/{{\lambda}_2}$. Since $H={\lambda}_1+{\lambda}_2 >0$, we conclude that $|\mu| \geq 1$. Then, the last inequality is expressed as $$1+ \frac 1{\mu^2} \leq C \, \left ( \frac{(1+\mu)^2}{\mu^2} + \frac{1}{(1+\mu)^2} \right )$$ which reduces to showing that $$\frac{(1+\mu)^2}{\mu^2} + \frac{1}{(1+\mu)^2} \geq c >0$$ for a uniform constant $c$. This inequality is clearly satisfied when $|\mu| \geq 1$. Hence, the bound holds. Applying it in the evolution inequality for $M$ above, we conclude that $$\frac{\partial M}{\partial t} \leq \mathcal{L}_{\epsilon} M + \theta \, M^2$$ for some uniform constant $\theta$.
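The pinching inequality just established depends only on the ratio $\mu = \lambda_1/\lambda_2$, so it is easy to sanity-check numerically. The following short script (an illustrative sketch, not from the text; the sampling range and cutoff are arbitrary choices) samples principal curvatures with $H > 0$ and confirms that the ratio $(\lambda_1^2+\lambda_2^2)/(H^2+\kappa^2)$ stays below a uniform constant:

```python
import numpy as np

# Sanity check of |A|^2 <= C (H^2 + kappa^2) for H = l1 + l2 > 0,
# kappa = l1 * l2 / H.  The ratio depends only on mu = l1 / l2,
# so random sampling explores it fully.
rng = np.random.default_rng(0)

ratios = []
while len(ratios) < 100000:
    l1, l2 = rng.uniform(-10.0, 10.0, size=2)
    H = l1 + l2
    if H <= 1e-3:  # keep H > 0, away from the degenerate H = 0 case
        continue
    kappa = l1 * l2 / H
    ratios.append((l1**2 + l2**2) / (H**2 + kappa**2))

print(max(ratios))  # stays below a uniform constant C
```

The observed maximum sits well below $2$, consistent with the uniform constant $C$ in the estimate.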
The maximum principle then implies the differential inequality $$\frac{d M_{\max}}{d t} \leq \theta \, M_{\max}^2$$ which readily implies that $$\max_{\Sigma_t^{{\epsilon}}} M \le C, \qquad \forall t\in [0,\min (T_{\epsilon}, T_0)\, )$$ for some uniform in ${\epsilon}$ constants $C$ and $T_0$. Combined with the bound above, this finishes the proof of the proposition. To establish the short time existence of the flow on $(0,T_0)$ for some $T_0 >0$, we still need to bound $H$ from below away from zero independently of ${\epsilon}$. This does not follow directly from the evolution of $H$, because the equation carries a negative term which is quadratic in the derivatives of the second fundamental form. Hence, to establish the lower bound on $H$ we need to combine the evolution of $H$ with the evolution of the gradient of the second fundamental form. This is shown in the next proposition. \[prop-nabla-curv\] There exist uniform in ${\epsilon}$ positive constants $T_0 $, $C$ and $\delta$, so that $$|\nabla A| \le C \quad \mbox{and} \quad H \ge \delta, \quad \mbox{on}\,\, \Sigma_t^{{\epsilon}}$$ for $t\in [0, \min (T_{\epsilon},T_0)\,)$. We will first compute the evolution equation for $\sum_{i,j}|\nabla h_i^j|^2$. Let us first see how $h_i^j$ evolves.
We have $$\frac{\partial}{\partial t}h_i^j = \mathcal{L}_{{\epsilon}}(h_i^j) + \frac{\partial^2 \kappa_{{\epsilon}}}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p \nabla^j h_m^l + \frac{\partial \kappa_{{\epsilon}}}{\partial h_m^l}h_p^l h_m^p h_i^j.$$ From the previous equation, commuting derivatives, we get $$\label{eq-der} \begin{split} \frac{\partial}{\partial t} \nabla_r h_i^j &= \mathcal{L}_{{\epsilon}}(\nabla_r h_i^j) + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q \partial h_n^s}\nabla_r h_n^s\nabla_p\nabla_q h_i^j + \frac{\partial \kappa_{{\epsilon}}}{\partial h_p^q}R_{rpqm}\nabla_m h_i^j \\ &+ \frac{\partial^3 \kappa_{\epsilon}}{\partial h_p^q\partial h_m^l\partial h_n^s}\nabla_rh_n^s\nabla^ih_p^q\nabla_j h_m^l + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l}\nabla_r\nabla^ih_p^q \nabla_j h_m^l + \\ &+ \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l}\nabla^ih_p^q \nabla_r\nabla_j h_m^l + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_m^l\partial h_n^s}\nabla_r h_n^sh_p^lh_m^ph_i^j + \\ &+ \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}\nabla_r h_p^l h_m^p h_i^j + \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}h_p^l\nabla_rh_m^p h_i^j + \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}h_p^l h_m^p \nabla_rh_i^j. \end{split}$$ Let $w = \sum_{i,j}|\nabla h_i^j|^2$.
Since $|\nabla h_i^j|^2 = g^{pq}\nabla_p h_i^j\nabla_q h_i^j$ and $\frac{\partial g_{ij}}{\partial t} = 2\kappa_{{\epsilon}}h_{ij}$, we get $$\begin{split} \label{eq-nabla} \frac{\partial w}{\partial t} &= -4g^{pa}g^{qb}\kappa_{{\epsilon}}h_{ab}\nabla_p h_i^j\nabla_q h_i^j + \mathcal{L}_{{\epsilon}}(w) - 2\dot{\kappa}_{{\epsilon}}(\nabla^2h_i^j, \nabla^2 h_i^j) + \\ &+ g^{pq}\frac{\partial\kappa_{{\epsilon}}}{\partial h_a^b}R_{pabs}\nabla_sh_i^j\nabla_q h_i^j + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra}\nabla_r h_m^l \nabla_p\nabla_q h_i^j \nabla_a h_i^j + \\ &+ \frac{\partial^3\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l \partial h_n^s} g^{ra}\nabla_r h_n^s \nabla^i h_p^q\nabla_j h_m^l \nabla_a h_i^j + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra} \nabla_r\nabla^i h_p^q\nabla_j h_m^l\nabla_a h_i^j + \\ &+ \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra} \nabla^i h_p^q\nabla_r\nabla_j h_m^l\nabla_a h_i^j + \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_m^l\partial h_n^s}g^{ra}\nabla_rh_n^s h_p^lh_m^p h_i^j \nabla_a h_i^j + \\ &+ \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}g^{ra}\nabla_r h_p^l h_m^p h_i^j \nabla_a h_i^j + \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}g^{ra} h_p^l \nabla_r h_m^p h_i^j \nabla_a h_i^j + \\ &+ \frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l}g^{ra} h_p^l h_m^p \nabla_rh_i^j \nabla_a h_i^j. \end{split}$$ Whenever the indices $i$ and $j$ appear in the previous equation, summation over them is understood.
Also, $$\dot{\kappa}_{{\epsilon}}(\nabla^2 h_i^j, \nabla^2 h_i^j) = \frac{\partial\kappa_{{\epsilon}}}{\partial h_p^q}g^{cd} \nabla_q\nabla_c h_i^j\nabla_p\nabla_d h_i^j.$$ Notice that since $|A| \le C$ for all $t\in [0, \min (T_{{\epsilon}},T_0)\, )$, we have $$|\frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l}| \le \frac{C_1}{H^3} \quad \mbox{and} \quad |\frac{\partial^3\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l\partial h_n^s}| \le \frac{C_1}{H^4}$$ for a uniform constant $C_1$. We next compute the evolution equation for ${1}/{H}$ from the evolution of $H$ above. By direct computation we get that $$\frac{\partial}{\partial t}(\frac{1}{H}) = \mathcal{L}_{{\epsilon}}(\frac{1}{H}) - \frac{2}{H^3}\frac{\partial\kappa_{{\epsilon}}}{\partial h_p^q} \nabla_p H\nabla_q H - \frac{1}{H^2}\frac{\partial^2\kappa_{{\epsilon}}} {\partial h_p^q\partial h_m^l}\nabla^i h_p^q\nabla_i h_m^l - \frac{1}{H}\frac{\partial\kappa_{{\epsilon}}}{\partial h_m^l} h_p^l h_m^p.$$ Discarding the second term on the right hand side, which is negative, we easily conclude the differential inequality $$\frac{\partial}{\partial t}(\frac{1}{H})\le \mathcal{L}_{{\epsilon}}(\frac{1}{H}) + \frac{C\,w}{H^5} + \frac{C}{H}.$$ Combining the evolution equations of $w$ and $1/H$ we will now compute the evolution equation for $$\mathcal{V} := w + \frac{1}{H}.$$ We look at the point $(P,t)$ at which $\mathcal{V}$ achieves its maximum at time $t$ and choose coordinates around $P$ so that both the second fundamental form and the metric are diagonal at $P$.
Using the exact form of the coefficients $a^{ik}_{\epsilon}= \frac{\partial\kappa_{{\epsilon}}}{\partial h_i^k}$ computed in (\[eq-coeff\]) we get $$\begin{split} \label{eq-good} -2\sum_{i,j}\dot{\kappa}_{{\epsilon}}(\nabla^2 h_i^j, \nabla^2 h_i^j) &= -2\sum_{i,j} \frac{\partial\kappa_{{\epsilon}}}{\partial h_p^q} g^{cd}\nabla_q\nabla_c h_i^j \nabla_p\nabla_d h_i^j\\ &= -\frac{2}{H^2}\sum_{i,j}[\, \lambda_2^2 (\nabla_1\nabla_1h_i^j)^2 + \lambda_1^2(\nabla_2\nabla_2 h_i^j)^2 \\ &\,\,\, \quad+ (\lambda_1^2 + \lambda_2^2)\, (\nabla_1\nabla_2 h_i^j)^2 \, ]. \end{split}$$ Our goal is to absorb all the remaining terms that contain second order derivatives, appearing in the evolution equation for $\mathcal{V}$, into the good term (\[eq-good\]). By looking at the evolution equation of $w$, we see that those second order terms are $$\mathcal{O} = \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra}\nabla_r h_m^l\nabla_p\nabla_q h_i^j\nabla_a h_i^j,$$ $$\mathcal{P} = \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra}\nabla_r\nabla^i h_p^q\nabla_j h_m^l\nabla_a h_i^j$$ and $$\mathcal{R} = \frac{\partial^2\kappa_{{\epsilon}}}{\partial h_p^q\partial h_m^l} g^{ra}\nabla^ih_p^q\nabla_r\nabla_jh_m^l\nabla_a h_i^j$$ where we understand summation over all indices. Denote by $\xi_m^l := \nabla_r h_m^l$ and by $\eta_p^q := \nabla_p\nabla_q h_i^j$.
If we specify the coordinates around the maximum point $P$ in which $W$ and $g$ are diagonal, by (\[eq-ddot\]) $$\begin{split} \label{equation-O} \mathcal{O} &= g^{ra}\nabla_r h_i^j \, \left (\sum_{p,q}\frac{\partial^2\kappa}{\partial\lambda_p\partial\lambda_q} \xi_p^p\eta_q^q + \sum_{p\neq q} \frac{\frac{\partial\kappa}{\partial\lambda_p} - \frac{\partial\kappa}{\partial\lambda_q}}{\lambda_p - \lambda_q}\xi_p^q\eta_p^q \right ) \\ &= \nabla_r h_i^j \left (-\frac{2\lambda_2^2}{H^3}\nabla_r h_1^1\nabla_1\nabla_1 h_i^j - \frac{2\lambda_1^2}{H^3}\nabla_r h_2^2\nabla_2\nabla_2 h_i^j + \frac{2\lambda_1\lambda_2}{H^3}\nabla_r h_1^1\nabla_2\nabla_2 h_i^j + \right .\\ &+ \left . \frac{2\lambda_1\lambda_2}{H^3}\nabla_r h_2^2\nabla_1\nabla_1 h_i^j - \frac{1}{H}(\nabla_1\nabla_2 h_i^j\nabla_r h_1^2 + \nabla_2\nabla_1 h_i^j \nabla_r h_2^1) \right ). \end{split}$$ Since $|A| \le C$ and $$\frac{1}{H} = \frac{\lambda_1 + \lambda_2}{H^2} \le \frac{|\lambda_1| + |\lambda_2|}{H^2},$$ by the Cauchy-Schwarz inequality we can estimate $\mathcal{O}$ term by term, namely $$\begin{aligned} \left | 2\nabla_r h_i^j\frac{\lambda_2^2}{H^3}\nabla_r h_1^1\nabla_1\nabla_1 h_i^j \right | &=& \left |2\nabla_r h_i^j\frac{\lambda_2}{H^2}\nabla_r h_1^1 \right |\, \left |\frac{\lambda_2}{H}\, \nabla_1\nabla_1 h_i^j \right | \\ &\le& C\frac{w^2}{H^4} + \beta_1\frac{\lambda_2^2}{H^2}|\nabla_1\nabla_1 h_i^j|^2 \\ &\le& C\frac{w^2}{H^4} + \beta_1\sum_{ij}\dot{\kappa}_{\epsilon}(\nabla^2h_i^j, \nabla^2 h_i^j)\end{aligned}$$ and $$\begin{aligned} \left |2\frac{\lambda_1\lambda_2}{H^3}\nabla_r h_i^j\nabla_r h_1^1\nabla_1\nabla_1 h_i^j \right | &=& \left |2\frac{\lambda_1}{H^2}\, \nabla_r h_1^1\nabla_r h_i^j \right | \, \left |\frac{\lambda_2}{H}\nabla_1\nabla_1 h_i^j \right | \\ &\le& C\frac{w^2}{H^4} + \beta_1\frac{\lambda_2^2}{H^2}|\nabla_1\nabla_1 h_i^j|^2 \\ &\le& C\frac{w^2}{H^4} + \beta_1\sum_{ij}\dot{\kappa}_{\epsilon}(\nabla^2 h_i^j, \nabla^2 h_i^j)\end{aligned}$$ and $$\begin{aligned} \left |
\frac{1}{H}\nabla_1\nabla_2 h_i^j\nabla_rh_1^2\nabla_rh_i^j \right | &\le& \left |\frac{1}{H^2}\nabla_r h_1^2\nabla_r h_i^j \right |\, \left |\frac{\lambda_1 + \lambda_2}{H}\nabla_1\nabla_2 h_i^j \right |\\ &\le& C\frac{w^2}{H^4} + \beta_1\frac{\lambda_1^2 + \lambda_2^2}{H^2}|\nabla_1\nabla_2 h_i^j|^2 \\ &\le& C\frac{w^2}{H^4} + \beta_1\sum_{ij}\dot{\kappa}_{\epsilon}(\nabla^2 h_i^j,\nabla^2 h_i^j)\end{aligned}$$ where $\beta_1 > 0$ is a uniform small number. We can estimate the other terms in $\mathcal{O}$ the same way, and combining all these estimates yields $$|\mathcal{O}| \le C\frac{w^2}{H^4} + \beta \sum_{i,j} \dot{\kappa}_{{\epsilon}}(\nabla^2 h_i^j, \nabla^2 h_i^j)$$ where $\beta > 0$ is a small fixed number. In order to estimate $\mathcal{P}$, we would like somehow to switch the pair of indices $\{i,r\}$ with $\{p,q\}$, so that estimating $\mathcal{P}$ reduces to the previous case of $\mathcal{O}$. We will use the Gauss-Codazzi equations in the form $$\nabla_l h_{ij} = \nabla_i h_{lj}.$$ In our special coordinates at the point $P$ we have $$\begin{split} \label{eq-reduction} \nabla_r\nabla^i h_p^q &= \nabla_r\nabla^i(h_{ps}g^{qs}) = \nabla_r(g^{ij}\nabla_j(h_{ps}g^{qs})) \\ &= \nabla_r(g^{ij}g^{qs}) \cdot \nabla_j h_{ps} + g^{ij}g^{qs}\nabla_r\nabla_j h_{ps} + h_{ps}\nabla_r(g^{ij}\nabla_j g^{qs}) \\ &= \nabla_p\nabla_q h_{rj} + \nabla_r(g^{ij}g^{qs}) \cdot \nabla_j h_{ps} + h_{pp}\nabla_r(g^{ij}\nabla_j g^{pq}). \end{split}$$ We have the following: [*Claim.*]{} There is a uniform constant $\tilde{C}$ depending on $C$, so that $$\label{claim-bound-g} |g(\cdot,t)|_{C^2} \le \tilde{C}$$ as long as $|A| \le C$.
To prove the Claim, we observe that in geodesic coordinates $\{x_i\}$ around a point $p$ (which corresponds to the origin of those coordinates) we have $$\label{equation-g} g_{ij}(x) = \delta_{ij} + \frac{1}{3}R_{ipqj}x^px^q + O(|x|^3)$$ and an easy computation shows that $$\nabla_p\nabla_q g_{ij} (0) = -\frac{1}{3}R_{ipqj}.$$ By the Gauss equations, we have $R_{ipqj} = h_{iq}h_{pj} - h_{ij}h_{pq}$, which yields $|\nabla_p\nabla_q g_{ij}| \le \tilde{C}$ as long as $|A| \le C$. This together with (\[equation-g\]) proves the Claim. Combining the above, we obtain, as in the estimate of $\mathcal{O}$, the bound $$|\mathcal{P}| \le \frac{Cw^2}{H^4} + \beta\sum_{i,j}\dot{\kappa}_{{\epsilon}} (\nabla^2 h_i^j, \nabla^2 h_i^j).$$ Similarly, we get the estimate for $\mathcal{R}$. We conclude that $$\label{eqn-oooo} |\mathcal{O}| + |\mathcal{P}| + |\mathcal{R}| \le \frac{Cw^2}{H^4} + 3\beta\sum_{i,j}\dot{\kappa}_{{\epsilon}} (\nabla^2 h_i^j, \nabla^2 h_i^j).$$ Choosing $\beta > 0$ so that $3\beta < 2$ and analyzing the right hand side of the evolution of $w$ term by term, we obtain the following estimate at the maximum point $P$ of $\mathcal{V}$ at time $t$ $$\frac{d \, \mathcal{V}_{\max}}{dt} \le Cw + \frac{Cw^2}{H^4} + \frac{C\sqrt{w}}{H^3} + \frac{Cw}{H^5} + \frac{C}{H}.$$ Young’s inequality implies the estimates $$\frac{w^2}{H^4} \le w^6 + \frac{1}{H^6} \le \mathcal{V}^6$$ and $$\frac{w}{H^5} \le w^6 + \frac{1}{H^6} \le \mathcal{V}^6$$ and $$\frac{\sqrt{w}}{H^3} \le w + \frac{1}{H^6} \le \mathcal{V} + \mathcal{V}^6.$$ Hence, denoting by $f(t) = \mathcal{V}_{\max}(t)$ we obtain $$\label{eq-good-f} \frac{d f}{dt} \le C\, (f + f^6)$$ which implies the existence of uniform constants $\bar C$ and $T_0$, depending only on $C$ and $f(0)$, so that $$\sup_{\Sigma_t^{{\epsilon}}}\, (\frac{1}{H} + \sum_{i,j}|\nabla h_i^j|^2) \le \bar{C}, \qquad \mbox{for all} \,\, t \in [0,\min (T_{\epsilon}, T_0)\, ).$$ This finishes the proof of the proposition.
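The closing ODE argument can be made concrete: any quantity satisfying $f' \le C(f+f^6)$ is dominated by the solution of the corresponding ODE, which blows up in finite time but remains bounded on a short initial interval depending only on $C$ and $f(0)$. Below is a minimal numerical sketch; the constants $C = 1$, $f(0) = 1$ and the horizon $T_0 = 0.1$ are illustrative choices, not taken from the text:

```python
# Forward-Euler integration of the comparison ODE f' = C (f + f^6).
# Its solution dominates any quantity obeying f' <= C (f + f^6), and
# stays bounded on [0, T_0] for T_0 small, even though it blows up later.
C, f0, T0, dt = 1.0, 1.0, 0.1, 1e-5

f, t = f0, 0.0
while t < T0:
    f += dt * C * (f + f**6)
    t += dt

print(f)  # finite value at t = T_0
```

Shrinking $T_0$ (or $f(0)$) only improves the bound, which is why the uniform constants $\bar C$ and $T_0$ depend solely on $C$ and the initial data.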
Having established all of the above curvature estimates, we can now justify the short time existence of a $C^{2,1}$-solution to (\[equation-HMCF\]). For every ${\epsilon}> 0$, let $T_{{\epsilon}}$ be the maximal time so that $$|A|_{C^1(\Sigma_t^{{\epsilon}})} \le C, \qquad \mbox{and} \qquad H \ge \delta >0$$ where $C, \delta$ are constants taken from Proposition \[prop-nabla-curv\]. Take now ${\epsilon}_i\to 0$. We have that $|A|_{C^1(\Sigma_t^{{\epsilon}_i})} \le C$, which implies $|F_{{\epsilon}_i}|_{C^{2,1}} \le C$, for all $t\in [0,T_0]$. By the Arzela-Ascoli theorem there is a subsequence so that $F_{{\epsilon}_i}(\cdot,t)\stackrel{C^{2,1}}{\to} F(\cdot,t)$, where $F(\cdot,t)$ is a $C^{2,1}$ solution to (\[equation-HMCF\]). Since we have a comparison principle for $C^{2,1}$ solutions to (\[hmcf0\]), as discussed above, the uniqueness of a $C^{2,1}$ solution immediately follows. Long time existence for the ${\epsilon}$-flow {#section-ltee} ============================================= In this section we will study the long time existence for the ${\epsilon}$-regularized flow, assuming that $\Sigma_0$ is an arbitrary smooth surface with mean curvature $H > 0$ and Euler characteristic $\chi(\Sigma_0) > 0$, which is star-shaped with respect to the origin. Throughout the section we fix ${\epsilon}>0$ sufficiently small, we denote by $\Sigma_t^{\epsilon}$ the surface evolving by and, to simplify the notation, we drop the index ${\epsilon}$ from $F,\nu,H,G,\kappa,A,g_{ij}, h_{ij}$ etc. The ${\epsilon}$-flow has one obvious advantage over (\[hmcf0\]): it is not degenerate, and therefore it has smoothing properties. Indeed, it follows from the Krylov and Schauder estimates that a $C^{1,1}$ solution of is $C^\infty$ smooth.
Assume that $\Sigma_t^{\epsilon}$ is a solution of on $[0,T_{\epsilon})$ and let us consider the evolution equation for the area form $d\mu_t$, namely $$\frac{\partial }{\partial t} \, d\mu_t = -2 \, ( \frac{G}{H} + {\epsilon}\, H)\, \frac{H}{2} \, d\mu_t = -(G + {\epsilon}\, H^2 ) \, d\mu_t.$$ Integrating it over the surface $\Sigma_t^{\epsilon}$ we obtain the following ODE for the total area $\mu_t(\Sigma_t^{\epsilon})$ of the surface $\Sigma_t^{\epsilon}$ $$\frac{d}{d t}\, \mu_t(\Sigma_t^{\epsilon}) = - \int_{\Sigma_t^{\epsilon}} (G + {\epsilon}\, H^2 ) \, d\mu_t.$$ By the Gauss-Bonnet formula we have $$\int_{\Sigma_t^{\epsilon}} G \, d\mu_t = 2\pi\, \chi(\Sigma_t).$$ Since $\Sigma_0$ is a surface with positive Euler characteristic, by the uniformization theorem $\chi(\Sigma_t) = 2$ and therefore we conclude the equation $$\label{eqn-vol} \frac{d}{d t} \, \mu_t(\Sigma_t^{\epsilon}) = - 4\, \pi - {\epsilon}\, \int_{\Sigma_t^{\epsilon}} H^2 \, d\mu_t.$$ Denote by $T_{\epsilon}$ the maximal time of existence of the flow. Integrating in time from $0$ to $T_{\epsilon}$, solving with respect to $T_{\epsilon}$ and using that $\mu_t(\Sigma_t^{\epsilon}) \geq 0$, gives $$T_{\epsilon}\leq \frac 1{4\pi} \mu_0(\Sigma_0) - \frac {\epsilon}{4\pi} \, \int_0^{T_{\epsilon}} \int_{\Sigma_t^{\epsilon}} H^2 \, d\mu_t\, dt.$$ This, in particular, shows that $$\label{eqn-Te} T_{\epsilon}\leq \frac 1{4\pi} \mu_0(\Sigma_0)$$ where $ \mu_0(\Sigma_0) $ is the area of the initial surface $\Sigma_0$. Our goal is to prove the following result, concerning the long time existence of the flow . We will also establish curvature bounds and curvature pinching estimates which are independent of ${\epsilon}$. \[thm-e\] Let $\Sigma_0$ be a compact star-shaped hypersurface in ${\mathbb R}^3$ which is of class $C^{1,1}$ and has strictly positive mean curvature $H > 0$.
Then, there exists a maximal time of existence $T_{\epsilon}$ of a smooth flow $\Sigma_t^{{\epsilon}}$ such that either: (i) $H(P_t,t) \to 0$, as $t \to T_{\epsilon}$ at some points $P_t \in \Sigma_t^{\epsilon}$, or (ii) $\Sigma_t^{{\epsilon}}$ shrinks to a point as $t \to T_{{\epsilon}}$ and $T_{\epsilon}$ is given explicitly by $$\label{eq-ext-time} T_{\epsilon}= \frac 1{4\pi} \mu_0(\Sigma_0)- \frac {{\epsilon}}{4\pi} \, \int_0^{T_{\epsilon}} \int_{\Sigma_t^{\epsilon}} H^2 \, d\mu_t\, dt$$ where $\mu_0(\Sigma_0)$ is the total area of $\Sigma_0$. Moreover, $\int_{\Sigma_t^{{\epsilon}}} H^2\, d\mu_t$ is uniformly bounded for all $t\in [0,T_{{\epsilon}})$, independently of ${\epsilon}$. Assume that (i) does not happen in Theorem \[thm-e\]. Then, we have $$\label{eq-assump} \min_{\Sigma_t^{{\epsilon}}} H(\cdot,t) \geq \delta >0, \,\,\, \mbox{for all} \,\,\, t\in [0,T_{{\epsilon}})$$ where $T_{{\epsilon}}$ is the maximal existence time of a smooth flow $\Sigma_t^{{\epsilon}}$. \[thm-max-time\] Assuming that (i) does not happen in Theorem \[thm-e\], the maximal time of existence $T$ of the flow satisfies $T \leq \mu_0(\Sigma_0)/4\pi$ and $$\limsup_{t\to T}|A| = \infty.$$ The bound $T \leq \mu_0(\Sigma_0)/4\pi$ is proven above. Assume that $\max_{\Sigma_t^{{\epsilon}}}|A| \le C$ for all $t\in [0,T)$. Then we want to show that the surfaces $\Sigma^{{\epsilon}}_t$ converge, as $t \to T$, to a smooth limiting surface $\Sigma^{\epsilon}_T$. Similarly as in [@Di], using the curvature bounds we have, for all $0 < t_1 < t_2 <T$, the bounds $$|F(p,t_1) - F(p,t_2)| \le C\, |t_2 - t_1| \qquad \mbox{and} \qquad |\frac{\partial}{\partial t}g_{ij}|^2 \le C$$ for a uniform in $t$ constant $C$, which imply that $F(\cdot,t)$ converges, as $t\to T$, to some continuous surface $\tilde \Sigma^{T}_{\epsilon}$. We get uniform $C^2$-bounds on $F$ out of the bound on $|A|$.
Since our equation is uniformly parabolic and the operator $\kappa$ is concave, by the Krylov and Schauder estimates we obtain all higher derivative bounds. We have just shown that the surface $\Sigma_T^{\epsilon}$ is $C^\infty$ smooth. Also, from our assumption, $$H(\cdot,T) \geq \delta >0, \qquad \mbox{on} \,\, \Sigma_T^{\epsilon}.$$ By Proposition \[prop-ste-e\] there exists $\tau_{\epsilon}>0$ for which a smooth flow can be continued on $[T,T+\tau_{\epsilon})$, which contradicts our assumption that $T$ is maximal. Hence, $\limsup_{t\to T_{{\epsilon}}}|A| = \infty$ and the result follows. Monotonicity formula -------------------- We will now show the monotonicity property of the quantity $$Q_{\epsilon}= \langle F_{\epsilon},\nu \rangle + 2 t \, \kappa_\epsilon$$ along the flow (\[equation-reg\]). This will play an essential role in establishing the long time existence. A similar quantity was considered by Smoczyk in [@Sm]. \[lemma-monotone\] Assuming that $q_{\epsilon}(0):=\min_{\Sigma_0} \langle F_{\epsilon},\nu\rangle \geq 0 $, the quantity $$q_{\epsilon}(t):=\min_{\Sigma_t^{\epsilon}} (\langle F_{\epsilon},\nu\rangle + 2 t \, \kappa_\epsilon)$$ is increasing in time for as long as the solution $\Sigma_t^{\epsilon}$ exists. Hence, $$q_{\epsilon}(t):=\min_{\Sigma_t^{\epsilon}} (\langle F_{\epsilon},\nu\rangle + 2 t \, \kappa_\epsilon) \geq q_{\epsilon}(0) \geq 0.$$ We will compute the evolution of $Q_{\epsilon}$ and apply the maximum principle. We begin by computing the evolution of $\langle F_{\epsilon},\nu\rangle$.
We have: $$\mathcal{L}_{\epsilon}(\langle F_{\epsilon},\nu\rangle) = a_{\epsilon}^{ik}\nabla_i(\nabla_k \langle F_{\epsilon},\nu\rangle) = a_{\epsilon}^{ik}\nabla_i(\langle e_k, \nu\rangle + \langle F_{\epsilon},h_{kj}e_j\rangle),$$ since $$\nabla_i\nu = h_{ij}e_j, \qquad \nabla_ie_j = -h_{ij}\nu, \qquad \nabla_i F_{\epsilon} = e_i.$$ Using the Codazzi equation $\nabla_i h_{kj} = \nabla_j h_{ik}$ and since $\nabla_i(\langle e_k,\nu\rangle) = 0$ we get $$\begin{aligned} \mathcal{L}_{\epsilon}(\langle F_{\epsilon},\nu\rangle) &=& a_{\epsilon}^{ik} [\langle \nabla_iF_{\epsilon},h_{kj}e_j\rangle + \langle F_{\epsilon}, h_{kj}\nabla_i e_j \rangle + \langle F_{\epsilon}, e_j\, \nabla_i h_{jk}\rangle] \\ &=& a_{\epsilon}^{ik}[h_{ik} - h_{ij}h_{jk}\langle F_{\epsilon},\nu\rangle + \langle F_{\epsilon}, e_j\, \nabla_j h_{ik}\rangle] \\ &=& \kappa_\epsilon - a_{\epsilon}^{ik}h_{ij}h_{jk}\langle F_{\epsilon},\nu\rangle + \langle F_{\epsilon}, \nabla \kappa_\epsilon\rangle. \end{aligned}$$ On the other hand, since $\frac{\partial\nu}{\partial t} = \nabla \kappa_\epsilon$, it follows $$\frac{\partial}{\partial t}\langle F_{\epsilon},\nu\rangle = -\kappa_\epsilon + \langle F_{\epsilon}, \nabla \kappa_\epsilon\rangle$$ which yields $$\frac{\partial}{\partial t}\langle F_{\epsilon},\nu\rangle - \mathcal{L}_\epsilon(\langle F_{\epsilon},\nu\rangle) = -2\kappa_\epsilon + a_{\epsilon}^{ik}h_{ij}h_{jk}\langle F_{\epsilon},\nu\rangle.$$ We also have $$\frac{\partial \kappa_\epsilon}{\partial t} = \mathcal{L}_{\epsilon}(\kappa_\epsilon) + a_{\epsilon}^{ik}h_{ij}h_{jk}\kappa_\epsilon.$$ Hence $Q_{\epsilon} = \langle F_{\epsilon},\nu\rangle + 2t\, \kappa_\epsilon$ satisfies $$\label{equation-Q} \frac{\partial Q_{\epsilon}}{\partial t} = \mathcal{L}_{\epsilon} Q_\epsilon + a_{\epsilon}^{ik}h_{ij}h_{jk} Q_{\epsilon}.$$ Notice that $\mathcal{L}_{\epsilon}$ is a strictly elliptic operator and $$a_{\epsilon}^{ik}h_{ij}h_{jk} \geq 0.$$ We conclude by the maximum principle that
$$q_{\epsilon}'(t) = \frac{d}{dt}(\, \langle F_{\epsilon},\nu\rangle + 2t\, \kappa_\epsilon \, )_{\min} \ge 0$$ assuming that $q_{\epsilon}(0) \geq 0$. This implies that $q_{\epsilon}(t) \geq q_{\epsilon}(0)$, finishing the proof of the lemma. Notice that if instead of $Q_{\epsilon}$ we take the quantity $$Q_{\eta, {\epsilon}} = \langle F_{\epsilon},\nu \rangle + 2(t + \eta) \, \kappa_\epsilon$$ for any constant $\eta \in {\mathbb R}$, the same computation as above yields that $Q_{\eta,{\epsilon}}$ satisfies $$\frac{\partial Q_{\eta,{\epsilon}}}{\partial t} =\mathcal{L}_{\epsilon} Q_{\eta,\epsilon} + a_{\epsilon}^{ik}h_{ij}h_{jk}Q_{\eta,{\epsilon}}.$$ Assume that at time $t=0$, we have $$q_{\eta, {\epsilon}}(0) = \min_{\Sigma_0} (\langle F,\nu \rangle + 2 \, \eta \, \kappa_{\epsilon}) \geq 0$$ for some $\eta \in {\mathbb R}$ (notice that $F_{\epsilon}=F$ at $t=0$). Then, applying the maximum principle to the above equation gives: \[rem-useful\] For any $\eta \in {\mathbb R}$, such that $q_{\eta, {\epsilon}} (0):= \min_{\Sigma_0} \, ( \langle F ,\nu \rangle + 2\, \eta \, \kappa_{\epsilon}) \geq 0 $ the quantity $$q_{\eta, {\epsilon}}(t) := \min_{\Sigma^{\epsilon}_t} \, (\langle F_{\epsilon},\nu \rangle + 2(t + \eta) \, \kappa_\epsilon\, )$$ is increasing in time. Hence $$\label{equation-cases} {q}_{\eta,{\epsilon}}(t) := \min_{\Sigma^{\epsilon}_t} \, (\langle F_{\epsilon},\nu \rangle + 2(t + \eta) \, \kappa_\epsilon\, ) \ge {q}_{\eta,{\epsilon}}(0) \geq 0.$$ Since the initial surface $\Sigma_0$ is star-shaped, we may choose $\eta > 0$ so that we have $q_{\eta,{\epsilon}}(0) > 0$. This is possible by continuity, since $\langle F,\nu\rangle > 0$.
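On a shrinking round sphere the monotone quantity can be computed in closed form, which makes the lemma easy to check. For a sphere of radius $r$ one has $G = 1/r^2$ and $H = 2/r$, so $\kappa_{\epsilon}= \frac{1}{2r} + \frac{2{\epsilon}}{r}$ and $\dot r = -\kappa_{\epsilon}$ gives $r(t)^2 = r_0^2 - (1+4{\epsilon})\, t$; since $\langle F_{\epsilon},\nu\rangle = r$, one finds $Q_{\epsilon}= r + 2t\,\kappa_{\epsilon}= r_0^2/r(t)$, which is indeed increasing as $r$ decreases. A small numerical sketch (the values of ${\epsilon}$ and $r_0$ are arbitrary illustrative choices):

```python
import numpy as np

# Round sphere of radius r(t): G = 1/r^2, H = 2/r, so the speed is
# kappa_eps = 1/(2r) + 2*eps/r, and r' = -kappa_eps integrates to
# r(t)^2 = r0^2 - (1 + 4*eps) * t  (closed form).
eps, r0 = 0.05, 1.0
T_ext = r0**2 / (1 + 4 * eps)        # extinction time of the sphere
t = np.linspace(0.0, 0.9 * T_ext, 200)
r = np.sqrt(r0**2 - (1 + 4 * eps) * t)

# Q_eps = <F, nu> + 2 t kappa_eps; on the sphere <F, nu> = r.
Q = r + 2 * t * (1.0 / (2 * r) + 2 * eps / r)

print(np.all(np.diff(Q) > 0))        # monotone, as in the lemma
print(np.allclose(Q, r0**2 / r))     # the closed form Q = r0^2 / r
```

The same closed form also shows the extinction time $r_0^2/(1+4{\epsilon})$ is bounded uniformly in ${\epsilon}$, in line with the later barrier argument.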
By Proposition \[rem-useful\] we have $${Q}_{\eta,{\epsilon}}(\cdot,t) \ge q_{\eta,{\epsilon}} (t) >0$$ which implies the lower bound $$\label{equation-better1} \kappa_{\epsilon}= \frac{G}{H} + \epsilon H \ge - \frac{\langle F_{\epsilon},\nu\rangle}{2 (t+\eta)}.$$ We will show next that implies a uniform lower bound on $\kappa_{{\epsilon}}$, independently of ${\epsilon}$. To this end, we need to bound $\langle F_{{\epsilon}},\nu\rangle$, independently of ${\epsilon}$. This bound follows from the comparison principle for curvature flows with the property that the speed is an increasing function of the principal curvatures. More precisely, we have the following lemma. \[lem-bound-F\] There exists a constant $C$, independent of $\epsilon$, so that $$\label{equation-lower-speed} \kappa_\epsilon:=\frac{G}{H} + \epsilon\, H \ge -C, \qquad \forall t \in [0,T_\epsilon).$$ We claim that there is a uniform constant $C$ so that $|F_{\epsilon}| \le C$, for all $t \in [0,T_{\epsilon})$. To see that, let $\psi_0: S^2 \to \mathbb{R}^3$ denote the parametrization of a sphere that encloses the initial hypersurface $\Sigma_0$. By the result of B. Andrews in [@An1], the solution $\psi_{\epsilon}(\cdot,t)$ of (\[equation-reg\]) with initial condition $\psi_0$ shrinks to a point in some finite time $\tilde{T}_{\epsilon}$. Moreover, $$\label{eqn-tildeT} \tilde T_{\epsilon}\leq \tilde T < \infty$$ for a uniform constant $\tilde T$. The standard comparison principle shows that the images of $F_{\epsilon}$ and $\psi_{\epsilon}$ stay disjoint for all the time of their existence. To see this, we consider the evolution of $$d(p,q,t) := |F_{\epsilon}(p,t) - \psi_{\epsilon}(q,t)|, \qquad (p,q) \in \Sigma_t^{\epsilon}\times S^2.$$ Assume that the minimum of $d$ at time $t$ occurs at $(p_0,q_0)$. If $W$ denotes the Weingarten map, at that minimum point $W(p_0) \ge W(q_0)$, so by the monotonicity of our speed $\kappa_{\epsilon}$, $\kappa_{\epsilon}(W(p_0)) \ge \kappa_{\epsilon}(W(q_0))$. 
The maximum principle tells us that $d_{\min}(t)$ is non-decreasing, and therefore the images of $F_{{\epsilon}}$ and $\psi_{\epsilon}$ stay disjoint in time. As a consequence, our hypersurfaces $F_{\epsilon}(\cdot,t)$ stay enclosed by the sphere $\psi_0$ for all times of their existence (since $\psi_{\epsilon}(\cdot, t)$ are enclosed by $\psi_0$) and therefore $|F_{\epsilon}|(\cdot,t) \le C$ for a uniform constant $C$. The above bound implies that $\langle F_{\epsilon}, \nu \rangle \leq C$, for a uniform in ${\epsilon}$ constant $C$ and all $t \in [0,T_{\epsilon})$. This, together with (\[equation-better1\]), yields (\[equation-lower-speed\]). Curvature pinching estimates for the ${\epsilon}$-flow ------------------------------------------------------ Define $$\mathcal{F}_{\epsilon} := \langle F_{\epsilon},\nu\rangle + 2t\, \kappa_{\epsilon}.$$ Notice that division by $\mathcal{F}_{\epsilon}$ makes sense since, by Lemma \[lemma-monotone\], $(\mathcal{F}_{\epsilon})_{\min}$ is increasing in time and $(\mathcal{F}_{\epsilon})_{\min}(0) \ge \delta > 0$ due to star-shapedness. As we showed above, $\sup_{\Sigma_t}|F_{\epsilon}| \le C$ for a uniform constant $C$.
Rewrite the evolution equation for $H$ from Lemma \[lemma-evolution\] in the form $$\frac{\partial}{\partial t}H = \mathcal{L}_{\epsilon}(H) + \ddot{\kappa}_{\epsilon}(\nabla W, \nabla W) + \dot{\kappa}_{\epsilon}(W^2)H$$ where $W$ is the Weingarten map and $$\ddot{\kappa}_{\epsilon}(\nabla W, \nabla W) = \frac{\partial^2 \kappa_{\epsilon}}{\partial h_q^p\partial h_m^l} \nabla^i h_q^p\nabla_i h_m^l \quad \mbox{and} \quad \dot{\kappa}_{\epsilon}(W^2) = \frac{\partial\kappa_{\epsilon}}{\partial h_m^l}h_p^lh_m^p.$$ Then, by direct computation we have $$\label{eq-quotient} \frac{\partial}{\partial t} \left ( \frac{H}{\mathcal{F}_{\epsilon}} \right ) = \mathcal{L}_{\epsilon}\left (\frac{H}{\mathcal{F}_{\epsilon}} \right ) + \frac{2}{\mathcal{F}_{\epsilon}}\, \dot{\kappa}_{\epsilon}\left (\nabla\mathcal{F}_{\epsilon}, \nabla\frac{H}{\mathcal{F}_{\epsilon}} \right )+ \frac{1}{2\mathcal{F}_{\epsilon}}\, {\mathrm{trace}}_g\ddot{\kappa}_{\epsilon}(\nabla W, \nabla W).$$ By (\[eq-definite-H\]), the last term in this equation is nonpositive. Hence, by the maximum principle, the supremum of ${H}/{\mathcal{F}_{\epsilon}}$ is decreasing. In particular, we have: Assume that $\Sigma_t^{\epsilon}$ is a solution of on $[0,T_{\epsilon})$ with $\Sigma_0$ as in Theorem \[thm-STE\]. Then, $$\label{eq-better-H} \sup_{\Sigma_t^{\epsilon}}\frac{H}{\langle F_{\epsilon}, \nu\rangle + 2t\, \kappa_{\epsilon}} \le C, \qquad \mbox{on} \,\,\, [0,T_{\epsilon})$$ for a uniform constant $C$ that depends only on $\Sigma_0$. Denote by $\lambda_1, \lambda_2$ the two principal curvatures of the surface $\Sigma_t^{\epsilon}$ at some time $t$ and point $P$. \[lem-111\] If there is some time $t_0$ so that $\liminf_{t\to t_0} H(\cdot,t) = 0$, then $$\label{eq-eigen-zero} \liminf_{t\to t_0} \, (\lambda_1^2 + \lambda_2^2) = 0.$$ Assume $\liminf_{t\to t_0} H(\cdot,t) = 0$. We distinguish the following two cases: 1. $\lambda_1 > 0$ and $\lambda_2 \ge 0$. In this case (\[eq-eigen-zero\]) immediately follows. 2.
$\lambda_1 > 0$ and $\lambda_2 < 0$. By Lemma \[lemma-monotone\], $$\kappa_{\epsilon}:=\frac{G}{H} + {\epsilon}\, H \ge -C$$ uniformly in time, which implies $$\lambda_1\, |\lambda_2| \le C\, H + {\epsilon}\, H^2.$$ Since $\liminf_{t\to t_0} H = 0$, at least one of the two principal curvatures must tend to zero, i.e. $$\label{eq-both-zero} \liminf_{t\to t_0} |\lambda_i| =0.$$ Since $\liminf_{t\to t_0} H = 0$, (\[eq-eigen-zero\]) readily follows. \[lem-bigger-eigen\] There exist uniform (in time $t$ and ${\epsilon}$) constants $C >0$ and $ {\epsilon}_0>0$, such that for every $0 < {\epsilon}\leq {\epsilon}_0$, if $\lambda_2 \leq 0$ at $P$, then $$\lambda_1 \leq C.$$ Since $\lambda_2 \leq 0$, we have $G/H \leq 0$. Hence, from the lower bound on $\kappa_{\epsilon}$ and the bound $|\langle F_{\epsilon}, \nu\rangle| \leq C_0$, for a uniform in time constant $C_0$, we conclude that $$H \leq C + C\,{\epsilon}H$$ for a constant $C$ that depends only on the initial data. We conclude that for ${\epsilon}\leq {\epsilon}_0$, with ${\epsilon}_0$ sufficiently small depending only on the initial data $\Sigma_0$, we have $$H:= \lambda_1 + \lambda_2 \leq C$$ from which the desired bound on $\lambda_1$ follows with the aid of the previous lemma. \[lem-smaller-eigen\] There exist uniform (in $t$ and ${\epsilon}$) constants $C >0$ and $ {\epsilon}_0>0$, such that for every $0 < {\epsilon}\leq {\epsilon}_0$ we have $$\lambda_2 \geq -C.$$ Assume that $\lambda_2 < 0$ (otherwise the bound is obvious). Then, $\lambda_1 >0$ (since $H=\lambda_1+\lambda_2 >0$) and by Lemma \[lem-bound-F\], we have $$\kappa_{\epsilon}:= \frac {G}{H} + {\epsilon}\, H \geq - C$$ for a uniform in time constant $C$. Also, by the previous lemma $H \leq \lambda_1 \leq C$.
Hence, $$|\lambda_2| \leq C\,(1+{\epsilon})\, \frac{\lambda_1+\lambda_2}{\lambda_1} \leq \tilde C.$$ Lemma \[lem-bigger-eigen\] implies that if the flow terminates because the second fundamental form blows up, this can only happen in the convex region of $\Sigma_t^{\epsilon}$, where $\lambda_1 \geq 0$ and $\lambda_2 \geq 0$. \[lem-lambda-pinch\] There exist uniform (in time $t$ and ${\epsilon}$) constants $C >0$, $C_0 >0$ and ${\epsilon}_0$, such that for every $0 < {\epsilon}\leq {\epsilon}_0$ if $\lambda_1 \geq C_0$ at $P$, then $\lambda_2 >0$ at $P$ and $$1 \leq \frac{\lambda_1}{\lambda_2} \leq C.$$ From the previous lemma, $\lambda_2 >0$ if $C_0$ is chosen sufficiently large. Hence, from the bound (\[eq-better-H\]) we conclude $$\label{eqn-lala} \begin{split} (\lambda_1 + \lambda_2)^2 &\leq C_1 \, (\lambda_1 + \lambda_2) + 2T_{\epsilon}\, [ \, \lambda_1\, \lambda_2 + {\epsilon}\, (\lambda_1+\lambda_2)\, ] \\ &\le \tilde C_1 \, (\lambda_1 + \lambda_2) + \tilde C_2 \, \lambda_1\, \lambda_2 \end{split}$$ for some uniform in ${\epsilon}$ and $t$ constants $\tilde C_1$ and $\tilde C_2$. By taking $C_0$ sufficiently large, we can make $$(\lambda_1 + \lambda_2)^2 - C_1 \, (\lambda_1 + \lambda_2) \geq \frac 12 \, (\lambda_1 + \lambda_2)^2.$$ Hence, (\[eqn-lala\]) implies the bound $$\lambda_1^2 + \lambda_2^2 \leq 2 \, \tilde C_2 \, \lambda_1\, \lambda_2$$ from which the desired estimate readily follows. To facilitate future references we combine the previous three lemmas in the following proposition: \[lem-pinch-e\] There exist ${\epsilon}_0 > 0$ and positive constants $C_1, C_2$, uniform in $t$, so that for every $0 < \epsilon < \epsilon_0$ we have i. $\lambda_2 \ge -C_1$, and ii. $\lambda_1 \le C_1\lambda_2 + C_2$. In the proof of Theorem \[thm-e\] we will also need the following bound.
\[lem-unif-H\] There is a uniform constant $C$, independent of ${\epsilon}$ and $t$, so that $$\int_{\Sigma_t^{{\epsilon}}} H^2\, d\mu_t \le C.$$ We begin by noticing that, by the Gauss-Bonnet theorem, $\int_{\Sigma_t^{{\epsilon}}} G\, d\mu_t$ is a topological invariant, equal to $2\pi\chi$, where $\chi$ is the Euler characteristic of $\Sigma_0$. Since $\chi=2$ we then have $$\label{eqn-euler} \int_{\Sigma_t^{{\epsilon}}} G\, d\mu_t = 4\, \pi.$$ At any point we can choose coordinates in which the second fundamental form is diagonal, with eigenvalues $\lambda_1$ and $\lambda_2$ as before and $\lambda_1 \ge \lambda_2$. By Lemma \[lem-pinch-e\] we have $\lambda_1 \leq C_1\, \lambda_2 + C_2$, which gives the inequality $$G := \lambda_1\, \lambda_2 \geq \frac 1{C_1}\, \lambda_1^2 - \frac {C_2}{C_1} \, \lambda_1.$$ Using the Cauchy-Schwarz inequality we conclude the bound $$\lambda_1\, \lambda_2 \geq \tilde{C}_1\lambda_1 ^2 - \tilde{C}_2$$ where $\tilde{C}_1, \tilde{C}_2$ are some uniform constants independent of ${\epsilon}$ and time. This yields the estimate $$|A|^2 = \lambda_1^2 + \lambda_2^2 \le C_1\, G + C_2$$ which, after integration over $\Sigma_t^{{\epsilon}}$, implies the bound $$\int_{\Sigma_t^{{\epsilon}}} |A|^2\, d\mu_t \le C_1\int_{\Sigma_t^{{\epsilon}}} G\, d\mu_t + C_2 \, \mu_t(\Sigma_t^{\epsilon})$$ with $\mu_t(\Sigma_t^{\epsilon})$ denoting, as above, the surface area of $\Sigma_t^{\epsilon}$. By (\[eqn-vol\]), $\mu_t(\Sigma_t^{\epsilon})\leq \mu_0(\Sigma_0)$, where $\mu_0(\Sigma_0)$ denotes the surface area of $\Sigma_0$. Hence, the lemma readily follows from (\[eqn-euler\]). The proof of Theorem \[thm-e\] ------------------------------ Having all the ingredients from the previous sections, we will now finish the proof of Theorem \[thm-e\]. Fix an ${\epsilon}$ and let $T=T_{\epsilon}$ be the maximal time up to which the flow exists. To simplify the notation we will omit the ${\epsilon}$-scripts from everything, including $T_{\epsilon}$ and the surface $\Sigma_t^{\epsilon}$, denoting them by $T$ and $\Sigma_t$ respectively.
Because of Proposition \[thm-max-time\], the second fundamental form blows up at time $T$. Hence, there is a sequence of $t_i\to T$ and $p_i\in \Sigma_{t_i}$ so that $$Q_i:= |A|(p_i,t_i)= \max_{t\in [0,t_i]}\max_{\Sigma_{t}}|A|(\cdot,t) \to \infty, \qquad \mbox{as}\,\, i\to\infty.$$ Consider the sequence $\tilde \Sigma^i_t$ of rescaled solutions defined by $$\label{eqn-rescaled} \tilde{F}_i(\cdot,t) := Q_i(F(\cdot,t_i+\frac{t}{Q_i^2}) - p_i).$$ Notice that under the above rescaling all points $p_i$ are shifted to the origin. If $g, H$ and $A :=\{h_{jk}\}$ are the induced metric, the mean curvature and the second fundamental form of $\Sigma_t$, respectively, then the corresponding rescaled quantities are given by $$\tilde{g}_i = Q_i^2 g, \qquad \tilde{H}_i = \frac{H}{Q_i}, \qquad |\tilde{A}_i|^2 = \frac{|A|^2}{Q_i^2}.$$ The rescaled solutions $\tilde{\Sigma}_t^i$ have the property that $$\max_{\tilde{\Sigma}_t^i}|\tilde{A}_i| \le 1, \,\,\, \mbox{for}\,\, t\in [-1,0] \quad \mbox{and} \quad |\tilde{A}_i|(0,0) = 1.$$ The above uniform estimates on the second fundamental form yield uniform higher order estimates on $\tilde{F}_i(\cdot,t)$ and the Theorem of Arzela-Ascoli gives us a uniformly convergent subsequence $\tilde{F}_{i_k}(\cdot,t)$ on compact subsets, converging to a smooth $\tilde{F}(\cdot,t)$ for every $t \in [-1,0]$.
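For the reader's convenience, we record how these rescaled quantities arise from (\[eqn-rescaled\]). Since $\tilde{F}_i$ differs from $F$ only by a translation and multiplication by $Q_i$, the metric and the second fundamental form transform as $$\tilde{g}_{jk} = Q_i^2\, g_{jk}, \qquad \tilde{h}_{jk} = Q_i\, h_{jk},$$ and hence $$\tilde{H}_i = \tilde{g}^{jk}\tilde{h}_{jk} = \frac{H}{Q_i}, \qquad |\tilde{A}_i|^2 = \tilde{g}^{jl}\tilde{g}^{km}\tilde{h}_{jk}\tilde{h}_{lm} = \frac{|A|^2}{Q_i^2}.$$ Moreover, since $\kappa_{\epsilon}= G/H + {\epsilon}\, H$ is homogeneous of degree one in the principal curvatures, the parabolic time change $t\mapsto t_i + t/Q_i^2$ in (\[eqn-rescaled\]) shows that each $\tilde{\Sigma}_t^i$ again solves the flow, with speed $\tilde{\kappa}_i = \kappa_{\epsilon}/Q_i$.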
Notice that $$\tilde{\kappa}_i = \frac{\tilde{G}_i}{\tilde{H}_i} + {\epsilon}\, {\tilde{H}_i} = \frac{\lambda_1\lambda_2}{Q_i(\lambda_1 + \lambda_2)}+ {\epsilon}\, \frac{\lambda_1+\lambda_2}{Q_i}$$ and therefore by Proposition \[lem-pinch-e\], $$|\tilde{\kappa}_i| \le \begin{cases} \frac{C}{Q_i} \quad &\mbox{if} \,\,\, \lambda_1, \lambda_2 \ll Q_i \\ C, \quad &\mbox{if} \,\,\, \lambda_1, \lambda_2 \sim Q_i \end{cases}$$ since $\lambda_2 \ge -C$, and $\lambda_1$ is big and comparable to the rescaling constant $Q_i$ if and only if $\lambda_2$ is big and comparable to $Q_i$ (both $\lambda_1$ and $\lambda_2$ are computed at time $t_i + t/Q_i^2$). This implies that $\tilde{F}(\cdot,t)$ solves $\frac{\partial}{\partial t}\tilde{F}(\cdot,t) = -\tilde{\kappa}_{\epsilon}\, \nu$, where $$\tilde{\kappa}_{\epsilon}= \begin{cases} 0, \quad &\mbox{if} \,\,\,\tilde{\lambda}_1 = 0, \,\, \tilde{\lambda}_2 = 0 \\ \frac{\tilde{\lambda}_1\tilde{\lambda}_2}{\tilde{\lambda}_1+\tilde{\lambda}_2}+{\epsilon}\, (\tilde \lambda_1+\tilde \lambda_2), \quad &\mbox{if} \,\, \tilde{\lambda}_1 > 0, \, \tilde{\lambda}_2 > 0. \end{cases}$$ By Proposition \[lem-pinch-e\] there are uniform constants $C_1, C_2$ so that $$\lambda_1 \le C_1\lambda_2 + C_2$$ holds uniformly on $\Sigma_t$, for all $t \ge 0$ for which the flow exists; after rescaling this yields $$\label{eq-pinch-resc} \tilde{\lambda}^i_1 \le C_1\tilde{\lambda}^i_2 + \frac{C_2}{Q_i}.$$ The previous estimate implies that the limiting surface (which we denote by $\tilde{\Sigma}_0$) is convex (possibly not strictly convex). There are two possibilities for $\tilde{\Sigma}_0$: either it is a flat plane or it is a non-flat complete weakly convex smooth hypersurface in $\mathbb{R}^3$. Let $\tilde{F}_0$ be a smooth embedding of $\tilde{\Sigma}_0$ into $\mathbb{R}^3$.
Due to our rescaling, the norm of the second fundamental form of the rescaled surfaces is $1$ at the origin and therefore $\tilde{\Sigma}_0$ is not a plane, but is strictly convex at least somewhere. It has the property that $$\sup_{\tilde{\Sigma}_0}|\tilde{A}| \le C.$$ By the results in [@EH] there is a smooth complete solution $\bar{\Sigma}_t$ to the mean curvature flow $$\label{eq-mcf} \begin{cases} \frac{\partial}{\partial t}\bar{F}(p,t) &= - \bar H\nu(p,t), \qquad p\in \bar{\Sigma}_t, \,\,\, t > 0 \\ \bar{F}(p,0) &= \tilde{F}_0. \end{cases}$$ The results in [@EH] (see Theorems $2.1$, $2.3$, $3.1$ and $3.4$, which provide curvature estimates and are local in nature) imply that the curvature of $\bar{\Sigma}_t$ stays uniformly bounded for some short time $t\in [0,T_0)$. The evolution of $\bar H$ along the mean curvature flow is given by $$\frac{\partial}{\partial t} \bar H = \Delta \bar H + \bar H \, |\bar A|^2.$$ As in [@EH], due to the curvature bounds, the mean curvature $\bar H$ satisfies the conditions of Theorem $4.3$ in [@EH] (the maximum principle for parabolic equations on complete hypersurfaces) and therefore nonnegative mean curvature is preserved along the flow. This together with the strong maximum principle implies that if $\bar H$ is not identically zero at $t = 0$, then it becomes strictly positive at $t > 0$. We also know that $\bar{\Sigma}_0$ satisfies $\bar{\lambda}_1 \le C\, \bar{\lambda}_2$ for a uniform constant $C$, which follows from (\[eq-pinch-resc\]) after taking the limit as $i\to\infty$. Since we are assuming $\bar{\lambda}_1 \geq \bar \lambda_2$, this can be written as $$\label{eq-pinch-est0} \bar h_{ij} \ge \eta \, \bar H\, \bar g_{ij},$$ for some uniform constant $\eta > 0$, and we will say that the second fundamental form of $\bar{\Sigma}$ is $\eta$-pinched.
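Let us briefly verify that the pinching inequality $\bar{\lambda}_1 \le C\, \bar{\lambda}_2$ can indeed be written in the form (\[eq-pinch-est0\]). At points where $\bar{\lambda}_2 \ge 0$ (which holds on the weakly convex surface $\bar{\Sigma}_0$) we have $$\bar{H} = \bar{\lambda}_1 + \bar{\lambda}_2 \le (1+C)\, \bar{\lambda}_2, \qquad \mbox{and hence} \qquad \bar{\lambda}_1 \ge \bar{\lambda}_2 \ge \frac{\bar{H}}{1+C},$$ so that in an orthonormal frame diagonalizing the second fundamental form $$\bar{h}_{ij} \ge \eta\, \bar{H}\, \bar{g}_{ij} \qquad \mbox{with} \qquad \eta := \frac{1}{1+C}.$$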
By the curvature bounds, the maximum principle for complete hypersurfaces and the evolution equation for $\bar h_{ij} - \eta \bar H \bar g_{ij}$, it follows that the pinching estimate (\[eq-pinch-est0\]) is preserved by the mean curvature flow (as in [@Hu]). In particular, this implies that $\bar h_{ij}$ is strictly positive definite, which means $\bar{\Sigma}_t$ is strictly convex for $t > 0$. The result of R. Hamilton in [@Ha2] states that a smooth strictly convex and complete hypersurface with $\eta$-pinched second fundamental form must be compact. Hence, it follows that $\bar{\Sigma}_t$ has to be compact for $t > 0$. In this case, the initial data $\tilde{\Sigma}_0$ has to be compact as well. We recall that $\tilde \Sigma_0$ is the limit of the hypersurfaces $\tilde \Sigma_0^i$ which are obtained via rescaling from the surfaces $\Sigma_{t_i}$. Hence, since $\tilde \Sigma_0$ is compact, there are constants $i_0, C$ so that for $i \ge i_0$, we have $$\label{eq-diam-shrink} {\mathrm{diam}}(\Sigma_{t_i}) < \frac{C}{Q_i} \to 0 \,\,\, \mbox{as} \,\,\, i\to\infty,$$ and therefore $\Sigma_{t_i} \to \{\bar{p}\}$. \[claim-shrink\] For any point $q\in \mathbb{R}^3$, we have $$\frac{\partial}{\partial t}|F - q|^2 = \mathcal{L_{\epsilon}}(|F - q|^2) - 2\frac{|A|^2}{H^2}.$$ This follows by a direct computation. By Claim \[claim-shrink\], $|F - \bar{p}|_{\max}(t)$ is decreasing along (\[equation-reg\]) and therefore $$\Sigma_t \to \{\bar{p}\}, \qquad \mbox{as} \,\,\, t\to T,$$ which implies that the surface $\Sigma_t$ shrinks to a point as $t\to T$. Hence, $\mu_t(\Sigma_t) \to 0$ as $t\to T$. It follows by (\[eqn-vol\]) that $T$ must be given by (\[eq-ext-time\]). Passing to the limit ${\epsilon}\to 0$ {#section-lte} ====================================== We will assume in this section that $\Sigma^{\epsilon}_t$ are solutions of the flow which satisfy the condition (\[eq-assump\]) uniformly in ${\epsilon}$, with $T_{\epsilon}$ given by (\[eq-ext-time\]).
We shall show that we can pass to the limit ${\epsilon}\to 0$ to obtain a solution of (\[hmcf0\]) which is defined up to time $$T:=\lim_{{\epsilon}\to 0} T_{{\epsilon}} = \frac{\mu_0(\Sigma_0)}{4\pi}.$$ The key result is the following uniform bound on the second fundamental form $A$ of $\Sigma_{\epsilon}$. \[prop-uniform-e\] Under assumption (\[eq-assump\]), for any $\tau < T$, there is a uniform constant $C = C(\tau)$ so that $$\label{eq-unif-e0} \max_{\Sigma_t^{{\epsilon}}}|A|(\cdot,t) \le C, \qquad \forall {\epsilon}> 0 \quad \mbox{and} \quad \forall t\in [0,\tau],$$ where $A$ denotes the second fundamental form of the surface $ \Sigma_t^{{\epsilon}}$. Assume there is $\tau < T$ for which (\[eq-unif-e0\]) does not hold. Then, there exist sequences $t_i\to\tau$, ${\epsilon}_i \to 0$ and $p_i\in \Sigma_{t_i}^{{\epsilon}_i}$ so that $$Q_i := |A|(p_i,t_i) = \max_{\Sigma_t^{{\epsilon}_i}\times [0,t_i]}|A| \to \infty \,\,\, \mbox{as} \,\,\, i\to\infty.$$ Consider, as before, the rescaled sequence of solutions $\tilde \Sigma^i_t$ defined by the immersions $\tilde{F}_i(\cdot,t): M^2 \to {\mathbb R}^3$, $$\tilde{F}_i(\cdot,t) := Q_i(F_{{\epsilon}_i}(\cdot,t_i+\frac{t}{Q_i^2}) - p_i).$$ Due to our rescaling, the second fundamental form of the rescaled surfaces is uniformly bounded in $i$. These uniform estimates on the second fundamental form yield uniform $C^2$-bounds on $\tilde{F}_i(\cdot,0)$ and the Theorem of Arzela-Ascoli gives us a uniformly convergent subsequence on compact subsets, converging in the $C^{1,1}$-topology to a $C^{1,1}$ surface $\tilde \Sigma$ defined by the immersion $\tilde{F}$.
By Lemma \[lem-pinch-e\], there are uniform constants $C_1, C_2$ so that the estimate $$\lambda_1 \le C_1\lambda_2 + C_2$$ holds uniformly on $\Sigma_t^{{\epsilon}}$, for all $t \ge 0$ for which the flow exists, and all ${\epsilon}$, which after rescaling yields the estimate $$\tilde{\lambda}^i_1 \le C_1\tilde{\lambda}^i_2 + \frac{C_2}{Q_i}.$$ Hence, the limiting surface $\tilde{\Sigma}$ is convex. There are two possibilities for $\tilde{\Sigma}$: either it is a flat plane, or it is a complete convex $C^{1,1}$-hypersurface. Due to our rescaling, the curvatures of the rescaled surfaces $\tilde \Sigma^i_t$ are uniformly bounded in $i$. This in particular implies a uniform local Lipschitz condition on $\tilde{F}_i(M^2,0)$. This means that there are fixed numbers $r_0$ and $C_0$ so that for every $q\in\tilde{F}_i(M^2)$, $\tilde{F}_i(U_{r_0,q})$ (where $U_{r_0,q}$ is a component of $\tilde{F}_i^{-1}(B_{r_0}(\tilde{F}_i(q)))$ containing $q$, and $B_{r_0}$ is a ball of radius $r_0$ in $\mathbb{R}^3$) can be written as the graph of a Lipschitz function over a hyperplane in $\mathbb{R}^3$ through $\tilde{F}_i(q)$ with Lipschitz constant less than $C_0$. Notice that both $C_0$ and $r_0$ are independent of $i$; they depend only on the uniform upper bound on the second fundamental form. This means the limiting surface $\tilde{\Sigma}$ will satisfy a uniform local Lipschitz condition. \[lem-plane\] The limiting hypersurface $\tilde{\Sigma}$ is not a plane. Assume that the limiting hypersurface $\tilde{\Sigma}$ is a plane. Then, for each $i$ we can write $\tilde \Sigma_i$, in a ball $B(0,1)$ of radius $1$ around the origin, as the graph of a $C^2$-function $u_i$ over some hyperplane $\mathcal{H}_i$. In particular, we can choose $\mathcal{H}_i$ to be tangent to $\tilde{\Sigma}_i$ at the origin.
Then $$\label{eq-graph} \tilde{h}^i_{jk} = \frac{D_{jk} u_i}{(1+|Du_i|^2)^{\frac{1}{2}}}.$$ We can choose a coordinate system in each hyperplane so that the second fundamental form and also $D^2 u_i$ are diagonal at the origin. The function $u_i$ is a height function that measures the distance of our surface from the hyperplane $\mathcal{H}_i$. We also have that $u_i\stackrel{C^{1,1}}{\to}\tilde{u}$ as $i\to\infty$ and $u_i(0) = 0$ for all $i$. If $\tilde{\Sigma}$ were a plane, then $\tilde{u} \equiv 0$ and $|D\tilde{u}| \equiv 0$, which would imply $|u_i|_{C^{1,1}} \to 0$. Take ${\epsilon}> 0$ very small. Then there would exist $i_0$ so that for $i \ge i_0$, $|u_i|_{C^{1,1}} < {\epsilon}$ on $B(0,1) \subset \tilde{\Sigma}_i$. Since we have (\[eq-graph\]), the last estimate would contradict the fact $|\tilde{A}_i|(0,0) = 1$, which is valid by the way we rescaled our solution. It follows from the previous lemma and the discussion above that $\tilde{\Sigma}$ is a complete convex, non-flat $C^{1,1}$-surface that satisfies $\tilde{\lambda}_1 \le C\tilde{\lambda}_2$, whenever those quantities are defined (since the surface is $C^{1,1}$, the principal curvatures are defined almost everywhere). Because of our uniform curvature estimates for the rescaled sequence, $\tilde{\Sigma}$ is a uniformly locally Lipschitz surface. By the results in [@EH] there is a solution $\bar F_t$ of the mean curvature flow (\[eq-mcf\]) with initial data $\tilde \Sigma$ on some time interval $[0,T_1)$ and $\bar {F}_t$ is smooth for $t > 0$. We can now carry out the same argument as in the proof of Theorem \[thm-e\] to show that $\tilde{\Sigma}$ has to be compact. That would mean that for $j \gg 1$, $$\label{eq-ar-small} {\mathrm{diam}}\, (\Sigma_{t_j}^{{\epsilon}_j}) \le \frac{C}{Q_j} \quad \mbox{and} \quad {\mathrm{area}}\, (\Sigma_{t_j}^{{\epsilon}_j}) \le \frac{C}{Q_j}$$ for a uniform constant $C$. Since $t_j \to \tau < T$, this together with Lemma \[lem-unif-H\] contradicts (\[eq-ar-small\]).
This shows that (\[eq-unif-e0\]) holds true, therefore finishing our proof. We will now show that, because of Proposition \[prop-uniform-e\], we can pass along subsequences ${\epsilon}_i \to 0$ and show that the solutions $\Sigma^{{\epsilon}_i}_t$ converge to a solution $\Sigma_t$ of the flow. Observe first that since ${\partial F_{{\epsilon}}}/{\partial t} = -(\kappa + {\epsilon}\, H)\nu$, by Proposition \[prop-uniform-e\] we have that $|{\partial F_{{\epsilon}}}/{\partial t}| \le C$, uniformly in ${\epsilon}$. Hence, $F_{{\epsilon}}$ is uniformly Lipschitz in $t$. Combining this with Proposition \[prop-uniform-e\] and the assumption (\[eq-assump\]), we conclude that for every $\tau < T$ there is a subsequence ${\epsilon}_i\to 0$ and a $1$-parameter family of $C^{1,1}$ surfaces $F(\cdot,t)$, so that $F_{{\epsilon}_i} \to F$ in the $C^{1,1}$ norm, ${\partial F_{{\epsilon}_i}}/{\partial t}\to {\partial F}/{\partial t}$ in the weak sense and $F$ satisfies $$\label{eq-our-sol} \frac{\partial F}{\partial t} = -\kappa\, \nu.$$ Due to (\[eq-assump\]) our solution has the property that $$\label{eqn-delta} {\mathrm{ess \, inf}}_{\Sigma_t \times [0,T)} H\ge \delta.$$ We claim that the limiting solution of (\[eq-our-sol\]) does not depend on the sequence ${\epsilon}_i \to 0$. Consider the evolution of a surface $\Sigma_t$ by a fully-nonlinear equation of the form $$\label{eq-F} \frac{\partial F}{\partial t} = - \mathcal{F}(h_{ij})\, \nu$$ where $h_{ij}$ is the second fundamental form and $\mathcal{F}$ is a function of the eigenvalues of $\{h_{ij}\}$, which we denote by $\lambda_1,\lambda_2$, and assume that $\lambda_1 \geq \lambda_2$.
Let $\mu = \lambda_2/{\lambda_1}$ and take $$\mathcal{F}(\lambda_2,\mu) = \begin{cases} \frac{\lambda_1\lambda_2}{\lambda_1+\lambda_2} = \frac{\lambda_2}{1+\mu}, & \text{for $\mu\ge -\delta_1$} \\ \frac{\lambda_2}{1-\delta_1}, & \text{otherwise} \end{cases}$$ which, since $\lambda_2 = \frac{1}{2}\, (H - \sqrt{H^2 - 4G})$, can be written as $$\label{eq-other-form} \mathcal{F}(h_{ij}) = \begin{cases} \kappa, & \text{for $Hg_{ij} \ge (1-\delta_1)h_{ij}$} \\ \frac{H - \sqrt{H^2 - 4G}}{2(1-\delta_1)}, & \text{otherwise.} \end{cases}$$ We can also consider solutions of (\[hmcf0\]) in the viscosity sense (defined in [@CGG] and [@ES]). In that case (\[eq-F\]) can be written in the form $$\label{eq-visc} u_t = \begin{cases} \frac{\det(D_i(\frac{D_j u}{|Du|}))}{{\mathrm{div}}(\frac{Du}{|Du|})}, \,\,\, \text{when ${\mathrm{div}}(\frac{Du}{|Du|})\delta_{ij} \ge (1-\delta_1)D_i(\frac{D_j u}{|Du|})$} \\ \frac{{\mathrm{div}}(\frac{Du}{|Du|}) - \sqrt{\big({\mathrm{div}}(\frac{Du}{|Du|})\big)^2 - 4\det(D_i(\frac{D_ju}{|Du|}))}}{2(1-\delta_1)} \qquad \text{otherwise}. \end{cases}$$ Equation (\[eq-visc\]) can be expressed as $$\label{eq-visc1} u_t + \mathcal{F}_1(t,\nabla u, \nabla^2 u) = 0,$$ with $$\mathcal{F}_1(t,p,X) = -\frac{|p|\det\big(X - \frac{p}{|p|}\otimes (X\cdot \frac{p}{|p|})\big)}{{\mathrm{trace}}\big((I - \frac{p}{|p|}\otimes \frac{p}{|p|})\cdot X\big)}$$ if $$I \cdot {\mathrm{trace}}\big((I - \frac{p}{|p|}\otimes \frac{p}{|p|})\cdot X\big) \ge (1-\delta_1)\big(X - X\cdot \frac{p}{|p|}\otimes \frac{p}{|p|}\big)$$ and $$\begin{split} \mathcal{F}_1(t,p,X) = -\frac{1}{2(1-\delta_1)}\Bigg (&\frac{1}{|p|}\,{\mathrm{trace}}\Big((I - \frac{p}{|p|}\otimes \frac{p}{|p|})X\Big) \\ &- \sqrt{\frac{1}{|p|^2}\Big[{\mathrm{trace}}\Big((I - \frac{p}{|p|}\otimes \frac{p}{|p|})X\Big)\Big]^2 - \frac{4}{|p|}\det\Big(X - \frac{(X\cdot p)}{|p|}\otimes\frac{p}{|p|}\Big)}\, \Bigg) \end{split}$$ otherwise. Notice that the lower bound (\[eqn-delta\]) together with our curvature pinching estimates (that follow from Proposition \[lem-pinch-e\]) imply that $$H\, g_{ij} \ge (1-\delta_1)h_{ij}$$ for some $1 >\delta_1 > 0$.
This implies that we can view a solution to (\[eq-our-sol\]) as a solution to (\[eq-F\]) with $\mathcal{F}$ as in (\[eq-other-form\]). The function $\mathcal{F}_1(t,p,X)$ is continuous on $(0,T) \times (\mathbb{R}^3\backslash \{0\}) \times S^{3\times 3}$, it satisfies the conditions of Theorem $7.1$ in [@CGG] and (\[eq-visc1\]) is a degenerate parabolic geometric equation in the sense of Definition $5.1$ in [@CGG]. Theorem $7.1$ in [@CGG] shows the uniqueness of viscosity solutions to (\[eq-visc1\]). The $C^{1,1}$ solution on $[0,T)$ constructed above is a viscosity solution to (\[eq-visc1\]) and by the uniqueness result it is the unique $C^{1,1}$ solution to (\[eq-our-sol\]). This means that the limiting solution of (\[eq-our-sol\]) does not depend on the sequence ${\epsilon}_i \to 0$. Radial case =========== In this section we will employ the results from the previous section to completely describe the long time behaviour of (\[hmcf0\]) in the case of surfaces of revolution $r = f(x,t)$ around the $x$-axis. For such a surface of revolution the two principal curvatures are given by $$\label{eq-princ-cur} \lambda_1 = \frac{1}{f(1+f_x^2)^{\frac{1}{2}}} \qquad \mbox{and} \qquad \lambda_2 = -\frac{f_{xx}}{(1+f_x^2)^{\frac{3}{2}}}.$$ Therefore, $$H = \lambda_1 + \lambda_2 = \frac{-ff_{xx} + f_x^2 + 1}{f(1+f_x^2)^{\frac{3}{2}}} > 0$$ and $$G = \lambda_1\, \lambda_2 = \frac{-f_{xx}}{f(1+f_x^2)^2}.$$ When the surface evolves by (\[hmcf0\]), $f(x,t)$ evolves by $$\label{eq-hmcf-rad} f_t = \frac{f_{xx}}{-ff_{xx} + f_x^2 + 1}.$$ We will consider solutions $f(\cdot,t)$ on an interval $I_t = [a_t,b_t] \subset [0,1]$ such that $f(a_t,t) = f(b_t,t) = 0$, $f > 0$ and $\tilde{H} = -ff_{xx} + f_x^2 + 1 > 0$. From (\[eq-princ-cur\]) we see that $\lambda_1 > 0$ and $\lambda_2$ changes its sign, depending on the convexity of $f$.
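For the reader's convenience, here is how (\[eq-hmcf-rad\]) follows from (\[eq-princ-cur\]). The harmonic mean curvature of the graph equals $$\kappa = \frac{G}{H} = \frac{-\frac{f_{xx}}{f(1+f_x^2)^{2}}}{\frac{-ff_{xx}+f_x^2+1}{f(1+f_x^2)^{{3}/{2}}}} = \frac{-f_{xx}}{(1+f_x^2)^{{1}/{2}}\, (-ff_{xx}+f_x^2+1)}.$$ Since the outward unit normal of the surface $r = f(x,t)$ has radial component $(1+f_x^2)^{-{1}/{2}}$, the motion with normal speed $-\kappa$ translates into the radial speed $$f_t = -\kappa\, (1+f_x^2)^{{1}/{2}} = \frac{f_{xx}}{-ff_{xx}+f_x^2+1},$$ which is exactly (\[eq-hmcf-rad\]).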
The linearization of (\[eq-hmcf-rad\]) around a solution $f$ is $$\label{eq-linear-rad} \tilde{f}_t = \frac{1+f_x^2}{\tilde{H}^2}\, \tilde{f}_{xx} - \frac{2f_xf_{xx}}{\tilde{H}^2}\, \tilde{f}_x + \frac{f_{xx}^2}{\tilde{H}^2}\tilde{f},$$ which is uniformly parabolic when $\tilde{H}$ is bounded away from zero, regardless of the sign of the smaller eigenvalue $\lambda_2$. \[thm-radial\] Assume that at time $t = 0$, $\Sigma_0$ is a $C^{1,1}$ star-shaped surface of revolution $r = f(x,0)$, for $x\in [0,1]$, $f(0,0) = f(1,0) = 0$, $f(\cdot,0) > 0$ and $H > 0$. Then, the flow exists up to the maximal time $$T= \frac {\mu_0(\Sigma_0)}{4\pi}$$ when the surface $\Sigma_t$ contracts to a point. Moreover, the surface becomes strictly convex at some time $t_1 < T$ and asymptotically spherical at its extinction time $T$. Since the equation is strictly parabolic when $\tilde H >0$, the short time existence of a smooth solution on some time interval $[0,\tau]$ follows by classical results. Having a smooth solution to (\[hmcf0\]) on $[0,\tau]$ implies that we have a smooth solution $f(\cdot,t)$ to (\[eq-hmcf-rad\]). By the comparison principle, $f(x,t)$ is defined on $I_t = [a_t,b_t] \subset [0,1]$ and $f(a_t,t) = f(b_t,t) = 0$. Since the surface is smooth and $H >0$ on $[0,\tau]$, the expressions for $\lambda_1$ and $\lambda_2$ in (\[eq-princ-cur\]) yield the bounds $$\limsup_{x\to a_t} f\, |f_x| \le C_1(t) \quad \mbox{and} \quad \limsup_{x\to b_t}f\, |f_x| \le C_2(t), \quad \mbox{for}\,\, 0 \leq t \leq \tau.$$ In the next lemma we will show that the above bounds do not depend on the lower bound on $H$, but only on the initial data. \[lem-ff\] Assume that the solution $f$ is smooth on $[0,t_0)$, for some $t_0 \leq T$, and $H >0$ on $[0,t_0)$. Then, there exists a uniform constant $C$, depending only on the initial data, so that $$\label{eq-upper-ffx} f^2f_x^2 \le C, \qquad \mbox{for all} \,\, t \in [0,t_0).$$ We will bound $f^2 \, f_x^2$ from above by the maximum principle.
Let us compute its evolution equation. We first compute the evolution of $f_x$ by differentiating (\[eq-hmcf-rad\]) in $x$. We get $$\label{eq-der-der} (f_x)_t = \frac{f_{xxx} \, (1+f_x^2) - f_x\, f_{xx}^2}{\tilde{H}^2},$$ which yields the following equation for $f_x^2$: $$\begin{aligned} (f_x^2)_t &=& \frac{2f_{xxx}f_x\, (1+f_x^2) - 2f_x^2\, f_{xx}^2}{\tilde{H}^2} = \frac{((f_x^2)_{xx} - 2f_{xx}^2)\, (1+f_x^2) - 2f_x^2f_{xx}^2}{\tilde{H}^2} \\ &=& \frac{(f_x^2)_{xx}\, (1+f_x^2) - 4f_x^2f_{xx}^2 - 2f_{xx}^2}{\tilde{H}^2}.\end{aligned}$$ The function $f^2$ satisfies the equation $$( f^2)_t = \frac{(f^2)_{xx} - 2f_x^2}{\tilde{H}^2}.$$ Combining the last two equations we obtain $$\begin{aligned} (f^2 f_x^2)_t &=& \frac{(f_x^2)_{xx}\, (1+f_x^2) - 4f_x^2f_{xx}^2 - 2f_{xx}^2} {\tilde{H}^2}\, f^2 + 2\, \frac{f_{xx} f f_x^2}{\tilde{H}^2}\\ &=&\frac{(f^2f_x^2)_{xx}\,(1+f_x^2)}{\tilde{H}^2} - \frac{(1+ f_x^2)((f^2)_{xx}\, f_x^2 - 2(f_x^2)_x\, (f^2)_x)} {\tilde{H}^2} + 2\, \frac{f_{xx}ff_x^2}{\tilde{H}^2}. \end{aligned}$$ Let $t < t_0$. We distinguish the following two cases: [*Case 1.*]{} The $(f^2f_x^2)_{\max}(t)$ is attained in the interior of $(a_t,b_t)$. Then, at that point $(f^2f_x^2)_x = 0$, which implies (since $f(\cdot,t) > 0$ in the interior) that $$\label{eq-rel} f_x^3 = -f f_xf_{xx}.$$ Hence, the maximum principle implies the differential inequality $$\begin{aligned} \label{eq-subst1} \frac{d}{dt}(f^2f_x^2)_{\max}(t) &\le& -\frac{1+f_x^2}{\tilde{H}^2}\left ((f^2)_{xx}f_x^2 - 2(f_x^2)_x(f^2)_x \right) + 2\, \frac{f_{xx}ff_x^2}{\tilde{H}^2} \nonumber \\ &=& -8\, \frac{(1+f_x^2)f_x^4}{\tilde{H}^4} - 2\frac{f_x^4}{\tilde{H}^2} \le 0.\end{aligned}$$ [*Case 2.*]{} The $(f^2f_x^2)_{\max}(t)$ is attained at one of the tips $\{a_t, b_t\}$. Assume it is attained at $a_t$. The point of the surface $\Sigma_t$ that arises from $x=a_t$ can be viewed as the interior point of $\Sigma_t$ around which our surface is convex. 
We can solve the equation $y = f(x,t)$ with respect to $x$ locally around the point $x = a_t$ (say for $x\in [a_t,x_t]$), yielding the map $x = g(y,t)$. Notice that $f f_x ={y}/{g_y}$ and that $x = a_t$ corresponds to $y = 0$. Since $\{f(x,t)|x\in [a_t,x_t]\}\cup \{-f(x,t)|x\in [a_t,x_t]\}$ is a smooth curve, we have that $x = g(y,t)$ is a smooth graph for $y\in [-f(x_t,t),f(x_t,t)]$. If $f^2f_x^2(\cdot,t)$ attains its maximum somewhere in $[a_t,x_t)$, then ${y^2}/{g_y^2}$ attains its maximum in the interior of $(-f(x_t,t),f(x_t,t))$. We will now compute the evolution of $y^2/g_y^2$ from the evolution of $f^2\, f_x^2$. Since $$f_x(x,t) = \frac{1}{g_y(y,t)},$$ from the evolution of $f^2\, f_x^2$ we get $$\left ( \frac{y^2}{g_y^2} \right )_t = \frac{(1+g_y^2)}{g_y^2\tilde{H}^2} \left (\frac{y^2}{g_y^2} \right )_{xx} - \frac{(1+g_y^2)}{g_y^2\tilde{H}^2} \left ((y^2)_{xx}\, \frac{1}{g_y^2} \ - 2 \left (\frac{1}{g_y^2}\right )_x(y^2)_x \right ) + \frac{2\, y_{xx}y}{g_y^2\tilde{H}^2}.$$ By direct computation we have $$\left (\frac{1}{g_y^2} \right )_x = -\frac{2g_{yy}}{g_y^3} \quad \mbox{and} \quad (y^2)_x = \frac{2y}{g_y} \quad \mbox{and} \quad y_{xx} = -\frac{g_{yy}}{g_y^3}$$ and $$\left (\frac{y^2}{g_y^2} \right )_{xx} = \left (\frac{y^2}{g_y^2} \right )_{yy} - \left (\frac{y^2}{g_y^2} \right )_y \, \frac{g_{yy}}{g_y^3} \qquad \mbox{and} \qquad (y^2)_{xx} = \frac{2}{g_y^2} - \frac{2yg_{yy}}{g_y^3}.$$ Combining the above yields $$\begin{aligned} \left (\frac{y^2}{g_y^2} \right)_t &=& \frac{(g_y^2+1)}{g_y^2\tilde{H}^2} \left(\frac{y^2}{g_y^2} \right )_{yy} - \left (\frac{y^2}{g_y^2}\right )_y\,\frac{g_{yy}\, (1+g_y^2)}{g_y^5\tilde{H}^2}\\ &-& \frac{(1+g_y^2)}{g_y^2\tilde{H}^2}\left (\frac{2}{g_y^4} + 2\frac{yg_{yy}}{g_y^5} \right) - \frac{2\, yg_{yy}}{g_y^5\tilde{H}^2}\end{aligned}$$ which can be rewritten as $$\begin{aligned} \label{eq-g1} \left (\frac{y^2}{g_y^2} \right)_t &=& \frac{(g_y^2+1)}{g_y^2\tilde{H}^2}\, \left (\frac{y^2}{g_y^2} \right )_{yy} -
\left (\frac{y^2}{g_y^2} \right )_y\, \frac{g_{yy}(1+g_y^2)}{g_y^5\tilde{H}^2}\nonumber\\ &-& \frac{(1+g_y^2)}{g_y^2\tilde{H}^2}\left(\frac{2}{g_y^4} + \frac{2\, y^2g_yg_{yy}}{yg_y^6} \right) - \frac{2\, y^2g_yg_{yy}}{yg_y^6\tilde{H}^2}.\end{aligned}$$ At the maximum point of ${y^2}/{g^2_y}$ we have $$y^2g_yg_{yy} = yg_y^2.$$ This together with the maximum principle applied to (\[eq-g1\]) yields the differential inequality $$\label{eq-subst2} \frac{d}{dt}\left (\frac{y^2}{g_y^2} \right )_{\max}(t) \le -\frac{4\, (1+g_y^2)}{g_y^6\tilde{H}^2} - \frac{2}{\tilde{H}^2g_y^4} \le 0.$$ Estimates (\[eq-subst1\]) and (\[eq-subst2\]) imply that $(f^2f_x^2)(x,t) \le C$, for all $x\in [a_t,b_t]$ and all $t < t_0$, where $C$ is a uniform constant independent of time. This finishes the proof of the lemma. \[lem-lower-H0\] Let $T= {\mu_0(\Sigma_0)}/{4\pi}$ be as in Theorem \[thm-radial\]. Then, there exists a uniform constant $\delta$, depending only on the initial data, so that $H \ge \delta > 0$, for all $t\in [0,T)$. It is enough to show that if $H >0$ on $[0,t_0)$, then $H \geq \delta >0$ there. We recall that $\lambda_1 = {1}/{f(1+f_x^2)^{{1}/{2}}}$. Hence, the estimate (\[eq-upper-ffx\]) yields the bound $$\label{eq-lam1} \lambda_1 \ge c >0 \qquad \mbox{on} \,\, \Sigma_{t}, \quad \mbox{for} \,\, t\in [0,t_0).$$ Since $H = \lambda_1 + \lambda_2$, if $\lambda_2 \ge 0$, then $H \ge \lambda_1 \ge c$. If $\lambda_2 < 0$ and $H \le c/{2}$ (otherwise we are done), by (\[eq-lam1\]) we have $$\lambda_1 - |\lambda_2| \le \frac c2 \Rightarrow |\lambda_2| \ge \frac{c}{2}.$$ Observe next that Lemma \[lem-bound-F\] implies the bound $$\frac{\lambda_1|\lambda_2|}{H} \le C, \qquad \mbox{for a uniform constant} \,\, C.$$ Hence $$H \ge \frac{\lambda_1|\lambda_2|}{C} \ge \frac{c^2}{2C}.$$ In any case, we have $$H \ge \min \left \{\frac{c}{2}, \frac{c^2}{2C} \right \}$$ which shows our lemma with $\delta:= \min \{\frac{c}{2}, \frac{c^2}{2C} \}$.
\[lem-blowing-up\] Let $[0,T)$ be the maximal interval of existence of a solution to (\[hmcf0\]). Then, $\max_{\Sigma_t}|A|$ becomes unbounded as $t\to T$. Assume that $\sup_{\Sigma_t}|A| \le C$, for all $t\in [0,T)$, and write $$H = \frac{\tilde{H}}{f(1+f_x^2)^{3/2}}$$ with $\tilde H = -ff_{xx} + f_x^2 +1$. Then $H \leq C$ (since $|A|$ is bounded) and $H \geq \delta >0$ (by the previous lemma). Hence, $$c_1 \le \frac{f(f_x^2 + 1)^{3/2}}{\tilde{H}} \le c_2$$ which implies $$\frac{c_1}{f(1+f_x^2)^{1/2}} \le \frac{1+f_x^2}{\tilde{H}} \le \frac{c_2}{f(1+f_x^2)^{1/2}}.$$ We can rewrite this as $$c_1\, \lambda_1 \le \frac{1+f_x^2}{\tilde{H}} \le c_2\, \lambda_1$$ which together with (\[eq-lam1\]) and $|A| \le C$ imply the bounds $$\label{eq-unif-ellip} C_1 \le \frac{1+f_x^2}{\tilde{H}} \le C_2$$ for uniform constants $C_1, C_2$, for all $t\in [0,T)$. This means the linearization (\[eq-linear-rad\]) of (\[eq-hmcf-rad\]) is uniformly parabolic on the time interval $[0,T)$. If our surface of revolution at time $t$ is given by an embedding $F(\Sigma,t)$, which is a solution to (\[hmcf0\]), then $|A| \le C$ implies $|F|_{C^2} \le C$ on the time interval $[0,T)$ and the speed satisfies $|\kappa| \le C$ (we will use the same symbol $C$ to denote different uniform constants). It is easy to see that $F(\cdot,t)$ converges to a continuous limit $F(\cdot,T)$ as $t\to T$, since $$|F(x,t_1) - F(x,t_2)| \le \int_{t_1}^{t_2}|\kappa|\, dt \le C|t_1 - t_2|.$$ Due to $$|\frac{\partial}{\partial t}g_{ij}|^2 = |2h_{ij}\kappa|^2 \le 4|A|^2\kappa^2 \le C$$ and [@Ha1] we have that $F(\cdot,T)$ represents a surface. It is a $C^{1,1}$ surface of revolution $r=f(x,T)$ around the $x$-axis that comes as a limit as $t\to T$ of surfaces of revolution $r=f(x,t)$. Take $0 < {\epsilon}\ll b_T - a_T$ arbitrarily small. Consider $f(x,t)$ on $x\in [a_t +{\epsilon}, b_t-{\epsilon}]$, that is, away from the tips $x = a_t$ and $x = b_t$ where $f = 0$ and $f_x$ becomes unbounded.
Since our solution is $C^{1,1}$, we have $c_1 \le f(x,T) \le c_2$ and $|f_x| \le c_3$ at time $t = T$ and for $x\in [a_T+{\epsilon},b_T-{\epsilon}]$, where $c_1, c_2, c_3$ all depend on ${\epsilon}$. Due to (\[eq-unif-ellip\]), equation (\[eq-hmcf-rad\]) is uniformly parabolic and standard parabolic estimates yield $$\label{eq-away-tips} |f(\cdot,T)|_{C^k} \le C({\epsilon},k), \qquad \mbox{for every} \,\, k > 0 \,\, \mbox{and} \,\, x\in [a_T+{\epsilon},b_T-{\epsilon}].$$ We can repeat the previous discussion for every ${\epsilon}> 0$ to conclude that our surface $\Sigma_T$ is smooth for $x\in (a_T,b_T)$. By writing our surface locally as a graph $x = g(y,t)$ around the tips (at which our surface is strictly convex), we can show that our surface is smooth at the tips as well (similar methods to those discussed above apply in this case). The same proof as the one for the flow (HMCF$_{\epsilon}$) presented in the previous section shows that our radial surface $\Sigma_t$ shrinks to a point at $T = \frac{\mu_0(\Sigma_0)}{4\pi}$, where $\mu_0(\Sigma_0)$ is the total area of $\Sigma_0$. In particular, this means $f(x,t) \to 0$ as $t\to T$. We will show next that at some time $t_1 < T$ the surface $\Sigma_{t_1}$ becomes strictly convex. This will follow from the next lemma. \[lem-lower\] Assume that $f$ is a solution of the HMCF on $[0,T)$. Then, there exists a constant $c >0$, independent of $t$, such that $f(x,t) \geq c$ at all points $(x,t)$, with $0 \leq t < T$, at which $f_{xx}(x,t) \geq 0$. Fix $t < T$. Since our surface $\Sigma_t$ is convex around the tip $x = a_t$, we have $f_{xx} \leq 0$ there. Let $c_t$ be the largest number in $[a_t,b_t]$ so that $\Sigma_t$ is strictly convex for $x\in [a_t,c_t]$. If $c_t=b_t$, then $\Sigma_t$ is convex and we have nothing to show. Otherwise, $f_{xx}(x,t) \leq 0$ for $a_t \leq x \leq c_t$ and $f_{xx}(x,t) > 0$ in $(c_t,c_t+{\epsilon}_t)$ for some ${\epsilon}_t >0$.
Hence, $f_x(\cdot,t)$ is increasing in $x$, for $x\in (c_t,c_t+{\epsilon}_t)$. Consider the function $f_x(\cdot,t)$ on the interval $x\in [c_t,b_t)$. From the above discussion and the fact that $\lim_{x\to b_t} f_x(x,t) = -\infty$, we conclude that the maximum $$M(t):= \max \, \{ \, f_x(x,t), \,\, x \in [c_t,b_t]\, \}$$ is attained in the interior of $[c_t,b_t]$. Recall that the evolution equation for $f_x$ is $$(f_x)_t = \frac{f_{xxx}(1+f_x^2)}{\tilde{H}^2} - \frac{f_xf_{xx}^2}{\tilde{H}^2}.$$ Hence, assuming that $M(t) \geq 0$, the maximum principle implies that $M'(t) \leq 0$. This shows that $f_x$ is uniformly bounded from above on $[c_t,b_t]$. Since a similar argument can be applied near the other tip $b_t$, we finally conclude that $|f_x|$ is uniformly bounded in the non-convex part (if it exists) away from the tips. We will now conclude the proof of the lemma. Assume that $f_{xx}(x,t) \ge 0$, which holds in a non-convex part of our evolving surface. At that point, we have $$\lambda_2 := -\frac{f_{xx}}{(1+f_x^2)^{3/2}} \le 0.$$ Since $\lambda_2 \leq 0$, Lemma \[lem-bigger-eigen\] implies the bound $$\lambda_1:= \frac{1}{f(1+f_x^2)^{1/2}} \leq C$$ which reduces to the bound $$f \ge \frac{1}{C(1+f_x^2)^{1/2}} \ge \frac{1}{\tilde{C}} =: c$$ in the non-convex part, where $f_x^2 \leq C$ uniformly in $t$. This finishes the proof of the lemma. [*We will now conclude the proof of Theorem \[thm-radial\]:*]{} Since $f(x,t) \to 0$ as $t\to T$, with $T=\frac{\mu_0(\Sigma_0)}{4\pi}$, there is some time $t_1 <T$ such that $$f(x,t) < \frac c2, \qquad \mbox{for all} \,\, x \in [a_t,b_t]$$ where $c > 0$ is the constant taken from Lemma \[lem-lower\]. Hence, by Lemma \[lem-lower\], the surface $\Sigma_t$ is convex for $t \geq t_1$. Since $H \geq \delta >0$ for all $t <T$, the surface $\Sigma_{t_1}$ is strictly convex. The result of Andrews in [@An1] then implies that $\Sigma_t$ shrinks asymptotically spherically to a point as $t\to T$.
[11]{} B. Andrews, [*Contraction of convex hypersurfaces in Euclidean space*]{}, Calc. Var. 2 (1994), 151–171.
B. Andrews, [*Contraction of convex hypersurfaces in Riemannian spaces*]{}, 39 (1994), no. 2, 407–431.
B. Andrews, [*Motion of hypersurfaces by Gauss curvature*]{}, 195 (2000), no. 1, 1–34.
B. Andrews, [*Pinching estimates and motion of hypersurfaces by curvature functions*]{}, math.DG/0402311.
M. C. Caputo, P. Daskalopoulos, [*Highly degenerate harmonic mean curvature flow*]{}, preprint.
Y.-G. Chen, Y. Giga, S. Goto, [*Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations*]{}, 33 (1991), 749–786.
P. Daskalopoulos, R. Hamilton, [*Harmonic mean curvature flow on surfaces of negative Gaussian curvature*]{}, 14 (2006), no. 5, 907–943.
S. Dieter, [*Nonlinear degenerate curvature flows for weakly convex hypersurfaces*]{}, 22 (2005), no. 2, 229–251.
K. Ecker, G. Huisken, [*Interior estimates for hypersurfaces moving by mean curvature*]{}, 105 (1991), 547–569.
L. C. Evans, J. Spruck, [*Motion of level sets by mean curvature I*]{}, 33 (1991), 635–681.
C. Gerhardt, [*Flow of nonconvex hypersurfaces into spheres*]{}, 32 (1990), 299–314.
R. Hamilton, [*Three-manifolds with positive Ricci curvature*]{}, 17 (1982), 255–306.
R. Hamilton, [*Convex hypersurfaces with pinched second fundamental form*]{}, 2 (1994), 167–172.
G. Huisken, [*Flow by mean curvature of convex hypersurfaces into spheres*]{}, 20 (1984), 237–266.
N. V. Krylov, [*Nonlinear elliptic and parabolic equations of second order*]{}, 1978.
K. Smoczyk, [*Starshaped hypersurfaces and the mean curvature flow*]{}, 95 (1998), 225–236.
J. Urbas, [*On the expansion of starshaped surfaces by symmetric functions of their principal curvatures*]{}, 205 (1990), 355–372.
[^1]: $*$: Partially supported by NSF grant 0604657.
--- abstract: 'We use a standard polynomial expansion technique to show the existence of a relation between the polytropic model and the description of gas spheres at finite temperature. A numerical analysis is made concerning the obtained perturbative parameters. It is shown that there is a correspondence between polytropic and thermal gas sphere models for fermions and bosons. For instance, the polytropic index $n$ displays an evident correlation with temperature and chemical potential.' date: Released 2011 Xxxxx XX title: 'A Polytropic Approach to Semi-relativistic Isothermal Gas Spheres at Arbitrary Temperature' --- \[firstpage\] Stars – gas spheres – polytropic model – fermion star – boson star. Introduction {#s:Intro} ============ As they barely emit light, compact objects are considered part of the dark matter in cosmology. Dark matter is a parcel of mass that can only be detected by its gravitational effects. With this definition, and depending on where the reference frame is placed, many things can be considered as dark matter; for instance, even Jupiter can be considered as dark matter by a distant observer. This subject has increasingly been accepted as decisive for understanding the evolution of the universe. There are even speculations [@Portilho] that, on Earth, the excitation and damping of the Chandler wobble, one of the open problems in geophysics, can be considered a consequence of geological interaction with an oblate ellipsoid made of dark matter. The same phenomenon is also observed in other planets. Unlike dark energy, which has no effective mass but whose effects are similarly decisive for the evolution of the universe, dark matter has mass and may consist of many kinds of particles: dust, neutrinos, neutrons, protons (hydrogen), and even bosons, like alpha particles, Higgs bosons or axions [@Kolb]. Many of those particles come from the early universe, carrying some of the record of its evolution.
A fraction of those particles are still roving through free space. But, in the early universe, some of those particles stopped roaming, clustered, and formed self-gravitating compact objects, like fermion stars and boson stars. Fermion star is a general name denoting particular objects like neutron stars and white dwarf stars. Since Oppenheimer and Volkoff [@Oppenheimer], these compact objects have received much attention and many of their properties are already determined, so astronomers can use the theoretical information to detect them. From Chandrasekhar [@Chandrasekhar] one can also learn that neutron stars and white dwarf stars can be studied as a degenerate Fermi gas under its own gravitational field, with an equation of state to determine its pressure and energy density. In contrast to the fermion star there is the so-called boson star [@Ruffini; @Liddle; @Mielke2], built up of self-gravitating bosons at zero temperature. In this case, one can also use a relativistic approach, but the star structure can be studied directly from the stress-energy tensor, since it is possible to write a Lagrangian for bosons in place of using an equation of state. Although this compact object has not been detected yet, there are many theoretical efforts to understand its properties, like formation and stability [@Gleiser; @Mielke1], rotation [@Claudio95; @Claudio06], and even the interaction [@Claudio98] between bosons and fermions in the same spherical system. The aim of the present article is to show that there is a relation between the statistical mechanics of the gas inside the star and the polytropic model for both fermions and bosons, starting from the isothermal gas sphere and departing from this stage to slightly reach a non-zero temperature in a first approximation. Systems of self-gravitating bosons and fermions {#s:Systems} =============================================== It is well known that neutron stars are actually to be considered as fermion stars.
Neutron stars are a kind of self-gravitating Fermi gas, and the study of this kind of quantum fluid is the concern of statistical mechanics. Thus, the neutron star matter, which is usually treated in the classical regime, requires further study in the quantum regime to be better understood, especially under such high pressures and curvatures as those experienced in stars. In physics there are two kinds of quantum statistics: those of fermion and boson gases. Statistical mechanics is considered by Ingrosso and Ruffini [@Ingrosso] in the context of both fermion and boson stars. If the fermion or the boson gas becomes gravitationally bound and stable, the gas will reach a characteristic energy density $\rho$ and pressure $p$ obeying the statistics related to temperature and average energy, such that: $$\begin{aligned} p &=& 16\sqrt{2}S_1 \beta^{5/2} \left[ F_{3/2}(\theta ,\beta) + \frac{\beta}{2}F_{5/2}(\theta ,\beta) \right] \label{E1.1}\\ & & \nonumber \\ \rho &=& 3\sqrt{2}S_2 \beta^{3/2} \left[ F_{1/2}(\theta ,\beta) + \beta F_{3/2}(\theta ,\beta) \right] \label{E1.2}\end{aligned}$$ where: $$\theta =\mu /k_B T$$ $$\beta = k_B T/mc^2$$ $$S_1 =(2s+1)m^4c^5/48\pi^2\hbar^3$$ $$S_2 =(2s+1)m^4c^5/6\pi^2\hbar^3$$ and $m$ is the mass and $s$ is the spin of the considered particle (fermion or boson). The $F$ functions are mathematical tools from statistical mechanics [@Pathria], given by: $$\begin{aligned} F_k (\theta ,\beta ) & = & \int_0^{\infty} x^k \left( 1+\frac{\beta x}{2}\right)^{1/2} g(x, \theta ) dx \label{E1.3} \\ & & \nonumber \\ g(x, \theta ) & = & \frac{1}{\exp (x-\theta) \pm 1} \label{E1.4}\end{aligned}$$ where the sign $+$ ($-$) refers to fermions (bosons). The gas has a chemical potential $\mu$, and $k_B$ is the Boltzmann constant. Polytropes {#s:Polyt} ========== The study of polytropic stars provided great simplifications in the treatment of compact objects.
In thermodynamics, polytropes are paths similar to adiabatics, isobarics and isothermals, and Lane-Emden equations can be separated into families according to their polytropic indices $n$. This is a very important tool for classifying self-gravitating objects and defining their internal energy, gravitational energy and possible stability [@Zeldovich]. Relativistic stars ------------------ In order to consider gas spheres, we use the spherically symmetric line element: $$ds^{2}=-B(r)d\tau ^{2}+A(r)dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta d\varphi ^{2} \label{3.1}$$ Chandrasekhar proposed that a compact object could be treated as a perfect fluid described by the energy-momentum tensor: $$T_{\mu\nu}=p g_{\mu\nu} + \left( p+\rho c^2 \right) u_\mu u_\nu \label{3.2}$$ where $u^\nu$ is the four-velocity of the gas, $p$ is the pressure and $\rho$ is the energy density. Taking units where $c=1$, the four-velocity vectors $u$ obey the relation $u_\mu u_\nu g^{\mu\nu} =-1$, so that $u_r = u_\theta = u_\phi =0$ and $u_t = -(g^{tt})^{-1/2} = -\sqrt{B}$.
The non-vanishing Ricci components, $R_{rr}$, $R_{\theta\theta}$ and $R_{tt}$, give rise, respectively, to: $$\frac{B''}{2B}-\frac{B'}{4B} \left( \frac{A'}{A} + \frac{B'}{B} \right) - \frac{A'}{rA} = - 4\pi G (\rho -p)A \label{3.3a}$$ $$-1 + \frac{r}{2A} \left( - \frac{A'}{A} + \frac{B'}{B} \right) + \frac{1}{A} = - 4\pi G (\rho -p)r^2 \label{3.3b}$$ $$-\frac{B''}{2A} + \frac{B'}{4A} \left( \frac{A'}{A} + \frac{B'}{B} \right) - \frac{B'}{rA} = - 4\pi G (\rho + 3p)B \label{3.3c}$$ Also useful are the equation of hydrostatic equilibrium: $$\frac{B'}{B} = - \frac{2p'}{p+\rho} \label{3.4}$$ and the equation for the total mass inside the star: $${\cal M} = \int_{0}^{R} 4\pi r^2 \rho (r) dr \label{3.5}$$ Thus, it is possible to define $A(r)$ in terms of the enclosed mass: $$A(r)=\frac{1}{ \displaystyle{ \left( 1-\frac{2G{\cal M}}{r} \right)} } \label{3.6}$$ Equation (\[3.3b\]) can be rewritten as: $$\begin{aligned} -1 + \left[ 1-\frac{2G{\cal M}}{r} \right] \left[ 1-\frac{rp'}{p+\rho} \right] + \frac{2G{\cal M}}{r} -4\pi G\rho r^2 \nonumber \\ = - 4\pi G \left( \rho - p \right) r^2 \label{3.7}\end{aligned}$$ This last equation can be written as: $$-r^2 p'(r) = G{\cal M} \rho \frac{ {\displaystyle \left[ 1+ \frac{p}{\rho} \right] \left[ 1+ \frac{4\pi r^3 p }{\cal M} \right]} } {{\displaystyle \left[ 1- \frac{2G{\cal M}}{r} \right]} } \label{3.8}$$ This equation determines the evolution of the pressure in $r$. It can be used in astrophysical applications within the Newtonian approach, after some non-relativistic corrections. Two isentropic special cases are of interest: stars at absolute zero (neutron stars, white dwarf stars and boson stars), and stars in convective equilibrium.
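The three bracketed factors in eq. (\[3.8\]) measure the deviation from the Newtonian balance; a small numerical sketch (the sample values, in $G=1$ units, are illustrative assumptions and not stellar data):

```python
import math

def tov_correction(p, rho, r, M, G=1.0):
    """Product of the three bracketed factors in eq. (3.8); the Newtonian
    equation (3.10) is recovered when this product tends to 1."""
    return (1.0 + p / rho) * (1.0 + 4.0 * math.pi * r ** 3 * p / M) \
        / (1.0 - 2.0 * G * M / r)

# Weak-field sample values: all three correction terms are small,
# so the relativistic factor is only slightly above 1.
weak = tov_correction(p=1e-7, rho=1.0, r=1.0, M=1e-3)
```

Each factor exceeds 1 for positive pressure, so general relativity always steepens the pressure gradient relative to the Newtonian estimate.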
Semi-relativistic approach -------------------------- When considering the Newtonian case, one can take [@Weinberg]: $$p \ll \rho \label{3.9a}$$ $$4\pi r^3 p \ll {\cal M} \label{3.9b}$$ $$\frac{2G{\cal M}}{r} \ll 1 \label{3.9c}$$ Using these approximations, eqn.(\[3.8\]) simplifies to: $$-r^2 p' = G {\cal M} \rho \label{3.10}$$ which by means of eqn.(\[3.5\]) gives: $$\frac{d}{dr}\left[ \frac{r^2}{\rho} p' \right] = -4 \pi G r^2 \rho \label{3.11}$$ This equation could be a linear differential equation (LDE), except for the fact that it is to be solved both for $p(r)$ and for $\rho (r)$. But, if $p$ and $\rho$ are related (by what is known as an 'equation of state'), then eqn.(\[3.11\]) becomes an actual LDE, with a solution depending only upon $p$ or $\rho$. In fact, this equation of state is the polytropic equation: $$p= K \rho^\gamma \label{3.12}$$ where $K$ and $\gamma$ are constants. Now one can see that, with adequate boundary conditions, such as $p'(0)=0$ (to keep $\rho (0)$ finite), eqn.(\[3.11\]) gives $p=p(r)$. This relation has been famous for a long time: for instance, W. Thomson (Lord Kelvin) in 1887 studied the natural stirring produced in a great free fluid mass like the Sun while it is cooling at its surface. At Joule's suggestion, Kelvin also studied this natural stirring in a moist atmosphere, with condensation of vapor in the upward currents of air, which is nowadays of recognized importance in meteorology, known as polytropic change [@Chandrasekhar]. Following this historical tradition, in astrophysics any star for which the equation of state takes the form (\[3.12\]) is called a polytrope [@Weinberg].
Some typical cases are: $\gamma =6/5$ is in the range of super-large gaseous stars; white dwarfs (and fermion stars generally) present $4/3 \leq \gamma \leq 5/3$, where $\gamma \cong 4/3$ corresponds to the largest-mass white dwarfs and $\gamma \cong 5/3$ to small-mass white dwarfs; there are also the incompressible stars, with very high polytropic exponents ($\gamma \rightarrow \infty$). Isothermal gas spheres ---------------------- Isothermal gas spheres are important in astrophysics since they are a starting point for understanding composite stars, and also for the study of stars consisting of envelopes with different temperatures [@Chavanis]. For a standard star [@Chandrasekhar], the theorems on the equilibrium of the star give: $$p= \left( \frac{k_B}{\mu H}\right) \rho T + \frac{1}{3} \sigma T^4 \label{3.13}$$ where $k_B$ is the Boltzmann constant, $\mu$ is the mean molecular weight, $H$ is the mass of the proton, and $\sigma$ is the Stefan-Boltzmann constant. If the star is in gravitational equilibrium, one can write its equation of state as: $$p= K\rho + D \label{3.14}$$ with: $$K=\frac{k_B T}{\mu H} \,\,\, , \,\,\, D=\frac{1}{3} \sigma T^4$$ Notice that both $K$ and $D$ depend on the temperature. Comparing eqns.(\[3.12\]) and (\[3.14\]), we also learn that the isothermal sphere case corresponds closely to a polytropic equation with $\gamma =1$, if we take $D\rightarrow 0$. Taylor expanding the equation of state {#s:Taylor} ====================================== In this article we assume the star is in hydrostatic equilibrium, and using equations (\[E1.1\])-(\[E1.2\]) one can show that: $$\begin{aligned} p=\left(\frac{16 S_1}{3 S_2}\right) \rho + 8\sqrt{2} S_1 \beta^{7/2}F_{5/2} - 16\sqrt{2} S_1 \beta^{3/2}F_{1/2} \label{E4.1}\end{aligned}$$ which is a polytropic-like equation, except for the two additional terms at the end.
It can be compared to equation (\[3.12\]) if we consider: $$\begin{aligned} p= K\rho^\gamma + C(s,\theta , \beta) \label{E4.2}\end{aligned}$$ where $K$ is constant for a star in equilibrium, and the power factor $\gamma$ is sometimes expressed as $\gamma = (1+ 1/n)$ or $n=(\gamma -1)^{-1}$, where $n$ is called the polytropic index. New physics can be inferred if equations (\[3.12\]) and (\[E4.1\]) are generalizations of: $$\begin{aligned} p= K\rho^{1+\delta} \label{E4.3}\end{aligned}$$ Some considerations about these results follow. The function $C$ depends on temperature, but the polytropic equation does not explicitly take into account the temperature of the star. Moreover, the case $\gamma =1$ corresponds to an isothermal sphere at constant temperature and can be considered a special situation in astrophysics, since $n\rightarrow\infty$. Hence, $\delta$ takes into account deviations from this case (if $\delta =0$ we recover the isothermal sphere). We can Taylor expand (\[E4.3\]) around the central density $\rho =\rho _0$: $$\begin{aligned} p=K\left\{ \rho_0^{1+\delta } + (\delta +1)\rho_0^{\delta}\rho - (\delta +1)\rho_0^{1+\delta } \right\} + {\cal O} ((\rho -\rho_0 )^2 , \delta ^2 ) \label{E4.4}\end{aligned}$$ Thus, up to first order in $\rho -\rho_0$ and $\delta$: $$\begin{aligned} p=\left[ K (\delta +1)\rho_0^{\delta } \right] \rho + K \rho_0^{1+\delta } - K (\delta +1) \rho_0^{1+\delta } \label{E4.5}\end{aligned}$$ A comparison between this last equation and (\[E4.1\]) brings us to a system of non-linear equations, which is straightforwardly solved, giving: $$\begin{aligned} \left\{ \begin{array}{lcl} \delta &=& 2\alpha -1 \label{E4.7a} \\ \rho_0 &=& 3 \sqrt{2} S_2 \beta^{3/2} F_{1/2} \label{E4.7b} \\ K &=& {\displaystyle \frac{8\sqrt{2} S_1 \beta^{7/2}F_{5/2}} {\left( 3\sqrt{2} S_2 \beta^{3/2}F_{1/2}\right)^{2\alpha}} } \label{E4.7c} \end{array} \right.\end{aligned}$$ where: $$\begin{aligned} \alpha = \frac{F_{1/2}}{\beta^2 F_{5/2}}
\label{E4.6}\end{aligned}$$ The value of $\delta$ carries relevant information, since one can associate it with the polytropic index, i.e., $\delta = 1/n$. Apart from the $F_k$ functions, one can write $\alpha \sim m^2 c^4/k_{\rm{B}}^2 T^2$. Thence, since the function $\delta$ is directly related to $\alpha$, which depends on the mass of the particle under consideration, results for bosons and fermions may present large numerical differences. Results {#s:Results} ======= The aim here is to show that there is a relation between the polytropic model and the statistical mechanics of the gas inside the star. Accordingly, in the previous section it has been shown that the parameters $\delta$, $\rho_0$ and $K$ can be related to the temperature and to the chemical potential. We separate the results for bosons and for fermions so as to compare the behaviour of these two cases. To illustrate the parameters' behaviour we have used neutrons as fermions, and $Z^0$ bosons. Their masses, spins and other constants can be found, [*e.g.*]{}, in the Particle Data Group tables [@PDG]. Here we take the fermion mass to be 938.3 MeV/$c^2$ with spin 1/2, and the boson mass to be $91.19$ GeV/$c^2$ with spin 1. For fermions, Fig. 1 shows the plot of the parameter $\delta$ of equation (\[E4.7a\]) as a function of the temperature and the chemical potential. The evaluation of the expression for $\delta$ in this case reveals an inverse dependence on the square of the temperature and a constant of order $10^{27}$, [*i.e.*]{}, $\delta \sim 10^{27} T^{-2}$, which is the dominant term. In this case, we can see that $\delta$ presents large values over the temperature range. The temperature range has been chosen to vary from $T\sim 0$ to $T\sim 3$ K, which is the range where significant variations have been observed. The chemical potential has been chosen to vary between $10^{-26}$ and $10^{-25}$ since in this range the integrals present some variation.
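The $F_k$ integrals of eqs. (\[E1.3\])-(\[E1.4\]) entering these parameters can be evaluated by Simpson's rule, mirroring the numerical procedure used below; the cutoffs and number of panels here are our own illustrative choices:

```python
import math

def F(k, theta, beta, boson=False, x_min=1e-6, x_max=60.0, n=60000):
    """Simpson evaluation of F_k(theta, beta), eq. (E1.3), with the
    occupation g(x, theta) of eq. (E1.4): +1 for fermions, -1 for bosons.
    For bosons with theta near 0, a small lower cutoff (the text uses
    0.1) avoids the integrable singularity at x = 0."""
    sign = -1.0 if boson else 1.0
    h = (x_max - x_min) / n
    s = 0.0
    for i in range(n + 1):
        x = x_min + i * h
        y = x ** k * math.sqrt(1.0 + 0.5 * beta * x) / (math.exp(x - theta) + sign)
        s += y if i in (0, n) else (4.0 * y if i % 2 else 2.0 * y)
    return s * h / 3.0
```

As a sanity check, in the dilute limit $\theta \ll 0$, $\beta = 0$, both statistics reduce to Maxwell-Boltzmann, $F_k \to e^{\theta}\,\Gamma(k+1)$.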
The statistical mechanics integrals, eqs.(\[E1.3\])-(\[E1.4\]), are theoretically defined from zero to infinity, but for numerical purposes we used Simpson integration with $x$ varying from $0.1$ to infinity, to avoid a singularity in MAPLE during the boson computations. ![Plot of $\delta$ as a function of the temperature $T$ and chemical potential $\mu$ for fermions. Notice that the values of $\delta$ are of order $10^{27}$, and decrease strongly for higher temperatures. \[fig1\]](Fig1.ps){width="\linewidth"} For bosons, Fig. 2 shows that $\delta$ presents a similar behaviour. Notice, however, that for low temperatures the function increases to values that are of order $10^{31}$; this is because the expression for $\delta$ in this case is of order $10^{31} T^{-2}$. One has to point out that although the fermion and boson cases present similar displays, the values on the $\delta$ axis are of very different orders: the boson gas can reach higher values, since the $Z^0$ mass is larger than the neutron one. If one uses the expected Higgs boson ($H^0$, presently under search at CERN's LHC), with expected mass above 114 GeV (LEP) [@PDG], the discrepancies are greater. ![Plot of $\delta$ for bosons. Notice that the values are of order $10^{31}$, and decrease as temperature increases. \[fig2\]](Fig2.ps){width="\linewidth"} The behaviour of the parameter $\rho_0$ is shown in fig.3 for fermions, and in fig.4 for bosons. Aside from the integrals involved, the parameter $\rho_0$ scales like $T^{3/2}$, growing with temperature. (Observe that the two displays present the axes for $T$ and $\mu$ in a different direction compared to fig.1 and fig.2.) ![Plot of the $\rho_0$ function for fermions. Notice that in this case $\rho_0 < 2.5$. \[fig3\]](Fig3.ps){width="\linewidth"} ![Plot of the $\rho_0$ function for bosons. Notice that the values are of order $10^{5}$.
\[fig4\]](Fig4.ps){width="\linewidth"} In the range of temperature selected for these sample graphics we have obtained a nearly vanishing $K$ function (for both fermions and bosons); meanwhile, at high temperatures we have observed a varying $K$ parameter. In the range of temperatures where both $\delta$ and $\rho_0$ have significant values, the parameter $K$ is nearly vanishing, and vice versa. This result for $K$ requires further investigation, since $K \neq 0$ is a necessary condition to avoid a vanishing-pressure gas (this is a work in progress). Anyhow, for all the numerical ranges tested we observed that $\delta =1/n$ (and $\rho_0$) presented a strong correlation with temperature and chemical potential, within the limits of the expansion used for the equation of state. Limits and Validity {#s:Limits} =================== The results obtained in the previous section may raise questions concerning the validity of the expansion over the numerical ranges evaluated, since $\delta$ and $\rho_0$ display huge values. In fact, figures \[fig1\] to \[fig4\] form just part of a whole: in those figures only domains where the surface presented pronounced variation are shown; the temperature and chemical potential ranges extend to infinity, with no limitations up to now. Despite having achieved the aim of this work, namely showing that there is a connection between the polytropic and the isothermal sphere models, we perform a deeper analysis of the polynomial expansion (\[E4.5\]). As explained in section \[s:Taylor\], equation (\[E4.5\]) comes from: $$\begin{aligned} p &=& K\left\{ \rho_0^{1+\delta } + (\delta +1)\rho_0^{\delta} (\rho -\rho_0 ) + \frac{\delta (\delta +1)}{2} \rho_0^{\delta -1}(\rho -\rho_0 )^2 \right. \nonumber \\ & & \left. + \frac{\delta (\delta^2 -1)}{6} \rho_0^{\delta -2}(\rho -\rho_0 )^3 + \cdots \right\} \label{E6.1}\end{aligned}$$ which is a Taylor expansion of (\[E4.3\]) around the central density $\rho =\rho _0$.
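The quality of this truncation can be checked numerically: for small $\delta$ and $\rho$ near $\rho_0$, the linearized pressure of (\[E4.5\]) stays close to the exact $K\rho^{1+\delta}$ (the values of $K$, $\delta$ and $\rho_0$ below are arbitrary test numbers):

```python
def p_exact(rho, K, delta):
    # Exact polytropic-like law, eq. (E4.3).
    return K * rho ** (1.0 + delta)

def p_linear(rho, K, delta, rho0):
    # First-order expansion (E4.5) about the central density rho0.
    return K * ((delta + 1.0) * rho0 ** delta * rho
                + rho0 ** (1.0 + delta) - (delta + 1.0) * rho0 ** (1.0 + delta))

K, delta, rho0 = 1.0, 1e-3, 2.0
errs = [abs(p_linear(r, K, delta, rho0) - p_exact(r, K, delta))
        for r in (1.9, 2.0, 2.1)]
```

The error vanishes at $\rho = \rho_0$ and grows quadratically with $\rho - \rho_0$, with a prefactor proportional to $\delta$, in agreement with the remainder terms of (E6.1).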
Two points are important for the validity of this expansion: the terms $(\rho -\rho_0 )^2 \rightarrow 0$ and $\delta \rightarrow 0$. The first condition is necessary, since otherwise parts of the third term in eq.(\[E6.1\]) mix with those of the previous term. Moreover, the first condition can be interpreted as $\rho (r) \simeq\rho_0$, which is a reasonable demand when considering an isothermal gas sphere during the formation of a star, for instance. The second condition is necessary since there are terms in $\delta^2$ that must be neglected, so that eq.(\[E4.5\]) can be considered a useful approximation and its resemblance to eq.(\[E4.1\]) is guaranteed. In figures \[fig1\] to \[fig4\] we can see that $\delta$ displays huge values, and it is questionable whether the comparison that results in the present model is valid. But, as remarked above, the values of $T$ and $\mu$ extend to infinity. Thence, we can look for regions in the domain $T \times\mu$ where $\delta \rightarrow 0$. Since $\delta = 2\alpha -1$, the limit $\delta \rightarrow 0$ yields $\alpha\rightarrow 1/2$, or: $$\left( \alpha -\frac{1}{2} \right) \rightarrow 0 .$$ Numerically, we defined a region where this limit is achieved with a certain precision $\epsilon$: $$\begin{aligned} \left| \alpha -\frac{1}{2} \right| < \epsilon \label{E6.2}\end{aligned}$$ and we have used this condition to search only for points that are within this precision. Finding such points establishes an existence condition. Indeed, we were able to find such points for both configurations, boson and fermion stars. This would be expected, since $\delta$ decreases as $T$ and $\mu$ increase.
In fact, considering the fermions to be neutrons, $\alpha$ in eq.(\[E4.6\]) is given by: $$\begin{aligned} \alpha_{\rm Fermi}= 0.1186\times 10^{27} \frac{\int_0^{\infty} \frac{ \sqrt{x}\sqrt{1+0.4592\times 10^{-13}Tx} } {\exp\left( x- \frac{0.7243\times 10^{23}\mu}{T} \right) +1 } } {T^2 \int_0^{\infty} \frac{ x^{5/2}\sqrt{1+0.4592\times 10^{-13}Tx} } {\exp\left( x- \frac{0.7243\times 10^{23}\mu}{T} \right) +1 } } \label{E6.3}\end{aligned}$$ which shows that, to obtain the required limit, we need $$\alpha\sim \frac{10^{27}}{T^2} \rightarrow \frac{1}{2},$$ and that yields temperatures such as $T\sim 10^{13}$ K. Since eq.(\[E6.3\]) demands a big computational effort, we use this analytical estimate to search for points in the vicinity of this temperature (for fermion configurations the program spent about 5 hours, and for bosons about 18 hours, using standard computers). In the case of $Z^0$ bosons we have obtained: $$\begin{aligned} \alpha_{\rm Bose}= 0.1120\times 10^{31} \frac{\int_0^{\infty} \frac{ \sqrt{x}\sqrt{1+0.4725\times 10^{-15}Tx} } {\exp\left( x- \frac{0.7243\times 10^{23}\mu}{T} \right) -1 } } {T^2 \int_0^{\infty} \frac{ x^{5/2}\sqrt{1+0.4725\times 10^{-15}Tx} } {\exp\left( x- \frac{0.7243\times 10^{23}\mu}{T} \right) -1 } } \label{E6.4}\end{aligned}$$ and for that the required limit for $\alpha$ is achieved with temperatures $T\sim 10^{15}$ K. Fig. \[fig5\], then, shows the points obtained for the fermion case. Crosses represent points satisfying eq.(\[E6.2\]) with $\epsilon =0.5$, $\mu$ varying from $1\times 10^{-27}$ to $2\times 10^{-7}$ with step $2\times 10^{-9}$, and temperature varying from $1\times 10^{12}$ to $9\times 10^{14}$ with step $9\times 10^{12}$, giving ten thousand points to be tested. Indeed, we obtained some points with $\epsilon \leq 0.5$, which are those for which the model can serve the desired purpose, since $\delta$ is small. Notice that we use a log-log plot due to the great spread of the values.
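The grid search described above can be sketched with the leading $T^{-2}$ behaviour of eq. (\[E6.3\]) standing in for the full integrals (the $F$ integrals are replaced by this dominant scaling purely for illustration; the constant and the scan ranges are those quoted in the text):

```python
import math

C_FERMI = 0.1186e27  # leading constant of eq. (E6.3), fermion case

def alpha_approx(T):
    # Dominant behaviour alpha ~ C / T^2 of eq. (E6.3).
    return C_FERMI / T ** 2

def scan(eps=0.5, T_lo=1.0e12, T_hi=9.0e14, n=100):
    """Temperatures on the grid satisfying |alpha - 1/2| < eps, cf. eq. (E6.2)."""
    step = (T_hi - T_lo) / n
    grid = [T_lo + i * step for i in range(n + 1)]
    return [T for T in grid if abs(alpha_approx(T) - 0.5) < eps]

hits = scan()
```

With these numbers the condition reduces to $T > \sqrt{C} \approx 1.1\times 10^{13}$ K, consistent with the $T\sim 10^{13}$ K estimate above.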
![Search for points satisfying eq.(\[E6.2\]) with $\epsilon =0.5$. Crosses correspond to points where $\delta \rightarrow 0$ for the fermion case. The logarithm is base 10. \[fig5\]](Fig5.ps){width="\linewidth"} Fig. \[fig6\] shows the points obtained for the boson case. Circles represent points satisfying eq.(\[E6.2\]) with $\epsilon =0.5$, $\mu$ varying from $1\times 10^{-27}$ to $2\times 10^{-7}$ with step $2\times 10^{-9}$, but temperature varying from $1\times 10^{14}$ to $9\times 10^{16}$ with step $9\times 10^{14}$, giving again ten thousand points to be tested. For bosons we also observed the existence of points for which the model applies within the precision $\epsilon$. ![Search for points satisfying eq.(\[E6.2\]) with $\epsilon =0.5$. Circles correspond to points where $\delta \rightarrow 0$ for the boson case. The logarithm is base 10. \[fig6\]](Fig6.ps){width="\linewidth"} Fig. \[fig7\] shows both cases as seen from above. One can see some clouds of points (which, due to the smoothness of the function, are grouped into streaks, or 'spits'). The number of points obtained for fermionic configurations was greater than for the bosonic ones, within the limits tested. From this plot one can see that there is a difference in the characteristic temperatures for bosons and fermions, and we can define different 'regimes' where the model correctly relates polytropes to isothermal gas spheres. ![Merged plot of both cases, searching for points satisfying eq.(\[E6.2\]) with $\epsilon =0.5$. Spits at the top correspond to bosons and spits at the bottom to fermions. \[fig7\]](Fig7.ps){width="\linewidth"} Once one can rely on the existence of regions where $\delta\rightarrow 0$, there remains the following question: are the corresponding values of the central density $\rho_0$ comparable to those found in the literature? A rapid test can be made by taking two sample points from the clouds of figure \[fig7\], one for each case.
We chose $\log(\mu )=-8$, and temperatures $\log ( T )\cong 14.5$ for fermions and $\log ( T )\cong 16.5$ for bosons.

  Case       $\mu$               $T$                    $\rho_0$                 $\rho_0$ (correct dimensions)
  ---------- ------------------- ---------------------- ------------------------ ------------------------------
  Fermions   $1\times 10^{-8}$   $3.16\times 10^{14}$   $6.5195\times 10^{22}$   $5.87\times 10^{39}$
  Bosons     $1\times 10^{-8}$   $3.16\times 10^{16}$   $3.7262\times 10^{30}$   $3.35\times 10^{47}$

  : Sample points and the corresponding central densities. \[tab1\]

In Table \[tab1\], the numerical values computed for the mass density are listed directly in the fourth column, for the described sample cases. But, in the treatment of relativistic stars, from eq.(\[3.2\]) onwards, the numerical values for $\rho$ (and $\rho_0$) lack a $c^2$ factor. Observe that restoring it gives the correct units for the mass density: $\rho$ has the same units as $S_2$, and once $S_2$ is expressed in the MKS units ${\rm kg}\, {\rm m}^{-1}\, {\rm s}^{-2}$, the adequate correction in units of $c^2$ yields ${\rm kg}\, {\rm m}^{-3}$, as expected. The central densities in the fourth column are computed directly using eq.(\[E4.7b\]) and are not in correct dimensions. From Table \[tab1\] one can see that the sample values so obtained (last column) give rise to densities that are in accordance with those expected from reference texts. In fermion stars, central densities are of order $10^{40}$ kg/m$^3$ [@Zeldovich; @Ruffini]. In boson stars, central densities are given by $\rho_0 \sim m^2/4\pi G$ and are of order $10^{48}$ kg/m$^3$ [@Gleiser2; @Gleiser]. Discussion ========== In this paper we show that there is a relation between the statistical mechanics of a self-gravitating gas sphere model and the polytropic model; for short, we propose the terminology polytropic isothermal gas spheres. We start with the statistical mechanics treatment given for fermion and boson gases [@Ingrosso] and find an equivalent equation of state, which is compared with the polytropic model. We observe that this relation is similar to that of isothermal gas spheres, i.e., a polytrope with $\gamma =1$.
The main result starts in section \[s:Taylor\], where a polynomial expansion is performed and, to first order in $\rho$, one can find the suggested correspondence for the polytropic model parameters: $K$, $\rho_0$ and $\delta = 1/n$. To illustrate our results we use neutrons as fermions and $Z^0$ bosons. The graphics of $\delta$ for fermions and bosons are similar in shape, but very different in range of values, since the $\alpha$ values show remarkable differences due to the masses of the particles. The limits of this approach are the following. The main result of section \[s:Taylor\] is obtained only up to first order. But, we can see that this matches the models in use for isothermal gas spheres. Thence, remarkable differences are not expected if one tries a larger polynomial expansion. We consider only non-charged particles, to avoid further interactions. Probably, new physics can be extracted by considering charged particles like protons, and $W^{\pm}$ bosons or $H^{\pm}$ Higgs bosons. One advantage of this picture is that eqs.(\[E4.7b\]) describe the parameters' dependence on temperature and chemical potential, and there is no need to separate the cases of low temperature and high temperature [@Ingrosso]. Initially we focused on the low-temperature range, closer to the (environment) cosmic background temperature, since the compact objects considered, like neutron stars and boson stars, are theoretically modelled at zero temperature. In order to preserve the validity of the model, we performed an analysis searching for regions (temperature versus chemical potential) where the expansion is adequate. We found those regions, and the corresponding central densities are similar to those known for fermion and boson stars.
Future work could use a set of equations similar to (\[E4.7b\]), with a higher-order approximation in the expansion, to detail the behaviour of the parameters over larger ranges of temperature and chemical potential, which could be useful for a wider set of stellar objects and structures. Merafina [@Merafina] points out that semi-degenerate configurations have had a quite satisfactory treatment inside the bulge of the star, but in the outer shell, where there is no sharp border, the expansions of the thermodynamical quantities fail. This problem can possibly be circumvented by using the polytropic approach to describe fermionic stars (neutron stars and white dwarf stars) and boson stars. The overall conclusion is that there is a strong correspondence between polytropic and thermal gas sphere models, with an evident correlation among their parameters, e.g., the polytropic index $n$, and the possible temperatures, compositions and chemical potentials. This is exactly what is expected from the theory of stellar interiors [@Chandrasekhar; @Zeldovich], but, to our knowledge, has not been reported before. Hence, in this paper we present an alternative way to link the parameters and to study their significant ranges as functions of temperature and chemical potential. In particular, the analysis of $n\rightarrow\infty$ [@Natarajan] and the possible implications for cloud condensation [@Honda] are of potential interest.

Acknowledgments {#acknowledgments .unnumbered}
===============

We wish to thank L. Ph. Vasconcelos for proof-reading. We would like to thank A.L.A. Fonseca, M. D. Maia and E. A. Asano for their encouragement and suggestions. Special thanks to the referee, whose insightful questions led to a more detailed analysis of the expansion limits of the model. This work was a complementary research activity of NAS – Nucleo de Astronomia de Santarem (Astronomy Group of Santarem). [99]{} Amsler C. [*et al*]{} (Particle Data Group), 2008, Phys. Lett. [**B 667**]{}, 1.
Chandrasekhar S., 1939, [*An Introduction to the Study of Stellar Structure*]{}. The University of Chicago Press. Chavanis P. H., 2002, Astronomy & Astrophysics [**381**]{}, 709-730. Chavanis P. H., 2009, pre-print, arXiv:0707.2292. de Sousa C. M. G., Tomazelli J. L., and Silveira V., 1998, Phys. Rev. [**D 58**]{}, 123003. de Sousa C. M. G., 2006, pre-print, astro-ph/0612052. Edwards T. W., and Merilan P. M., 1981, Astrophys. J. [**244**]{}, 600-618. Ferrel R., and Gleiser M., 1989, Phys. Rev. [**D 40**]{}, 2524-2531. Gleiser M., 1988, Phys. Rev. [**D 38**]{}, 2376-2385; [**D 39**]{}, 1257 (E). Honda M., and Honda Y. S., 2003, MNRAS [**341**]{}, 164. Ingrosso G., and Ruffini R., 1988, Nuovo Cimento [**B 101**]{}, 369. Kolb E. W., and Turner M. S., [*The Early Universe*]{}. Perseus Publishing. Kusmartsev F. V., Mielke E. W., and Schunck F. E., 1991, Phys. Rev. [**D 43**]{}, 3895-3901. Liddle A. R., and Madsen M. S., 1992, Int. J. Mod. Phys. [**D 1**]{}, 101-143. Merafina M., 1990, Nuovo Cimento [**B 105**]{}, 985-992. Natarajan P., and Lynden-Bell D., 1997, MNRAS [**286**]{}, 268. Oppenheimer J. R., and Volkoff G. M., 1939, Phys. Rev. [**55**]{}, 374-381. Pathria R. K., 1972, [*Statistical Mechanics*]{}. Pergamon Press. Portilho O., 2009, Braz. J. Phys. [**39**]{}, 1-11. Ruffini R., and Bonazzola S., 1969, Phys. Rev. [**187**]{}, 1767. Schunck F. E., and Mielke E. W., 2003, Class. Quant. Grav. [**20**]{}, R301-R356. Silveira V., and de Sousa C. M. G., 1995, Phys. Rev. [**D 52**]{}, 5724-5728. Wald R. M., 1984, [*General Relativity*]{}. The University of Chicago Press. Weinberg S., 1972, [*Gravitation and Cosmology*]{}. John Wiley. Zeldovich Ya. B., and Novikov I. D., 1971, [*Stars and Relativity*]{}. Dover, 1996 (Original text: [*Relativistic Astrophysics*]{}, vol. 1, The University of Chicago Press, 1971).
--- author: - | Riccardo Pucella\ Cornell University\ Ithaca, NY 14853 USA\ riccardo@cs.cornell.edu title: | SIGACT News Logic Column 10\ **Specifying Confidentiality**[^1] ---

I would like to start my tenure as editor of the Logic Column by thanking Jon Riecke, who has edited this column since 1998. The Logic Column serves as a showcase of the many connections between logic and computer science. Logic has been connected with computer science since the early days of Turing. In the past few decades, logical methods have had a considerable impact. To get a sense of the range of applications, consider the 2001 NSF/CISE Workshop on The Unusual Effectiveness of Logic in Computer Science (see <http://www.cs.rice.edu/~vardi/logic/>). An article derived from the workshop appeared in the *Bulletin of Symbolic Logic*, and it is an exceedingly good read. If you get a copy of that issue of the Bulletin, make sure to also have a look at the article by Buss et al. [-@r:buss01], which discusses the current state of mathematical logic. If you have any suggestions concerning the content of the Logic Column, or even better, if you would like to contribute by writing a survey or tutorial on your own work or on a topic related to your area of interest, feel free to get in touch with me. Topics of interest include, but are not limited to:

- recent results on logic in general, and in applications to computer science in particular;

- reviews of research monographs and edited volumes;

- conference reports;

- relevant results and connections with other fields that make use of logical methods, such as mathematics, artificial intelligence, linguistics, and philosophy;

- surveys of interesting uses of logical methods in computer science.

And while we are on the topic of logical methods in computer science, let me take this opportunity to advertise a new online journal, aptly called *Logical Methods in Computer Science*. See <http://www.lmcs-online.org/> for more details.
Modeling Confidentiality {#modeling-confidentiality .unnumbered}
========================

First-time novelists write transparently autobiographical novels; first-time columnists write about what they do. Therefore, this article will be about logical methods applied to security, a topic I have been involved with for the past few years. The goal is to illustrate a feature of logical methods: to unify under the umbrella of a formal language seemingly distinct notions that share a common intuition. Some of the results I report reflect ongoing work in collaboration with Sabina Petride, of Cornell University. There are a number of important concepts in security: confidentiality (keeping data secret), authentication (proving identity or origination), integrity (preventing data modification), and others. In this article, I will focus on confidentiality, arguably one of the core concepts. There are many views on confidentiality in the literature, with many corresponding definitions, in many different guises. My goal here is to show that these definitions can essentially be understood as follows: they capture the fact that unauthorized agents do not *know* anything about confidential data. The variations between definitions concern the kind of data being protected, and the properties of the data that are meant to remain unknown. The various definitions will be interpreted in a setting where we can make sense of such knowledge, in a pleasantly abstract way. The first step is to specify what we mean by an unauthorized agent. Generally, security is studied in an *adversarial* setting, that is, in the presence of an adversary. Given our focus on confidentiality, we assume that the adversary seeks to circumvent confidentiality, and obtain information about the confidential data.
To simplify our problem slightly, we assume that confidentiality is meant to be enforced against such an adversary, and thus that ensuring confidentiality means that the adversary does not know anything about a confidential piece of data. The second step is therefore to capture the knowledge of the adversary in some general way. A particularly successful formalization of knowledge is due to Hintikka [-@r:hintikka62], and has been applied to many fields of computer science; see Fagin et al. [-@r:fagin95] for a survey. The formalization relies on the notion of *possible worlds*: a possible world is, roughly speaking, a possible way in which the world could be. To drive the intuition, consider the following situation. Suppose I witness Alice murdering Bob in the library, and suppose that it is a matter of fact that Alice used a fire poker, but for whatever reason, I did not notice the murder weapon that Alice used. Thus, there are (at least) two worlds that I consider as possible alternatives to the actual world: the actual world itself, where Alice used a fire poker, and a world where Alice used, say, a brick. I cannot be said to know that Alice murdered Bob using a poker, since from my point of view, it is possible that Alice did not: there is a world I consider possible where Alice used a brick. On the other hand, I can be said to know that Alice is a murderer, since it will be the case at every world I consider possible.[^2] Thus, I can be said to *know* a fact at a world if that fact is true in all the worlds I consider possible at that world. To reason about possible worlds and the knowledge of an agent with respect to these worlds, we use epistemic frames. 
An *epistemic frame* is a tuple $F=(W,{\mathcal{K}})$, where $W$ is a set of possible worlds (or states) and ${\mathcal{K}}$ is a relation on $W$ that represents the worlds that the agent considers as possible alternatives to other worlds; $(w,w')\in{\mathcal{K}}$ if the agent considers $w'$ as a possible world at world $w$. We often use the notation ${\mathcal{K}}(w)$ for $\{w' \mid (w,w')\in{\mathcal{K}}\}$. We identify a fact with the set of worlds where that fact holds. Thus, a fact is a subset $S$ of $W$. Following the intuition above, we say a fact $S$ is *known* at a world $w$ if ${\mathcal{K}}(w)\subseteq S$, that is, if at every world that the agent considers possible at world $w$, the fact $S$ holds. To model the situation in the previous paragraph, consider a simple epistemic frame with three worlds $W=\{w_1,w_2,w_3\}$, where $w_1$ is the world where Alice murdered Bob in the library with a poker, $w_2$ is the world where Alice murdered Bob in the library with a brick, and $w_3$ is the world where Alice did not murder Bob. Thus, the fact $G_1$=“Alice murdered Bob in the library” is represented by the set $\{w_1,w_2\}$, and the fact $G_2$=“Alice murdered Bob with a poker” is represented by the set $\{w_1\}$. By assumption, the worlds I consider possible at $w_1$ are $\{w_1,w_2\}$, and thus ${\mathcal{K}}(w_1)=\{w_1,w_2\}$. Since ${\mathcal{K}}(w_1)=\{w_1,w_2\}\subseteq G_1$, I know the fact $G_1$, but since ${\mathcal{K}}(w_1)=\{w_1,w_2\}\nsubseteq\{w_1\}$, I do not know $G_2$. The framework can be easily extended to reason about the knowledge of multiple agents. It suffices to provide a relation ${\mathcal{K}}_i$ for every agent $i$. In this article, since we shall focus on confidentiality with respect to only a single adversary, we only need to reason about the adversary’s knowledge. This has the advantage of simplifying the framework and the notation. Of course, our discussion can be expanded to deal with multiple agents.
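The set-theoretic reading of knowledge (${\mathcal{K}}(w)\subseteq S$) is easy to check mechanically; here is a minimal Python sketch of the murder-mystery example above (the world names follow the text; the encoding itself is only illustrative):

```python
# Worlds: w1 = poker, w2 = brick, w3 = Alice innocent (as in the text).
W = {"w1", "w2", "w3"}
K = {"w1": {"w1", "w2"}}   # worlds I consider possible at w1

G1 = {"w1", "w2"}   # "Alice murdered Bob in the library"
G2 = {"w1"}         # "Alice murdered Bob with a poker"

def knows(K, w, fact):
    """A fact (a set of worlds) is known at w iff K(w) is contained in it."""
    return K[w] <= fact

print(knows(K, "w1", G1))  # True:  the murder is known
print(knows(K, "w1", G2))  # False: the weapon is not
```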
In fact, we will assume multiple agents, but only model the knowledge of the adversary. Epistemic frames describe the structure of the model that we want to reason about. They have been quite successful in fields such as economics, where they are used to reason about the knowledge of economic agents [@r:aumann99]. While a lot can be done purely at the level of the model, one big advantage of casting a situation in epistemic frames is that we can define a formal language that lets us do the reasoning without having to explicitly manipulate the worlds of the model. The language of *epistemic logic* starts with a set of primitive propositions $\Phi_0$ (describing the basic facts we are interested in reasoning about), with more general formulas formed using conjunction ${\varphi}\land\psi$, negation $\neg{\varphi}$, and knowledge formulas of the form $K{\varphi}$, read “the agent knows ${\varphi}$”. In order to interpret this language in an epistemic frame, that is, to say when a formula of the language is true at a world of the frame, we need to add an *interpretation* $\pi$ stating which primitive propositions are true at which worlds. An *epistemic structure* (also known as a Kripke structure) is a tuple $M=(W,{\mathcal{K}},\pi)$, where $(W,{\mathcal{K}})$ is an epistemic frame, and $\pi$ is an interpretation. The truth of a formula ${\varphi}$ at a world $w$ of structure $M$, written $(M,w){\models}{\varphi}$, is established by induction on the structure of ${\varphi}$:

- $(M,w){\models}p$ if $p\in \pi(w)$

- $(M,w){\models}\neg{\varphi}$ if $(M,w)\not{\models}{\varphi}$

- $(M,w){\models}{\varphi}\land\psi$ if $(M,w){\models}{\varphi}$ and $(M,w){\models}\psi$

- $(M,w){\models}K {\varphi}$ if for all $w'\in{\mathcal{K}}(w)$, $(M,w'){\models}{\varphi}$.

The semantics for primitive propositions shows the role of the interpretation. The semantics of negation and conjunction are the obvious ones.
The semantics of knowledge formulas follows the intuition outlined above: a formula is known at a world $w$ if it is true at all the worlds the agent considers possible at $w$. We write $M{\models}{\varphi}$ if $(M,w){\models}{\varphi}$ for all worlds $w$. We therefore have two tools to reason about the knowledge of agents: a way to model the system with a notion of possible worlds, and a language to express properties of the system. These two tools come together when applying the framework to capture various notions of confidentiality in the literature. Rather than using epistemic structures in their full generality, we focus on a particular class of structures, inspired by the multiagent systems often used to model distributed systems. We assume a number of agents (named $1$ to $n$, for simplicity), including an adversary, named $\mathit{adv}$. We assume that every agent (including the adversary) is in some local state at any global state of the system. We take as the worlds of our model the global states of the system. We furthermore assume that the environment acts like an agent, and has its own local state, to account for the information that needs to be maintained but is not kept in the local state of any agent. Thus, a global state is a tuple $(s_e,s_{{\scriptscriptstyle\mathit{adv}}},s_1,\dots,s_n)$, where $s_e$ is the local state of the environment, $s_{{\scriptscriptstyle\mathit{adv}}}$ is the local state of the adversary, and $s_i$ is the local state of agent $i\in\{1,\dots,n\}$. Intuitively, the local state of an agent represents the part of the global state that he can observe. Thus, if an adversary has the same local state in two global states $s,s'$, then at state $s$, he should consider state $s'$ as a possible global state, since he can observe exactly the same local state in both cases. 
In other words, the basic possible worlds relation for the adversary (called an *indistinguishability relation* since it is based on the idea of distinguishing local states) we consider is the state-identity relation ${\mathcal{K}}^{{\scriptscriptstyle\mathit{local}}}$, which holds between two states if the adversary has the same local state in both states. Formally, $(s,s')\in{\mathcal{K}}^{{\scriptscriptstyle\mathit{local}}}$ if and only if $s_{{\scriptscriptstyle\mathit{adv}}}=s'_{{\scriptscriptstyle\mathit{adv}}}$. While we will find it useful to customize the indistinguishability relation of the adversary to control what he can observe, especially when dealing with cryptography, we shall always assume that the adversary cannot distinguish two states where he has the same local state. Thus, if ${\mathcal{K}}$ is an indistinguishability relation for the adversary, we have $(s,s')\in{\mathcal{K}}$ whenever $s_{{\scriptscriptstyle\mathit{adv}}}=s'_{{\scriptscriptstyle\mathit{adv}}}$, that is, ${\mathcal{K}}^{{\scriptscriptstyle\mathit{local}}}\subseteq{\mathcal{K}}$. Putting this all together, we define an *adversarial frame* as a tuple $F=(S,{\mathcal{K}})$, where $S$ is a set of global states, and ${\mathcal{K}}$ is an indistinguishability relation for the adversary with ${\mathcal{K}}^{{\scriptscriptstyle\mathit{local}}}\subseteq{\mathcal{K}}$. Similarly, an *adversarial structure* is a tuple $M=(S,{\mathcal{K}},\pi)$ where $(S,{\mathcal{K}})$ is an adversarial frame, and $\pi$ is an interpretation. Since adversarial structures are just epistemic structures, we can interpret an epistemic language over adversarial structures, where the knowledge operator captures the knowledge of the adversary. We now explore how we can use this framework to explicate many definitions of confidentiality used in the security literature.
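Before doing so, note that the relation ${\mathcal{K}}^{{\scriptscriptstyle\mathit{local}}}$ can be built directly from the global states; a minimal Python sketch (global states as tuples with the adversary's local state in a fixed position — the state layout is invented purely for illustration):

```python
from itertools import product

# A global state here is a tuple (s_env, s_adv, s_1); the layout is illustrative.
states = [("e0", "a0", "h0"), ("e0", "a0", "h1"), ("e1", "a1", "h0")]

def k_local(states, adv_index=1):
    """State-identity relation: s ~ s' iff the adversary's local
    state components agree."""
    return {(s, t) for s, t in product(states, repeat=2)
            if s[adv_index] == t[adv_index]}

K = k_local(states)
# The first two states are indistinguishable (same adversary state "a0")...
print((states[0], states[1]) in K)   # True
# ...but the third is not confused with the first.
print((states[0], states[2]) in K)   # False
```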
As we shall see, all the definitions will be captured semantically, that is, by describing appropriate conditions on adversarial structures, as well as appropriate indistinguishability relations for the adversary. Moreover, we will give an interpretation to these semantic conditions in terms of formulas of an epistemic logic. Roughly speaking, this interpretation means that the adversary never knows any formula in a particular class; these formulas represent properties of the data defined to be confidential.

Confidentiality and Information Flow {#confidentiality-and-information-flow .unnumbered}
====================================

A particular form of security is to ensure the confidentiality of information among users at different security levels. (This is often called *multilevel security*.) An example is the stereotypical classification of users (and data) in military systems, where security levels include “unclassified”, “classified”, and “top-secret”; users at a given level can access information marked at that level, and at lower levels. The model of the world is that these users share the same system, and the goal is to prevent the system from leaking information about the high-level secrets to lower levels. Generally, these security levels form a hierarchy [@r:denning76]. Consider the following example. A company operates a large computer network. Alice, the CEO, has access to all the data in the company. Bob, a consultant, uses the same system, but has restricted access. The company would like to ensure that Bob cannot gain any information about some of the high-level data that Alice enters in the system. That is, the company would like to prevent any sort of flow of information from high-level data to low-level users. In this setting, a confidentiality property specifies which flows of information are allowed, and which are forbidden.
The most general form of confidentiality is to forbid any kind of information to flow from the high-level users to lower-level users. In the discussion that follows, I will suppose that there are only two classes of security levels, *high* and *low*, with the adversary being a low user; however, the ideas readily generalize to multiple security levels. The general approach goes back to the notion of nondeducibility introduced by Sutherland [-@r:sutherland86]. Halpern and O’Neill [-@r:halpern02a] have formalized this notion (and others) using possible worlds and epistemic logic. I describe one of their results in this section. To a first approximation, the intuition is that the low agent should not be able to rule out possibilities as far as the interesting part of the local state of the high agents is concerned. To capture the interesting part of the local state of the high agents, define an *information function* for agent $i$ to be a function $f$ on global states that depends only on agent $i$’s local state, that is, $f(s)=f(s')$ if $s_i=s'_i$. If an information function for agent $i$ describes the interesting aspects of the local state of agent $i$ that he seeks to keep confidential, then we can define $f$-secrecy with respect to the adversary:[^3] agent $i$ *maintains $f$-secrecy* in an adversarial frame $F$ if for all global states $s$ and all values $v$ in the image of $f$, $${\mathcal{K}}(s)\cap f^{-1}(v)\ne{\varnothing}.$$ In other words, the adversary considers all values of $f$ possible, at every global state. This fairly simple semantic characterization can be captured syntactically, using the epistemic logic described earlier, in a way that relates to the adversary’s knowledge about the high-level state. Let $\Phi_0$ be an arbitrary set of primitive propositions. 
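The semantic condition ${\mathcal{K}}(s)\cap f^{-1}(v)\ne\varnothing$ can be checked directly on a finite adversarial frame before we turn to its syntactic characterization; a minimal Python sketch (the toy frame and information function are invented for illustration):

```python
# Toy frame: states are (adv_local, high_local) pairs; the adversary
# cannot distinguish states that share his local state.
states = [("a", 0), ("a", 1), ("b", 0), ("b", 1)]
K = {s: [t for t in states if t[0] == s[0]] for s in states}

def f(s):          # information function: the high agent's component
    return s[1]

def maintains_f_secrecy(states, K, f):
    """f-secrecy: at every state, every value of f is still possible,
    i.e. K(s) meets f^{-1}(v) for all s and all v in the image of f."""
    values = {f(s) for s in states}
    return all(any(f(t) == v for t in K[s])
               for s in states for v in values)

print(maintains_f_secrecy(states, K, f))   # True: both high values survive

# Break secrecy: remove ("a", 1), so at ("a", 0) the value 1 is ruled out.
states2 = [("a", 0), ("b", 0), ("b", 1)]
K2 = {s: [t for t in states2 if t[0] == s[0]] for s in states2}
print(maintains_f_secrecy(states2, K2, f))  # False
```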
If $f$ is an information function for agent $i$, a formula ${\varphi}$ in $M$ is said to be *$f$-local* if it depends only on the value of $f$, that is, whenever $f(s)=f(s')$, then $(M,s){\models}{\varphi}$ if and only if $(M,s'){\models}{\varphi}$. Thus, in some sense, ${\varphi}$ is a proposition that captures a property of the value of $f$. Of course, we have to account for the possibility that ${\varphi}$ is completely trivial. Say ${\varphi}$ is *nontrivial* in $M$ if there exist $s,s'$ such that $(M,s){\models}{\varphi}$ and $(M,s'){\models}\neg{\varphi}$. The following result is proved by Halpern and O’Neill [-@r:halpern02a]. \[t:kevin\] Let $F=(S,{\mathcal{K}})$ be an adversarial frame. Agent $i$ maintains $f$-secrecy in $F$ if and only if, for every interpretation $\pi$, if ${\varphi}$ is $f$-local and nontrivial in $M=(S,{\mathcal{K}},\pi)$ then $M{\models}\neg K{\varphi}$. Of course, the characterization of confidentiality above is extremely strong. A more realistic form of confidentiality should allow some form of information to leak. Timing information, for example, might fit in this category. In general, given any state, the adversary should be able to rule out states for the high-level agents that are in the distant past, or the distant future. However, formalizing the fact that this kind of information flow is allowed is difficult in practice. It is not obvious how to distinguish allowed timing information flow from attacks that rely on *timing channels* [@r:wray91]. Similarly, there are cases where we must permit the declassification of some data in order for the system to be able to perform useful work. The classical example used in the literature is a password-checking program: a program that prompts the user for a password, and logs him or her in if the password is correct. Assume the password is a high-level piece of data.
If an adversary tries to log in with password $p$ and fails, he has gained information about the true password, namely, that it is not $p$. Most work on information flow in the past few years has aimed at understanding this notion of declassification of data [@r:pottier00; @r:zdancewic01; @r:chong04]. Finally, there are many ways in which the above definitions are insufficiently precise. For one, they do not take the likelihood of states into account. Assume that the adversary initially believes that all the states of the agents are equally likely. If after some interaction with the system, the adversary still believes that all the high-level states are possible, but one is overwhelmingly more likely than the others, then one could easily argue that there has been information leakage, although the above definitions do not capture it. Handling these kinds of flows requires more quantitative forms of information flow properties [@r:gray98; @r:halpern02a].

Confidentiality and Symbolic Cryptographic Protocol Analysis {#confidentiality-and-symbolic-cryptographic-protocol-analysis .unnumbered}
============================================================

One thing that the framework of the last section did not take into account is the use of *cryptography* to hide data from the adversary. Defining confidentiality in the presence of cryptography is more challenging. This form of confidentiality is sometimes studied in the literature when considering cryptographic protocols, that is, protocols between agents that aim at exchanging messages with some security guarantees, such as ensuring that the messages remain confidential, or that the origin of the messages is authenticated. In the information-flow setting, we were interested in reasoning about what the adversary could infer about the local state of the high agents. In this section, however, the adversary is allowed to intercept messages, as well as forward and inject new messages into the system.
We are interested in reasoning about what the adversary can infer about the messages he has intercepted, even though they may be encrypted. This will be reflected in the language used to capture the confidentiality specification: to capture information flow, the formulas involved in the specification are those whose truth depends on the local state of the other agents; for cryptographic protocol analysis, as we shall see, the formulas involved in the specification are those whose truth depends on the messages intercepted by the adversary. There are a number of notions of confidentiality that have been studied in the cryptographic protocol analysis literature. A common one is based on the intuition that the adversary is not able to distinguish between states where the agents exchange message $m$ and states where the agents exchange some other message $m'$, for all messages $m$ and $m'$. This is the approach taken, for instance, in the spi calculus of Abadi and Gordon [-@r:gordon99]. Phrasing it this way brings us halfway to the framework of the last section; however, we need to take into account that the adversary should not be able to distinguish encrypted messages for which he does not have the corresponding decryption key. This requires a formalization of what the adversary can do to messages. The view we take in this section is that an adversary can do anything short of attempting to crack encrypted messages. Thus, we treat the particular encryption scheme used by the agents as perfect. (We weaken this assumption in the next section.) Such an adversary was first formalized by Dolev and Yao [-@r:dolev83]. Roughly speaking, a Dolev-Yao adversary can compose messages, replay them, or decrypt them if he knows the right keys. We first define a symbolic representation for messages, where we write $(m_1,m_2)$ for the pairing (or concatenation) of $m_1$ and $m_2$, and ${\{m\}_{k}}$ for the encryption of $m$ with $k$.
We write ${k^{-1}}$ for the inverse key of $k$, that is, the key used to decrypt messages encrypted with $k$. We then define a relation ${\vdash}$, where $H{\vdash}m$ is interpreted as saying that the adversary can infer message $m$ from a set $H$ of messages. (Intuitively, $H$ is the set of messages he has intercepted.) This relation is defined using the following inference rules: $$\begin{gathered} { \begin{array}{c} m\in H \\\hline H{\vdash}m \end{array}} \quad { \begin{array}{c} H{\vdash}{\{m\}_{k}} \quad H{\vdash}{k^{-1}} \\\hline H{\vdash}m \end{array}} \quad { \begin{array}{c} H{\vdash}(m_1, m_2) \\\hline H{\vdash}m_1 \end{array}} \quad { \begin{array}{c} H{\vdash}(m_1, m_2) \\\hline H{\vdash}m_2. \end{array}}\end{gathered}$$ Thus, for instance, if an adversary intercepts the messages ${\{m\}_{k_1}}$, ${\{{k_1^{-1}}\}_{k_2}}$, and ${k_2^{-1}}$, he can derive $m$ using these inference rules, since $$\{{\{m\}_{k_1}},{\{{k_1^{-1}}\}_{k_2}},{k_2^{-1}}\}{\vdash}m.$$ (The use of a symbolic representation for messages is the source of the name “symbolic cryptographic protocol analysis” given to this style of protocol analysis.) We can define an indistinguishability relation by following the formalization of Abadi and Rogaway [-@r:abadi02a]. (Similar ideas appear in Abadi and Tuttle [-@r:abadi91] and Syverson and van Oorschot [-@r:syverson94].) Intuitively, the adversary cannot distinguish two states if his local state is the same at both states, except that we identify encrypted messages for which he does not have the key, to capture the intuition that he cannot distinguish encrypted messages. For simplicity, assume that the local states $s_{{\scriptscriptstyle\mathit{adv}}}$ of the adversary simply consist of sets of messages (intuitively, the messages he has intercepted, along with any initial messages, such as public keys).
Given a message $m$ and a set of keys $K$, let ${\lfloor m \rfloor}_K$ be the result of replacing every indecipherable message in $m$ by $\square$. Formally, define $$\begin{aligned} {\lfloor p \rfloor}_K & = p\\ {\lfloor k \rfloor}_K & = k\\ {\lfloor (m_1,m_2) \rfloor}_K & = ({\lfloor m_1 \rfloor}_K,{\lfloor m_2 \rfloor}_K)\\ {\lfloor {\{m\}_{k}} \rfloor}_K & = \begin{cases} {\{{\lfloor m \rfloor}_K\}_{k}} & \text{if ${k^{-1}}\in K$}\\ \square & \text{otherwise.} \end{cases}\end{aligned}$$ It is easy to check that $m'$ is a submessage of ${\lfloor m \rfloor}_K$ if and only if $K\cup\{m\}{\vdash}m'$. We extend ${\lfloor - \rfloor}$ to sets of messages $H$ by taking ${\lfloor H \rfloor}_K = \{{\lfloor m \rfloor}_K \mid m\in H\}$. Define ${\mathit{Keys}}(H)=\{k\mid H{\vdash}k\}$. Finally, define the indistinguishability relation ${\mathcal{K}}^{{\scriptscriptstyle\mathit{dy}}}$ by taking $(s,s')\in{\mathcal{K}}^{{\scriptscriptstyle\mathit{dy}}}$ if and only if ${\lfloor s_{{\scriptscriptstyle\mathit{adv}}}\rfloor}_{K}={\lfloor s'_{{\scriptscriptstyle\mathit{adv}}}\rfloor}_{K'}$, where $K={\mathit{Keys}}(s_{{\scriptscriptstyle\mathit{adv}}})$ and $K'={\mathit{Keys}}(s'_{{\scriptscriptstyle\mathit{adv}}})$. In other words, the adversary cannot distinguish two states in which he has intercepted different messages, where the only difference between these messages occurs in the content of encrypted messages for which he does not have the decryption key. We restrict our attention to *message-transmission protocols*, protocols in which the goal is for agent $1$ to send a message to agent $2$ in a confidential way. We assume that the adversary can intercept messages from the network, and can also forward and inject messages into the network.
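The function ${\lfloor \cdot \rfloor}_K$ admits a direct recursive implementation; a minimal Python sketch (messages encoded as nested tuples, with `("pair", m1, m2)` for $(m_1,m_2)$ and `("enc", m, k)` for ${\{m\}_{k}}$, and the inverse of key `k` written `k + "-inv"` — an encoding invented purely for illustration):

```python
BOX = "□"  # stands in for the indecipherable-message marker

def inverse(k):
    """Illustrative convention: 'k' and 'k-inv' are inverse keys."""
    return k[:-4] if k.endswith("-inv") else k + "-inv"

def pattern(m, keys):
    """Compute ⌊m⌋_K: blank out encryptions whose decryption key
    is not in keys."""
    if isinstance(m, tuple) and m[0] == "pair":
        return ("pair", pattern(m[1], keys), pattern(m[2], keys))
    if isinstance(m, tuple) and m[0] == "enc":
        _, body, k = m
        if inverse(k) in keys:
            return ("enc", pattern(body, keys), k)
        return BOX
    return m  # a primitive message or a key

m = ("pair", ("enc", "secret", "k1"), ("enc", "hello", "k2"))
# Holding only k2's inverse, the first encryption is opaque:
print(pattern(m, {"k2-inv"}))   # ('pair', '□', ('enc', 'hello', 'k2'))
```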
We can associate with a protocol $P$ a set $S_P$ of global states corresponding to the states that the protocol goes through upon execution (including states that result from the adversary intercepting, forwarding, or injecting messages). We assume that the global states in $S_P$ include states for all the possible messages that could be sent. If ${\mathcal{M}}$ is the set of all messages that could be sent, and $m\in{\mathcal{M}}$, let $G(m)\subseteq S_P$ be the set of global states where agent $1$ sends message $m$ to agent $2$. We say a message-transmission protocol $P$ *preserves message secrecy* if for all global states $s\in S_P$ and all messages $m\in{\mathcal{M}}$, $${\mathcal{K}}^{{\scriptscriptstyle\mathit{dy}}}(s)\cap G(m)\ne{\varnothing}.$$ In other words, every local state of the adversary is compatible with agent $1$ having sent any possible message $m$. Can we capture this syntactically? Let $\Phi_0$ be a primitive vocabulary. Say ${\varphi}$ *depends only on the message exchanged* by the protocol if $(M,s){\models}{\varphi}$ if and only if $(M,s'){\models}{\varphi}$, whenever the same message is exchanged in both states $s$ and $s'$. The following result can be proved using techniques similar to those used to prove Theorem \[t:kevin\]. A message-transmission protocol $P$ preserves message secrecy if and only if, for every interpretation $\pi$, if ${\varphi}$ depends only on the message exchanged by $P$ and is nontrivial in $M=(S_P,{\mathcal{K}}^{{\scriptscriptstyle\mathit{dy}}},\pi)$ then $M{\models}\neg K{\varphi}$. An alternative approach, sometimes used in the literature, leads to a specification which is easier to enforce. This approach uses the ${\vdash}$ relation directly in the specification. This specification essentially says that the adversary cannot derive the content of the message being exchanged. (This is the approach taken, for instance, by Casper [@r:lowe98], a protocol analysis tool based on the CSP language [@r:hoare85].) 
Say a message-transmission protocol $P$ *preserves message DY-secrecy* if, at every state $s\in S_P$ where the message exchanged is $m$, $$s_{{\scriptscriptstyle\mathit{adv}}}{\nvdash}m.$$ This specification does not require an indistinguishability relation for the adversary, and this suggests that it can be captured by a specification that does not use knowledge. Indeed, the specification can be captured rather simply if we use the right language. As opposed to the way we have been specifying things until now, this time, we fix a particular vocabulary and a particular interpretation $\pi_0$. Let $\mathit{has}(m)$ be a fixed class of primitive propositions, one per message $m$, with $\mathit{has}(m)\in\pi_0(s)$ if and only if $s_{{\scriptscriptstyle\mathit{adv}}}{\vdash}m$. Let $\mathit{exchanged}(m)$ be a fixed class of primitive propositions, one per message $m$, with $\mathit{exchanged}(m)\in\pi_0(s)$ if and only if $m$ is the message exchanged by the protocol at state $s$. The following result follows immediately from the definition of message DY-secrecy. A message-transmission protocol $P$ preserves message DY-secrecy if and only if, for the model $M_0=(S_P,{\mathcal{K}},\pi_0)$ and all messages $m$, $M_0{\models}\mathit{exchanged}(m){\Rightarrow}\neg\mathit{has}(m)$. This specification does not use knowledge, and uses a particular model with a fixed interpretation. In fact, it can be seen as a form of *safety property*, following the classification of properties due to Alpern and Schneider [-@r:alpern85]. Roughly speaking, a safety property can be checked independently at all the points of the system; the truth or falsehood of a formula at a point does not depend on the other points of the system. This generally leads to efficient procedures for checking the specification. 
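The derivability relation ${\vdash}$ used by this specification can be implemented as the standard Dolev–Yao closure: decompose what the adversary holds (split pairs, decrypt with available keys), then try to synthesize the target. The sketch below, with an assumed tagged-tuple representation and symmetric keys, checks DY-secrecy as $s_{adv}{\nvdash}m$:

```python
def analyze(H):
    """Decomposition closure of a message set H: split pairs and
    decrypt encryptions whose key is present (symmetric keys assumed,
    represented as ('key', k) messages)."""
    H = set(H)
    changed = True
    while changed:
        changed = False
        for m in list(H):
            if m[0] == 'pair' and not {m[1], m[2]} <= H:
                H |= {m[1], m[2]}; changed = True
            elif m[0] == 'enc' and ('key', m[2]) in H and m[1] not in H:
                H.add(m[1]); changed = True
    return H

def derives(H, m):
    """H |- m: m is buildable from the analyzed closure of H by
    pairing and by encrypting with available keys."""
    A = analyze(H)
    def build(x):
        if x in A:
            return True
        if x[0] == 'pair':
            return build(x[1]) and build(x[2])
        if x[0] == 'enc':
            return build(x[1]) and ('key', x[2]) in A
        return False
    return build(m)
```

DY-secrecy at a state then amounts to `not derives(adv_messages, m)` for the exchanged message `m`. The per-state, derivation-based check is what makes this specification a safety property.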
It is possible to refine the approach by considering more general ways for the adversary to derive messages, and to formally relate the results to specifications based on knowledge [@r:halpern02e]. Confidentiality and Cryptography {#confidentiality-and-cryptography .unnumbered} ================================ In the last section, the framework let us capture confidentiality in cryptographic protocols, under the assumption that the encryption was perfect; we did not allow the adversary to extract any information from an encrypted message for which he did not have the decryption key. Of course, in reality, encryption schemes are not perfect, and they can possibly leak information about the message being encrypted. In this section, we examine how we can capture the confidentiality of encryption schemes. Cryptography studies, among others, the properties of encryption schemes. Modern cryptography is motivated by two basic tenets. First, encryption schemes are concrete mathematical systems that act on strings (often taken to be bit strings). This view leads naturally to finer confidentiality properties than simply showing that the adversary cannot recover the message being encrypted. Rather, confidentiality should mean that the adversary cannot derive *any* information about the message being encrypted, including, say, that the first bit of the message is a 1. The second tenet is that we do not impose any restriction on the computations that the adversary can perform on encrypted messages, aside from the fact that they must be *feasible* computations. Generally, the class of feasible computations is the class of *probabilistic polynomial time algorithms* [@r:motwani95]. The definition of a probabilistic polynomial time algorithm is asymptotic: the running time of the algorithm is polynomial in the length of the input. 
Working with such a definition of feasibility is simplified by taking the encryption scheme itself to be defined asymptotically, where the parameterization is given by a *security parameter*. Intuitively, the larger the security parameter, the harder it is for an adversary to get information about encrypted messages. The basic definition of confidentiality for an encryption scheme is that the adversary learns nothing about the content of an encrypted message (except possibly, for technical reasons, information about the length of the plaintext). The definitions we use are essentially due to Goldwasser and Micali [-@r:goldwasser84], but simplified following Goldreich [-@r:goldreich98]. In particular, we assume an encryption scheme where the same key is used to encrypt and decrypt messages, and where the encryption is probabilistic: encryption with a given key yields a probability distribution over encrypted messages. We take $E(x)$ to be the distribution of encryptions of $x$, when the key is selected at random. Moreover, we assume that for a security parameter $\eta$, the keys have length $\eta$, and the scheme is used to encrypt messages of length $\eta^2$. Thus, we can simply take the security parameter to be the length of the keys. These restrictions, and the following definitions, are fairly technical, and I will refer to Goldreich [-@r:goldreich98; -@r:goldreich01] for intuitions and more in-depth discussions. The definition we use is that of *indistinguishability of encryptions*, which says that an adversary cannot distinguish, based on probabilistic polynomial time tests, whether two messages encrypted with a random key are the same message or not, even when provided with essentially arbitrary a priori information. Formally, let $A$ be a feasible algorithm (which we assume returns $0$ or $1$). 
We say a sequence $(x_\eta,y_\eta,z_\eta)_\eta$, where $|x_\eta|=|y_\eta|=\eta^2$ and $|z_\eta|$ is polynomial in $\eta$, is *$A$-indistinguishable* if $$\left|{\mathrm{Pr}}\left[ A(E(x_\eta),z_\eta) = 1\right] - {\mathrm{Pr}}\left[ A(E(y_\eta),z_\eta) = 1\right]\right|$$ is a negligible function of $\eta$, where $f(\eta)$ is *negligible* in $\eta$ if for all polynomials $p$, $f(\eta)\le 1/p(\eta)$ for all sufficiently large $\eta$. In other words, a sequence is $A$-indistinguishable if the adversary cannot really distinguish, based on the output of $A$, whether an encrypted message is an encryption of $x_\eta$ or of $y_\eta$, even when provided with arbitrary information $z_\eta$. (For instance, $z_\eta$ could be the pair $(x_\eta,y_\eta)$, meaning that even when the adversary knows that the encrypted message is the encryption of either $x_\eta$ or $y_\eta$, he cannot tell which is the actual message that was encrypted.) Note that we do not require the probabilities to be equal, but that the difference should not be noticeable by a polynomially-bounded observer. An encryption scheme is *semantically secure* if, for all feasible algorithms $A$, all sequences $(x_\eta,y_\eta,z_\eta)_\eta$, where $|x_\eta|=|y_\eta|=\eta^2$ and $|z_\eta|$ is polynomial in $\eta$, are $A$-indistinguishable.[^4] One of the many achievements of modern cryptography is to show that there are encryption schemes that are semantically secure, assuming the existence of mathematical entities such as one-way functions [@r:goldreich01]. We can translate semantic security of an encryption scheme $C$ into properties in an adversarial frame, where the states of the adversary are sequences of messages (indexed by the security parameter), along with some initial information. Formally, a local state for the adversary is a pair $((x_\eta)_\eta,(z_\eta)_\eta)$, where $(x_\eta)_\eta$ is the sequence of messages to be encrypted, and $(z_\eta)_\eta$ is the sequence of a priori information. 
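Purely as an illustration, the following Python sketch estimates the distinguishing advantage of one fixed test algorithm against a toy one-time-pad-style scheme. The scheme, the algorithm, and the sampling parameters are all my assumptions here; the actual definition quantifies over *all* feasible algorithms and cannot be checked by sampling one:

```python
import random

def encrypt(x, key):
    """Toy probabilistic scheme: XOR the message with a fresh random
    key of the same length (a one-time-pad stand-in, illustrative only)."""
    return bytes(a ^ b for a, b in zip(x, key))

def adv_A(ciphertext, z):
    """One fixed 'test' algorithm A: output the low bit of the first
    ciphertext byte (z, the a priori information, is ignored here)."""
    return ciphertext[0] & 1

def advantage(A, x, y, z, trials=20000, rng=random.Random(0)):
    """Estimate |Pr[A(E(x),z)=1] - Pr[A(E(y),z)=1]| by sampling keys."""
    def pr(msg):
        hits = 0
        for _ in range(trials):
            key = bytes(rng.randrange(256) for _ in range(len(msg)))
            hits += A(encrypt(msg, key), z)
        return hits / trials
    return abs(pr(x) - pr(y))
```

Because a fresh uniform key makes the ciphertext distribution independent of the plaintext, the estimated advantage of `adv_A` on two equal-length messages is close to zero, matching the intuition behind the definition.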
Let $S_C$ be the set of all states where the adversary has such a local state. Note that $S_C$ does not directly model a particular protocol between agents; we are interested in modeling properties of an encryption scheme, not a protocol. To get an adversarial frame, we define an indistinguishability relation ${\mathcal{K}}^{{\scriptscriptstyle\mathit{crypt}}}$ as follows: take $(s,s')\in{\mathcal{K}}^{{\scriptscriptstyle\mathit{crypt}}}$ if and only if $(x_\eta,y_\eta,z_\eta)_\eta$ is $A$-indistinguishable for every feasible algorithm $A$, where $s_{{\scriptscriptstyle\mathit{adv}}}=((x_\eta)_\eta,(z_\eta)_\eta)$ and $s'_{{\scriptscriptstyle\mathit{adv}}}=((y_\eta)_\eta,(z_\eta)_\eta)$. We can, up to a point, capture semantic security of the encryption scheme using a knowledge specification. Let $\Phi_0$ be a primitive vocabulary. Say ${\varphi}$ *depends only on messages but not on their length* in $M$ when the following properties hold: 1. if $s_{{\scriptscriptstyle\mathit{adv}}}=s'_{{\scriptscriptstyle\mathit{adv}}}$, then $(M,s){\models}{\varphi}$ if and only if $(M,s'){\models}{\varphi}$; 2. there exist $s,s'$ with $s_{{\scriptscriptstyle\mathit{adv}}}=((x_\eta)_\eta,(z_\eta)_\eta)$, $s'_{{\scriptscriptstyle\mathit{adv}}}=((y_\eta)_\eta,(z'_\eta)_\eta)$, and $|x_\eta|=|y_\eta|$ for all $\eta$, such that $(M,s){\models}{\varphi}$ and $(M,s'){\models}\neg{\varphi}$. The following result follows almost immediately from the definition of semantic security. If an encryption scheme $C$ is semantically secure, then, for every interpretation $\pi$, if ${\varphi}$ depends only on messages but not on their length and is nontrivial in $M=(S_C,{\mathcal{K}}^{{\scriptscriptstyle\mathit{crypt}}},\pi)$ then $M{\models}\neg K{\varphi}$. This formalizes one intuition behind semantic security, namely that it ensures the adversary cannot derive any (nontrivial) knowledge about the content of encrypted messages, except perhaps their length. 
It is not clear how to get the other direction of the implication without making stronger assumptions on the language or the models. This result is unsatisfying compared to the results of the previous section as far as it concerns reasoning about protocols. In particular, the states of the models are more “artificial”, and do not correspond directly to states that arise during the execution of a protocol. A more interesting result would be to characterize the knowledge of an adversary in the context of message-transmission protocols implemented using an encryption scheme with a property such as semantic security. This is an active research area. Some results have been obtained using techniques from programming languages [@r:lincoln98; @r:abadi02a], and logical techniques have been brought to bear on the question [@r:impagliazzo03], but no connection to knowledge has yet been established, as far as I know. #### Acknowledgments. Thanks to Steve Chong, Joe Halpern, Kevin O’Neill, Sabina Petride, and Vicky Weissman for comments. Abadi, M. and A. D. Gordon (1999). A calculus for cryptographic protocols: The [Spi]{} calculus.  [*148*]{}(1), 1–70. Abadi, M. and P. Rogaway (2002). Reconciling two views of cryptography (the computational soundness of formal encryption).  [*15*]{}(2), 103–127. Abadi, M. and M. R. Tuttle (1991). A semantics for a logic of authentication. In [*Proc. 10th ACM Symposium on Principles of Distributed Computing (PODC’91)*]{}, pp.  201–216. Alpern, B. and F. B. Schneider (1985). Defining liveness.  [*21*]{}, 181–185. Aumann, R. J. (1999). Interactive epistemology [I]{}: knowledge.  [*28*]{}(3), 263–301. Buss, S., A. Kechris, A. Pillay, and R. Shore (2001). The prospects for mathematical logic in the twenty-first century.  [*7*]{}(2), 169–196. Chong, S. and A. C. Myers (2004). Security policies for downgrading. In [*Proc. 11th ACM Conference on Computer and Communications Security (CCS’04)*]{}. ACM Press. Denning, D. E. (1976). 
A lattice model of secure information flow.  [*19*]{}(5), 236–243. Dolev, D. and A. C. Yao (1983). On the security of public key protocols.  [*29*]{}(2), 198–208. Fagin, R., J. Y. Halpern, Y. Moses, and M. Y. Vardi (1995). . MIT Press. Goldreich, O. (1998). , Volume 17 of [*Algorithms and Combinatorics*]{}. Springer-Verlag. Goldreich, O. (2001). . Cambridge University Press. Goldwasser, S. and S. Micali (1984). Probabilistic encryption.  [*28*]{}(2), 270–299. Gray, III, J. W. and P. F. Syverson (1998). A logical approach to multilevel security of probabilistic systems.  [*11*]{}(2), 73–90. Halpern, J. Y., R. Harper, N. Immerman, P. Kolaitis, M. Y. Vardi, and V. Vianu (2001). On the unusual effectiveness of logic in computer science.  [*7*]{}(2), 213–236. Halpern, J. Y. and K. O’Neill (2002). Secrecy in multiagent systems. In [*Proc. 15th IEEE Computer Security Foundations Workshop (CSFW’02)*]{}, pp.  32–46. IEEE Computer Society Press. Halpern, J. Y. and R. Pucella (2002). Modeling adversaries in a logic for reasoning about security protocols. In [*Proc. Workshop on Formal Aspects of Security (FASec’02)*]{}, Volume 2629 of [*Lecture Notes in Computer Science*]{}, pp.  115–132. Hintikka, J. (1962). . Cornell University Press. Hoare, C. (1985). . Prentice-Hall. Impagliazzo, R. and B. M. Kapron (2003). Logics for reasoning about cryptographic constructions. In [*Proc. 44th IEEE Symposium on the Foundations of Computer Science (FOCS’03)*]{}, pp.  372–383. Lincoln, P., J. C. Mitchell, M. Mitchell, and A. Scedrov (1998). A probabilistic poly-time framework for protocol analysis. In [*Proc. 5th ACM Conference on Computer and Communications Security (CCS’98)*]{}, pp.  112–121. Lowe, G. (1998). Casper: A compiler for the analysis of security protocols.  [*6*]{}, 53–84. Motwani, R. and P. Raghavan (1995). . Cambridge University Press. Pottier, F. and S. Conchon (2000). Information flow inference for free. In [*Proc. 
5th ACM SIGPLAN International Conference on Functional Programming (ICFP’00)*]{}, pp.  46–57. ACM Press. Sutherland, D. (1986). A model of information. In [*Proc. 9th National Computer Security Conference*]{}, pp. 175–183. Syverson, P. F. and P. C. van Oorschot (1994). On unifying some cryptographic protocol logics. In [*Proc. 1994 IEEE Symposium on Security and Privacy*]{}, pp. 14–28. IEEE Computer Society Press. Wray, J. C. (1991). An analysis of covert timing channels. In [*Proc. 1991 IEEE Symposium on Security and Privacy*]{}, pp. 2–7. IEEE Computer Society Press. Zdancewic, S. and A. C. Myers (2001). Robust declassification. In [*Proc. 14th IEEE Computer Security Foundations Workshop (CSFW’01)*]{}, pp.  15–23. IEEE Computer Society Press. [^1]: © Riccardo Pucella, 2004. [^2]: This is under the assumption that I am not subject to illusions, or hallucinations, of course. Philosophers are fond of such counterexamples, which reveal implicit assumptions about the world that may affect our reasoning. When applying these ideas to computer science, we shall assume that our models take into account everything relevant to establish knowledge, including such implicit assumptions. [^3]: Strictly speaking, $f$-secrecy is defined with respect to any agent, but we already established that we care only about the adversary in this article. [^4]: Strictly speaking, this is the definition of an encryption scheme having indistinguishability of encryptions, which can be shown to be equivalent to the traditional definition of semantic security.
--- abstract: 'The storage industry is considering new kinds of storage devices that support data access function offloading, i.e. the ability to perform data access functions on the storage device itself as opposed to performing them on a separate compute system to which the storage device is connected. But what is the benefit of offloading to a storage device that is controlled by an embedded platform, very different from a host platform? To quantify the benefit, we need a measurement methodology that enables apples-to-apples comparisons between different platforms. We propose a Media-based Work Unit (MBWU, pronounced “MibeeWu”), and an MBWU-based measurement methodology to standardize the platform efficiency evaluation so as to quantify the benefit of offloading. To demonstrate the merit of this methodology, we implemented a prototype to automate quantifying the benefit of offloading the key-value data access function.' author: - Jianshen Liu - Philip Kufeldt - Carlos Maltzahn bibliography: - 'refs.bib' title: 'MBWU: Benefit Quantification for Data Access Function Offloading' --- Introduction ============ Benefit quantification is critical in value assessment of offloading data access functions from traditional host platforms to embedded platforms that are expected to serve beyond the role of traditional storage devices. Though a couple of frameworks focusing on breaking down the offloading complexity  [@gu2016biscuit; @phothilimthana2018floem] have been proposed in recent research, the fundamental question regarding how much can be saved from offloading a given data access function to an embedded platform has not been addressed. The challenge is that whether to offload a data access function depends not only on the characteristics of workloads, which essentially are the function calls organized in some pattern, but also on the performance of the storage media with which the data access function interacts. 
In practical environments, hardware platforms and workloads of interest may differ significantly; solutions of benefit quantification focusing on specific functions  [@wang2016ssd; @kang2013enabling] or using simplified evaluation models  [@boboila2012active; @kim2011fast] may not apply to a different function. Furthermore, since different storage media have significantly different requirements on the bandwidth of various platform resources, the optimal placement of data access functions in terms of the platform efficiency can be dramatically different. We propose a Media-Based Work Unit (MBWU, pronounced “MibeeWu”) and have developed an MBWU-based measurement methodology for the purpose of standardizing the efficiency comparison of different platforms running a given workload over a specific storage media. By evaluating the efficiency of each platform in terms of its cost (\$/MBWU), power (kW/MBWU), and space ($m^{3}$/MBWU), we can quantify the benefit of offloading a data access function from traditional host platforms to embedded platforms. We have implemented a prototype for evaluating key-value data management function offloading and generated instructive results from our experiment. We discuss MBWU as well as this measurement methodology in detail in section \[Methodology\]. Starting from Active Disks  [@riedel1998active; @keeton1998case; @riedel1997active; @tiwari2013active; @ouyang2013active], moving high-selectivity data access functions to storage devices has gained increasing research interest, mainly because of conceivable benefits  [@kang2019towards] such as reducing the size of data transmission between hosts and storage devices, reducing total power consumption, increasing overall resource utilization, and simplifying the application design by leveraging high-level storage semantics. 
For example, key-value smart storage devices can replace the translation layers from key-value down to physical block address, which include a key-value to file translation in the front, a file to logical block address translation in the middle, and a logical block address to physical block address translation at the bottom. Besides these benefits, energy consumption is thought to be another major saving from offloading functions to storage devices. For example, Choi et al.  [@choi2015energy] identified more than 3x energy efficiency with 80% performance improvement. Though various benefits have been studied, the storage industry remains conservative when adding data access functions to storage devices. The main barrier is that the extra processing required in the storage device increases the cost of the device. Since applications run on system platforms, we believe an increase in storage device cost does not necessarily increase the overall platform cost. Considering the variety of workloads and the diversity of hardware, we need a systematic and reproducible methodology to quantify the overall benefit of offloading any given data access function to embedded platforms. The MBWU-based Methodology {#Methodology} ========================== Background ---------- The emergence of various storage technologies has changed the regular formula for constructing storage infrastructure. Historically, this formula was built around hiding the latency of storage devices using caching. However, innovations of recent NAND and storage-class memory technologies (e.g., V-NAND  [@kang2016256], 3D XPoint  [@wiki:3dxpoint]) have altered the cost-optimal placement of various software and hardware resources  [@theis2017end; @shulaker2017three] since storage media of different performance impose different demands on the bandwidth of CPU, memory, network, and storage interface. 
For example, applications may want memory closer to computation for slow storage media because hiding data access latency is important, while the applications may want storage closer to computation for fast media because high-speed networking fabrics and data buses are expensive. With more domain-specific processing units (e.g., GPU, Google TPU  [@wiki:tpu], FPGA) taking over computations from host CPUs, the storage industry asks itself the same question: What should be done to improve the cost-efficiency of utilizing a specific storage media for data access? In terms of the placement of data access functions, the specific question is: For a given workload, can offloading a data access function from host platforms to storage devices reduce the overall cost per performance when the workload uses the same storage media? What is MBWU ------------ Host platforms and embedded platforms differ significantly in cost, usage, performance, power, and form factor. To compare the cost per performance of different platforms, we need to have a reference point to normalize the performance value generated from heterogeneous platforms so that these normalized values are directly comparable. The reference point is required to be **platform-independent** but **media- and workload-dependent**. The reasons are as follows: - **platform-independence**: The reference point should be platform-related hardware independent. Otherwise, the normalized performance value of a platform may not be able to represent the efficiency of the platform utilizing a specific storage media under a workload. For example, if the reference point relates to a specific CPU architecture, then the normalization for the performance of a platform using a different CPU is skewed by the efficiency difference caused by the different CPUs. 
- **media-dependence**: Since the cost-optimal placement of functions is sensitive to types of storage media, the reference point should be media-dependent so that we can always normalize the performance of different platforms to the efficiency of utilizing a specific storage media. For this to work, all the different platforms under test should use the same type of storage media for performance evaluation. - **workload-dependence**: From an application point of view, the performance of a platform is the amount of work the platform can do in a unit of time. To measure the amount of work done, we need to define a unit of work as the reference point so that the performance of different platforms can be normalized to the number of units they can perform. Since different workloads have different work definitions, the unit of work should be measured in workload operations (WOs). Hence, the reference point is workload-dependent. The combination of platform-independence and media-dependence indicates that the reference point can only be media-based. We call this media-based and workload-dependent reference point **MBWU** and define it as the highest number of workload operations per second (WOPS) a given workload on a specific storage media can achieve with all external caching effects eliminated/disabled. In this definition, workload operations should not be simply interpreted as block I/Os. For key-value operations as the workload, a WO is a `GET`, `PUT`, or `DELETE`. For file operations as the workload, a WO is a `read` or `write`. On the other hand, we use the term storage media to express a configuration of storage devices. For example, a storage media can be a device with six flash chips, or a device that combines a spinning media, two flash components, and some non-volatile random-access memory. 
The MBWU definition has no requirement on what the storage media should look like, which means our MBWU-based measurement methodology is seamlessly applicable to different types of storage media. In the following sections we use the two terms *storage media* and *storage device* interchangeably. Finally, since an MBWU only depends on a specific storage media and a given workload, its WOPS should only be throttled by the storage media. Platform-related system resources like CPU, memory, and network can throttle the WOPS when measuring an MBWU. Resources like memory can also enhance the WOPS when the data access is from memory instead of storage devices. A throttled or enhanced WOPS number is not an MBWU because it is platform-dependent. Once an MBWU is measured, the performance of a platform can be measured by the maximum number of MBWUs it can generate under the same workload. The more MBWUs a platform can generate, the higher the performance of the platform. The efficiency of a platform can then be evaluated based on the cost, power, and space of the platform. For example, if the platform can generate *M* MBWUs, the cost-efficiency of the platform (\$/MBWU) can be calculated as $ \frac{cost(plat)}{M} $. Similarly, the power-efficiency of the platform (kW/MBWU) can be calculated as $ \frac{amp \cdot volt}{1000 \cdot M} $, where *amp* and *volt* are the current and voltage of the platform, respectively, when it was generating *M* MBWUs. The space-efficiency of the platform ($m^{3}$/MBWU) can be calculated in a corresponding manner as $ \frac{volume(plat)}{M} $. The MBWU-based measurement methodology is intended to be used by storage device and storage system designers to assess whether to pair a given function with a specific storage media. It is not intended to be used to evaluate online methods during production workloads, because ensuring that the workloads are the same for different platforms is difficult. 
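The normalization and the three efficiency metrics can be sketched as follows; all function names and example numbers are hypothetical, for illustration only:

```python
def platform_mbwus(max_wops, single_mbwu_wops):
    """A platform's performance in MBWUs: its maximum steady-state WOPS
    for the workload divided by the WOPS value of one MBWU (same
    workload, same storage media)."""
    return max_wops / single_mbwu_wops

def platform_efficiency(m_mbwus, cost_usd, amps, volts, volume_m3):
    """Normalize a platform's cost, power, and space by its measured
    MBWUs, following the formulas in the text: cost(plat)/M,
    amp*volt/(1000*M), and volume(plat)/M."""
    return {
        '$/MBWU':   cost_usd / m_mbwus,
        'kW/MBWU':  amps * volts / (1000 * m_mbwus),
        'm^3/MBWU': volume_m3 / m_mbwus,
    }
```

For instance, a (hypothetical) platform generating 6 MBWUs at a cost of \$3000, drawing 2 A at 120 V in a 0.03 $m^{3}$ enclosure would score \$500/MBWU, 0.04 kW/MBWU, and 0.005 $m^{3}$/MBWU.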
How to Measure MBWU(s) ---------------------- Measuring an MBWU for a given workload on a specific storage media is a different procedure from measuring the MBWUs of a platform, though the storage media and the workload must be the same in the two measurements. To measure a single MBWU, we need a capable host to drive the peak performance of a given workload running on a single storage device with all external caching effects disabled. Since high-selectivity workloads are primarily I/O bound, finding such a host is not difficult. The process of measuring the MBWUs of a platform is to measure the maximum steady-state WOPS of the same workload running on the platform with a normal caching configuration. The goal of this measurement is to find the maximum WOPS, which is eventually limited by platform-related resources instead of the storage devices. One way to push the WOPS to the limit is to replicate the workload on multiple storage devices. Once we have the value of the maximum WOPS, the number of MBWUs of this platform equals this value divided by the value of a single MBWU. Figure \[fig:MBWUs\_to\_devices\] is an example showing the general relationship between the number of MBWUs and the number of storage devices. The increment of MBWUs brought by an additional storage device decreases as the number of devices increases, until finally an additional device adds no increment to the total MBWUs. The final stable value is the number of MBWUs of this platform for the workload. One reason for the diminishing MBWU increment shown in the figure is the increasing average number of CPU cycles spent on a single data read due to increasing system memory pressure. ![Relationship between the Number of MBWUs and the Number of Storage Devices[]{data-label="fig:MBWUs_to_devices"}](MBWUs_to_devices.png) In addition to resources like CPU, memory, and network, the real estate issue is another important platform-related bottleneck. 
For example, a limitation on the available hardware connectors may limit the number of storage devices that can be attached to a platform, thus throttling the MBWUs of a platform as well. We have seen this type of limitation in our experiment (Section \[Evaluation\]). Evaluation Prototype -------------------- We chose key-value data management as a function to be offloaded in our study. The design goal of the evaluation prototype was to provide a framework to demonstrate the merit of our MBWU-based measurement methodology by automatically generating reproducible values that represent the benefit of offloading the key-value data management function for a given workload. Key-value data management is a typical high-selectivity function due to the massive amount of data that must move back and forth between different levels of data representation in response to various operations in data management. For example, we used RocksDB  [@borthakur2013under] as the key-value engine to run YCSB  [@cooper2010benchmarking] workload A. We saw up to 6x traffic amplification between the key-value data received by RocksDB (red dashed line) and the final data written out to the underlying block devices (red solid line) (Figure \[fig:data\_traffic\_amp\]). Though we used the key-value function as an example, there is nothing to prevent the MBWU-based measurement methodology from being applied to other functions, such as read/write functions in a file system and SELECT/PROJECT functions in a database management system. ![Amplification of Key-value Data Traffic[]{data-label="fig:data_traffic_amp"}](data_traffic_amp.png) Our prototype starts with pre-conditioning the NAND-based storage devices used to store workload data. This process is necessary because reproducible results depend on it. 
In the prototype, the pre-conditioning is implemented following the SNIA performance test specification  [@snia:pts18]; it purges the devices and then performs a workload-independent pre-conditioning process. After the storage devices are pre-conditioned, a number of RocksDB daemons are started and wait for connection requests from YCSB. Storage devices, RocksDB daemons, and YCSB processes are in a one-to-one relationship. Therefore, the number of RocksDB daemons is identical to the number of YCSB processes. The RocksDB daemon is implemented using Java RMI technology [@wiki:javarmi]. It exposes all public interfaces (e.g., *open()*, *close()*, *get()*, *put()*, *delete()*) of a RocksDB object to the network securely by binding this object to an RMI registry (Figure \[fig:call\_graph\_rocksdb\_rmi\]). We have ported the RocksDB daemon program to support not only x86 and x86\_64, but also aarch64 platforms, since most embedded platforms use ARM-based processors. A YCSB process looks up the corresponding RocksDB object from a specified RMI registry and requests to create a RocksDB instance on the host of the registry by issuing an *open()* remote call. This call gives the RocksDB object the location of a RocksDB options file, which defines the shape of the internal LSM  [@dong2017optimizing] tree and all the data migration policies for key-value data management. Using a consistent RocksDB options file across different platforms avoids using the “platform optimized” configuration file that RocksDB generates by default. Once the RocksDB instances are successfully created, YCSB can start filling them with initial key-value records to support later read/write operations. The data operation requests generated by YCSB are simply passed down by calling the exposed RocksDB interfaces. To ensure the final underlying LSM trees are consistent on platforms of different performance, we added an option to our prototype to control the speed of data loading. 
Slowing down the loads gives the RocksDB instances enough system resources to finish regular data compaction and compression, keeping the LSM trees stable. Finally, when the initial data are loaded, YCSB starts to run the workload specified by a parameter file that defines the target workload. YCSB offers various options to customize a workload: the total number of operations, the request distribution, the ratio between reads and writes, and so on. A high-level view of the evaluation process of our prototype is shown in Figure \[fig:eval\_process\_hview\]. ![Call Graph of RocksDB RMI Server[]{data-label="fig:call_graph_rocksdb_rmi"}](call_graph_rocksdb_rmi.png) ![A High-level View of the Evaluation Process[]{data-label="fig:eval_process_hview"}](eval_process_hview.png) Depending on the configuration of a platform, the storage devices that the RocksDB instances see can be either local physical storage devices or network storage devices managed by a storage disaggregation protocol such as iSCSI  [@wiki:iSCSI] or NVMe-oF  [@minturn2015nvm]. The purge process, however, always takes place on the physical storage devices. We discuss different storage configurations and how they affect the cost-efficiency of a platform in Section \[Evaluation\]. Measuring MBWUs requires identifying which system resource is the bottleneck. Our prototype automatically monitors and records the utilization of CPU, memory, device I/O, and network, as well as power consumption, during the whole evaluation process. At the end of a measurement, the prototype extracts all useful information from these logs and generates a platform resource utilization report for the target workload. Evaluation {#Evaluation} ========== The purpose of the evaluation is to demonstrate the use of our evaluation prototype discussed above for quantifying the benefit of offloading the key-value data management function from traditional host platforms to embedded smart key-value platforms. 
A smart key-value platform exposes a key-value interface instead of a block interface. Infrastructure -------------- Figure \[fig:infra\_setup\] shows the basic components of the two platforms we set up for comparison. For the traditional platform, RocksDB runs on the host and stores data on either direct-attached storage devices or network storage devices. For the embedded platform, RocksDB runs on a single board computer (SBC) named ROCKPro64 [@rockpro6419]. This SBC, together with two adapters and a block storage device, creates a smart key-value storage device that clients can interact with through a key-value interface. ![Configuration of Our Host Platform and Embedded Platform[]{data-label="fig:infra_setup"}](infra_setup.png) Test Setup and Results ---------------------- To better understand where the benefit of offloading comes from and how large the savings are, we designed a three-stage set of tests (Figure \[fig:test\_setup\]). Each stage used a different setup with a different placement of software or hardware components. We used the following workload in all tests: the key size is 16 bytes, and the value size is 4 KiB; the read/write ratio is 50/50, with data accesses following a Zipf [@wiki:zipf] distribution; the total size of the dataset is 40 GiB. ![Three-stage Test Setup[]{data-label="fig:test_setup"}](test_setup.png) The first tests were integrated tests. Both YCSB and RocksDB ran within the platforms and accessed data from direct-attached storage devices. The purpose of these tests was to reveal the benefit of leveraging cost-effective hardware to provide the function of a key-value data store. The first step was to measure the value of an MBWU. This measurement process went from using one YCSB thread to generate the defined key-value workload up to using 32 YCSB threads for concurrent request generation. The reason we stopped at 32 threads relates to the use of SATA SSDs in our experiment.
The SATA interface provides a single command queue with a depth of up to 32. This suggests there is no need to use more than 32 threads to generate requests, provided request generation is not slower than request consumption by the underlying storage device. Figure \[fig:data\_traffic\_amp\] shows that the YCSB throughput was mostly stable with more than 20 threads. Considering the amplification factor between the traffic generated by YCSB and the traffic to the underlying storage device, we concluded there was no need to test with more than 32 threads. However, if the evaluation targets faster storage devices such as NVMe SSDs, we may need to increase the thread count in line with the capability of the storage interface to measure an MBWU. In our results, we saw YCSB reach peak throughput with 32 threads on the host platform. We ensured this throughput was the MBWU by carefully examining the resource utilization report generated by our evaluation prototype. Once a single MBWU is measured, we can measure the number of MBWUs for the two platforms. Our host platform can host up to eight SSDs because it has a limited number of hardware connectors. The workload performance with eight SSDs, as expected, was limited by neither the CPU nor the memory. As discussed previously, this real estate issue is one type of system bottleneck. This type of bottleneck causes the other system resources to be underutilized; thus, it is conceivable that it increased the values of all three metrics (\$/MBWU, kW/MBWU and $m^{3}$/MBWU) for this platform. Under this restriction, our host platform can generate six MBWUs. Figure \[fig:integrated\_test\] shows the evaluation results of the host platform.
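The peak-finding step of this measurement can be sketched as follows. This is a simplification: the saturating throughput curve is invented in place of real YCSB measurements, and in practice the peak only counts as an MBWU after the resource utilization report confirms the storage media is the bottleneck:

```python
def find_mbwu(measurements):
    """Given (thread_count, throughput) samples from a thread sweep,
    return the sample with peak throughput; that throughput defines the
    value of one MBWU for the workload."""
    return max(measurements, key=lambda m: m[1])

# Invented sweep: ops/s saturating as the thread count approaches the
# SATA queue depth of 32.
sweep = [(1, 4000), (4, 13000), (8, 21000), (16, 26000), (24, 27500), (32, 27800)]
threads, one_mbwu = find_mbwu(sweep)
print(threads, one_mbwu)  # 32 27800

# A platform's MBWU count is its aggregate throughput for the same
# workload divided by the single-MBWU throughput (here, a platform that
# is media-bound at six devices' worth of throughput).
platform_throughput = 6 * one_mbwu
print(platform_throughput / one_mbwu)  # 6.0
```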
We skipped some data points in this and some of the following figures because we believed those values could not be the peak performance numbers according to the trends. We applied the same measurement methodology to the embedded platform, and the results are shown in Figure \[fig:integrated\_test\_rockpro64\]. Limited by its CPU performance, the embedded platform can only generate 0.5 MBWU with a single SSD. After the platform MBWUs are measured, we can compare the platforms using any of the three MBWU-based metrics. We saw that the embedded platform reduced the \$/MBWU by 64% compared to processing the same key-value workload on the host platform. At the same time, this platform reduced the kW/MBWU by 39.6% as well. These promising results show that it is worth offloading the key-value data management function to the embedded platform due to the significant saving on hardware. Specifically, compared with the expensive and powerful resources used in the host platform, the cost reduction of the less powerful resources used in the embedded platform is greater than their performance reduction. In other words, it emphasizes the fact that improving system performance by scaling out is much more cost-effective than by scaling up. ![Integrated Test: Evaluation for the Embedded Platform[]{data-label="fig:integrated_test_rockpro64"}](integrated_test_rockpro64.png) In the network tests, YCSB sent key-value requests through the network, as opposed to through the local bus in the integrated tests. The introduction of network traffic may have different impacts on different platforms depending on the availability of computing resources and the amount of network traffic.
For our host platform, since its throughput performance was not CPU or memory bound in the integrated tests, adding the overhead of processing network packets had a smaller performance impact than it did on the embedded platform, whose CPU was already the performance bottleneck for the defined key-value workload. The purpose of the network tests was therefore to evaluate how the introduction of this front-end network affected the benefit results we obtained from the integrated tests. Figures \[fig:network\_test\] and \[fig:network\_test\_rockpro64\] respectively show the results of our host platform and embedded platform for these tests. Based on the results, the host platform can generate 5.2 MBWUs, and the embedded platform can generate 0.37 MBWUs. With these numbers, we again compared the two platforms using the \$/MBWU and the kW/MBWU metrics. We saw that the embedded platform saved 57.86% of \$/MBWU compared to processing the same key-value workload on the host platform. For energy consumption, the embedded platform reduced the kW/MBWU by 45.9%. The reduction of the \$/MBWU benefit is expected, since the performance degradation on the embedded platform is greater than that on the host platform. Conversely, the percentage of energy saving increased because the host platform utilized additional system resources for network traffic processing, which raised its total power consumption. The embedded platform, in contrast, had already engaged all of its system resources to process the workload; in other words, it was already at peak power consumption whether or not it had to handle network traffic. ![Network Test: Evaluation for the Embedded Platform[]{data-label="fig:network_test_rockpro64"}](network_test_rockpro64.png) Storage disaggregation is known for simplifying and reducing the cost of storage management. It requires additional expense on network infrastructure, as the data lives remotely.
The faster the storage devices, the higher the network bandwidth required. Hiding the data management traffic within storage devices is especially attractive in this context due to the high amplification factor for data access—a 6x amplification means more than 5x extra expense on the network to support bandwidth that is not directly relevant to user applications. In the case of the key-value data management function, data amplification comes from the data compaction and compression that frequently happen behind the scenes of client applications. In the last test setup, we simulated an environment with disaggregated storage devices to evaluate how much we can save by removing the back-end network requirement for data management traffic. The host and storage devices are connected using iSCSI. Our host platform exactly captured the cost overhead resulting from the data amplification. On the one hand, the built-in network interface card (NIC) in our host platform was unable to support the high bandwidth requirement of the back-end network; we had to install a capable NIC on it, which increased the cost of this platform. On the other hand, the new NIC occupied a PCIe slot, reducing the number of available connectors for storage devices to four. This reduction exacerbated the imbalance of system resource utilization on this platform and resulted in a lower number of MBWUs that the platform can generate, thus decreasing the platform efficiency represented by the three MBWU-based metrics. It is worth mentioning that keeping the system resource utilization in balance on traditional platforms is practically intractable. In the HPC environment, the ratios between different components in a traditional platform (e.g., the ratio between the number of CPU cores and the number of NICs, and the ratio between the size of memory and the number of storage devices) were designed at the time of purchase according to the requirements of expected workloads.
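The back-end bandwidth arithmetic above can be made explicit. The 6x factor is the amplification figure quoted above; the user-facing bandwidth is an invented example value:

```python
def backend_bandwidth(user_bw_gbps, amplification):
    """Total bandwidth the back-end network must carry when every byte of
    user traffic is amplified by compaction/compression behind the scenes,
    plus the portion not directly relevant to user applications."""
    total = user_bw_gbps * amplification
    extra = total - user_bw_gbps
    return total, extra

total, extra = backend_bandwidth(10.0, 6)  # e.g. 10 Gb/s of user traffic
print(total, extra)  # 60.0 50.0 -> 5x extra network expense
```

Offloading the data management function to the device keeps the `extra` portion off the back-end network entirely.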
However, the change of workloads is difficult to predict; should they change, the system resource utilization could easily become unbalanced. Figure \[fig:disaggregated\_test\] shows the performance results of our host. In the disaggregated tests, the host platform could generate only 3.28 MBWUs. The number of MBWUs of the embedded platform is the same as in the network tests, since its setup is unchanged. Putting all these numbers together, the embedded platform can save 73.4% of \$/MBWU and 70.7% of kW/MBWU if we choose not to use the host platform to process the target key-value workload. Related Work ============ Choi et al. [@choi2015energy] evaluated the energy efficiency of scale-in clusters that support in-storage processing using computation and data-movement energy models. Do et al. [@do2013query] suggested offloading selected query processing components to smart SSDs. These comparisons were conducted using raw performance metrics such as elapsed time and energy in Joules, and did not involve any cost comparison. Floem [@phothilimthana2018floem] is a programming system that aims to accelerate NIC application development by providing abstractions that ease NIC-offloading design. Biscuit [@gu2016biscuit] is a near-data processing framework that allows developers to write data-intensive programs to be offloaded onto storage devices. Both Floem and Biscuit are similar to our evaluation prototype in that they support experimentation by trial and error instead of modeling, which is helpful given the complexity of real-world hardware environments. Our MBWU-based measurement methodology differs from all the previous research in that it focuses on quantifying the benefit of offloading alternatives. Conclusion ========== Host platforms and embedded platforms differ greatly in resource allocation and placement, causing the cost per unit of performance to differ significantly under the same workload.
To quantify the benefit of offloading a given data access function to an embedded platform, we proposed a novel MBWU-based measurement methodology. The core of this methodology is to construct an MBWU as a workload-dependent and media-based reference point and to use the MBWU to normalize the performance of different platforms so that their performance values are directly comparable. It is this direct comparability that enables us to perform apples-to-apples efficiency comparisons between platforms. Our evaluation prototype realizes the power of this methodology and automates the evaluation process for quantifying the benefit of offloading the key-value data management function under a customized workload. Our next step is to evaluate the benefit of offloading other types of data access functions, such as data encryption/decryption functions, database select/project functions, and other data management functions. We believe this measurement methodology will be a useful tool, as it fills the need for benefit quantification in current in-storage computing development.
--- author: - | Peng Gao\ Department of Physics, University of Toronto,\ 60 St. George st, Toronto, ON M5S 1A7, Canada\ \ [E-mail: gaopeng@physics.utoronto.ca]{} - | Boris Pioline\ Laboratoire de Physique Théorique et Hautes Energies[^1],\ Université Pierre et Marie Curie - Paris 6, 4 place Jussieu, F-75252 Paris cedex 05\ Laboratoire de Physique Théorique de l’Ecole Normale Supérieure[^2],\ 24 rue Lhomond, F-75231 Paris cedex 05\ \ [E-mail: pioline@lpthe.jussieu.fr]{} title: 'Topological wave functions and the 4D-5D lift' --- Introduction {#s1} ============ BPS black holes in $\cN=2$ supergravity theories have attracted revived attention recently, with the discovery of deep connections between topological strings and the entropy of four-dimensional [@Ooguri:2004zv] and five-dimensional [@Gopakumar:1998ii; @Katz:1999xq] black holes, as well as a direct relation between 4D black holes and 5D black holes and black rings, often known as the 4D-5D lift [@Bena:2004tk; @Gaiotto:2005gf; @Gaiotto:2005xt; @Bena:2005ni]. These advances have also led to much progress in our understanding of the topological string amplitude, in particular with regard to its wave function character [@Witten:1993ed; @Verlinde:2004ck; @Gerasimov:2004yx; @Aganagic:2006wq; @Gunaydin:2006bz; @Schwarz:2006br; @Huang:2006hq]. In this note, we revisit the holomorphic anomaly equations satisfied by the topological string amplitude from the perspective of the 4D-5D lift. Our analysis is based on the construction in [@Gunaydin:2006bz], where the standard topological amplitude [@Bershadsky:1993ta] (BCOV) was recast into a purely holomorphic wave function satisfying a generalized heat equation, as first suggested in [@Witten:1993ed] (see [@Schwarz:2006br] for a closely related construction). 
In the same work, the algebraic nature of the topological amplitude was elucidated in the context of so-called “magic” ${\cal N}=2$ supergravity theories [@Gunaydin:1983rk], characterized by the fact that their moduli space is a symmetric space. Some of these models are known to be consistent quantum $\cN=2$ theories [@Ferrara:1995yx], while others arise as truncations of theories with higher supersymmetry (see [@Dolivet:2007sz; @Bianchi:2007va] for recent progress on this issue). The outline of this note is as follows. In Section 2, we review the relevant results from [@Gunaydin:2006bz] pertaining to “magic” supergravities. In Section 3.1, we observe that the relation between the charges of 4D and 5D black holes related by the 4D-5D lift is a canonical transformation. This motivates us to introduce a new “5D” polarization for the topological wave function, $\Psi_{5D}(Q_i,J)$, related to the standard “real” polarization $\Psi_{{\mathbb{R}}}(p^I)$ by an appropriate Bogoliubov transformation. In Section 3.2, we show that this relation is an instance of the Gopakumar-Vafa connection between 5D black holes and topological strings [@Gopakumar:1998ii; @Katz:1999xq], provided we identify $\Psi_{5D}(Q_i,J)$ with the degeneracies of 5D BPS black holes. In Section 3.3, by exploiting the known Bekenstein-Hawking-Wald entropy of 5D black holes, we constrain the asymptotic behavior of the topological string amplitude at finite coupling but for large Kähler classes. In Section 4, we extend this 5D polarization to the putative “generalized topological amplitude” introduced in [@Gunaydin:2006bz], which remains to be constructed, and identify the canonical transformation as a particular Weyl reflection inside the 3D duality group. The details of two computations are relegated in Appendices A and B. 
Magic supergravities and topological wave functions =================================================== In this section, we briefly review the main results in [@Gunaydin:2006bz] on the algebraic nature of the topological string amplitude in “magic” $\cN=2, D=4$ supergravity theories. Some useful background can be found in [@Gunaydin:1983rk; @Pioline:2006ni]. In these models, the vector multiplet moduli space is a Hermitian symmetric tube domain ${\cal M}=G/K$ (a very special case of a special Kähler manifold), $G={\rm Conf}(J)$ is the “conformal group” associated to a Jordan algebra $J$ of degree three, $K$ is the maximal compact subgroup of $G$, a compact real form of the “reduced structure group” ${\rm Str}_0(J)$, and the role of the phase space $H^{\rm even}(X,\mathbb{R})$ in type IIA compactifications on a Calabi-Yau three-fold $X$ (or $H^3(X,\mathbb{R})$ in type IIB compactifications) is played by the “Freudenthal triple” associated to $J$, namely the real vector space \[RJJR\] $V=\mathbb{R}\oplus J\oplus J\oplus \mathbb{R}$ equipped with the symplectic form \[ompq\] $\omega = dp^0\wedge dq_0 + dp^i\wedge dq_i \equiv dp^I\wedge dq_I$ where $(p^0,p^i,q_i,q_0)$ are the coordinates along the respective summands in \[RJJR\]. $V$ admits a linear action of $G$ which preserves the symplectic form $\omega$, and leaves the quartic polynomial \[i4pq\] $I_4(p^I,q_I) = 4\, p^0 N(q_i) - 4\, q_0 N(p^i) + 4\, \partial_{p^i}N(p^j)\, \partial_{q_i}N(q_j) - (p^0 q_0 + p^i q_i)^2$ invariant. The symplectic space $V$ may be quantized by replacing $(p^I,q_I)$ by operators $\hat{p}^I=p^I,\hat{q}_I={{\mathrm i}}\hbar\, \pa/\pa p^I$ acting on the Hilbert space $\mathcal{H}$ of $L^2$ functions of $n_v+1$ variables $p^I$, generating the Heisenberg group $H$ with center $Z=-{{\mathrm i}}\hbar$, $[\hat{p}^I,\hat{q}_J]=Z\,\delta^I_J$ .
The linear action of $G$ on $V$ leads to a unitary action of $G$ on $\mathcal{H}$ by generators in the universal enveloping algebra of $H$[^3] \[swrep\] $$\begin{gathered} S^i \mapsto - {\ensuremath{{\frac{{{\mathrm i}}}{2}}}}\hbar^2\, C^{ijk} \frac{\pa^2}{\pa p^j \pa p^k} - \hbar\, p^i {\frac{\partial}{\partialp^0}}, \qquad T_i \mapsto {\ensuremath{{\frac{{{\mathrm i}}}{2}}}}C_{ijk} p^j p^k - \hbar\,p^0 {\frac{\partial}{\partialp^i}}, \label{swrep-3} \\ R^j_i \mapsto -\delta^j_i\, \hbar\, p^0 {\frac{\partial}{\partialp^0}} + \hbar\, p^j {\frac{\partial}{\partialp^i}} - {\ensuremath{{\frac{1}{2}}}}C_{ikl}\, C^{jnl}\,\hbar \left(p^k {\frac{\partial}{\partialp^n}} + {\frac{\partial}{\partialp^n}} p^k\right)\ , \label{swrep-4}\\ D\equiv \frac{3}{n_v} R^i_i \mapsto -3 \hbar\, p^0 {\frac{\partial}{\partialp^0}} - \hbar\, p^i {\frac{\partial}{\partialp^i}} - {\ensuremath{{\frac{1}{2}}}}\hbar\, (n_v+3). \label{swrep-5}\end{gathered}$$ Here, $C_{ijk}$ is the cubic norm form of $J$, related to the prepotential $F_0$ describing the vector multiplet moduli space $\mathcal{M}$ via F\_0 =  , and $C^{ijk}$ is the “adjoint norm form”, satisfying the “adjoint identity” Q\_i = 12 C\_[ijk]{} Q\^j Q\^k Q\^i = 12 C\^[ijk]{} Q\_j Q\_k ,N(Q\_i)16 C\^[ijk]{}Q\_i Q\_j Q\_k . Thus, the Hilbert space $\mathcal{H}$ furnishes a unitary representation of the “Fourier-Jacobi group” $\tilde G = G \ltimes H$, known as the Schrödinger-Weil representation. Moreover, in [@Gunaydin:2006bz] (without any assumption of “magicness”), a sequence of transformations was constructed which takes the topological partition function $\Psi_{\rm BCOV}(t^i,\bar t^{\bar i};x^i,\lambda)$ from [@Bershadsky:1993ta], subject to two sets of holomorphic anomaly equations, into a purely holomorphic wave function $\Psi_{\rm hol}(t^i; y_i,w)$ satisfying a single heat equation[^4] \[heat\] \_[hol]{}(t\^i;y\_i,w) = 0 . 
In magic cases, it was further shown that this holomorphic wave function can be viewed as a matrix element \_[hol]{}(t\^i; y\_i,w) = | ( y\_i \^i + (w-t\^i y\_i) \^0 ) (t\^i T\_i) | \_0 where $|\Omega\rangle_0$ is the “vacuum” of the Schrödinger-Weil representation, annihilated by ${{{\widehat{q}}}}_I$, $S_i$ and the traceless part of $R^i_j$, and with charges $D|\Omega\rangle_0 = -\frac{1}{2}\,(n_v+3)\,|\Omega\rangle_0$ , $Z|\Omega\rangle_0 = -{{\mathrm i}}\hbar\,|\Omega\rangle_0$ . The heat equation (and ultimately the holomorphic anomaly equations of [@Bershadsky:1993ta]) can then be shown to follow from the operator identity in the Schr${\rm \ddot o}$dinger-Weil representation of $\tilde G$, \[gnp\] $Z\, T_i = {{{\widehat{p}}}}^0{{{\widehat{q}}}}_i + \frac{1}{2}\, C_{ijk}\,{{{\widehat{p}}}}^j{{{\widehat{p}}}}^k$ . It is also useful to introduce the operator \[gnpJ\] $2 J = \frac{2}{3}\, Z\, {{{\widehat{p}}}}^i T_i + {{{\widehat{p}}}}^0 \left( {{{\widehat{p}}}}^0 {{{\widehat{q}}}}_0 + \frac{1}{3}\, {{{\widehat{p}}}}^i {{{\widehat{q}}}}_i \right) = {{{\widehat{p}}}}^0 \left( {{{\widehat{p}}}}^0 {{{\widehat{q}}}}_0 + {{{\widehat{p}}}}^i {{{\widehat{q}}}}_i \right) + \frac{1}{3}\, C_{ijk}\,{{{\widehat{p}}}}^i{{{\widehat{p}}}}^j{{{\widehat{p}}}}^k$ , whose significance will become apparent shortly. Topological wave functions and black hole entropy {#s3} ================================================= Our starting point is the observation that the right-hand sides of \[gnp\] and \[gnpJ\] formally give the electric charges $Q_i$ and angular momentum $J$ \[4d5d\] $Q_i = p^0 q_i + \frac{1}{2}\, C_{ijk}\, p^j p^k$ \[q4d5d\] , $2 J = p^0 ( p^0 q_0 + p^i q_i) + \frac{1}{3}\, C_{ijk}\, p^i p^j p^k$ \[j4d5d\] of the 5D black hole (or more generally black ring) related to the 4D black hole with charges $(p^0, p^i,q_i,q_0)$ by the 4D-5D lift [@Bena:2004tk; @Gaiotto:2005xt; @Bena:2005ni]. Indeed, it is now well-known that a four-dimensional BPS black hole with $D6$ brane charge $p^0\neq 0$ and arbitrary $D4,D2,D0$ brane charges $p^i,q_i,q_0$ in type IIA string theory compactified on $X$ may be viewed at strong coupling as a 5D black ring carrying electric M2-brane charges $Q_i$, M5-brane dipole moments $P^i=-p^i/p^0$ and angular momentum $J_\psi=J$, wound around the circle of a Taub-NUT space with NUT charge $p^0$ [@Bena:2004tk; @Gaiotto:2005xt; @Bena:2005ni].
In the absence of D4-brane charge, the 5D configuration reduces to a single 5D black hole placed at the tip of the Taub-NUT space, as found in [@Gaiotto:2005gf]. Indeed, with this assignment of charges it may be shown that the Bekenstein-Hawking entropy of the 4D and 5D black holes agree up to the orbifold factor $1/|p^0|$ [@Gaiotto:2005gf; @Gaiotto:2005xt]. In the context of $\cN=2$ magic supergravities, this amounts to the identity S\_[4D]{}= = = S\_[5D]{} where $I_4$ is the quartic invariant in [@Pioline:2005vi]. It should also be noted that the charges $Q_i,J$ defined in are invariant under the “spectral flow” p\^0 p\^0 ,p\^i p\^i + p\^0 \^i ,q\_i q\_i - C\_[ijk]{} p\^j \^k - C\_[ijk]{} \^j \^k ,q\_0 q\_0 - \^i q\_i - 12 C\_[ijk]{} p\^i \^j \^k -C\_[ijk]{} \^i \^j \^k . which corresponds to switching on a flux on the Taub-NUT space [@Gaiotto:2005gf]. A 5D polarization for the topological amplitude ----------------------------------------------- The form of the holomorphic anomaly equations suggests introducing a new polarization where the operators $\hat Q_i$ and $\hat J$ are diagonalized. For this purpose, we note that, at the classical level, the 5D charges $(Q_i,P^i,J)$, supplemented by an extra charge $p_J=1/p^0$, are obtained from $(p^I;q_I)$ via a canonical transformation generated by \[Scan\] S( p\^0, p\^i ; Q\_i, J) = - + Q\_i -  . Indeed, a straightforward computation making use of the homogeneity of $N$ shows that q\_I =  , P\^i = - =- ,p\_J = - = so that dS = q\_Idp\^I - ( P\^idQ\_i + p\_JdJ) . This ensures that the change of variables from $(p^I;q_I)$ to $(Q^i,J;P_i,p_J)$ preserves the Darboux form of $\omega$, = dp\^I dq\_I = dQ\_i dP\^i + dJ dp\_[J]{} . 
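As a quick consistency check, the spectral-flow invariance of $Q_i = p^0 q_i + \frac12 C_{ijk}p^jp^k$ can be verified numerically. This is only a sketch: the cubic coefficients $C_{ijk}$ and the charges are arbitrary test values, and the $\frac{1}{2}\,p^0\,C_{ijk}\epsilon^j\epsilon^k$ term in the flow of $q_i$ (whose coefficient is mangled in the transcription above) is fixed here by requiring invariance:

```python
import itertools
import random

n = 3
random.seed(0)

# Symmetric cubic coefficients C_{ijk} (arbitrary test values).
C = {}
for i, j, k in itertools.product(range(n), repeat=3):
    key = tuple(sorted((i, j, k)))
    if key not in C:
        C[key] = random.randint(1, 5)

def c(i, j, k):
    return C[tuple(sorted((i, j, k)))]

def Q(p0, p, q):
    """Electric 5D charges Q_i = p^0 q_i + (1/2) C_{ijk} p^j p^k."""
    return [p0 * q[i] + 0.5 * sum(c(i, j, k) * p[j] * p[k]
                                  for j in range(n) for k in range(n))
            for i in range(n)]

def flow(p0, p, q, eps):
    """Spectral flow: p^i -> p^i + p^0 eps^i,
    q_i -> q_i - C_{ijk} p^j eps^k - (1/2) p^0 C_{ijk} eps^j eps^k."""
    p2 = [p[i] + p0 * eps[i] for i in range(n)]
    q2 = [q[i]
          - sum(c(i, j, k) * p[j] * eps[k] for j in range(n) for k in range(n))
          - 0.5 * p0 * sum(c(i, j, k) * eps[j] * eps[k]
                           for j in range(n) for k in range(n))
          for i in range(n)]
    return p0, p2, q2

p0, p, q = 2, [1, -2, 3], [4, 0, -1]
eps = [1, 2, -1]
assert Q(*flow(p0, p, q, eps)) == Q(p0, p, q)  # Q_i is flow-invariant
```

The cross terms $p^0 C_{ijk}p^j\epsilon^k$ and $\frac12 (p^0)^2 C_{ijk}\epsilon^j\epsilon^k$ generated by shifting $p^i$ in $\frac12 C_{ijk}p^jp^k$ are cancelled exactly by the shift of $q_i$, which is the content of the invariance statement.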
Quantum mechanically, the wave function $\Psi_{5D}(Q_i,J)$ in the “5D” polarization where $\hat Q_i$ and $\hat J$ are diagonalized is therefore related to the wave function $\Psi_{{\mathbb{R}}}(p^I)$ in the “real” polarization [@Verlinde:2004ck], where $\hat p^I$ acts diagonally, via \[intert0\] \_(p\^I) = (- S(p\^I;Q\_i,J) ) \_[5D]{}(Q\_i,J)dQ\^idJ . Equivalently, \_(p\^I) ( - ) = ( - Q\_i) \_[5D]{}(Q\_i,J)dQ\^idJ \[intert\] Indeed, one may check that the operators Q\_i [[i]{}]{}p\^0+[${\frac{1}{2}}$]{}C\_[ijk]{} p\^j p\^k  ,2J = [[i]{}]{}(p\^0)\^2 + [[i]{}]{}p\^0p\^i + 2N(p\^i) acting on the l.h.s. of lead to insertions of $Q_i$ and $2J$ in the integral on the r.h.s, respectively. In words, we have found that the wave function in the 5D polarization is obtained by Fourier transforming the wave function in the real polarization with respect to $1/p^0$ and $p^i/p^0$, after multiplication by the tree-level part $e^{-\frac{{{\mathrm i}}}{\hbar} N(p^i)/p^0}$. 5D polarization and 5D black hole degeneracies ---------------------------------------------- In order to interpret the result , we now recall some facts and conjectures on the relation between the topological string amplitude and various invariants of the Calabi-Yau $X$. First, recall that the real polarized topological wave function $\Psi_{{\mathbb{R}}}(p^I)$ is related to the holomorphic topological wave function via [@Schwarz:2006br] \[psiholR\] e\^[F\_[hol]{}(t\^i,)]{} = (p\^0)\^[-1]{} \_(p\^I) ,=  ,t\^i =  . where $F_{\rm hol}(t^i,\lambda)$ is the holomorphic limit $\bar t^{\bar i}\to \infty$ of the topological partition function $F(t^i,\bar t^{\bar i},\lambda)$. Second, recall that the Gopakumar-Vafa conjecture [@Gopakumar:1998ii; @Katz:1999xq] relates the indexed partition function of 5D spinning BPS black holes to the topological amplitude[^5], \[gvc\] e\^[F\_[hol]{}(t\^i,)-F\_[0]{}(t\^i,)]{} = \_[Q\_i,J]{} \_[5D]{}(Q\_i,J) e\^[-2J +2 i Q\_i t\^i]{} . 
The conjecture also includes a relation to the BPS invariants $n_Q^g$ of the Calabi-Yau $X$ [@Gopakumar:1998ii], \[gvbps\] e\^[F\_[hol]{}(t\^i,)-F\_[pol]{}(t\^i,)]{} =& \[M(e\^[-]{})\]\^[/2]{} \_[Q\_i&gt;0,k&gt;0]{} (1-e\^[-k+2[[i]{}]{}Q\_i t\^i]{})\^[k n\_Q\^0]{}\ &\_[Q\_i&gt;0,g&gt;0]{} \_[=0]{}\^[2g-2]{} (1-e\^[-(g--1)+2[[i]{}]{}Q\_i t\^i]{} )\^ (-1)\^[g+]{} 2g-2\ n\_Q\^g Here, F\_[pol]{}(t\^i,)=- N(t\^i) - c\_[i]{} t\^i is the “polar part” of $F_{\rm hol}(t^i,\lambda)$, and $M(q)=\prod(1-q^n)^{-n}$ is the Mac-Mahon function. Unfortunately, both the BPS invariants $n_Q^g$ and the 5D black hole degeneracies $\Omega_{\rm 5D}(Q_i,J)$ so far lack a proper mathematical definition. This is in contrast to the now well-established relation between Gromov-Witten and Donaldson-Thomas invariants [@Iqbal:2003ds; @gw-dt], \[gvc2\] e\^[F\_[hol]{}(t\^i,)-F\_[pol]{}(t\^i,)]{} = \[M(e\^[-]{})\]\^[-/2]{} \_[Q\_i,J]{} (-1)\^[2J]{}N\_[DT]{}(Q\_i,2J) e\^[-2J +2 i Q\_i t\^i]{} where $N_{DT}(Q_i,2J)$ are the Donaldson-Thomas invariants. Physically, the latter count the bound states of one D6-brane with $2J$ D0-branes and $Q_i$ D2-branes wrapped along the $i$-th cycle in $H^{\rm even}(X,{\mathbb{R}})$. Finally, in [@Dijkgraaf:2006um], the 4D-5D lift was used to argue that $N_{DT}(Q_i,2J)\sim\Omega_{\rm 5D}(Q_i,J)$, thereby giving a heuristic derivation of the Gopakumar-Vafa conjecture . However, this argument does not account for the powers of the Mac-Mahon function in relative to , nor for the sign $(-1)^{2J}$. There is also a discrepancy (most likely due to a difference in the treatment of the center of motion degrees of freedom) between the prediction of the infinite product representation , $N_{DT}(Q,2J)=\sum_{g} \scriptsize \begin{pmatrix} 2g-2 \\ 2J+g-1\end{pmatrix} n_Q^g$, and the considerations in [@Katz:1999xq; @Huang:2007sb], which lead to $\Omega_{5D}(Q,J)=\sum_{g} \scriptsize \begin{pmatrix} 2g+2 \\ 2J+g+1\end{pmatrix} n_Q^g$. 
Without attempting to resolve these issues, we shall regard as a definition of the 5D black hole degeneracies $\Omega_{\rm 5D}(Q_i,J)$, and later assume that $\log\Omega_{\rm 5D}(Q_i,J)$ is given by the Bekenstein-Hawking-Wald formula, barring any “miraculous” cancellations. Substituting into and setting $c_i=0$ for simplicity, we obtain \[intert2\] (p\^0)\^[-1]{} \_(p\^I) ( ) = \_[Q\_i,J]{}( 8[[i]{}]{} + 2[[i]{}]{}Q\_i )\_[5D]{}(Q\_i,J) . Barring the power of $p^0$, allowing for rescalings of $Q_i$ and $J$ and setting the Planck constant to $\hbar=-2/\pi$, we see that and are consistent provided the topological wave function in the 5D polarization has delta function support on integer charges $Q_i,J$, with weights equal to the 5D black hole degeneracies,[^6] \[psiom\] \_[5D]{}(Q\_i,J) \~\_[Q\_i’,J’]{} \_[5D]{}(Q\_i’,J’) (Q\_i’-14 Q\_i,J’+18 J) . The power of $p^0$ in may be attributed to a quantum ordering ambiguity invisible in the semi-classical discussion in the previous Subsection, or may be absorbed in a redefinition of $\Omega_{5D}(Q_i,J)$. Note also that was motivated in the context of magic supergravities, but that holds (to the same extent as ) in arbitrary $\cN=2$ string compactifications. Thus, we conclude that the 5D black hole degeneracies $\Omega_{5D}(Q_i,J)$ can be viewed as a wave function in a particular “5D” polarization, related to the standard real polarization by the intertwining operator . The fact that the degeneracies $\Omega_{5D}(Q_i,J)$ can be interpreted as components of a wave function in a representation space of the group $\tilde G$ gives some support to the general expectation (voiced e.g. in [@Pioline:2005vi; @Gunaydin:2005mx]) that they should arise as Fourier coefficients of a certain automorphic form of $\tilde G$. 
Black hole entropy and asymptotics of the topological amplitude --------------------------------------------------------------- Assuming the validity of the Gopakumar-Vafa conjecture (and regardless of the correctness of the identification ), we can use our knowledge of the entropy of 5D black holes to constrain the asymptotic behavior of the topological string amplitude. Recall that the Bekenstein-Hawking entropy of 5D BPS spinning black holes is given at tree level by [@Breckenridge:1996is] \[S5D\] S\_[5D]{} = 2 where $Q$ is to be expressed in terms of the electric charges via Q\^[3/2]{} = 16 C\_[ijk]{} Q\^i Q\^j Q\^k ,Q\_i = 12 C\_[ijk]{} Q\^j Q\^k Equation is valid in the limit where $Q_i$ and $J$ are scaled to infinity, keeping the ratio $J^2/Q^3$ fixed and less than unity. Taking into account higher-derivative corrections of the form $\int c_i \, A^i \wedge R \wedge R$ together with their supersymmetric partners, the Bekenstein-Hawking-Wald entropy becomes [@Castro:2007ci] \[S5Dcor\] S\_[5D]{} = 2 ( 1 + + (c\^2) ) which is valid in the same regime. The free energy of rotating BPS black holes in 5 dimensions in a thermodynamical ensemble with electric potentials $\phi^i$ and angular velocity $\omega$, \[f5leg\] F\_[5D]{}(\^i,) \_[Q\_i,J]{} is easily computed to first order in $c_i$, (see Appendix B for details) F\_[5D]{}(\^i,) = - - 18 c\_i \^i + (c\^2) . \[f5exp\] This results holds for arbitrary, non-magic supergravities in 5D, and is considerably more elegant than its Legendre dual [^7]. The free energy provides the classical (saddle point) approximation to the integral in . The fluctuation determinant around the saddle point may be computed in the magic cases using the results in [@Pioline:2005vi]. Setting \^i = -2[[i]{}]{} t\^i ,= 2  , we find \[lim5\] \_(X\^I) \~\^[-1]{} ( )\^ where the prefactor can be trusted in magic cases only. 
In the scaling limit where the Bekenstein-Hawking formula can be trusted, the topological coupling $\lambda$ at the saddle point is fixed while $t^i$ are scaled to infinity, so that the terms displayed in are the first two in a systematic expansion at large $t^i$, for fixed $\lambda$. It is noteworthy that the terms proportional to $1/\lambda^2$ and $1/(\pi^2+\lambda^2)$ in the exponent cancel in the limit of large $\lambda$, leaving a term of order $1/\lambda^4$ only. Incidentally, we note that the linear term in $t^i$ in the exponent induces a correction $Q_i\to Q_i-\frac18 c_i$ to the 4D-5D lift formulae , consistent with [@Castro:2007hc; @Castro:2007ci] in the absence of angular momentum, but giving a different correction than the one found in [@Castro:2007ci] when $J\neq 0$. On the other hand, at small topological coupling and finite values of $t^i$, yields \[lim4\] \_(p\^I) \~(p\^0)\^[1-]{} \~ \^[-1]{} The semi-classical limit $\lambda\to 0$ at fixed $t^i$ is consistent with the entropy of 4D BPS black holes, a fact which lies at the basis of the OSV conjecture [@Ooguri:2004zv]. For completeness, we show in Appendix B how is consistent with the usual form of the BCOV topological amplitude, \[bcovexp\] \_[BCOV]{} (t\_i,|t\_i,x\^i,) \~\^[-1]{} ( -(2[[i]{}]{})\^3 + (\^0) ) after performing the sequence of transformations given in [@Gunaydin:2006bz]. The regimes of validity of and in principle overlap when $\lambda$ goes to zero and $t^i$ to infinity. While the two results agree in the strict classical limit, the prefactors do not. Moreover, matching the terms in the exponent would require $F_1(t^i)\sim \frac{2\pi{{\mathrm i}}}{8} c_i t^i + \frac{(2\pi{{\mathrm i}})^3}{\pi^2}N(t^i)$, which violates the assumption that $F_1$ grows linearly at large $t^i$. This discrepancy suggests that the two limits $t^i\to \infty$ and $\lambda\to 0$ do not commute. It would be interesting to understand the physical origin of this phenomenon.
The generalized topological amplitude in the 5D polarization ============================================================ In the previous section, we have shown that the 4D-5D lift formula and Gopakumar-Vafa conjectures have a simple interpretation as a change of polarization in the Schrödinger-Weil representation of the Fourier-Jacobi group $\tilde G = G \ltimes H$, where $G$ is the four-dimensional U-duality group and $H$ is the Heisenberg algebra of electric, magnetic and NUT charges. Dimensional reduction and extended topological amplitude -------------------------------------------------------- Physically, the Fourier-Jacobi group $\tilde G$ naturally arises as a subgroup of a larger group $G'$, the duality group after reducing the 4D supergravity (or compactifying) down to 3 dimensions. After dualizing the one-forms into scalars, the theory in 3 dimensions can be expressed as a non-linear sigma model on a quaternionic-Kähler space $\mathcal{M}_3 = G'/(SU(2)\times G_c)$, where $G_c$ is a compact form of the duality group $G$ in 4 dimensions [@Breitenlohner:1987dg]. The reduction from $G'$ to its subgroup $\tilde G$ corresponds to decoupling gravity in 3 dimensions. In the language of Jordan algebras, $G'={\rm QConf}(J)$ is the “quasi-conformal group” associated to the Jordan algebra $J$ [@Gunaydin:2000xr; @Gunaydin:2007qq; @Pioline:2006ni]. Its Lie algebra can be obtained by supplementing the solvable group $\tilde G$ with the negative roots ${{{\widehat{p}}}}^{I'}, {{{\widehat{q}}}}_I^{'}, Z'$, obeying the “dual Heisenberg algebra” $[{{{\widehat{p}}}}^{I'}, {{{\widehat{q}}}}_J^{'}]= Z'\, \delta^I_J$, and introducing a new Cartan generator $\Delta \equiv [Z,Z']$, such that $Z,Z',\Delta$ form an $Sl(2,{\mathbb{R}})$ subalgebra commuting with $G$ (see Figure \[qconfr\]). 
Moreover, the group $G'$ admits a distinguished unitary representation known as the “minimal” representation, whose functional dimension $n_v+2$ is the smallest among the unitary irreducible representations of $G'$. The minimal representation of $G'$ extends the Schrödinger-Weil representation of $\tilde G$ in the following way: classically, the Freudenthal triple is extended into \[ERJJR\] V’= \_[y]{} \_[p\^0]{} J\_[p\^i]{} J\_[q\_i]{} \_[q\_0]{} \_[p\_y]{} equipped with the symplectic form ’ = dy dp\_y + dp\^0 dq\_0 + dp\^i dq\_i  . The linear space $V'$ turns out to be symplectically isomorphic to the minimal co-adjoint orbit of the complexification of $G'$ (itself isomorphic to the hyperkähler cone over the quaternionic-Kähler space $G'/(SU(2)\times G_c)$), and therefore admits a holomorphic symplectic action of $G'$ on . The minimal representation is obtained by quantizing this action (see e.g. [@Gunaydin:2007qq] for details on this procedure). Quantum mechanically, the minimal representation of $G'$ may be obtained from the Schrödinger-Weil representation by allowing the center $Z={{\mathrm i}}\hbar$ to become dynamical, i.e. supplementing the Hilbert space $\mathcal{H}$ of $L^2$ functions of $n_v+1$ variables $p^I$ with an extra variable $y$, and setting [^8], $$\begin{gathered} Z \mapsto {{\mathrm i}}y^2, \label{minrep-1} \\ {{\mathrm i}}{{{\widehat{q}}}}_0 \mapsto y {\frac{\partial}{\partial p^0}}, \quad {{\mathrm i}}{{{\widehat{q}}}}_i \mapsto y {\frac{\partial}{\partial p^i}}, \quad {{\mathrm i}}{{{\widehat{p}}}}^i \mapsto {{\mathrm i}}y p^i, \quad {{\mathrm i}}{{{\widehat{p}}}}^0 \mapsto {{\mathrm i}}y p^0, \label{minrep-2} \end{gathered}$$ while keeping the same formulae for the action of $G$ as in . The rest of the generators of $G'$ are obtained by commuting the generators above with Z’ [${\frac{1}{2}}$]{} - ( I\_4(\^I, \_I) + ) ,y \_y + 12 \[minrep-3\] where the constant $\kappa$ depends on the ordering chosen in $I_4(\hat p^i,\hat q_i)$. 
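As an elementary consistency check on this realization (a sketch with the operator assignments read off from the minimal-representation formulas above, and $\hbar$-type factors suppressed), one can verify on a test function that the commutator of ${{{\widehat{p}}}}^0$ and ${{{\widehat{q}}}}_0$ closes onto the center $Z = {{\mathrm i}}y^2$:

```python
import sympy as sp

p0, y = sp.symbols('p0 y', real=True)
f = sp.Function('f')(p0)

# Assignments read off from the formulas above:
# i*qhat_0 = y d/dp0  =>  qhat_0 = -i y d/dp0 ;  i*phat^0 = i y p0  =>  phat^0 = y p0
qhat0 = lambda g: -sp.I * y * sp.diff(g, p0)
phat0 = lambda g: y * p0 * g

# [phat^0, qhat_0] acting on f should reproduce Z f = i y^2 f
comm = phat0(qhat0(f)) - qhat0(phat0(f))
print(sp.simplify(comm - sp.I * y**2 * f))  # 0
```

At fixed $y$ this is just the usual Schrödinger representation of the Heisenberg algebra, with effective Planck constant $y^2$.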
In particular, [[i]{}]{}\_I’ && \[\_I,Z’\] [[i]{}]{} \_y +  ,\[minrep-4\]\ [[i]{}]{}’\^[I]{} && \[\^I,Z’\] [[i]{}]{}p\^I \_y - \[minrep-5\]  . These formulae define the minimal representation in the real polarization, where the operators ${{{\widehat{p}}}}^I$ and $Z$ are diagonalized. At fixed value of $y$, the representation of the subgroup $\tilde G\subset G'$ reduces to the Schrödinger-Weil representation studied in the previous section, after appropriate $y$-rescalings. As argued in [@Gunaydin:2006bz], the relation between $\tilde G$ and $G'$ on the one hand, and between the Schrödinger-Weil representation of $\tilde G$ and the minimal representation of $G'$ on the other hand, is closely analogous to the relation of the Fourier-Jacobi group $Sl(2,{\mathbb{R}})\ltimes H_3$ and Siegel’s genus 2 modular group $Sp(4,{\mathbb{R}})$, familiar from the mathematical theory of Jacobi and Siegel modular forms [@eichlerzagier] (Here $H_3$ is the three-dimensional Heisenberg algebra $[p,q]=Z$, where $(p,q)$ transform as a doublet of $Sl(2,{\mathbb{R}})$). In that case, the Schrödinger-Weil representation of $Sl(2)\ltimes H_3$ on $L^2$-functions of one variable is then given by the restriction of the metaplectic representation of $Sp(4,{\mathbb{R}})$ on $L^2$-functions of two variables, at a fixed value of the center $Z$. At the automorphic level, the $m$-th Fourier coefficient of a Siegel modular form with respect to the action of the center $Z$ yields a Jacobi form of $Sl(2,{\mathbb{Z}})\times H_3$ of index $m$ [@eichlerzagier]. Based on this analogy, it was suggested in [@Gunaydin:2006bz] that, in cases where the vector multiplet moduli space is symmetric, the standard BCOV topological amplitude should arise as a Fourier coefficient at $Z=-{{\mathrm i}}$ of an automorphic form under the larger group $G'={\rm QConf}(J)$, referred to as the “extended topological amplitude”. 
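For orientation, the decomposition underlying this analogy is the standard Fourier–Jacobi expansion [@eichlerzagier]: writing the genus-two period matrix in block form, a Siegel modular form $\Phi$ of $Sp(4,{\mathbb{Z}})$ decomposes as

```latex
% Fourier--Jacobi expansion of a genus-two Siegel modular form:
% each coefficient \phi_m(\tau,z) is a Jacobi form of index m.
\Phi\begin{pmatrix} \tau & z \\ z & \sigma \end{pmatrix}
  = \sum_{m \geq 0} \phi_m(\tau, z)\, e^{2\pi {\mathrm i}\, m \sigma}\, .
```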
It was further speculated in [@Gunaydin:2006bz] that the Fourier coefficients at other values of $Z$ yield non-Abelian generalizations of the Donaldson-Thomas invariants. At this point, we note that the dimensional reduction to 3 dimensions, which has been of great utility in describing four-dimensional stationary black holes [@Breitenlohner:1987dg; @Gunaydin:2005mx; @Gunaydin:2007bg], is also very useful in order to describe five-dimensional black holes with a $U(1)$ isometry [@Maison:1979kx; @Giusto:2007fx; @Bouchareb:2007ax; @Berkooz:2008rj]. The two reductions differ, however, since 5D black holes are best described by reducing the 5D Lagrangian along the time-like direction $t$ first, and then along a space-like direction $\psi$, while 4D black holes are more conveniently described by first reducing from 5D to 4D along the space-like direction $\psi$, and then from 4D to 3D along the time-like direction $t$. The two procedures are related by a Weyl reflection $W$ inside the diffeomorphism group of the $(t,\psi)$ torus, which happens to be the $Sl(2)$ subgroup of $G'$ generated by ${{{\widehat{q}}}}_0, {{{\widehat{q}}}}_0^{'}$ and their commutator [@Berkooz:2008rj]. The Weyl reflection $W$ maps the Heisenberg algebra $\{p^I,q_I,Z\}$ (enclosed in the vertical box of Figure \[qconfr\]) to the Heisenberg algebra $H'=\{ {{{\widehat{q}}}}_0', T_i, {\widehat{p}}^i, Z, {\widehat{p}}^0 \}$ (enclosed by the tilted box). In particular, the D2 and D0 brane charges ${{{\widehat{q}}}}_i$ and ${{{\widehat{q}}}}_0$ are mapped to $T_i$ and ${{{\widehat{q}}}}_0^{'}$. According to and above, the corresponding generators in the minimal representation are given by \[tqp0min\] [[i]{}]{}T\_i = ( \^0 \_i + 12 C\_[ijk]{} \^j \^k) ,\_0’ = + \^0 \_y where ${{{\widehat{p}}}}_y={{\mathrm i}}\pa_y$. These are indeed the 5D electric charges $Q_i$ and angular momentum $J$ in , up to a normalization factor and an additive term in ${{{\widehat{q}}}}_0^{'}$[^9]. 
Moreover, the unit D6-brane charge requirement $p^0=1$, appropriate for lifting a 4D black hole to a smooth 5D black hole, is mapped to $Z=-{{\mathrm i}}$, which is the necessary requirement for the $\psi$ circle bundle over $S^2$ to be topologically $S^3$ [@Berkooz:2008rj]. Conversely, the vanishing of the time-like NUT charge $Z=0$ for 4D black holes is mapped to the absence of $p^0$ charge for 5D black holes  [@Berkooz:2008rj]. A 5D polarization for the minimal representation ------------------------------------------------ We now construct the analogue of the 5D polarization in this generalized setting. For this purpose, we need to supplement the 5D charges $(Q_i,J)$ and their canonical conjugate $(P^i,p_J)$ with an extra canonical pair $(L,p_L)$, preserving the fact that $(Q_i,J)$ are related to $(p^I,y,q_I,p_y)$ by . The canonical transformation generated by \[genfg\] S’(p\^0,p\^i,p\_y;Q\_i,J,L) = - Q\_i + + L satisfies these conditions (compare to ). Indeed, after some algebra, one finds that the 5D phase space variables $(Q_i,J,L;P^i,p_J,p_L)$ are related to the 4D phase space variables $(p^I,p_y;q_I,y)$ via \[q4d5dg\] Q\_i &=& p\^0q\_i + 12 C\_[ijk]{} p\^jp\^k ,\ 2J &=& + 12 p\^0 p\_y\ L &=& p\^0 y  ,P\^i =  ,p\_J=  ,\[q4d5dg3\]\ p\_L &=& - The generating function of the canonical transformation from $(p^0,p^i,y)$ to $(Q_i,J,L)$ is obtained by Legendre transforming with respect to $p_y$, which removes the last term in and sets $y=L/p^0$ consistently with above. Quantum mechanically, the wave function in the generalized 5D polarization $\Psi_{5D}(Q^i,J,L)$ is therefore related to the generalized wave function in the real polarization $\Psi_{\rm gen}(p^0,p^i,y)$ via \_[gen]{}(p\^I,y) e\^[-[[i]{}]{} N(p\^i)/p\^0]{} = ( 2[[i]{}]{} - [[i]{}]{} Q\_i) \_[5D]{}(Q\_i,J,L)(L-p\^0 y)dQ\^idJdL \[intertg\] . 
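One structural property of the lift is easy to check directly: because $C_{ijk}$ is totally symmetric, the composite 5D charges $Q_i = p^0 q_i + \frac12 C_{ijk} p^j p^k$ have vanishing mutual Poisson brackets, so they can be simultaneously diagonalized in the 5D polarization. A minimal sympy illustration for $n_v=2$ (the nonzero entries of $C_{ijk}$ below are hypothetical sample values, not those of an actual model):

```python
import sympy as sp

p0, p1, p2, q0, q1, q2 = sp.symbols('p0 p1 p2 q0 q1 q2')
ps, qs = [p0, p1, p2], [q0, q1, q2]

def pb(A, B):
    """Canonical Poisson bracket with {q_I, p^J} = delta_I^J."""
    return sum(sp.diff(A, qs[i]) * sp.diff(B, ps[i])
               - sp.diff(A, ps[i]) * sp.diff(B, qs[i]) for i in range(3))

# Sample totally symmetric cubic coefficients (hypothetical values)
Cdict = {(1, 1, 2): 1, (1, 2, 2): 1}
def C(i, j, k):
    return Cdict.get(tuple(sorted((i, j, k))), 0)

# 5D charges Q_i = p^0 q_i + (1/2) C_ijk p^j p^k, for i = 1, 2
Q = [p0 * qs[i] + sp.Rational(1, 2) * sum(C(i, j, k) * ps[j] * ps[k]
     for j in (1, 2) for k in (1, 2)) for i in (1, 2)]

print(sp.expand(pb(Q[0], Q[1])))  # 0: the Q_i mutually Poisson-commute
```

The cross terms cancel precisely because $C_{ijk}$ is symmetric in its indices, mirroring the cancellation in the operator computation.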
In this new polarization, the “tilted” Heisenberg algebra $H'$, with center ${\widehat{p}}^0$, is now canonically represented, \_0’ = 2[[i]{}]{}J &,& T\_i = [[i]{}]{}Q\_i ,\ Z’ = 12 L \_J &,& \^i = L \_[Q\_i]{} , \^0 = [[i]{}]{}L . In fact, the intertwiner represents the action of the Weyl reflection $W$, which takes $H$ into $H'$. Thus, all generators in the 5D polarization can be obtained from those in the 4D polarization by reflecting the root diagram in Figure 1 along the dotted axis and changing variables 2J p\^0, Q\_i y p\^i ,L y\^2  . For example, \[ZDEZ\] Z= 12 L \_J ,= L \_L - J \_J , Z’= 2J \_L + -  . These results agree with and generalize the ones obtained for $G'=G_{2(2)}$ in Section 3.7.2 of [@Gunaydin:2007qq], after performing an overall Fourier transform over all $p_I$. Note that implies that $(2J,L)$ transform linearly as a doublet under the $Sl(2,{\mathbb{R}})$ symmetry generated by $Z,Z',\Delta$, which is Ehlers’ symmetry in four dimensions. Thus, the 5D polarization constructed here would be the most convenient starting point to implement Ehlers’ symmetry on the generalized topological string amplitude. Discussion ========== In this note, motivated by the formal analogy between the holomorphic anomaly equations and the 4D-5D lift formulae for “magic” supergravities, we gave a quantum mechanical interpretation of the Gopakumar-Vafa relation as a Bogoliubov transformation from the real polarization, where the 4D magnetic charge operators ${{{\widehat{p}}}}^I$ act diagonally, to the “5D” polarization, appropriate to the operators $\hat Q_i$ and $\hat J$. Moreover, we used the known Bekenstein-Hawking-Wald entropy of 5D BPS black holes to constrain the asymptotic behavior of the topological wave function in the real polarization, at finite topological coupling but large Kähler (or, in the B-model, complex structure) moduli $t^i$. 
In the process we found two relatively minor discrepancies: (i) a yet unexplained shift of genus $g\to g-2$ in the relation between $N_{DT}(Q_i,2J)$ and $\Omega_{\rm 5D}(Q_i,J)$, and (ii) a disagreement at subleading order in the expected overlapping regime of validity of the asymptotic expansions afforded by the 4D and 5D black hole entropy. The former can probably be resolved by a proper accounting of the zero-modes of a 5D black hole at the tip of Taub-NUT space, while the latter suggests a non-commutativity of the limits $\lambda\to 0$ and $t^i\to\infty$. It would certainly be useful to resolve these puzzles, and improve our understanding of 5D black hole micro-states. In the last section of this paper, we extended the construction of the 5D polarization to the case where gravity is no longer decoupled, and the duality group is enlarged from $G\ltimes H$ to a semi-simple Lie group $G'$. In particular, we found that the intertwiner from the real to the 5D polarization represents a particular Weyl reflection in the 3D duality group $G'$, which exchanges the two directions in the internal 2-torus. Assuming that a “generalized topological amplitude” living in the minimal representation of $G'$ can really be defined, it is interesting to ask what information it may capture. In [@Gunaydin:2006bz], it was suggested that $\Psi_{\rm gen} (p^I,y)$ would give access to non-Abelian Donaldson-Thomas invariants of rank $y^2$. The 5D polarized wave function $\Psi_{5D}(Q_i,J,L)$ constructed herein naturally suggests an interpretation in terms of counting 5D black hole micro-states of charge $Q_i$, angular momentum $J$ and dipole charge $p^0 \propto L$. It would be interesting to see if this conjecture can be borne out. We are grateful to I. Bena, M. Berkooz, A. Dabholkar, R. Dijkgraaf, P. Kraus, K. Hori, D. Jafferis, S. Murthy, A. Neitzke and T. Pantev for useful discussions. P.G. is supported in part by NSERC. B.P. 
is supported in part by ANR (CNRS–USAR) contract No 05–BLAN–0079–01. Free energy of 5D spinning black holes ====================================== In this appendix, we derive Eq.  for the free energy of 5D spinning black holes. We start by computing the Legendre transform of the tree-level entropy , and incorporate the higher-derivative corrections at the end. Extremizing over $J$, we find that the extremum is reached at J=-Q\^[3/2]{} leaving F\_[5D]{}(\^i,) = Q\^[3/2]{} - Q\_i \^i \_[Q\_i]{} . The extremum over $Q_i$ is therefore reached at Q\^i =  , at which point F\_[5D]{}(\^i,) = - To incorporate the effect of the higher-derivative correction in , we note that the variation of the tree-level entropy with respect to $Q_i$ is given, to leading order, by S\_[5D]{} = Q\^i Q\_i where we used the fact that $\delta Q^{3/2} = \frac12 Q^i \delta Q_i$. Thus, the subleading term in is reproduced by setting $\delta Q_i = \frac18 c_i$. After Legendre transform, the corrected free energy is therefore F\_[5D]{}(\^i,) = - - 18 \^i c\_i + … Upon scaling $Q_i$ and $J$ to infinity keeping $J/Q^{3/2}$ fixed and less than one, it is easy to see that $\omega$ is fixed while $\phi^i$ go to infinity. The limit $\omega\to\infty$ (i.e., strong topological coupling) corresponds to black holes near the Kerr bound $J=Q^{3/2}$. From BCOV to real polarization ============================== In this appendix, we provide a check on in the case of “magic” $\cN=2$ supergravities, which illuminates the relation between the constructions in [@Gunaydin:2006bz] and [@Schwarz:2006br]. For this purpose, we postulate the form \[eguess\] \_(p\^I) \~ (p\^0)\^ , consistent with for $\alpha = 1 - \frac{\chi}{24}$, and show that it leads to a BCOV topological partition function of the expected form after applying the chain of transformations in [@Gunaydin:2006bz]. 
The first step is to obtain the holomorphic wave function via [@Gunaydin:2006bz] \[interhol\] \_[hol]{}(t\^i;w,y\_i) = dp\^I ( p\^I \_[IJ]{}(X) p\^J + p\^I y\_I ) \_(p\^I) . where $t^i=X^i/X^0$ and $w=y_0+t^i y_i$. To evaluate this integral in the saddle point approximation, define $\tilde p^i = p^i - p^0 X^i/X^0, \tilde p^0=p^0$. Taylor expanding the r.h.s. at small $p^0$, it is easy to check that - p\^I \_[IJ]{}(X) p\^J = - )  . Moreover, defining $\tilde y_0=y_0+ y_i X^i/X^0, \tilde y_i=y_i$, we have p\^I y\_I = p\^0 y\_0 + p\^i y\_i  . Inserting into and changing variables from $p^I$ to $\tilde p^I$ leads then to \_[hol]{}(X\^I,y\_I) = dp\^I (p\^0)\^  . In the saddle point approximation, using the results in [@Pioline:2005vi], we conclude that \[psiholap\] \_[hol]{}(X\^I,y\_I) \~ (y\_0)\^[’]{}\[N(y\_i)\]\^[’]{}  , where (except in the $D_n$ case) ’ = -2-12(n\_v+3) ,’ = +16(n\_v+3) . Next, we take the complex conjugate $\Psi_{\rm ahol}(\bar X^I, \bar y_I)$ of and change variable from $\bar y_I$ to $x^i,\lambda$ using \[xtoy\] x\^I \^[IJ]{} |y\_J = 2e\^[-14 [[i]{}]{}]{} \^[-1]{} (X\^I + x\^i D\_i X\^I)  . The BCOV topological partition function is finally obtained as \[holtobcov\] \_[BCOV]{}(t\^i,|t\^i,x\^i,) = e\^[-f\_1(t)]{} (- x\^I \[\]\_[IJ]{} x\^J) \_[ahol]{}(|X\^I, |y\_I) For magic supergravities, and in the gauge $X^0=\bar X^0=1$, equations are solved by |[y]{}\_0 = [[i]{}]{}\^[-1]{} e\^[-K]{} ,|y\_[|i]{} = -[[i]{}]{} \^[-1]{} e\^[-K]{} g\_[|i j]{} ( x\^j - (t\^j - [|t]{}\^j)) , Moreover, = e\^[-K]{} , N(|y\_[|i]{}) = [[i]{}]{}\^[-3]{}e\^[-K]{}N(x\^j - (t\^j - [|t]{}\^j)) . 
Altogether, evaluates to \_[BCOV]{} (t\_i,|t\_i,x\^i,) \~ \^[-]{}( )\^[’]{} The quadratic correction in the exponent cancels the terms of order 0,1,2 in $x^i$, leaving only the cubic term in the exponent, \[finan\] \_[BCOV]{} (t\_i,|t\_i,x\^i,) \~ \^[-]{}( )\^[’]{} ( ) Thus, we find agreement with the expected form provided we set f\_1(t)=0 ,= 1 - This provides an independent check on , which was arrived at in [@Schwarz:2006br] by a rather different line of reasoning from [@Gunaydin:2006bz]. In particular, the fact that the power of $\lambda$ in turns out to be opposite to the power of $p^0$ in is rather non-trivial. We note that the second factor in contributes to genus one 1-point functions, unless $\alpha=-(n_v+3)/6$. Although such contributions are perfectly admissible, it is worth noting that the special value of $\alpha$ where they disappear is also the one where is invariant under Fourier transform with respect to all $p^I$ [@Pioline:2005vi]. [\[20\]]{} H. Ooguri, A. Strominger and C. Vafa, “Black hole attractors and the topological string,” Phys. Rev. D [ **70**]{}, 106007 (2004) \[arXiv:hep-th/0405146\]. R. Gopakumar and C. Vafa, “M-theory and topological strings. I,” \[arXiv:hep-th/9809187\]; R. Gopakumar and C. Vafa, “M-theory and topological strings. II,” \[arXiv:hep-th/9812127\]; S. H. Katz, A. Klemm and C. Vafa, “M-theory, topological strings and spinning black holes,” Adv. Theor. Math. Phys. [**3**]{}, 1445 (1999) \[arXiv:hep-th/9910181\]. I. Bena and P. Kraus, “Microscopic description of black rings in AdS/CFT,” JHEP [**0412**]{} (2004) 070 \[arXiv:hep-th/0408186\]. D. Gaiotto, A. Strominger and X. Yin, “New connections between 4D and 5D black holes,” JHEP [**0602**]{}, 024 (2006) \[arXiv:hep-th/0503217\]; D. Gaiotto, A. Strominger and X. Yin, “5D black rings and 4D black holes,” JHEP [**0602**]{}, 023 (2006) \[arXiv:hep-th/0504126\]. I. Bena, P. Kraus and N. P. Warner, “Black rings in Taub-NUT,” Phys. Rev. 
D [**72**]{}, 084019 (2005) \[arXiv:hep-th/0504142\]. E. Witten, “Quantum background independence in string theory,” \[arXiv:hep-th/9306122\]. E. P. Verlinde, “Attractors and the holomorphic anomaly,” \[arXiv:hep-th/0412139\]. A. A. Gerasimov and S. L. Shatashvili, “Towards integrability of topological strings. I: Three-forms on Calabi-Yau \[arXiv:hep-th/0409238\]. M. Aganagic, V. Bouchard and A. Klemm, “Topological Strings and (Almost) Modular Forms,” Commun. Math. Phys.  [**277**]{} (2008) 771 \[arXiv:hep-th/0607100\]. M. Günaydin, A. Neitzke and B. Pioline, “Topological wave functions and heat equations,” JHEP [**0612**]{} (2006) 070 \[arXiv:hep-th/0607200\]. A. Schwarz and X. Tang, “Quantization and holomorphic anomaly,” JHEP [**0703**]{} (2007) 062 \[arXiv:hep-th/0611281\]. M. x. Huang, A. Klemm and S. Quackenbush, “Topological String Theory on Compact Calabi-Yau: Modularity and Boundary Conditions,” arXiv:hep-th/0612125. M. Bershadsky, S. Cecotti, H. Ooguri and C. Vafa, “Holomorphic anomalies in topological field theories,” Nucl. Phys. B [**405**]{}, 279 (1993) \[arXiv:hep-th/9302103\]; M. Bershadsky, S. Cecotti, H. Ooguri and C. Vafa, “Kodaira-Spencer theory of gravity and exact results for quantum string \[arXiv:hep-th/9309140\]. M. Günaydin, G. Sierra and P. K. Townsend, “Exceptional Supergravity Theories And The Magic Square,” Phys. Lett. B [**133**]{}, 72 (1983); M. Günaydin, G. Sierra and P. K. Townsend, “The Geometry Of N=2 Maxwell-Einstein Supergravity And Jordan Algebras,” Nucl. Phys. B [**242**]{}, 244 (1984). S. Ferrara, J. A. Harvey, A. Strominger and C. Vafa, “Second Quantized Mirror Symmetry,” Phys. Lett. B [ **361**]{}, 59 (1995) \[arXiv:hep-th/9505162\]. Y. Dolivet, B. Julia and C. Kounnas, “Magic N=2 supergravities from hyper-free superstrings,” JHEP [**0802**]{} (2008) 097 \[arXiv:0712.2867 \[hep-th\]\]. M. Bianchi and S. 
Ferrara, “Enriques and Octonionic Magic Supergravity Models,” JHEP [**0802**]{} (2008) 054 \[arXiv:0712.2976 \[hep-th\]\]. B. Pioline, “Lectures on black holes, topological strings and quantum attractors,” Class. Quant. Grav., S981 (2006) \[arXiv:hep-th/0607227\]. B. Pioline, “BPS black hole degeneracies and minimal automorphic representations,” JHEP [**0508**]{}, 071 (2005) \[arXiv:hep-th/0506228\]. F. Denef and G. W. Moore, “Split states, entropy enigmas, holes and halos,” arXiv:hep-th/0702146. A. Iqbal, N. Nekrasov, A. Okounkov and C. Vafa, “Quantum foam and topological strings,” arXiv:hep-th/0312022. D. Maulik, N. A. Nekrasov, A. Okounkov, and R. Pandharipande, “[G]{}romov-[W]{}itten theory and [D]{}onaldson-[T]{}homas theory,” [[math.AG/0312059]{}](http://www.arXiv.org/abs/math.AG/0312059); D. Maulik, N. A. Nekrasov, A. Okounkov, and R. Pandharipande, “[G]{}romov-[W]{}itten theory and [D]{}onaldson-[T]{}homas theory, [II]{},” [[math.AG/0406092]{}](http://www.arXiv.org/abs/math.AG/0406092); A. Okounkov and R. Pandharipande, “[The local Donaldson-Thomas theory of curves]{},” [[ math.AG/0512573]{}](http://www.arXiv.org/abs/math.AG/0512573). R. Dijkgraaf, C. Vafa and E. Verlinde, “M-theory and a topological string duality,” \[arXiv:hep-th/0602087\]. M. x. Huang, A. Klemm, M. Marino and A. Tavanfar, “Black Holes and Large Order Quantum Geometry,” arXiv:0704.2440 \[hep-th\]. M. Günaydin, A. Neitzke, B. Pioline and A. Waldron, “BPS black holes, quantum attractor flows and automorphic forms,” Phys. Rev. D [**73**]{}, 084019 (2006) \[arXiv:hep-th/0512296\]. J. C. Breckenridge, R. C. Myers, A. W. Peet and C. Vafa, “D-branes and spinning black holes,” Phys. Lett.  B [**391**]{} (1997) 93 \[arXiv:hep-th/9602065\]. A. Castro, J. L. Davis, P. Kraus and F. Larsen, “Precision entropy of spinning black holes,” JHEP [**0709**]{} (2007) 003 \[arXiv:0705.1847 \[hep-th\]\]. A. Castro, J. L. Davis, P. Kraus and F. 
Larsen, “5D Black Holes and Strings with Higher Derivatives,” JHEP [**0706**]{} (2007) 007 \[arXiv:hep-th/0703087\]. P. Breitenlohner, D. Maison and G. W. Gibbons, “Four-Dimensional Black Holes from Kaluza-Klein Theories,” Commun. Math. Phys.  [**120**]{} (1988) 295. M. Günaydin, K. Koepsell and H. Nicolai, “Conformal and quasiconformal realizations of exceptional Lie groups,” Commun. Math. Phys.  [**221**]{} (2001) 57 \[arXiv:hep-th/0008063\]. M. Günaydin, A. Neitzke, O. Pavlyk and B. Pioline, “Quasi-conformal actions, quaternionic discrete series and twistors: $SU(2,1)$ and $G_2(2)$,” arXiv:0707.1669 \[hep-th\], to appear in Commun. Math. Phys. M. Eichler and D. Zagier, [*The theory of Jacobi forms*]{}, Progress in Mathematics 55, Birkhäuser, Boston, 1985; M. Günaydin, A. Neitzke, B. Pioline and A. Waldron, “Quantum Attractor Flows,” JHEP [**0709**]{} (2007) 056 \[arXiv:0707.0267 \[hep-th\]\]. D. Maison, “Ehlers-Harrison Type Transformations For Jordan’s Extended Theory Of Gravitation,” Gen. Rel. Grav.  [**10**]{} (1979) 717. S. Giusto and A. Saxena, “Stationary axisymmetric solutions of five dimensional gravity,” Class. Quant. Grav.  [**24**]{} (2007) 4269 \[arXiv:0705.4484 \[hep-th\]\]. A. Bouchareb, G. Clement, C. M. Chen, D. V. Gal’tsov, N. G. Scherbluk and T. Wolf, “$G_2$ generating technique for minimal D=5 supergravity and black rings,” Phys. Rev.  D [**76**]{} (2007) 104032 \[arXiv:0708.2361 \[hep-th\]\]. M. Berkooz and B. Pioline, “5D Black Holes and Non-linear Sigma Models,” JHEP [**0805**]{} (2008) 045 \[arXiv:0802.1659 \[hep-th\]\]. [^1]: Unité mixte de recherche du CNRS UMR 7589 [^2]: Unité mixte de recherche du CNRS UMR 8549 [^3]: For later convenience, we have flipped the sign of $\hat q_I$ and $Z$ with respect to [@Gunaydin:2006bz], and reinstated a general value for $\hbar$. 
[^4]: This $\Psi_{\rm hol;GNP}(t^i;y_i,w)$ is related to the holomorphic wave function $\Psi_{\rm hol,ST}(t^i; \lambda,\epsilon^0, \epsilon^i)$ introduced in [@Schwarz:2006br] by setting $\lambda=1$ using homogeneity, and Fourier transforming $(\epsilon^0,\epsilon^i)$ into $(w,y_i)$. $\Psi_{\rm hol,ST}$ arises as the holomorphic limit of the BCOV topological amplitude $\Psi_{\rm BCOV}(t^i,\bar t^{\bar i} \to\infty,x^i,\lambda)$, whereas $\Psi_{\rm hol;GNP}$ may be obtained directly from $\Psi_{\rm BCOV}$ without any limiting or integration procedure. [^5]: Here and below, we follow the conventions in [@Denef:2007vg], up to minor changes of notation $g_{\rm top}\to\lambda, n\to 2J, \beta_i\to Q_i$. [^6]: The factors of $1/4$ and $-1/8$ in this equation are convention-dependent, and so is the value of $\hbar$. [^7]: The apparent discrepancy at order $c_i$ with the free energy given in Eq. (3.23) of [@Castro:2007ci] is due to the fact that the chemical potentials $e_I$ are conjugate to the 4D charges rather than the ones measured at infinity in 5D, as recognized in [@Castro:2007ci]. At zeroth order in $c_i$, the result was known to the second author, R. Dijkgraaf and E. Verlinde in 2005. [^8]: With this notation the scalar $p^I$ differs from the eigenvalue of ${{{\widehat{p}}}}^I$ by a power of $y$. We hope that this will not cause any confusion. [^9]: Despite the fact that equations hold only in the minimal representation, whose semi-classical limit pertains to special solutions whose Noether charge is nilpotent of degree 2, the equality of the conserved charges $(T_i,{{{\widehat{q}}}}_0')$ with the electric charge and angular momentum holds in general [@Berkooz:2008rj] .
--- abstract: | We theoretically consider the carrier density dependence of low-temperature electrical conductivity in high-quality and low-disorder two-dimensional (2D) “metallic” electronic systems such as 2D GaAs electron or hole quantum wells or doped/gated graphene. Taking into account resistive scattering by Coulomb disorder arising from quenched random charged impurities in the environment, we show that the 2D conductivity $\sigma(n)$ varies as $\sigma \sim n^{\beta(n)}$ as a function of the 2D carrier density $n$ where the exponent $\beta(n)$ is a smooth, but non-monotonic, function of density $n$ with possible nontrivial extrema. In particular, the density scaling exponent $\beta(n)$ depends qualitatively on whether the Coulomb disorder arises primarily from remote or background charged impurities or short-range disorder, and can, in principle, be used to characterize the nature of the dominant background disorder. A specific important prediction of the theory is that for resistive scattering by remote charged impurities, the exponent $\beta$ can reach a value as large as 2.7 for $k_F d \sim 1$, where $k_F \sim \sqrt{n}$ is the 2D Fermi wave vector and $d$ is the separation of the remote impurities from the 2D layer. Such an exponent $\beta$ ($>5/2$) is surprising because unscreened Coulomb scattering by remote impurities gives a limiting theoretical scaling exponent of $\beta = 5/2$, and naïvely one would expect $\beta(n) \le 5/2$ for all densities since unscreened Coulomb scattering should nominally be the situation bounding the resistive scattering from above. We find numerically and show theoretically that the maximum value of $\alpha$ ($\beta$), the mobility (conductivity) exponent, for 2D semiconductor quantum wells is around 1.7 (2.7) for all values of $d$ (and for both electrons and holes) with the maximum $\alpha$ occurring around $k_F d \sim 1$. We discuss experimental scenarios for the verification of our theory. author: - 'S. Das Sarma' - 'E. 
H. Hwang' title: 'Universal density scaling of disorder-limited low-temperature conductivity in high-mobility two-dimensional systems' --- introduction ============ It is well-known that carrier scattering by background quenched disorder arising from random impurities and defects limits the $T=0$ (i.e., low-temperature) residual conductivity of a metal or doped semiconductor. This “residual resistivity”, i.e., the extrapolated $T=0$ value of the electrical resistivity obtained from the measured low-temperature transport data, provides information about the extrinsic disorder in the material limiting transport properties. In 3D metals, this residual resistivity is not of much intrinsic fundamental interest except for the fact that very pure defect-free single crystals, where the background disorder is presumably very low, could have extremely low residual resistivity leading to very long electron mean free paths. In particular, the residual resistivity of different samples (with different levels of sample purity, i.e., different impurity or defect content) of the same metal could differ by orders of magnitude, and a metallic resistivity uniquely defining a particular metal (e.g. Cu, Al, or Ag) is only meaningful at higher temperatures ($\agt 100$ K) where phonon scattering dominates the electrical resistivity over impurity/defect/disorder scattering. At low temperatures, each metallic sample would have a unique resistivity reflecting its specific impurity content signature, and as such characterizing a metal by a unique resistivity is useless at low temperatures (i.e., each sample of the same metal would have different resistivity at $T=0$). 
In fact, the low-temperature resistivity of a particular sample typically depends on the preparation history of the sample, and annealing to room temperatures (where all samples of a particular metal do have the same resistivity) and then cooling down to low temperatures could substantially modify the sample resistivity as the impurity/disorder configuration could change due to annealing. Theoretical statements [@one] about residual resistivity (or equivalently, low temperature resistivity) of 3D metals thus focus on first principles calculations of the quantitative aspects of the local disorder potential arising from various defects or disorder in the metal (where the specific types of defect or impurity have to be specifically assumed), and then estimating the resistivity arising from various postulated disorder in terms of resistivity due to some specified impurity or defect content on an atomic percentage basis. Typical values of calculated residual resistivity for most simple metals fall in the $\sim 0.1-1$ $\mu \Omega$cm range per atomic percentage of impurities, and thus a metal with 99.999% (e.g., Cu) purity could have an extremely low residual resistivity of $10^{-10}$ $\Omega$cm, leading to elastic mean free paths $\sim 1$ cm although the typical phonon-limited room-temperature mean free paths in most metals are only $1-10$ Å with the room-temperature phonon-limited resistivity of most metals being around $\sim \mu\Omega$cm. Thus, theoretical studies [@one; @two] of disorder-limited residual resistivity of metals focus entirely on the quantitative modeling of various defect scattering in the metal using detailed materials-specific band structure and Boltzmann transport theories. 
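The arithmetic behind these order-of-magnitude estimates can be checked with the free-electron Drude relation $\rho = m v_F/(n e^2 \ell)$, i.e. $\ell = m v_F/(n e^2 \rho)$. The sketch below uses standard free-electron parameters for Cu (illustrative textbook values, not numbers taken from [@one; @two]); for residual resistivities in the $10^{-11}$–$10^{-10}$ $\Omega$cm range implied by the quoted 0.1–1 $\mu\Omega$cm per atomic percent, the elastic mean free path indeed comes out macroscopic, on the mm–cm scale:

```python
# Drude estimate of the elastic mean free path, l = m vF / (n e^2 rho).
# Free-electron parameters for Cu (standard textbook values, for illustration).
m_e = 9.109e-31        # electron mass (kg)
e   = 1.602e-19        # electron charge (C)
n   = 8.5e28           # conduction-electron density of Cu (m^-3)
vF  = 1.57e6           # Fermi velocity of Cu (m/s)

def mean_free_path(rho_ohm_m):
    """Elastic mean free path from the Drude/Boltzmann relation."""
    return m_e * vF / (n * e**2 * rho_ohm_m)

# residual resistivities of 1e-10 and 1e-11 Ohm*cm (= 1e-12 and 1e-13 Ohm*m)
for rho_cm in (1e-10, 1e-11):
    l = mean_free_path(rho_cm * 1e-2)
    print(f"rho = {rho_cm:.0e} Ohm*cm  ->  mean free path = {l*1e3:.1f} mm")
```

The same relation with a room-temperature resistivity of order $\mu\Omega$cm gives mean free paths shorter by seven to eight orders of magnitude, illustrating the enormous dynamic range between phonon-limited and residual transport.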
No systematic dependence of the residual resistivity of metals on various metallicity parameters (e.g., the Wigner-Seitz radius $r_s$ or lattice constant or Fermi energy) can in general be discussed qualitatively, and there is no metallic density scaling of conductivity that one can speak of since the carrier density cannot be tuned by an external gate voltage in metals (as it can be in 2D semiconductor systems or in graphene). In principle, there is only a modest variation in $r_s$ ($\sim 2-6$) among different 3D metals (since $r_s \sim n^{-1/3}$ where $n$ is the effective metallic free electron density), but the band structure variation in going from a metal to another would typically swamp any systematic $r_s$-dependence of the residual resistivity. Thus, the only systematic trend of the residual resistivity in metals which is discussed in the literature [@one; @two] is the dependence of the $T=0$ impurity-limited resistivity of a particular metal on the atomic type (e.g. atomic number) of the various impurities or on the type of defects causing the resistive scattering. It is basically a subject of quantitative details based on serious numerical calculations. In the current work we are interested in discussing qualitatively the impurity-limited ‘metallic’ residual (i.e. $T=0$) resistivity as a function of the electron density of the metal. In particular, we want to understand the density-dependence of the $T=0$ metallic resistivity assuming that the metallic density can be tuned continuously keeping all other parameters (e.g., disorder, band structure) fixed. Of course, such a situation is practically impossible in 3D metals since the electron density is fixed in each metal and cannot be tuned at all. But, as is well-known, such a tunable carrier density situation is routine in 2D “metallic” systems such as Si MOSFETs, gated semiconductor heterostructures and quantum wells, and gated graphene. 
In principle, a variable carrier density can be achieved in 3D doped semiconductor systems by changing the doping level (but keeping the same dopant element) in different samples, but this is not ideal for our purpose of a qualitative understanding of the density scaling of the metallic $T=0$ saturated resistivity: because the doping level cannot be tuned continuously in a single sample as it can be in 2D systems, there is always the possibility of unknown sample-to-sample variation beyond just the carrier density variation. We do, however, briefly consider the density scaling properties of 3D impurity-limited saturated resistivity for the sake of completeness although most of the current work focuses on 2D semiconductor structures (and graphene) where the carrier density can be easily tuned in the same sample by applying an external gate voltage, and thus the density-dependent conductivity is a meaningful concept in 2D “effective metallic” systems existing in semiconductor structures and graphene. Obviously, the $T=0$ conductivity ($\equiv \rho^{-1}$, where $\rho$ is resistivity) would manifest different density dependence for different types of background disorder, i.e., different types of impurity-electron interaction. The main resistive disorder scattering in relatively pure 3D metals is due to defects, vacancies, and impurities which scatter primarily through the short-range (essentially) $\delta$-function type scattering potential (although there are extended defects which could produce anisotropic longer-range disorder potential). Disorder scattering by $\delta$-function type point scatterers gives rise to rather uninteresting density scaling of electrical conductivity, again making 3D metals unsuitable for studying the density dependence of conductivity. Our main interest in this work is to obtain the density scaling of 2D $T=0$ “metallic” conductivity arising from Coulomb disorder induced by random quenched charged impurity centers in the environment.
Quenched Coulomb disorder is the dominant extrinsic resistive scattering mechanism in all semiconductor systems (2D or 3D) at low temperatures since doping by impurities is essential in inducing carriers in a semiconductor (whereas metals, by definition, have free carriers at the Fermi level even at $T=0$), leading to the inevitable existence of background Coulomb disorder. Thus, a key distinction between low-temperature transport in metals and semiconductors is that short-range disorder (arising from very strong metallic screening) dominates metallic transport whereas long-range Coulombic disorder (arising from random localized charged impurities) dominates transport in (both 2D and 3D) doped semiconductors. The density dependent conductivity in doped semiconductor structures (or graphene) arises from the carrier screening of background Coulombic disorder which depends sensitively on the dimensionless ratio $q_{TF}/2k_F$ where $q_{TF}$ and $k_F$ are respectively the (density dependent) screening wave vector and Fermi wave vector of the system. The variation in the carrier screening properties as a function of the carrier density eventually leads to the density dependence of the resistivity. Of course in doped semiconductors, one must worry about the additional complications of disorder-induced Anderson localization and/or low-temperature carrier freezeout and/or possible percolation transition associated with charged impurity-induced charge puddle formation. In the current work, we ignore all of these complications uncritically, focusing primarily on extremely high-quality modulation-doped 2D GaAs quantum well structures (and high-quality suspended graphene) where these complicating circumstances are absent down to very low carrier densities and very low temperatures. 
Our theoretical results presented in this paper, based on Drude-Boltzmann semiclassical transport theory, should apply to experimental systems above carrier densities (and temperatures) where localization (and related effects) become operational. The neglect of carrier localization and freezeout is not a particularly significant issue for our 2D theory and results because high-quality 2D semiconductor systems (and graphene) are not susceptible to these problems except perhaps at extremely low temperatures and carrier densities of little practical or experimental interest. On the other hand, our low-temperature and low-density transport results for 3D doped semiconductors are given here purely for the purpose of completeness and comparison with 2D results since 3D doped semiconductors typically become insulating at low carrier density (as well as low temperature) because of localization and carrier freezeout. Our focus and most of our presented results, as the title of our article clearly indicates, are on low-temperature density dependent metallic transport properties of very high-quality 2D systems where the concept of density scaling of conductivity is both theoretically and experimentally meaningful. One may wonder about the fact that the scaling theory of localization predicts that all 2D systems are strictly speaking Anderson insulators, and have, in principle, zero conductivity at $T=0$ for infinite samples at all carrier densities. (This is theoretically true even for graphene when disorder induced intervalley carrier scattering is taken into account.) 
For our purpose in the current work, where we are specifically interested in very high-mobility 2D structures with very low background disorder, the scaling theory of localization is a non-issue because (1) the samples are finite in size and the temperature, although low, is still finite; and (2) more importantly, the elastic mean free path is extremely long, making the effective 2D localization length much larger than the system size (or the phase breaking length, whichever is shorter). Thus, we are specifically considering the density dependence of the semiclassical part of the 2D resistivity in the situation where the semiclassical resistivity is much larger than the quantum (or weak localization) contribution to the resistivity. This limit is generic in high mobility 2D GaAs semiconductor systems and in graphene, and therefore our work is of wide validity. Strictly speaking, then, we are considering the density dependence of the zeroth order semiclassical conductivity in the situation where the quantum contribution to the resistivity is negligible. We emphasize that the density dependence (or scaling) of electrical transport properties is rarely discussed in the theoretical literature mainly because (as discussed above) such a density dependence of electrical conductivity is neither relevant nor interesting in 3D metals. Indeed, it is universally believed that the only aspect of electrical conductivity where studying the carrier density dependence is a meaningful endeavor is in the study of density-tuned Anderson localization as a ($T=0$) quantum phase transition, exactly the physical phenomenon we are excluding in the current work where we focus entirely on the Drude-Boltzmann part of the conductivity in high quality samples assuming quantum localization effects to be negligible.
Thus, the density scaling of our theory is not connected with quantum criticality at all, but with the behavior of the density dependence of the background disorder arising from nontrivial screening properties. In contrast to the density dependence, which is rarely discussed in the literature except in the context of the metal-insulator transition, the temperature dependence of transport properties has been extensively discussed for electronic materials (including many experimental and theoretical papers in 2D systems [@three; @four; @five; @six; @seven]) because the temperature dependent electrical resistivity generically contains qualitative information about the underlying resistive scattering mechanism. For example, a linear-in-$T$ higher temperature metallic resistivity is the hallmark of acoustic phonon scattering in both metals and semiconductors (and in both 2D and 3D systems) whereas acoustic phonons typically lead to strongly suppressed high power laws ($\sim T^{\eta}$ with $\eta \approx 4-7$ depending on the details) in the (2D or 3D) metallic resistivity below a system-dependent Bloch-Grüneisen temperature $T_{BG}$. Our current work focuses entirely on temperatures well below the Bloch-Grüneisen temperature ($T \ll T_{BG}$) so that phonons are completely ineffective in limiting the electrical conductivity. Optical phonons, which are often relevant in semiconductor transport at higher temperatures and indeed limit the room-temperature resistivity of 2D GaAs-based semiconductor structures [@eight], are exponentially suppressed as $e^{-\hbar \omega_{LO}/k_B T}$ at low temperatures (where $\hbar \omega_{LO} \sim 450$ K in GaAs) and are therefore completely irrelevant for our low-temperature transport considerations.
It is well-known that high-quality low-density 2D semiconductor systems (as well as graphene) often manifest a strong temperature dependence in their low-temperature resistivity [@three; @four; @five; @six; @seven], which arises from the strong temperature dependence of the screened Coulomb disorder [@nine] at low carrier densities. Again, our theoretical work at $T=0$ is completely free from any temperature-induced complications in the resistivity since our interest is in understanding the density dependence of the $T=0$ resistivity/conductivity, which can be obtained from the low-temperature experimental data by extrapolation to $T=0$ or by sitting always at a constant very low temperature (e.g., 50 mK) in obtaining the density dependence of the transport properties. In any case, the question we are interested in is perfectly well-defined as a matter of principle: How does the $T=0$ Drude-Boltzmann semiclassical conductivity of a 2D (or 3D) “metallic” system vary as a function of its carrier density, with background screened Coulomb disorder being the main resistive scattering mechanism? In the rest of this article, we provide a detailed theoretical answer to the question posed in the last sentence above.

Background
==========

The $T=0$ conductivity $\sigma(n)$ of a 2D metallic system is typically written as [@nine; @ten] $$\sigma(n) = \frac{e^2 v_F^2}{2} D(E_F)\tau(E_F), \label{eq:eq1}$$ in the Boltzmann theory, where $E_F$, $v_F$ are respectively the Fermi energy and the Fermi velocity, $D(E_F)$ is the carrier density of states at the Fermi surface, and $\tau$ is the so-called relaxation time (or the scattering time) for the relevant resistive scattering mechanism. (The factor 2 in the denominator of Eq. (\[eq:eq1\]) is replaced by 3 for 3D systems.)
It is assumed that $v_F$, $E_F$, and $D$ are known as a function of the carrier density $n$ from the relevant band structure information (and we will consider only the parabolic and the linear band approximation, with the linear approximation used for obtaining our graphene transport results). The whole theory then boils down to a calculation of the transport relaxation time $\tau$ at the Fermi surface using the appropriate microscopic scattering mechanism (which we describe in detail in section III of our paper). For parabolic bands with $E({\bf k}) = \hbar^2 k^2/2 m$ we can write $E_F=mv_F^2/2$, where $m$ is the carrier effective mass, and Eq. (\[eq:eq1\]) simply reduces to the celebrated Drude-Boltzmann transport formula: $$\sigma(n) = \frac{n e^2 \tau}{m}, \label{eq:eq2}$$ where $\tau\equiv \tau(E_F)$ and $n$ is the relevant (2D or 3D) carrier density. For graphene, Eq. (\[eq:eq2\]) requires a slight modification [@nine] which we will discuss later when we come to describing our graphene results. From this point on, unless otherwise stated, our explicit equations and formulas are given for parabolic band 2D systems — we will consider the very special linear band case of graphene separately at the appropriate juncture. Most of our results focus on 2D semiconductor systems, specifically 2D GaAs electron or hole quantum wells where the parabolic approximation applies well. We will point out the corresponding 3D parabolic band analytic results as appropriate, concentrating on presenting the equations and formulas mainly for 2D quantum well systems since most of our presented results are for 2D semiconductor systems, where the corresponding experiments are feasible.
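The reduction of Eq. (\[eq:eq1\]) to Eq. (\[eq:eq2\]) for a 2D parabolic band is easy to verify numerically; the sketch below is purely illustrative, working in arbitrary units with $\hbar = e = 1$, spin degeneracy 2, and the corresponding constant 2D density of states $D = m/\pi\hbar^2$.

```python
import numpy as np

# Numerical check (arbitrary units) that Eq. (1) reduces to the Drude
# formula Eq. (2) for a 2D parabolic band with spin degeneracy 2.
hbar = e = 1.0
m, tau = 0.5, 2.3                      # effective mass, relaxation time (arbitrary)

def sigma_eq1(n):
    """sigma = (e^2 v_F^2 / 2) D(E_F) tau, Eq. (1), for a 2D parabolic band."""
    kF = np.sqrt(2.0 * np.pi * n)      # k_F = (2 pi n)^{1/2}
    vF = hbar * kF / m                 # Fermi velocity
    D = m / (np.pi * hbar**2)          # constant 2D density of states
    return 0.5 * e**2 * vF**2 * D * tau

def sigma_drude(n):
    """sigma = n e^2 tau / m, Eq. (2)."""
    return n * e**2 * tau / m

for n in (1e-3, 0.1, 1.0, 10.0):       # 2D carrier density, arbitrary units
    assert np.isclose(sigma_eq1(n), sigma_drude(n))
```

The agreement is exact because $v_F^2 D/2 = (\hbar^2 k_F^2/m^2)(m/\pi\hbar^2)/2 = n/m$ for $k_F = (2\pi n)^{1/2}$.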
We note that one can, instead of discussing the conductivity $\sigma$, equivalently discuss the relaxation time $\tau$ or the resistivity $\rho = \sigma^{-1}$ or the mobility $\mu$, which is defined as $$\mu = \frac{e \tau}{m} = \frac{\sigma}{ne}.$$ Obtaining the density dependent conductivity $\sigma(n)$ now becomes equivalent to obtaining the density dependence of $\tau$ or $\mu$ since $\sigma \propto n \mu$ (or $n\tau$). We write: $$\tau \propto n^{\alpha}, \;\; {\rm i.e.}, \;\; \mu \propto n^{\alpha},$$ and therefore $$\sigma \sim n^{\beta}, \;\;\; \beta = \alpha + 1.$$ In general, we can define the conductivity exponent $\beta$, or equivalently the mobility exponent $\alpha$, through the relations: $$\alpha(n) = \frac{d \ln \mu}{d \ln n},$$ and $$\beta(n) = \alpha(n) + 1 = \frac{d \ln \sigma}{d \ln n}.$$ We note that, in general, the mobility exponent $\alpha \equiv \alpha_{\mu}$ and the relaxation time exponent $\alpha_{\tau}$ may be unequal (e.g., in graphene) except that for parabolic bands we have $\alpha_{\mu} = \alpha_{\tau} = \alpha$. But the relation $\beta = \alpha_{\mu} + 1=\alpha + 1$ always applies. The main goal of the current work is to calculate the exponent $\alpha(n)$ for various specified disorder mechanisms, and to discuss/contrast how the density scaling exponent depends on the type of disorder controlling the resistive scattering mechanism. We will obtain $\alpha(n)$ analytically in various limiting situations in our theoretical study. But the main technique would be to obtain $\mu(n)$ or $\sigma(n)$ numerically from the Boltzmann theory and then to calculate $\alpha(n)$ or $\beta(n)$. Experimentally, the way to estimate the exponent $\alpha(n)$ is to plot the measured low-temperature $\mu(n)$ against $n$, and then to carry out the logarithmic differentiation.
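The logarithmic differentiation just described can be carried out directly on tabulated (calculated or measured) mobility data; a minimal sketch, using a synthetic power-law $\mu(n)$ with hypothetical prefactor and density grid, chosen only to verify that the procedure recovers the exponent:

```python
import numpy as np

# Extract the mobility exponent alpha(n) = d ln(mu) / d ln(n) by
# logarithmic differentiation on a discrete density grid.
def density_exponent(n, mu):
    return np.gradient(np.log(mu), np.log(n))

n = np.logspace(10, 12, 101)           # densities, e.g. in cm^-2 (hypothetical grid)
mu = 3.0e5 * (n / 1e11) ** 1.5         # synthetic mu ~ n^{3/2}, i.e. alpha = 3/2

alpha = density_exponent(n, mu)
beta = alpha + 1.0                     # sigma = n e mu  =>  beta = alpha + 1
assert np.allclose(alpha, 1.5)
assert np.allclose(beta, 2.5)
```

For a pure power law sampled on a logarithmically uniform grid the finite-difference derivative is exact; for noisy experimental $\mu(n)$ data some smoothing before differentiation would be needed.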
In general, $\alpha(n)$ will depend strongly on the background disorder, and will vary smoothly as a function of carrier density as different scattering mechanisms are operational in different density regimes and as the background Coulomb disorder is screened differently at different carrier density through the variation in $q_{TF}/2k_F$. An important surprising finding of our theory is an interesting nonmonotonic variation in the scaling exponent $\alpha(n)$ as a function of carrier density. It is important to point out at this stage that graphene, because of its linear dispersion with a constant Fermi velocity ($v_F \equiv v_0$), does not obey [@nine] the exponent scaling relation $\beta = \alpha_{\tau} + 1$ connecting the density scaling of the conductivity ($\beta$) to that of the relaxation time ($\alpha_{\tau}$). Instead, for graphene [@nine], we have $\beta = \alpha_{\tau} + 1/2$ if $\alpha_{\tau}$ is defined by $\tau \sim n^{\alpha_{\tau}}$, which follows from the constancy of $v_F$ and the fact that $D(E_F) \propto E_F \propto \sqrt{n}$ in graphene. Putting $D(E_F) \propto k_F \propto \sqrt{n}$ in Eq. (\[eq:eq1\]), we get for graphene $\sigma(n) \sim \sqrt{n} \tau(n)$, i.e., $\beta = \alpha_{\tau}+1/2$. Thus, whereas in ordinary parabolic systems the exponent $\alpha$ ($\equiv \alpha_{\tau} \equiv \alpha_{\mu}$) is the same for both the mobility ($\mu$) and the relaxation time ($\tau$), in graphene, by virtue of its linear band dispersion, $\alpha=\alpha_{\mu} = \alpha_{\tau}-1/2$, but the relationship between the conductivity exponent ($\beta$) and the mobility exponent ($\alpha$) is still given by $\beta = \alpha + 1$ since by definition $\sigma = ne \mu$.
Model and Theory
================

The central quantity to obtain theoretically in the semiclassical Boltzmann transport theory is the relaxation time $\tau$ or equivalently the relaxation rate $\tau^{-1}$, which is given by the following expression for 2D systems, within the leading order Born approximation, for carrier scattering at $T=0$ by disorder [@nine; @ten]: $$\begin{aligned} \frac{1}{\tau} = \frac{2\pi}{\hbar} \sum_{\gamma} & &\int N_i^{(\gamma)}(z) dz \int \frac{d^2 k'}{(2\pi)^2} \left |{V^{(\gamma)}_{\bf k-k'}(z)} \right |^2 \nonumber \\ &\times & (1-\cos \theta_{\bf kk'}) \delta [E({\bf k})-E({\bf k'})]. \label{eq:eq7}\end{aligned}$$ Here $N_i^{(\gamma)}(z)$ is the 3D density of random impurities of the $\gamma$-th type (in general, there could be several different types of impurities present in the system: near and far, 2D or 3D, long- or short-range) with $z$ being the direction perpendicular to the plane of the 2D system (which lies in the $x$-$y$ plane); $V_{\bf q}(z)$ is the electron-impurity interaction (in the 2D wave vector space defined by [**q**]{}). ${\bf k, k'}$ are the incoming and outgoing 2D carrier wave vectors involved in the scattering process with a scattering angle $\theta_{\bf kk'}$ between them and ${\bf k-k'}$ being the scattering wave vector; $E({\bf k}) = \hbar^2 k^2/2m$ is the carrier energy. Note that our disorder model assumes a random distribution of impurities in the 2D $x$-$y$ plane although it is easy to include correlations in the 2D impurity distribution if experiment indicates the importance of such correlations. Once the scattering potential $V_{\bf q}(z)$ is defined, the problem of calculating the 2D conductivity becomes simply a question of evaluating the 4-dimensional integral given in Eq. (\[eq:eq7\]). Note that we are restricting ourselves to the $T=0$ case (or to low temperatures); otherwise a thermal average would be required in defining $\tau^{-1}$, necessitating a 5-dimensional integration. We note that although Eq.
(\[eq:eq7\]) applies only to 2D systems, a very slight modification gives us the corresponding expressions for 3D systems and graphene, which we do not show here. We note here that for graphene there is a well-known additional form factor of ($1+\cos\theta_{\bf kk'}$) inside the integral in Eq. (\[eq:eq7\]) arising from chirality [@nine]. It is worthwhile to point out here that the theoretical idea of a meaningful universal density scaling behavior of conductivity applies as a matter of principle only when the resistive scattering is dominated by a particular disorder mechanism. If there are many different types of disorder (i.e., several $\gamma$-types) contributing comparably to the resistivity, then the net resistivity will be given by Matthiessen’s rule: $\tau^{-1}(n) = \sum_{\gamma} \tau_{\gamma}^{-1}(n)$, and $\rho(n) = \sum_{\gamma}\rho_{\gamma}(n)$, i.e., $ \sigma^{-1}(n) = \sum_{\gamma} \sigma^{-1}_{\gamma}(n)$, where $\rho_{\gamma}$, $\sigma_{\gamma}$, $\tau_{\gamma}$ are respectively the resistivity, conductivity, and scattering time for the $\gamma$-th type of disorder arising from $N_i^{(\gamma)}(z)$ in Eq. (\[eq:eq7\]). In such a situation, unless one particular type of disorder (i.e., one specific $\gamma$) dominates the scattering, the resulting density dependence of the total $\sigma$ (or $\mu$) will manifest complex crossover behavior arising from the combination of all the different scattering processes contributing with different strengths, and there will not be any universal density dependent scaling behavior of $\sigma(n)$ or $\mu(n)$. To exemplify this important point, we consider a strict 2D electron gas being scattered by three different types of disorder (i.e., $\gamma=1$, 2, 3) given by remote random charged impurities at a distance $d$ from the 2D system, background charged impurities in the 2D layer, and zero-range disorder in the layer, each with its respective conductivity exponent $\beta_{\gamma}$ with $\gamma = 1$, 2, 3 respectively.
We can formally write: $$\begin{aligned} \sigma^{-1}(n) & = & \sigma^{-1}_1 + \sigma^{-1}_2 + \sigma^{-1}_3 \nonumber \\ & = & A_1n^{-\beta_1} + A_2 n^{-\beta_2} + A_3 n^{-\beta_3},\end{aligned}$$ where $A_{\gamma} \propto n_{i,\gamma}$ is the strength of each independent physical scattering process (i.e., $n_{i,1}$, $n_{i,2}$, $n_{i,3}$ denote respectively the remote and background charged impurity density and the short-range defect density). If we now define the net conductivity exponent $\beta(n)$ through the usual $\beta = d \ln \sigma/ d\ln n$ definition using the total conductivity $\sigma(n)$, then obviously $\beta(n)$ will be a complex (and opaque) function not only of $\beta_1$, $\beta_2$, and $\beta_3$, but also of the disorder strengths $n_{i,1}$, $n_{i,2}$, $n_{i,3}$. Thus, the extraction of a [*universal*]{} density exponent $\beta$ (or $\alpha$) makes sense only when one scattering mechanism dominates — generically, the conductivity/mobility exponent $\beta$/$\alpha$ ($=\beta -1$) is nonuniversal and depends in a complex manner both on the individual scattering mechanisms and on the relative strengths of the different operational scattering mechanisms. We consider below theoretically several different disorder potentials which may be operational in real 2D and 3D systems, obtaining asymptotic analytical expressions for the exponents $\alpha$ and $\beta$ in the process as applicable.

Zero-range disorder
-------------------

Zero-range disorder, $V_{\bf q} \equiv V_0$ (a constant), corresponds to pure $\delta$-function real space scatterers distributed randomly in space. Without any loss of generality, we can drop the $z$-dependence of the disorder (since the electron-impurity interaction is spatially localized), and assume the 2D electron system to be of zero thickness in the $z$-direction interacting with the random zero-range scatterers situated in the same plane. For 3D systems we of course assume the scatterers to be randomly distributed three dimensionally.
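Returning briefly to the multi-channel Matthiessen's-rule situation discussed above, the non-universality of the net exponent $\beta(n)$ can be made concrete with a toy numerical example; the three channel exponents and strengths below are hypothetical, chosen only to exhibit the crossover between the limiting power laws.

```python
import numpy as np

# Toy illustration of Matthiessen's rule: the effective exponent
# beta(n) = d ln(sigma)/d ln(n) for a sum of three power-law resistive
# channels drifts between the individual beta_gamma values and is not
# universal.  Exponents and strengths here are hypothetical.
n = np.logspace(-2, 2, 400)                 # carrier density, arbitrary units
beta_g = [2.5, 1.5, 1.0]                    # channel conductivity exponents
A = [1.0, 1.0, 1.0]                         # channel strengths ~ n_{i,gamma}

rho = sum(a * n**(-b) for a, b in zip(A, beta_g))   # rho = sum_gamma rho_gamma
beta_eff = np.gradient(np.log(1.0 / rho), np.log(n))

# The steepest channel dominates at low n, the shallowest at high n.
assert abs(beta_eff[0] - 2.5) < 0.1
assert abs(beta_eff[-1] - 1.0) < 0.1
```

In between the two limits $\beta_{\rm eff}(n)$ takes intermediate, strength-dependent values, which is exactly why a universal exponent can only be extracted when one mechanism dominates.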
For the constant $V_{\bf q}$ model potential it is straightforward to do the momentum integration in Eq. (\[eq:eq7\]) to obtain the following results for 2D and 3D semiconductor systems and graphene: $$\begin{aligned} \tau & \sim & n^0 \;\;\;\;\;\;\;\; {\rm 2D}, \\ \tau & \sim & n^{-1/3} \;\;\; {\rm 3D}, \\ \tau & \sim & n^{-1/2} \;\;\; {\rm graphene}.\end{aligned}$$ We note that in graphene the mobility exponent ($\alpha \equiv \alpha_{\mu}$) differs from the relaxation time exponent $\alpha_{\tau}$ by $\alpha_{\mu}=\alpha_{\tau} -1/2$. In obtaining the density dependence of the relaxation time above we use the standard expressions for $k_F$ and $E_F$ for parabolic 2D, 3D systems, and graphene: $$\begin{aligned} k_F & = & (2\pi n)^{1/2}; \;\;\; E_F = \hbar^2 \pi n/m: \; {\rm 2D}, \\ k_F & = & (3\pi^2 n)^{1/3}; \; E_F = \hbar^2 (3\pi^2 n)^{2/3}/2m: \; {\rm 3D}, \\ k_F & = & (\pi n)^{1/2}; \;\;\;\; E_F=\hbar v_0 (\pi n)^{1/2}: \; {\rm graphene}.\end{aligned}$$ \[eq:eq11\] We use $n$ throughout to denote the relevant 2D or 3D carrier density of the “metal”, and $v_0$ in Eq. (\[eq:eq11\]c) is the constant graphene Fermi velocity defining its linear energy-momentum relationship, $E=\hbar v_0 k$ ($v_0 \approx 10^8$ cm/s). We assume a spin degeneracy of 2 throughout and an additional valley degeneracy of 2 for graphene in defining $k_F$ and $E_F$.
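These power laws follow from the golden-rule structure of Eq. (\[eq:eq7\]): for a constant matrix element the rate is $\tau^{-1} \propto n_i V_0^2 D(E_F)$, so the density dependence of $\tau$ is fixed entirely by how the density of states at the Fermi level scales with $n$. A minimal numerical sketch of this bookkeeping (densities in arbitrary units, constants dropped):

```python
import numpy as np

# For zero-range (delta-function) disorder, tau^{-1} ~ n_i V_0^2 D(E_F):
# the density scaling of tau comes entirely from D(E_F).
n = np.logspace(-2, 2, 200)              # carrier density, arbitrary units

dos = {                                  # D(E_F) vs n, up to constants
    "2D":       np.ones_like(n),         # constant DOS, parabolic band
    "3D":       n ** (1.0 / 3.0),        # D ~ k_F ~ n^{1/3}
    "graphene": n ** 0.5,                # D ~ E_F ~ n^{1/2}
}
expected_alpha_tau = {"2D": 0.0, "3D": -1.0 / 3.0, "graphene": -0.5}

# tau ~ 1/D(E_F); extract its exponent by logarithmic differentiation.
alpha_tau = {s: np.gradient(np.log(1.0 / D), np.log(n)) for s, D in dos.items()}
for s in dos:
    assert np.allclose(alpha_tau[s], expected_alpha_tau[s])
```

Combining these with $\sigma \propto n\tau$ (parabolic bands) or $\sigma \propto \sqrt{n}\,\tau$ (graphene) reproduces the conductivity exponents quoted in the next paragraph.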
For zero-range $\delta$-function disorder, which is equivalent to assuming an uncorrelated white-noise disorder (often also called “short-range disorder” in the literature, somewhat misleadingly), we therefore have (we define $\alpha=\alpha_{\mu}$ always as the mobility exponent) $$\alpha =0 \;\;({\rm 2D}); \;\;\; -1/3 \;\;({\rm 3D}); \;\;\; -1 \;\; ({\rm graphene}),$$ and $$\beta = \alpha +1 = 1 \;\;({\rm 2D}); \;\;\; 2/3 \;\;({\rm 3D}); \;\;\; 0 \;\; ({\rm graphene}).$$ These results for 2D parabolic systems and graphene are known [@nine; @ten] and show that the conductivity grows linearly with carrier density in 2D systems and becomes a density independent constant in graphene when transport is limited or dominated by zero-range white-noise type $\delta$-function background disorder. The zero-range transport result for 3D systems (metals or doped semiconductors) does not appear to be as well-known (perhaps because the density dependence of conductivity is not of much experimental interest in 3D systems as discussed in the Introduction of this paper) and surprisingly shows a sub-linear $\sim n^{2/3}$ increase in $\sigma(n)$ for $\delta$-correlated short-range white-noise disorder. We now move on to the more interesting and experimentally more relevant Coulomb disorder, and discuss a number of disorder models pertaining to Coulomb disorder in the next two subsections.

Unscreened (long-range) Coulomb disorder
----------------------------------------

In most semiconductor systems (2D or 3D), the dominant background disorder arises from quenched random charged impurities in the environment. Thus, the bare electron-impurity interaction is invariably the long-range $1/r$ Coulomb interaction arising from the electric potential of the charged impurity.
The charged impurity could be an intentional dopant impurity introduced to provide the doping necessary to create free carriers in the semiconductor or an unintentional charged impurity invariably present even in the cleanest semiconductor. In general, the Coulomb disorder from the random charged impurities should be screened by the free carriers themselves so that the effective (screened) Coulomb disorder is short-ranged (this is the infra-red regularization necessary for handling the long range nature of Coulomb interaction). We consider the screened Coulomb disorder in the next three sub-sections, focusing here on the unscreened bare Coulomb disorder for the sake of theoretical completeness. The disorder potential $V_{\bf q}(z)$ is given by the following equation for the unscreened Coulomb interaction: $$V_{\bf q}(z) = \frac{2\pi Z e^2}{\kappa q} e^{-q|z|}, \label{eq:eq14}$$ for 2D systems and graphene, and $$V_{\bf q}(z) = \frac{4\pi Z e^2}{\kappa q^2}, \label{eq:eq15}$$ for 3D systems. Here $q=|{\bf q}|$ in Eqs. (\[eq:eq14\]) and (\[eq:eq15\]) is the appropriate 2D and 3D wave vector, $Z$ is the atomic number of the charged impurity center with $Ze$ being its charge (we consider $Z=1$ throughout), and $\kappa$ is the background static lattice dielectric constant. The coordinate $z$ in Eq. (\[eq:eq14\]) denotes the spatial separation of the charged impurity from the 2D confinement plane of the electron layer (taken to be located at $z=0$ in the $x$-$y$ plane). We note that Eqs. (\[eq:eq14\]) and (\[eq:eq15\]) are simply the Fourier transform of the $1/r$ three-dimensional Coulomb interaction (with ${\bf r}\equiv (x,y,z)$ a 3D space vector) in 2D and 3D systems. 
For 3D systems, unscreened Coulomb disorder leads to the following expression for the scattering rate: $$\begin{aligned} \tau^{-1} = \frac{2\pi N_i}{\hbar} & & \int \frac{d^3 k'}{(2\pi)^3} \left [ \frac{4\pi e^2}{\kappa |{\bf k-k'}|^2} \right ]^2 \nonumber \\ & \times & (1-\cos \theta_{\bf kk'}) \delta(E_k-E_{k'}), \label{eq:eq16}\end{aligned}$$ where all wave vectors are now three dimensional and $N_i$ is the 3D impurity density. The momentum integration in Eq. (\[eq:eq16\]) is straightforward, and it is well-known [@eleven; @twelve] that the integral has a logarithmic divergence arising from the long-range nature of the bare Coulomb interaction (i.e., an infra-red singularity). We get: $$\tau^{-1} \sim n^{-1} \ln b \rightarrow \infty, \label{eq:eq17}$$ with $b= 4 k_F^2/q_{TF}^2 \rightarrow \infty$, where $k_F = (3\pi^2 n)^{1/3}$, and $q_{TF}$, which in principle is the 3D screening wave vector, goes to zero in the unscreened approximation, leading to a logarithmic divergence without the screening cut-off of the long-range Coulomb interaction. There are several ways of cutting off this long-range logarithmic Coulomb divergence (e.g., the Conwell-Weisskopf approximation or the Brooks-Herring-Dingle approximation) which have been much discussed in the transport literature on doped semiconductors [@eleven; @twelve]. Since our interest is mainly focused on 2D systems where the carrier density can be tuned continuously (in contrast to 3D systems), we do not further discuss the implications of Eq. (\[eq:eq17\]) for 3D doped semiconductors. For 2D systems, one must distinguish among (at least) three different kinds of Coulomb disorder: random 2D charged impurities in the 2D plane of the carriers \[i.e., $z=0$ in Eq. (\[eq:eq14\])\], random charged impurities in a 2D layer parallel to the 2D carriers with a separation $d$ (i.e. $z=d$), and 3D random charged impurity centers in the background (where $z$ varies over a region in space).
We refer to these three situations as 2D near impurities, remote impurities, and 3D impurities, respectively, throughout this paper. We first consider the 2D near impurity case with $z=0$ assuming unscreened impurity-electron Coulomb interaction in the 2D plane: $$V_q(z=0) = \frac{2\pi e^2}{\kappa q}.$$ This then gives (with $n_i$ now as the 2D impurity density): $$\begin{aligned} \tau^{-1} = \frac{2\pi n_i}{\hbar} & & \int \frac{d^2k'}{(2\pi)^2} \left [ \frac{2\pi e^2}{\kappa |{\bf k-k'}|} \right ]^2 \nonumber \\ & \times & (1-\cos \theta_{\bf kk'}) \delta(E_k-E_{k'}). \label{eq:eq19}\end{aligned}$$ The 2D integral in Eq. (\[eq:eq19\]), in contrast to the corresponding (divergent) 3D integral defined by Eq. (\[eq:eq16\]), is convergent: $$\tau^{-1} \sim k_F^{-2} \sim n^{-1}. \label{eq:eq20}$$ Thus $\alpha =1$ in 2D systems for the unscreened 2D impurity case (while the corresponding 3D case is logarithmically divergent). The exponent $\beta = \alpha + 1=2$ for the unscreened 2D Coulomb impurity in 2D “metallic” systems. Next we consider the 2D system with 2D remote impurities ($z=d \neq 0$). The relaxation rate \[Eq. (\[eq:eq7\])\] is now given by $$\begin{aligned} \tau^{-1} = \frac{2\pi n_i}{\hbar} & & \int \frac{d^2k'}{(2\pi)^2} \left [ \frac{2\pi e^2 e^{-|{\bf k-k'}| d}}{\kappa |{\bf k-k'}|} \right ]^2 \nonumber \\ & \times & (1-\cos \theta_{\bf kk'}) \delta(E_k-E_{k'}). \label{eq:eq21}\end{aligned}$$ The integral in Eq. (\[eq:eq21\]) can be rewritten in a dimensionless form $$\tau^{-1} \sim k_F^{-2} \int_0^1 dx\frac{e^{-2x d_0}}{\sqrt{1-x^2}}, \label{eq:eq22}$$ where $d_0 = 2k_F d$ is dimensionless. \[We note that putting $d_0=0$ for $d=0$ gives us the 2D near impurity results of Eq. (\[eq:eq20\]).\] The asymptotic carrier density dependence ($k_F \propto \sqrt{n}$) implied by Eq. (\[eq:eq22\]) depends sensitively on whether $k_Fd$ \[$\equiv d_0$ in Eq. (\[eq:eq22\])\] is small ($k_F d \ll 1$) or large ($k_F d \gg 1$). For $k_Fd \ll 1$, we get from Eq. 
(\[eq:eq22\]) $$\tau^{-1} \sim k_F^{-2} \sim n^{-1}$$ and for $k_Fd \gg 1$, we get from Eq. (\[eq:eq22\]) $$\tau^{-1}\sim k_F^{-3} \sim n^{-3/2}.$$ Thus, $\alpha =1$ ($3/2$) for $k_F d \ll 1$ ($\gg 1$) and therefore $\beta = 2$ ($5/2$) for $k_Fd \ll 1$ ($\gg 1$) for 2D carriers in the presence of remote Coulomb scatterers. Finally, we consider the 2D carrier system with a 3D random distribution of charged impurity centers. The integral for the 2D relaxation rate (Eq. (\[eq:eq7\])) now becomes, with $q_{TF} \rightarrow 0$, $$\tau^{-1}\sim k_F^{-3} \ln\left ( \frac{q_{TF}}{2k_F} \right ) \sim n^{-3/2} \ln \left ( \frac{q_{TF}}{2k_F} \right ),$$ which has the same logarithmic divergence for the unscreened Coulomb disorder as the corresponding 3D case considered above in Eq. (\[eq:eq17\]), but with a different 2D density exponent ($\sim n^{-3/2}$) from the corresponding 3D density exponent ($\sim n^{-1}$) in Eq. (\[eq:eq17\]). Thus, 2D carrier systems with unscreened 3D Coulomb disorder would have a logarithmically divergent resistivity necessitating a length cut-off on the long range part of the bare Coulomb potential similar to the well-known situation for long-range bare Coulomb disorder in 3D systems (e.g. doped semiconductors). It is important to emphasize our interesting theoretical finding that, although 3D unscreened Coulomb disorder leads to a logarithmic long-range divergence in the resistivity of both 2D and 3D systems independent of the carrier density (i.e. $\tau^{-1}$ diverges logarithmically), the corresponding situation for 2D carrier systems with 2D Coulomb disorder has no divergence and does not require any cut-off dependent infra-red regularization. It turns out that the problem of 2D metals with unscreened Coulomb disorder arising from random 2D charged impurities is perfectly well-defined within the Boltzmann transport theory.
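The two limits quoted above can be checked by evaluating the dimensionless integral in Eq. (\[eq:eq22\]) directly; the substitution $x = \sin t$ removes the inverse-square-root endpoint singularity. A numerical sketch:

```python
import numpy as np

# I(d0) = int_0^1 dx exp(-2 x d0)/sqrt(1-x^2), the dimensionless integral
# of Eq. (22) with d0 = 2 k_F d.  With x = sin(t) this becomes
# int_0^{pi/2} dt exp(-2 d0 sin t), free of the endpoint singularity.
def I(d0, npts=200_001):
    t = np.linspace(0.0, np.pi / 2.0, npts)
    f = np.exp(-2.0 * d0 * np.sin(t))
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * (t[1] - t[0]))  # trapezoid rule

# d0 -> 0:  I -> pi/2 (a constant), so tau^{-1} ~ k_F^{-2}  (alpha = 1)
assert abs(I(0.0) - np.pi / 2.0) < 1e-9

# d0 >> 1:  I -> 1/(2 d0), so tau^{-1} ~ k_F^{-3}  (alpha = 3/2)
assert abs(I(50.0) - 1.0 / 100.0) < 1e-4
```

Evaluating $I(2k_Fd)$ on a density grid and logarithmically differentiating $k_F^{-2}I$ would trace out the smooth crossover of $\alpha(n)$ between 1 and 3/2.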
Of course, the unscreened Coulomb disorder model is not realistic and the calculated conductivity may not agree with the experimental data, but theoretically it is perfectly well-defined, with the density exponents $\alpha = 1$ ($3/2$) and $\beta = 2$ ($5/2$) depending on whether $k_Fd \ll 1$ (or $\gg 1$), where $d$ is the location of the impurities with respect to the 2D carrier layer. One may wonder about the fundamental reason underlying the necessity for infra-red regularization in the 3D unscreened case and [*not*]{} in the 2D unscreened case. This arises from the 3D bare Coulomb potential ($\sim 1/q^2$) being much more singular in the long wavelength $q\rightarrow 0$ limit than the corresponding 2D Coulomb potential ($\sim 1/q$). This difference between 2D and 3D is sufficient to make the 3D Coulomb disorder case infra-red divergent, whereas the 2D case is non-divergent without any infra-red regularization. We do not discuss the unscreened Coulomb disorder case separately for graphene: since the graphene screening wave vector $q_{TF}$ is proportional to the Fermi wave vector ($q_{TF} \propto k_F$), screened and unscreened Coulomb disorder have the same carrier density dependence of the conductivity (we consider screened Coulomb disorder in the next subsection).
This means that the screening functions in 2D and 3D parabolic systems are the static dielectric functions first calculated by Stern [@thirteen] and Lindhard [@fourteen], respectively. For graphene, we use the dielectric screening function first calculated in Ref. \[\]. The relevant screened Coulomb disorder potential is given by: $$V_{q} \equiv \frac{v_q}{\epsilon(q)},$$ where $v_q$ is the long-range Coulomb potential and $\epsilon(q)$ is the appropriate static RPA dielectric function. In general, the screened Coulomb potential can be rewritten as $$V_q = \frac{2\pi e^2}{\kappa (q + q_{TF})},$$ for 2D systems and graphene, and $$V_q = \frac{4\pi e^2}{\kappa (q^2 + q_{TF}^2)},$$ for 3D systems. The screening wave vector $q_{TF}$, sometimes referred to as the Thomas-Fermi wave vector, is given by the following expressions (obtained in a straightforward manner from the corresponding static polarizability function or the dielectric function in Refs. ): $$\begin{aligned} q_{TF} & = & \frac{2 m e^2}{\kappa \hbar^2} \;\;\;\;\;\;\; {\rm 2D}, \\ q_{TF} & = & \left ( \frac{4me^2 k_F}{\pi \kappa \hbar^2} \right )^{1/2} \; {\rm 3D}, \\ q_{TF} & = & \frac{4 e^2 k_F}{\kappa \hbar v_0} \;\;\;\;\;\; {\rm graphene}.\end{aligned}$$ \[eq:eq28\] We have used a valley degeneracy of 1 (2) for 2D/3D (graphene) systems in Eqs. (\[eq:eq28\]), and chosen a spin degeneracy of 2. We note the (well-known) result that in the 2D parabolic electron system the screening wave vector is a constant, whereas in graphene (3D parabolic system) it is proportional to $k_F$ ($\sqrt{k_F}$).
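These scalings can be checked with a one-line log-derivative. The sketch below is purely illustrative (all dimensional prefactors are set to unity, which does not affect the exponent) and uses the standard degeneracy-2 relations between $k_F$ and $n$:

```python
import math

# Log-derivative check of the density scaling of q_s = q_TF/(2 k_F).
# Prefactors are set to unity (arbitrary units); only d ln q_s / d ln n matters.
kF = {
    '2D':       lambda n: math.sqrt(2 * math.pi * n),       # k_F ~ n^{1/2}
    '3D':       lambda n: (3 * math.pi**2 * n) ** (1 / 3),  # k_F ~ n^{1/3}
    'graphene': lambda n: math.sqrt(math.pi * n),           # k_F ~ n^{1/2}
}
qTF = {
    '2D':       lambda n: 1.0,                      # constant
    '3D':       lambda n: math.sqrt(kF['3D'](n)),   # ~ k_F^{1/2}
    'graphene': lambda n: kF['graphene'](n),        # ~ k_F
}

def qs_exponent(system, n=1.0, eps=1e-4):
    qs = lambda m: qTF[system](m) / (2 * kF[system](m))
    return (math.log(qs(n * (1 + eps))) - math.log(qs(n))) / math.log(1 + eps)

for s in ('2D', '3D', 'graphene'):
    print(s, round(qs_exponent(s), 3))   # expect -1/2, -1/6, 0 respectively
```

This reproduces the exponents $q_s \sim n^{-1/2}$ (2D), $n^{-1/6}$ (3D), and $n^0$ (graphene) discussed next.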
It may be worthwhile to discuss the dimensionless screening strength characterized by the parameter $q_s=q_{TF}/2k_F$, which is given by $$\begin{aligned} q_{s} & = & \frac{2 m e^2}{\kappa \hbar^2k_F} \sim n^{-1/2} \;\;\;\;\; {\rm 2D}, \\ q_{s} & = & \left ( \frac{4me^2}{\pi \kappa \hbar^2 k_F} \right )^{1/2} \sim n^{-1/6} \;\; {\rm 3D}, \\ q_{s} & = & \frac{4 e^2}{\kappa \hbar v_0} \sim n^0 \;\;\;\;\; {\rm graphene}.\end{aligned}$$ \[eq:eq31\] We note the curious (albeit well-established) result that the dimensionless screening strength increases (very slowly in 3D $\sim n^{-1/6}$) with decreasing density ($\sim n^{-1/2}$ in 2D) except in graphene, where $q_{TF} \propto k_F$ leads to a density-independent $q_s$. Thus, the strong (weak) screening limit with $q_s =q_{TF}/2k_F \gg 1$ ($\ll 1$) is reached at low (high) carrier density in 2D and 3D metallic systems. This peculiar density-dependence of screening has qualitative repercussions for the transport exponents $\alpha$ and $\beta$ as a function of density. We now rewrite Eq. (\[eq:eq7\]) for the 2D relaxation rate $\tau^{-1}$ in terms of screened Coulomb disorder, obtaining: $$\tau^{-1} = \left (\frac{n_i m}{\pi \hbar^3 k_F^2} \right ) \left (\frac{2\pi e^2}{\kappa} \right )^2 I_{22}(q_s,d_0), \label{eq:eq34}$$ with $$I_{22}(q_s,d_0) = \int_0^1 \frac{e^{-2 x d_0} x^2 dx}{(x+q_s)^2 \sqrt{1-x^2}}, \label{eq:eq35}$$ for 2D carriers and 2D impurities (with impurity density $n_i$ per unit area) with $q_s = q_{TF}/2k_F$ and $d_0 = 2k_F d$ (with $z=d$ defining the impurity locations). Similarly, $$\tau^{-1} = \left ( \frac{N_i m}{8\pi \hbar^3 k_F^3} \right ) \left(\frac{2\pi e^2}{\kappa} \right )^2 I_{23}(q_s),$$ with $$I_{23}(q_s) = \int_0^1 \frac{dx}{(q_s + \sqrt{1-x^2})^2},$$ for 2D carriers and 3D impurities (with impurity density $N_i$ per unit volume). We note that putting $q_s=0$ (i.e. no screening) immediately produces the density scaling exponents obtained in the last subsection.
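A short numerical sketch shows how a density-dependent exponent $\alpha(n) = d\ln\mu/d\ln n$ follows from Eqs. (\[eq:eq34\])-(\[eq:eq35\]), using $\mu \propto \tau \propto k_F^2/I_{22}$ and $k_F = \sqrt{2\pi n}$. The units here are illustrative (we set $q_{TF}=1$ and measure $n$ and $d$ in matching arbitrary units); this is not the production calculation used for the figures, only a demonstration of the limiting exponents:

```python
import math

def I22(qs, d0, steps=8000):
    """I_22 of Eq. (eq35) via x = sin(theta), which removes the
    1/sqrt(1-x^2) singularity; simple trapezoidal rule."""
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps + 1):
        s = math.sin(i * h)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * s * s * math.exp(-2.0 * s * d0) / (s + qs) ** 2
    return h * total

def alpha(n, qTF=1.0, d=0.0, eps=1e-3):
    """alpha = d ln mu / d ln n with mu ~ tau ~ k_F^2 / I_22 and
    k_F = sqrt(2 pi n); n, qTF, d in matching arbitrary units."""
    def ln_mu(nn):
        kF = math.sqrt(2 * math.pi * nn)
        return 2 * math.log(kF) - math.log(I22(qTF / (2 * kF), 2 * kF * d))
    return (ln_mu(n * (1 + eps)) - ln_mu(n)) / math.log(1 + eps)

print(alpha(1e-4))          # near impurities, strong screening (q_s >> 1): alpha ~ 0
print(alpha(1e4))           # near impurities, weak screening (q_s << 1): alpha ~ 1
print(alpha(10.0, d=5.0))   # remote impurities (2 k_F d >> 1): alpha ~ 3/2
```

The three regimes reproduce the near-impurity strong/weak-screening exponents and the remote-impurity exponent derived below.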
For 3D carriers with (obviously) 3D random charged impurity distribution, we get $$\tau^{-1} = \left (\frac{N_i m}{8\pi \hbar^3 k_F^3} \right ) I_{33}(q_s),$$ where $$I_{33}(q_s) = \int_0^1 dx \frac{1-x}{(1-x+2q_s^2)^2}.$$ Again, putting $q_s=0$ in the 3D result above produces the logarithmically divergent relaxation rate discussed in Sec. IIIB for the 3D unscreened Coulomb disorder. Finally, for graphene we write down the relaxation rates for scattering by screened Coulomb disorder arising from 2D and 3D charged impurity distributions, respectively, by following the standard references [@nine; @sixteen]: $$\tau^{-1} =\left( \frac{n_i}{\pi \hbar^2 v_0 k_F} \right ) \left( \frac{2\pi e^2}{\kappa} \right )^2 I_{G2}(q_s,d_0),$$ where $$I_{G2} = \int_0^1 dx \frac{x^2 \sqrt{1-x^2}}{(x+q_s)^2}e^{-2xd_0}, \label{eq:eq41}$$ for 2D charged impurities located a distance $d$ (with $d_0 = 2k_F d$) from the graphene plane, and for 3D disorder: $$\tau^{-1} = \left ( \frac{N_i}{\pi \hbar^2 v_0 k_F^2} \right ) \left ( \frac{2\pi e^2}{\kappa} \right )^2 I_{G3}(q_s),$$ with $$I_{G3} = \int_0^1 dx \frac{x\sqrt{1-x^2}}{(x+q_s)^2}, \label{eq:eq43}$$ where $N_i$ is the 3D density of the random charged impurities, distributed uniformly in the graphene substrate. For our numerical calculations of transport properties (to be presented in the next section), which focus entirely on 2D systems with 2D impurities (both near and remote), we include the realistic width of the quantum well through a subband form-factor modifying the Coulomb matrix element, which accounts for the finite thickness of the quantum well in the $z$-direction.
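For the 3D case the integral $I_{33}$ can even be done in closed form (substitute $t = 1-x$), which makes the screening limits transparent. The sketch below (with an illustrative prefactor $c$ in $q_s = c\,n^{-1/6}$, and $\mu \propto \tau \propto k_F^3/I_{33}$, $k_F \propto n^{1/3}$) recovers the limiting exponents $\alpha = 1/3$ (strong screening) and $\alpha \approx 1$ up to the slow logarithm (weak screening):

```python
import math

def I33(qs):
    """Closed form of I_33 = int_0^1 (1-x) dx / (1-x+2 qs^2)^2.
    With t = 1-x and a = 2 qs^2: I_33 = ln((1+a)/a) + a/(1+a) - 1."""
    a = 2.0 * qs * qs
    return math.log((1.0 + a) / a) + a / (1.0 + a) - 1.0

def alpha_3d(n, c=1.0, eps=1e-4):
    """alpha = d ln mu / d ln n for 3D carriers: mu ~ tau ~ k_F^3 / I_33,
    k_F ~ n^{1/3}, qs = c * n^{-1/6}; prefactor c is illustrative only."""
    ln_mu = lambda m: math.log(m) - math.log(I33(c * m ** (-1.0 / 6.0)))
    return (ln_mu(n * (1 + eps)) - ln_mu(n)) / math.log(1 + eps)

print(alpha_3d(1e-12))   # strong screening (qs >> 1): alpha -> 1/3
print(alpha_3d(1e12))    # weak screening (qs << 1): alpha -> 1 up to the slow log
```

Setting $q_s \to 0$ makes $I_{33}$ diverge logarithmically, in accordance with the unscreened 3D discussion above.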
This is a nonessential complication (making our numerical results comparable with the experimental low-temperature transport data in GaAs quantum wells) which does not affect our theoretical conclusions about the carrier density scaling of the 2D transport properties, since the quasi-2D quantum well form factor is independent of the carrier density in the leading order. Below we obtain the asymptotic density exponents $\alpha$ and $\beta$ (based on the results given above) for screened Coulomb disorder in the strong ($q_s \gg 1$) and weak ($q_s \ll 1$) screening situations, considering both near and remote 2D impurities as well as 3D impurities.

Table I: Asymptotic mobility exponent $\alpha$ (with $\mu \sim n^{\alpha}$) for the various disorder models.

| Disorder model | Screening regime | 2D | 3D | Graphene |
|---|---|---|---|---|
| near 2D impurities ($k_F d \ll 1$) | strong screening ($q_s \gg 1$) | 0 | N/A | 0 |
| | weak screening ($q_s \ll 1$) | 1 | N/A | 0 |
| | unscreened ($q_s=0$) | 1 | N/A | 0 |
| remote 2D impurities ($k_F d \gg 1$) | strong screening ($q_s \gg 1$) | 3/2 | N/A | 1/2 |
| | weak screening ($q_s \ll 1$) | 3/2 | N/A | 1/2 |
| | unscreened ($q_s=0$) | 3/2 | N/A | 1/2 |
| 3D impurities | strong screening | 1/2 | 1/3 | 1/2 |
| | weak screening | 3/2 | 1 | 1/2 |
| | unscreened | log-divergent | log-divergent | log-divergent |
| zero-range disorder with $\delta$-function impurities | concept of screening inapplicable here | 0 | $-1/3$ | $-1$ |

Strong-screening ($q_s \gg 1$) and weak-screening ($q_s \ll 1$) limits
-------------------------------------------------------

It is straightforward to carry out the asymptotic expansions of the various integrals in Eqs. (\[eq:eq34\])–(\[eq:eq43\]) to obtain the strong-screening ($q_s \gg1$) and the weak-screening ($q_s \ll 1$) limiting behaviors of the relaxation rate $\tau^{-1}$ for the different systems under consideration. Remembering that $\tau^{-1} \sim n^{-\alpha}$ and $\beta = \alpha + 1$ ($\alpha + 1/2$ for graphene) we get the following results by taking the $q_s \gg 1$ and $q_s \ll 1$ limits of Eqs. (\[eq:eq34\])–(\[eq:eq43\]) in the various situations.\
(i) 2D carriers with 2D impurities:\
For $2k_Fd \ll 1$ (i.e.
near impurities)\ $\alpha = 0$ ($\beta = 1$) for strong screening ($q_{TF} \gg 2k_F$)\ $\alpha = 1$ ($\beta =2$) for weak screening ($q_{TF} \ll 2k_F$).\ For $2k_F d \gg 1$ (i.e. remote impurities)\ $\alpha = 3/2$ ($\beta = 5/2$) for both weak ($q_{TF} \ll 2k_F$) and strong ($q_{TF} \gg 2k_F$) screening.\ (ii) 2D carriers with 3D impurities:\ $\alpha = 1/2$ ($\beta = 3/2$) for strong screening ($q_{TF} \gg 2k_F$)\ $\alpha = 3/2$ ($\beta = 5/2$) for weak screening ($q_{TF} \ll 2k_F$).\ (iii) 3D carriers with 3D impurities:\ $\alpha=1/3$ ($\beta = 4/3$) for strong screening ($q_{TF} \gg 2k_F$)\ $\alpha=1$ ($\beta=2$) for weak screening ($q_{TF} \ll 2k_F$).\ (iv) Graphene with 2D impurities (remembering $\alpha = \alpha_{\tau}-1/2 = \alpha_{\mu}$):\ For $2k_F d \ll 1$ (i.e. near impurities) $\alpha_{\tau} = 1/2$; $\alpha = 0$ ($\beta = 1$) for both strong and weak screening.\ For $2k_Fd \gg 1$ (i.e., remote impurities) $\alpha_{\tau}=1$; $\alpha = 1/2$ ($\beta = 3/2$) for both strong and weak screening.\ We note that for graphene $\alpha = \alpha_{\mu} = \alpha_{\tau} - 1/2$ (and $\beta = \alpha + 1$) by virtue of its linear dispersion. Also, for graphene strongly screened, weakly screened and unscreened Coulomb disorder manifest the same density exponent in the conductivity and the mobility since $q_{TF} \propto k_F$, and thus $q_s$ is density independent. 
(Of course, the actual numerical values of the conductivity and the mobility are very different in the three approximations, which share only the same power-law dependence on the carrier density, determined by whether the random 2D charged impurities are near or far.)\
(v) Graphene with screened 3D impurities: $\alpha_{\tau}=1$; $\alpha = 1/2$ ($\beta = 3/2$) for both strong and weak screening.\
We note that graphene with unscreened 3D Coulomb disorder ($q_s=0$) has the same logarithmic divergence in the relaxation rate $\tau^{-1}$ (and hence in the resistivity) for all carrier densities as in the corresponding 2D and 3D parabolic electron systems for unscreened 3D disorder. The unscreened 3D Coulomb disorder is thus unphysical, always necessitating an infra-red regularization, as was already realized in the 1950s in the context of 3D doped semiconductor transport [@eleven; @twelve]. In Table I we summarize our asymptotic analytic findings for the density scaling of the conductivity and mobility for 2D and 3D parabolic systems and for 2D graphene in the various limiting situations involving Coulomb disorder arising from random charged impurities in the environment. Table I lists the asymptotic density scaling exponent ($\alpha$) for the carrier mobility ($\mu \sim n^{\alpha}$); the corresponding conductivity exponent ($\beta$), with $\sigma \sim n^{\beta}$, is given by $\beta = \alpha + 1$.

Results
=======

In this section we present our numerical results for the $T=0$ density dependent transport properties of 2D systems within the Boltzmann transport theory in order to obtain the full density dependence of the exponent $\alpha(n)$ for the mobility and, equivalently, the exponent $\beta(n)$ for the conductivity, obtaining in the process the density regimes where the analytical asymptotic exponents obtained in the last section apply.
The reason we focus on the 2D carrier system is that it is the most convenient system for the experimental investigation of the density dependence of transport properties. In 3D semiconductor systems, the carrier density cannot be continuously tuned as it can be in 2D systems. We first note that the strong (weak) screening condition implies low (high) values of carrier density in the system. Using Eq. (\[eq:eq11\]) and Eq. (\[eq:eq28\]), we find that $q_s = q_{TF}/2k_F \gg 1$ implies $2m e^2/ \kappa \hbar^2 \gg 2 (2 \pi n)^{1/2}$ in 2D, $(4 me^2/\pi \kappa \hbar^2)^{1/2} (3 \pi^2 n)^{1/6} \gg 2 (3\pi^2 n)^{1/3}$ in 3D, and $(4e^2 k_F/\kappa \hbar v_0) \gg 2 k_F$ in graphene, i.e., $$\begin{aligned} \frac{1}{8\pi}\left ( \frac{2 m e^2}{\kappa \hbar^2} \right )^2 & \gg & n \;\;\; {\rm 2D}, \\ \frac{1}{2\pi^2}\left ( \frac{4me^2}{\pi \kappa \hbar^2} \right )^3 & \gg & n \;\;\; {\rm 3D}, \\ \left (\frac{2 e^2}{\kappa \hbar} \right ) & \gg & v_0 \;\;\; {\rm graphene}.\end{aligned}$$ \[eq:eq44\] Eq. (\[eq:eq44\]) defines the low-density regime where the strong screening condition is satisfied, except for graphene, which has $q_s$ independent of carrier density since $q_{TF} \propto k_F$. From Eq.
(\[eq:eq44\]) we conclude that strong (weak) screening situation (within RPA) for $T=0$ transport properties would be satisfied under the following conditions for the different systems under consideration: $$\begin{aligned} n & \ll & \; (\gg) \left ( \frac{m}{m_e \kappa} \right )^2 \times 1.14 \times 10^{16} cm^{-2} \;\; {\rm 2D}, \\ n & \ll & \; (\gg) \left ( \frac{m}{m_e \kappa} \right )^3 \times 7.4 \times 10^{21} cm^{-2} \;\; {\rm 3D}, \\ \kappa & \ll & \; (\gg) \;\; 4.4 \;\;\;\;\; {\rm graphene}.\end{aligned}$$ \[eq:eq47\] Using the band effective mass for GaAs electrons ($m=0.07 m_e$) and holes ($m=0.4m_e$), we get (using $\kappa=13$ for GaAs-AlGaAs quantum wells) $$\begin{aligned} n &\ll& \; (\gg) \; 3.3 \times 10^{11} cm^{-2} \;\; {\rm for \; 2D \; n-GaAs}, \\ n &\ll& \; (\gg) \; 9.5 \times 10^{13} cm^{-2} \;\; {\rm for \; 2D \; p-GaAs}, \end{aligned}$$ \[eq:eq471\] and $$\begin{aligned} n &\ll& \; (\gg) \; 1.5 \times 10^{15} cm^{-3} \;\; {\rm for \; 3D \; n-GaAs}, \\ n &\ll& \; (\gg) \; 2.8 \times 10^{17} cm^{-3} \;\; {\rm for \; 3D \; p-GaAs}.\end{aligned}$$ \[eq:eq481\] We note (again) that in graphene (as Table I indicates) the exponents $\alpha$, $\beta$ do not depend on weak/strong screening or on the carrier density. The density range for GaAs-based 2D systems, which we would consider numerically in this section, is the experimentally relevant $10^9-10^{12}$ cm$^{-2}$ density range in high-mobility GaAs-Al$_x$Ga$_{1-x}$As 2D quantum well structures, and thus for 2D electron systems (2D n-GaAs) the crossover from the low-density strong-screening to high-density weak-screening behavior may be experimentally relevant. On the other hand, for 2D p-GaAs hole quantum wells, the crossover density ($\sim 10^{14}$ cm$^{-2}$) is too high to be relevant experimentally, and thus the laboratory 2D hole systems are likely to be always in the strongly screened situation. 
![(color online) Calculated mobility, $\mu n_i$, as a function of density $n$ of n-GaAs quantum well with a well width $a=200$ Å for various $d=0$, 200, 400, 600, 800 Å (from bottom to top). Here $n_i$ is the 2D random charged impurity density located a distance $d$ away from the quantum well. The mobility is calculated at $T=0$ K. \[fig:one\] ](ds_fig1.eps){width="1.0\columnwidth"} ![(color online) Calculated mobility exponent $\alpha$ in $\mu \propto n^{\alpha}$ (i.e., $\alpha = d \ln \mu /d\ln n$) for the corresponding $\mu(n)$ in Fig. \[fig:one\]. \[fig:two\] ](ds_fig2.eps){width=".90\columnwidth"} The 2D transport exponents $\alpha$ and $\beta$ depend on an additional dimensionless parameter, $d_0=2k_F d$ (in addition to $q_s = q_{TF}/2k_F$), which depends both on the carrier density through $k_F \sim \sqrt{n}$ and on the separation ($d$) of the random charged impurities from the 2D system. This dependence on $k_Fd$, the dimensionless separation of the impurities from the carriers in the $z$ direction (which has no analog in the corresponding 3D disordered case), is experimentally always relevant in high-mobility 2D semiconductor structures for three reasons: (1) in high-mobility modulation-doped 2D quantum well structures, scattering by remote dopants (which are introduced intentionally at a known distance $d$ from the quantum well) is always present; (2) in principle, it is always possible to place ionized impurities at a set distance $d$ from the 2D quantum well using the computer-controlled MBE growth technique which is used to produce high-quality semiconductor quantum wells to begin with; (3) in realistic experimental samples with a finite quasi-2D thickness, $d$ is always finite. In general, the experimental 2D mobility/conductivity could be limited by various types of Coulomb disorder [@seventeen]: near or far 2D Coulomb disorder ($2k_F d \ll 1$ or $\gg 1$) as well as 3D Coulomb disorder in the background.
According to Table I, the asymptotic density dependence in each case should be different. In Figs. \[fig:one\] and \[fig:two\] we show our directly numerically calculated n-2D GaAs mobility for quantum well electrons assuming 2D random charged impurity scattering from quenched point scattering centers located a distance $d$ away from the quantum well (including the $d=0$ case). The numerically calculated mobility exponent $\alpha(n) = d \ln \mu/d\ln n$ is shown in Fig. \[fig:two\] for the corresponding $\mu(n)$ shown in Fig. \[fig:one\] – we note that $n \propto k_F^2$, and therefore the parameter $k_F d \sim \sqrt{n}$ for a fixed value of $d$. The results shown in Figs. \[fig:one\] and \[fig:two\] correspond to the zero-temperature case, but are indistinguishable from the corresponding low-temperature results for $T=300mK$ (which we have verified explicitly). We note that the numerical results presented in Figs. \[fig:one\] and \[fig:two\] are realistic (as are all other numerical results shown in this paper) in the sense that they are obtained from the full numerical integration of the Boltzmann theory expression for the relaxation rate \[as given in Eq. (\[eq:eq7\]) in section III\] with the additional sophistication of including the finite thickness of the quantum well through the finite well-thickness form-factors $f_i(q)$ and $f(q)$ which modify the Coulomb disorder matrix element (i.e. $|V|^2 \rightarrow |V|^2 f_i(q)$) and $q_{TF} \rightarrow q_{TF} f(q)$, respectively, and they are given by $$\begin{aligned} f_i(q) & = & \frac{4}{qa} \frac{2\pi^2(1-e^{-qa/2}) + (qa)^2}{4\pi^2 + (qa)^2}, \nonumber \\ f(q) & = & \frac{3(qa)+8\pi^2/(qa)}{(qa)^2+4\pi^2} - \frac{32\pi^4[1-\exp(-qa)]}{(qa)^2[(qa)^2+4\pi^2]^2}.\end{aligned}$$ where $a$ refers to the quantum well width taken to be 200 Å for the results in Figs. \[fig:one\] and \[fig:two\]. 
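The quoted form factors can be sanity-checked directly. The sketch below transcribes the two expressions as given above and verifies their limiting behavior: both reduce to unity in the strict-2D limit $qa \to 0$ (no finite-width correction) and decay for $qa \gg 1$. This is only a check of the quoted formulas, not of the full transport calculation:

```python
import math

def f_i(qa):
    """Impurity form factor f_i(q) exactly as quoted in the text (u = q a)."""
    u = qa
    return (4.0 / u) * (2 * math.pi**2 * (1 - math.exp(-u / 2)) + u**2) \
           / (4 * math.pi**2 + u**2)

def f(qa):
    """Screening form factor f(q) exactly as quoted in the text (u = q a)."""
    u = qa
    return ((3 * u + 8 * math.pi**2 / u) / (u**2 + 4 * math.pi**2)
            - 32 * math.pi**4 * (1 - math.exp(-u))
              / (u**2 * (u**2 + 4 * math.pi**2)**2))

# qa -> 0: both form factors tend to 1 (strict 2D limit); for qa >> 1 they
# decay, weakening both the impurity coupling and the carrier screening.
for qa in (1e-3, 1.0, 5.0, 20.0):
    print(qa, f_i(qa), f(qa))
```

Because neither form factor depends on the carrier density, this finite-width correction cannot change the asymptotic exponents, as stated above.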
We note that the form-factor $f_i(q)$ simply reduces the Coulomb impurity potential from its $q^{-1}$ behavior by a $q$-dependent (but density-independent) function determined by the thickness $a$ of the GaAs quantum well. The form-factor $f(q)$ reduces the 2D screening through the modification of the electron-electron interaction due to the finite thickness of the quantum well. Since the quantum well form factors do not depend on the carrier density in the leading order, this quasi-2D form-factor effect does not in any way modify the asymptotic exponents $\alpha$ and $\beta$ given in Table I (and theoretically defined in the last section), but the form factors do modify the actual calculated values of the mobility/conductivity/resistivity, making them more realistic, and therefore any comparison with experimental transport data necessitates the inclusion of the quasi-2D form factor effect. All our numerical results for 2D electrons and holes in GaAs quantum wells presented in this paper include the realistic quantum well form factors in the theory taking into account the finite well width effect both in the electron-impurity interaction and in the electron screening \[with $q_{TF}$ being modified to $q_{TF} f(q)$\]. ![(color online) Calculated mobility of n-GaAs quantum well ($a=200$ Å) with unintentional charged impurities. Here the 2D impurities ($n_{i2D}=10^9$ cm$^{-2}$) are at the GaAs-AlGaAs interface whereas there are two different types of 3D Coulomb disorder: inside the GaAs well \[$n_{i3D}=10^{14}$ cm$^{-3}$ (GaAs)\] and inside the barrier AlGaAs \[$n_{i3D}=10^{15}$ cm$^{-3}$ (AlGaAs)\]. \[fig:three\] ](ds_fig3.eps){width="1.0\columnwidth"} ![(color online) Calculated mobility exponent $\alpha$ in $\mu \propto n^{\alpha}$ (i.e., $\alpha = d \ln \mu /d\ln n$) for the corresponding $\mu(n)$ in Fig. \[fig:three\]. \[fig:four\] ](ds_fig4.eps){width=".9\columnwidth"} In Fig.
\[fig:one\] we show (we actually show the calculated $\mu n_i$ since $\mu \sim 1/n_i$) our calculated mobility $\mu$ for 2D n-GaAs as a function of carrier density $n$ for different impurity locations ($d$), whereas in Fig. \[fig:two\] we show the corresponding mobility exponent $\alpha(n)=d\ln \mu/d\ln n$. The asymptotic high-density result, $\alpha \rightarrow 3/2$ for $n \rightarrow \infty$ (i.e., $q_{TF} \ll 2k_F$) and $k_F d \gg 1$, as given in Table I, is clearly obeyed in all cases, with $\alpha \approx 3/2$ for larger $d$ and $n$ values satisfying $2k_F d \gg 1$. For 2D GaAs systems: $$2 k_F d \approx 5 \tilde{d} \sqrt{\tilde{n}},$$ where $\tilde{d}$ is measured in 1000 Å units and $\tilde{n}$ in units of $10^{10}$ cm$^{-2}$. Thus even for $d=100$ Å, $2k_F d \agt 1$ already for $n \agt 2 \times 10^{10}$ cm$^{-2}$, and thus the $2k_F d \gg 1$ condition is quickly reached at higher carrier density in all 2D GaAs systems for any type of relevant background 2D Coulomb disorder (since typically, $n \approx 10^9$ cm$^{-2}$ is a lower limit for the achievable carrier density in 2D semiconductor systems). One may wonder if the $d=0$ situation corresponds to the $2k_F d \equiv 0$ situation considered in Table I. This is certainly true for the strict 2D limit (i.e. $a=0$ limit of the quantum well). But for any finite value of $a$, the $d=0$ impurity location only refers to the distance of the 2D charged impurities from the GaAs-AlGaAs interface, and thus the average impurity separation from the electrons is always finite except in the $a \rightarrow 0$ limit. Thus even the $d=0$ case in our theoretical calculation has an effective finite value of $d_0 = 2k_F d$ because of the finite layer thickness effect (e.g., $d \approx a/2$ effectively in the $d \rightarrow 0$ limit). The most important conclusions from Figs.
\[fig:one\] and \[fig:two\] are: (i) The $2k_Fd \gg 1$ condition dominates the mobility exponent except for rather low mobility samples with very small values of $d$; (ii) even for the nominal $d=0$ case in Fig. 1 (where in the strict 2D case, $\alpha \alt 1$ always) $\alpha$ eventually approaches the asymptotic $\alpha \rightarrow 3/2$ value for $n \gg 10^{11}$ cm$^{-2}$ because of the finite layer thickness effect (i.e., finite $a$); (iii) for scattering purely by very remote dopants ($d \agt 500$ Å), the mobility exponent $\alpha > 1$ always because the $2k_F d \gg 1$ condition is always satisfied; (iv) the low density limit ($2k_F d \ll 1$, $q_s \gg 1$), where $\alpha \rightarrow 0$ according to Table I, would be achieved in 2D n-GaAs systems only for $n \ll 10^9$ cm$^{-2}$, and in all realistic situations, $\alpha > 0.5$ always as long as transport is dominated by 2D Coulomb disorder. In Figs. \[fig:three\] and \[fig:four\] we show (again for $a=200$ Å) our calculated mobility in the presence of both 2D and 3D (unintentional background) disorder neglecting remote scattering effects (assuming the intentional remote dopants to be too far, $d > 1000$ Å, for them to have any quantitative effects, as would apply to gated undoped HIGFET structures or to extreme high-mobility modulation doped structures where the intentional dopants are placed very far away). In Figs. \[fig:three\] and \[fig:four\], the 2D impurities ($n_{i2D}$) are put right at the GaAs-AlGaAs interface whereas there are two different types of 3D Coulomb disorder: inside the GaAs well \[$n_{i3D}$ (GaAs)\] and inside the barrier AlGaAs \[$n_{i3D}$ (AlGaAs)\]. Again, the corresponding critical exponents increase with carrier density, approaching $\alpha =3/2$ at high density, consistent with the analytic theory.
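The rule of thumb $2k_F d \approx 5\tilde{d}\sqrt{\tilde{n}}$ quoted above follows directly from $k_F = \sqrt{2\pi n}$; a two-line check using the stated units (this is just unit bookkeeping, not new physics):

```python
import math

def two_kF_d(n_cm2, d_angstrom):
    """2 k_F d for a 2D parabolic system, k_F = sqrt(2 pi n), spin degeneracy 2."""
    kF = math.sqrt(2 * math.pi * n_cm2)   # cm^-1
    return 2.0 * kF * d_angstrom * 1e-8   # 1 Angstrom = 1e-8 cm

# n~ in units of 1e10 cm^-2 and d~ in units of 1000 A: 2 k_F d ~ 5 d~ sqrt(n~)
print(two_kF_d(1e10, 1000.0))   # d~ = n~ = 1: close to 5
print(two_kF_d(4e10, 500.0))    # 5 * 0.5 * sqrt(4): also close to 5
```

The exact prefactor is $2\sqrt{2\pi} \approx 5.01$, which is why "5" works so well as a mnemonic.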
When the dominant background disorder is that arising from $n_{i3D}$ (GaAs), i.e., unintentional background impurities in the well itself, typically $\alpha \approx 0.5-0.8$, which is in between the weak and strong screening situations. In Figs. \[fig:five\] and \[fig:six\], we show our 2D p-GaAs results for hole-doped high-mobility GaAs quantum wells. Everything in Figs. \[fig:five\] and \[fig:six\] is identical except for using two different hole effective mass values: $m_h = 0.3 m_0$ (Fig. \[fig:five\]) and $0.4 m_0$ (Fig. \[fig:six\]) since the hole effective mass in GaAs quantum wells is somewhat uncertain [@eighteen]. The precise value of $m_h$ affects $q_{TF} \propto m$, and thus determines the value of $q_s = q_{TF}/2k_F$, leading to some difference between the results in Figs. \[fig:five\] and \[fig:six\]. For the holes, we show both the individual mobility and exponent for each scattering process as well as the total mobility and total exponent obtained by adding the two resistivities (or equivalently, the two scattering rates arising from the two scattering mechanisms). We deliberately refrain from showing the hole results (Figs. \[fig:five\] and \[fig:six\]) in the same format as the electron results (Figs. \[fig:one\] and \[fig:two\]) since they would all look identical except for some changes in the numbers. ![(color online) (a) Hole mobility as a function of hole density ($n$) of p-GaAs quantum well with a width $a=200$ Å and a hole effective mass $m=0.3 m_0$. Here the background unintentional charged impurities with a density $n_{i3D}=3 \times 10^{13} cm^{-3}$ are located inside the quantum well and 2D remote charged impurities with a density $n_i=8 \times 10^{11} cm^{-2}$ are located at $d= 150$ Å from the interface. In this figure the black solid curve indicates the total mobility and blue dot-dashed (red dashed) curve indicates the mobility limited by only background scattering (remote charged scattering).
(b) The exponents ($\alpha$) for the corresponding mobilities in (a). (c) Hole mobility with the same parameters as in Fig. \[fig:five\](a) except the remote charged impurity density $n_i=4 \times 10^{11} cm^{-2}$. (d) The exponents ($\alpha$) for the corresponding mobilities in (c). \[fig:five\] ](ds_fig5.eps){width="1.0\columnwidth"} ![(color online) (a) Hole mobility as a function of hole density ($n$) of p-GaAs quantum well with a width $a=200$ Å and a hole effective mass $m=0.4 m_0$. Here the background unintentional charged impurities with a density $n_{i3D}=3 \times 10^{13} cm^{-3}$ are located inside the quantum well and 2D remote charged impurities with a density $n_i=8 \times 10^{11} cm^{-2}$ are located at $d= 800$ Å from the interface. In this figure the black solid curve indicates the total mobility and blue dot-dashed (red dashed) curve indicates the mobility limited by only background scattering (remote charged scattering). (b) The exponents ($\alpha$) for the corresponding mobilities in (a). (c) Hole mobility with the same parameters as in Fig. \[fig:five\](a) except the remote charged impurity density $n_i=4 \times 10^{11} cm^{-2}$. (d) The exponents ($\alpha$) for the corresponding mobilities in (c). \[fig:six\] ](ds_fig6.eps){width="1.0\columnwidth"} In Figs. \[fig:five\]/\[fig:six\] (a) and (c) we show the calculated 2D hole mobility $\mu(n)$ limited by 3D Coulomb scattering ($n_{i3D}$) and 2D remote Coulomb scattering ($n_i$) for $a=200$ Å, with the only difference being $n_i = 8 (4) \times 10^{11}$ cm$^{-2}$ respectively in Fig. \[fig:five\]/\[fig:six\] (a) and (c), showing explicitly the quantitative importance of 2D remote Coulomb scattering vis-à-vis 3D Coulomb scattering. The corresponding critical exponent $\alpha(n)$, shown in Fig.
\[fig:five\]/\[fig:six\] (b) and (d) respectively, is completely consistent with Table I with $\alpha$ for remote scattering quickly reaching the asymptotic unscreened value of 3/2 as $2k_F d \gg 1$ and $\alpha$ for 3D Coulomb disorder increasing slowly from the very strongly screened low density situation ($\alpha_{3D} \alt 0.1$ for $n < 2\times 10^{9}$ cm$^{-2}$) to $\alpha \sim1$ for very high hole density ($n \sim 10^{12}$ cm$^{-2}$) where screening weakens. The total exponent in Figs. \[fig:five\] and \[fig:six\] shows complicated non-monotonicity as a function of carrier density since the low-density (high-density) situation is more strongly affected by remote 2D (background 3D) Coulomb scattering and the density dependent crossover between the two scattering regimes is completely nonuniversal depending precisely on the relative amounts of 2D and 3D Coulomb disorder (i.e., on $n_{i3D}$, $n_i$, and $d$). The only concrete statement one can make is that $\alpha(n)$ increases at low carrier density generally to a value larger than unity whereas at intermediate to high density it tends to stay below unity. Again, all of these results are completely consistent with the asymptotic exponents given in Table I (as long as various scattering mechanisms with different exponents are combined together). So far we have discussed our numerical results of Figs. \[fig:one\]—\[fig:six\] in terms of their consistency with the theoretically analytically obtained critical exponents given in Table I with the mobility exponent ($\mu \sim n^{\alpha(n)}$) $\alpha$ showing the expected behavior in the asymptotic density regimes of $2k_F d \gg 1 \; (\ll 1)$ and $q_s \gg 1 \; (\ll 1)$ as the case may be. Remote scattering by 2D ionized dopants dominates transport at low density ($2k_F d < 1$) crossing over to background impurity scattering dominated regime at higher density, leading to $\alpha > 1 \; (<1) $ at low (high) density. 
Now we discuss perhaps the most interesting aspect of our numerical results in Figs. \[fig:one\]–\[fig:six\], which appears to be at odds with our asymptotic theoretical analysis of section III (and Table I). This is the intriguing result that $\alpha(n)$ due to remote 2D impurity scattering can actually exceed the asymptotic unscreened 2D Coulomb scattering value of $\alpha = 3/2$. It is clear in Figs. \[fig:two\], \[fig:four\], \[fig:five\](b), \[fig:five\](d), \[fig:six\](b), \[fig:six\](d) that there is a shallow maximum in $\alpha(n)$ at some intermediate density where the numerically calculated mobility exponent $\alpha >3/2 =1.5$ (and therefore the corresponding conductivity exponent $\beta > 2.5$), which is remarkable since unscreened Coulomb scattering (applicable in the high-density regime defined by $2k_F d \gg 1$ and/or $q_s \ll 1$) by remote impurities produces $\alpha =3/2$. This non-monotonic behavior of the exponent $\alpha(n)$ in the intermediate density regime (neither the high- nor the low-density asymptotic regime considered in Table I), with an exponent value larger than the corresponding unscreened Coulomb exponent 3/2, is unexpected and highly intriguing. We provide a theoretical explanation for this intriguing nonmonotonic behavior of remote Coulomb scattering at intermediate density, with a mobility exponent exceeding the unscreened value of 1.5, in the next section. Before concluding this section on numerical results, we provide, for the sake of completeness, our numerically calculated mobility exponent for 2D graphene transport under the remote Coulomb scattering situation. In Fig. \[fig:seven\] we show our calculated graphene mobility exponent $\alpha(n)$ obtained from the numerically calculated graphene mobility, $\alpha(n) = d\ln \mu/d\ln n$, for various locations ($d$) of the 2D impurity layer in relation to the 2D graphene layer. We note that graphene is a strictly 2D system and hence there is no quasi-2D form factor correction.
It is clear that $\alpha(n)$ goes asymptotically to the unscreened Coulomb value of $3/2$ as $n$ (and therefore $k_F d$) increases, except for the trivial $d=0$ case where $\alpha(n)=0$ (i.e., $\beta=1$) for all densities, as is already well-known in the literature [@nine; @sixteen] and is well-verified experimentally [@nineteen]. The calculated graphene mobility exponent $\alpha(n)$ shown in Fig. \[fig:seven\] agrees completely with the analytical exponent values given in Table I. (We mention again that in graphene the mobility exponent $\alpha_{\mu}=\alpha$ and the relaxation rate exponent $\alpha_{\tau}$ differ, with $\alpha=\alpha_{\mu} = \alpha_{\tau}-1/2$ and $\beta = \alpha + 1 = \alpha_{\mu} + 1$ by definition, whereas in 2D and 3D parabolic systems, where $\mu \propto \tau$, $\alpha_{\mu} = \alpha_{\tau}= \alpha = \beta -1$.) We note that in graphene, for impurities away from the 2D graphene plane (i.e., $d \neq 0$), the asymptotic conductivity exponent $\beta$ for high carrier densities ($k_F d \gg 1$) is $3/2$ and thus $\sigma \propto n^{3/2}$ in graphene layers dominated by far away Coulomb impurities. An experimental verification of such a $\sigma \sim n^{3/2}$ behavior in graphene due to Coulomb scattering by remote impurities will be a direct verification of our theory. ![(color online) (a) shows the calculated graphene mobility exponent $\alpha(n)$ obtained from the numerically calculated graphene mobility, $\alpha(n) = d\ln \mu/d\ln n$, for various locations ($d$) of the 2D impurity layer in relation to the 2D graphene layer. In (b) the exponent for $d=30$ nm shows a shallow local maximum at $n \sim 10^{12}$ cm$^{-2}$.
\[fig:seven\] ](ds_fig7.eps){width=".90\columnwidth"} Nonmonotonicity of transport scaling ==================================== We now theoretically discuss (and explain analytically) our surprising numerical finding in section IV, not anticipated at all in the asymptotic theory of section III or in any of the substantial earlier literature on 2D transport, that the density scaling exponent $\alpha$ ($\beta$) of 2D mobility (conductivity) has an intriguing nonmonotonicity as a function of carrier density in the intermediate density regime (in-between the asymptotic low and high density regimes discussed in section III and tabulated in Table I). We first note that the nonmonotonicity in $\alpha(n)$ arises from the subtle fact that although the exponent $\alpha$ (or $\beta$) depends on only one explicit external variable (namely, the carrier density $n$), it depends theoretically on two independent dimensionless variables $q_s = q_{TF}/2k_F$ and $d_0 = 2k_F d$, since in reality there are two independent external variables in the problem: the carrier density ($n$) and the impurity location ($d$). The dependence on two independent variables is the key feature allowing for the presence of nonmonotonicity in $\alpha(n)$ as well as for its maximum possible value being larger than the unscreened exponent value $\alpha \rightarrow 3/2$. Indeed, in the strict 2D limit with $d=0$ (see, e.g., the graphene result in Fig. \[fig:seven\]), there is no maximum allowed in $\alpha(n)$. This is true for both graphene and 2D parabolic systems – in graphene, $\alpha(n)=0$ for all values of $n$ in the $d=0$ limit, whereas in a strictly 2D parabolic system $\alpha(n)$ monotonically increases from $\alpha=0$ in the low-density limit ($q_s \gg 1$) to precisely $\alpha=1$ in the high-density limit ($q_s \ll 1$) for $d=0$, as one would expect theoretically (we have verified this strict 2D $d=0$ limit explicitly numerically).
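The strict-2D $d=0$ statement for the parabolic system is easy to check independently. The following minimal Python sketch (our own simplified model in arbitrary units: $\mu \sim n/I(n)$ with $I(n) = \int_0^1 x^2/[(x+q_s)^2\sqrt{1-x^2}]\,dx$ and $q_s = c/\sqrt{n}$, i.e., the $d=0$ limit of the screened integral introduced later in this section; the constant $c$ is an assumption setting the screening strength) confirms that $\alpha(n)$ runs from 0 to 1 with no maximum:

```python
import numpy as np

def alpha_d0(n, c, npts=4001):
    """alpha(n) = d ln(mu)/d ln(n) for a strictly 2D parabolic system with
    d = 0 and screened Coulomb disorder: mu ~ n / I(n) with
    I(n) = int_0^1 x^2 / ((x + q_s)^2 sqrt(1 - x^2)) dx,  q_s = c/sqrt(n).
    The substitution x = sin(theta) removes the endpoint singularity."""
    theta = np.linspace(0.0, 0.5 * np.pi, npts)
    dth = theta[1] - theta[0]
    q = c / np.sqrt(n)                       # screening parameter q_s ~ 1/sqrt(n)
    f = np.sin(theta) ** 2 / (np.sin(theta) + q[:, None]) ** 2
    I = (0.5 * (f[:, :-1] + f[:, 1:])).sum(axis=1) * dth   # trapezoid rule
    mu = n / I                               # mu ~ tau ~ n / I(n)
    return np.gradient(np.log(mu), np.log(n))

n = np.logspace(-4, 4, 300)   # density in arbitrary units with c = 1
a = alpha_d0(n, c=1.0)
```

Because $0 < d\ln I/d\ln n < 1$ here, $\alpha = 1 - d\ln I/d\ln n$ is analytically confined to $(0,1)$ at $d=0$; an overshoot above the asymptotic value requires the second length scale $d$.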
For $d\neq 0$ (i.e., $2k_F d \neq 0$), however, the behavior of transport properties depends nontrivially on the variable $d_0 = 2k_F d$, and there is no obvious theoretical reason why the low-density ($\alpha =0$ or 1 depending on strong or weak screening) and the high-density ($\alpha = 3/2$ always) asymptotic limits must be the lower and upper bounds on the exponent. In fact, as our numerics show, and we establish theoretically below, $\alpha(n)$ does have a peak (exceeding the unscreened $\alpha=3/2$ value) at an intermediate density around $2k_F d \approx 1$. As shown in our numerical results presented in the last section, $\alpha(n)$ defined by $\mu \sim n^{\alpha(n)}$, can exceed the unscreened exponent value $\alpha = 3/2$ reached in the asymptotic high-density ($2k_F d \gg 1$) limit. To verify whether this finding is a numerical artifact or real, it is sufficient to consider the unscreened remote 2D Coulomb disorder in the strict 2D electron layer limit where the free carriers and the charged impurities, both confined in infinite zero-thickness 2D layers in the $x$-$y$ plane, are separated by a distance $d$ in the $z$-direction. (This is the model explicitly used in section III.) As $n \rightarrow 0$, $\alpha(n) \rightarrow 1$ (unscreened $2k_F d \ll 1$ limit) whereas as $n \rightarrow \infty$, $\alpha(n) \rightarrow 3/2$ (unscreened $2k_F d \gg 1$ limit). The zero-density limit would be modified to $\alpha =0$ if screening is included in the theory, but we are interested here in the intermediate-density behavior, not the zero-density regime. The scattering rate $\tau^{-1}$ in this case is given by \[see Eqs. (\[eq:eq34\]) and (\[eq:eq35\])\]: $$\tau^{-1} = \frac{C}{n} \int_0^1 \frac{e^{-b x \sqrt{n}}}{\sqrt{1-x^2}} dx, \label{eq:eq52}$$ where we have shown the explicit density ($n$)-dependence everywhere ($C$ and $b$ are unimportant carrier density independent constants for our purpose) and have used the fact that $k_F \sim \sqrt{n}$. We rewrite Eq. 
(\[eq:eq52\]) as, $$\tau^{-1} = \frac{C}{n}I(n), \label{eq:eq53}$$ where $$I(n) = \int_0^1 \frac{e^{-b x \sqrt{n}}}{\sqrt{1-x^2}} dx. \label{eq:eq54}$$ It is obvious that $\tau^{-1}$ is a monotonic function of $n$, decreasing continuously with increasing density with no extrema whatsoever, since both $1/n$ and $\exp(-b x \sqrt{n})$ decrease continuously with increasing $n$. This can be easily checked explicitly by showing that the equation $d\tau^{-1}(n)/d n = 0$ has no solution. The monotonic decrease of $\tau^{-1}(n)$ with $n$ simply implies that $\mu(n) \propto \tau(n)$ increases monotonically with increasing density, as is obvious from our numerical results in section IV: For Coulomb disorder, 2D mobility and conductivity always increase monotonically with increasing density. To figure out whether $\alpha(n) = d \ln \mu/ d \ln n$ has non-monotonicity (or extrema) as a function of $n$, we must use $\mu(n) \propto \tau(n)$, and write $$\alpha = \frac{d \ln \mu}{d \ln n} = n \frac{d \ln \mu}{d n} = 1- \frac{n}{I} \frac{d I}{dn}, \label{eq:eq55}$$ where $I(n)$ is the integral defined by Eq. (\[eq:eq54\]). It is straightforward, but messy, to show that the condition $d\alpha/dn = 0$ with $\alpha(n)$ defined by Eq. (\[eq:eq55\]) has a solution at an intermediate value of $n$, and thus $\alpha(n)$ has an extremum – it is still messier to show that $d^2 \alpha/dn^2 < 0$ at this extremum, so that $\alpha(n)$ has a maximum at an intermediate density as is found numerically in section IV. For our purpose, however, it is much easier to simply establish that the function $\alpha(n)$ defined by Eq. (\[eq:eq55\]) approaches the high density $n \rightarrow \infty$ limit of $\alpha(n\rightarrow \infty) = 3/2$ from above, thus definitively proving that the mobility exponent $\alpha$ exceeds $3/2$ at some intermediate density (and thus must have a maximum in the $0 < n < \infty$, or equivalently $0 < k_F d < \infty$, regime). As $n \rightarrow 0$, we have from Eq.
(\[eq:eq54\]): $$I(n) = \pi/2 - b \sqrt{n},$$ leading to $$\alpha(n\rightarrow 0) = 1 + b \sqrt{n}/\pi.$$ As $n \rightarrow \infty$, we have: $$I(n) = \frac{1}{b\sqrt{n}} \left [ 1 + \frac{1}{b^2 n} \right ],$$ leading to $$\alpha(n\rightarrow \infty) = 3/2 + 1/(b^2 n). \label{eq:eq59}$$ From Eqs. (\[eq:eq34\]), (\[eq:eq35\]), (\[eq:eq52\]), and (\[eq:eq53\]) we have: $ b = 2 \sqrt{2 \pi} d$, and thus $b \propto d$ is positive definite (except for the $d=0$ case explicitly left out here). We, therefore, immediately conclude that $\alpha(n)$ approaches its asymptotic value of $\alpha = 3/2$ for $n \rightarrow \infty$ from above with $\alpha(n\rightarrow \infty) \approx \frac{3}{2} + \frac{1}{8\pi d^2} \frac{1}{n}$, and the leading correction to the exponent in the $n \rightarrow \infty$ limit is of $O(1/n)$ with a coefficient $1/(8 \pi d^2) \propto d^{-2}$. We also find that the correction to the $n \rightarrow 0$ value of $\alpha(n=0) = 1$ is of $O(\sqrt{n})$ with a positive coefficient $2 \sqrt{2\pi} d/\pi \propto d$. Thus, we now have a complete theoretical understanding of the intriguing (and hitherto unexpected in the literature) numerical finding in Sec. IV that, although the mobility $\mu(n)$ itself is a monotonically increasing function of carrier density, its power-law exponent $\alpha(n)$ shows a maximum (around $2k_F d \sim 1$ in fact), approaching the asymptotic high-density ($n \rightarrow \infty$; $2k_Fd \gg 1$) value of $\alpha = 3/2$ from above and allowing $\alpha(n)$ to exceed 1.5 at some $d$-dependent intermediate density. One feature of our theory presented in Eqs. (\[eq:eq52\]) – (\[eq:eq59\]) is worth mentioning and comparing with the numerical results of Sec. IV. This is our finding in Eq.
(\[eq:eq59\]) that the asymptotic high-density ($n \rightarrow \infty$) exponent $\alpha(n\rightarrow \infty) = 3/2$ is approached from above as $\alpha(n \rightarrow \infty) = 3/2 + 1/(8 \pi d^2 n)$, implying that the maximum possible value of $\alpha$, $\alpha_{\rm max}$, scales approximately as $(d^2 n_{\rm max})^{-1} \propto (k_{Fm} d)^{-2}$, where $n_{\rm max}$ and $k_{Fm}$ are respectively the carrier density and the corresponding Fermi wave vector at the maximum. This implies that the maximum value $\alpha_{\rm max} \approx 1.7$ is approximately independent of the value of $d$ and of the carrier effective mass, with the carrier density $n_{\rm max}$ (where the maximum occurs) scaling roughly as $n_{\rm max} \sim d^{-2}$. This strong prediction is approximately consistent with our numerical results – indeed, $\alpha_{\rm max} \sim 1.7$ is clearly independent of whether the system is a 2D electron or hole system and of the precise value of $d$, a striking theoretical result which is consistent with the full numerical results of Sec. IV. Because of the striking nature of our finding that $\alpha_{\rm max} \sim 1.7$ always (for remote impurity scattering) for 2D electron/hole carrier systems, we carried out additional numerical calculations using the realistic Boltzmann theory (including both quasi-2D finite thickness and screening effects) for 2D n-GaAs wells of thickness $a = 300$ Å (different from the case of $a=200$ Å used in Sec. IV), incorporating both remote impurity scattering with 2D impurity density $n_{d}$ and separation $d$ and near impurity scattering with 2D impurity density $n_i$ at $d=0$ (i.e., interface impurities), with several different values of $n_d$.
The calculated exponents for the individual scattering mechanisms, $\alpha_d$ and $\alpha_i$ (for $n_d$ and $n_i$, respectively), are shown in Fig. \[fig:eight\], where each panel corresponds to a different set of values for $n_d$ at fixed $n_i$. In each case, the individual exponents $\alpha_d/\alpha_i$ as well as the total exponent $\alpha$ are shown as a function of density. The exponent is extracted from a full numerical evaluation of $\mu(n)$ and then using $\alpha(n) = d\ln \mu(n)/d \ln n$, where the total exponent is extracted by adding the two individual resistivities, i.e., $\mu^{-1} = \mu^{-1}_d + \mu^{-1}_i$. The rather remarkable fact to note in Fig. \[fig:eight\] is that each individual exponent $\alpha_d$ and $\alpha_i$ has a maximum value $\sim 1.7$, with the maximum for remote (near) impurities occurring at low (high) carrier densities because the effective $d$-values are much higher (lower) for remote (near) impurities. (We note that $d=0$ for $n_i$ impurities still corresponds to an effective $d$ value of roughly $a/2 \approx 150$ Å, whereas the effective $d$ for the $n_d$ impurities is $d + a/2$ in each case.) Figure \[fig:eight\] is a striking direct numerical verification of our theory. Before discussing the same physics for graphene, which we do next, we mention that including screening in the theory is straightforward (but extremely messy). All we need to do is to modify Eq. (\[eq:eq54\]) to $$I(n) = \int_0^1 \frac{x^2 e^{-b x \sqrt{n}} dx}{(x+ c/\sqrt{n})^2 \sqrt{1-x^2}},$$ with $c=0$ giving the unscreened formula of Eq. (\[eq:eq54\]). Inclusion of screening strongly affects the low-density $n \rightarrow 0$ behavior, changing the $\alpha(n \rightarrow 0)$ exponent from unity ($c=0$) to zero (for $c\neq 0$), but does not affect the intermediate or high density behavior at all (as is obvious from Table I, where the $2k_F d \gg 1$ asymptotic results are independent of the screening constant $q_s = q_{TF}/2k_F \propto 1/\sqrt{n}$).
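The overshoot argument can also be verified numerically in a few lines. The sketch below (a minimal Python illustration in arbitrary units, with all prefactors dropped and $b$ set by hand; it checks the model of Eqs. (\[eq:eq52\])–(\[eq:eq55\]), not the full Boltzmann theory) evaluates $\mu \sim n/I(n)$ with the unscreened $I(n)$ of Eq. (\[eq:eq54\]), using the substitution $x = \sin\theta$ to remove the endpoint singularity:

```python
import numpy as np

def alpha_of_n(n, b, npts=4001):
    """Mobility exponent alpha(n) = d ln(mu)/d ln(n) for the unscreened
    remote-Coulomb model of Eqs. (52)-(55): mu ~ n / I(n) with
    I(n) = int_0^1 exp(-b*x*sqrt(n)) / sqrt(1 - x^2) dx."""
    theta = np.linspace(0.0, 0.5 * np.pi, npts)
    dth = theta[1] - theta[0]
    s = b * np.sqrt(n)                       # s plays the role of 2 k_F d
    f = np.exp(-np.outer(s, np.sin(theta)))  # integrand after x = sin(theta)
    I = (0.5 * (f[:, :-1] + f[:, 1:])).sum(axis=1) * dth   # trapezoid rule
    mu = n / I                               # mu ~ tau ~ n / I(n)
    return np.gradient(np.log(mu), np.log(n))

n = np.logspace(-3, 4, 400)   # density in units where b = O(1)
a1 = alpha_of_n(n, b=1.0)
a2 = alpha_of_n(n, b=2.0)     # doubling b mimics doubling the distance d
```

Since $\alpha$ depends on $n$ and $b$ only through the combination $s = b\sqrt{n}$, the maximum value of $\alpha$ is exactly $b$-independent in this model, while the density at the maximum scales as $b^{-2}$, mirroring the $n_{\rm max} \sim d^{-2}$ scaling discussed above.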
Since the extremum behavior of interest to us is not a low-density phenomenon, our analysis based on Eq. (\[eq:eq54\]) is appropriate (as is verified by its agreement with the full numerical results). ![(color online) The calculated mobility exponent as a function of density for 2D n-GaAs wells of thickness $a = 300$ Å. The mobility is calculated by incorporating both remote impurity scattering with 2D impurity densities $n_{d}$ and separation $d$ and near impurity scattering with 2D impurity density $n_i$. Here we use $d=500$ Å and $n_i = 3 \times 10^8$ cm$^{-2}$ for all figures, but different values of $n_d$, i.e., (a) $n_d = 2\times 10^{10}$ cm$^{-2}$, (b) $n_d = 1\times 10^{10}$ cm$^{-2}$, (c) $n_d = 0.5\times 10^{10}$ cm$^{-2}$, (d) $n_d = 0.22\times 10^{10}$ cm$^{-2}$. The calculated exponents for the individual scattering mechanisms $\alpha_d$ and $\alpha_i$ (for $n_d$ and $n_i$, respectively) and the total exponent $\alpha$ are shown as a function of density. \[fig:eight\] ](ds_fig8.eps){width=".90\columnwidth"} Now, we consider the corresponding graphene case, which also has a maximum in $\alpha(n)$ at some intermediate carrier density, with $\alpha_{\rm max}$ ($>1.5$) being larger than the corresponding infinite-density unscreened exponent value of 3/2 (see Fig. \[fig:seven\]). For graphene, with the charged impurities located in a 2D layer a distance $d$ from the graphene layer, the scattering rate $\tau^{-1}$ is given by \[see Eq. (\[eq:eq41\])\]: $$\tau^{-1} = \frac{A}{\sqrt{n}} \int_0^1 dx \frac{x^2 \sqrt{1-x^2}}{(x+q_s)^2} e^{- \tilde{b} x \sqrt{n}}, \label{eq:eq62}$$ where $A$, $\tilde{b} = 2 \sqrt{\pi} d$ are density-independent constants and $q_s = 4 e^2/\kappa \hbar v_0$ is also density independent. Direct expansion of Eq.
(\[eq:eq62\]) in the low ($n \rightarrow 0$) and high ($n \rightarrow \infty$) density limits gives: $$\tau^{-1}(n \rightarrow 0) = \frac{A_0}{\sqrt{n}} \left ( 1- \frac{16 d}{3 \pi} \sqrt{\pi n} \right ),$$ and $$\tau^{-1}(n \rightarrow \infty) = \frac{A_{\infty}d}{(k_Fd)^2(q_{TF}d)^2} \left ( 1- \frac{6}{q_{TF} d} \right ),$$ where $A_0$ and $A_{\infty}$ are constants independent of $n$ and $d$. For graphene $\mu(n) \sim \tau/\sqrt{n}$, and therefore we get for $n \rightarrow 0$ $$\mu(n) \sim \left ( 1 + \frac{16d}{3\pi} \sqrt{n} \right ),$$ and for $n \rightarrow \infty$ $$\mu(n) \sim n^{3/2} d^3 \left ( 1 + \frac{3}{2 r_s d \sqrt{\pi n}} \right ), \label{eq:eq66}$$ where $r_s = e^2/(\kappa \hbar v_0)$ is the so-called graphene fine structure constant. Eqs. (\[eq:eq62\]) – (\[eq:eq66\]) imply that the mobility $\mu(n)$ starts at low density ($n \rightarrow 0$) with $\alpha = 0$, but with a leading-order correction going as $O(\sqrt{n})$ with a positive sign. For large $n$ (i.e., $k_F d \gg 1$), $\alpha(n \rightarrow \infty)$ becomes 3/2, but has a positive leading-order correction of $O(1/\sqrt{n})$. This immediately implies that $\alpha(n)$ must have a local maximum at some intermediate density, with $\alpha$ being only marginally larger than the asymptotic ($n \rightarrow \infty$) value of 3/2. This conclusion is consistent with the numerical results presented in Fig. \[fig:seven\]. We note that the maximum in $\alpha(n)$ for graphene is much shallower and weaker than in the 2D parabolic system. Experimental Implications ========================= We now discuss the possible experimental relevance of our theoretical findings in this section. The fact that the mobility $\mu(n)$ or the conductivity $\sigma(n) = n e \mu$ of 2D carrier systems shows a density scaling behavior $\mu \sim n^{\alpha(n)}$ and $\sigma \sim n^{\beta(n)}$ with $\beta = \alpha + 1$ has been known for a long time in the experimental 2D transport literature [@ten].
Although our current work is purely theoretical, focusing entirely on the fundamental questions of principle involving the detailed behavior of the exponents $\alpha(n)$ and $\beta = \alpha + 1$ for various types of disorder affecting transport properties, we believe that it is appropriate for us to comment on experiments, making connection with the existing data in the literature as well as making some concrete predictions for future experiments (particularly in the context of our unexpected finding of a maximum in $\alpha(n)$ with an almost universal value of 1.7 for 2D semiconductor quantum wells). We first summarize the serious difficulties in making direct quantitative connection or comparison between experiment and theory in 2D semiconductor systems with respect to low-temperature disorder-limited transport properties. In fact, these caveats apply to all disorder-limited transport properties in all systems, not just to 2D transport in semiconductor quantum wells. The key problem is that the detailed nature of disorder (either qualitative or quantitative) in a sample is never precisely known from independent measurements — in the context of our transport theory in this paper, the relative amounts of 2D near and remote Coulomb disorder, 3D Coulomb disorder, and short-range disorder are simply not known. Thus, a quantitative or even a qualitative theory for calculating the mobility or the conductivity of a given sample exists only as a matter of principle, but not in practice, since the details of the underlying disorder contributing to resistive scattering are a priori unknown and often are figured out indirectly based on quantitative comparisons between transport experiment and theory.
The situation is worsened by the fact that the relative magnitudes of the various independent scattering mechanisms vary strongly with carrier density – for example, Coulomb disorder weakens with increasing carrier density, making short-range scattering relatively stronger at high carrier density. Thus, all 2D semiconductor transport would eventually be dominated by short-range scattering (e.g., interface roughness, alloy disorder) at very high carrier density (where $\alpha \approx 0$ in 2D systems), and the only question is how high in density one must go to reach this asymptotic zero-range disorder limited regime where Coulomb disorder has virtually been screened out. This, of course, is completely nonuniversal and depends entirely on the relative amounts of Coulomb impurities and short-range disorder in particular samples! This discussion shows that in 2D semiconductors we have $\alpha(n \rightarrow 0) =0$ and $\alpha (n \rightarrow \infty) = 0$ purely theoretically, with the zero-density limit and the infinite-density limit being dominated by completely screened Coulomb disorder ($k_F d \ll 1$, $q_s \gg 1$) and zero-range disorder, respectively, although these strict theoretical limits are unlikely to apply to real samples at any finite carrier density. The difficulty of applying the pristine theory to specific experimental situations is obvious from our numerical results presented in Figs. \[fig:five\] and \[fig:six\] (for 2D holes) and Fig. \[fig:eight\] (for 2D electrons). In each case, the mobility exponent $\alpha$ for individual near and far Coulomb impurities follows our theoretical prescription perfectly, with the expected low- and high-density exponents agreeing with the results of Table I, but the total exponent $\alpha$ (which is the only one relevant for the experimental data) may not follow any well-defined pattern and could vary strongly depending on the relative strengths of near and far Coulomb disorder, showing strong nonmonotonicity (Figs.
\[fig:five\] and \[fig:six\]) or weak/no nonmonotonicity (Fig. \[fig:eight\]). This reinforces the point made earlier that the universal density scaling behavior in transport applies only to individual scattering mechanisms; the overall transport, being dominated by crossover behavior, is generically nonuniversal due to the coexistence of several different operative scattering processes. In spite of the above serious caveats arising from our ignorance about the underlying disorder contributing to the resistive scattering mechanisms, some general statements can be made about the implications of our theory for experimental data. We discuss this below. \(i) For very dirty (and low-mobility) samples, the background Coulomb disorder arising from the unintentional charged impurities in the quantum well should dominate transport properties, leading to $\alpha_e \sim \alpha_h \sim 0.5$ for a wide range of intermediate densities. (ii) When transport is limited by remote dopants, which would always be true in modulation-doped samples for $k_F d < 1$, $\alpha_e \sim 1.5$ and $\alpha_h \sim 1-1.5$ depending on the hole effective mass, but $\alpha_e/\alpha_h$ will decrease with increasing density as the $k_F d > 1$ regime is reached. (iii) For modulation-doped structures with $k_F d \gg 1$, background disorder again dominates at intermediate density, giving $\alpha_e \sim \alpha_h \sim 0.5 - 1$. The above situation seems to describe the existing experimental situation for 2D quantum well transport reasonably well, as discussed below. Focusing on specific experimental results in the literature in the context of our transport theory, we make the following remarks discussing some specific experimental publications on 2D GaAs based electron and hole systems.
\(1) In Ref., the measured $\alpha_e \approx 0.6-0.7$ in an n-GaAs 2D system with no intentional remote dopants (the sample is a gated undoped sample) in the density range $\sim 10^{10} - 10^{11}$ cm$^{-2}$ (i.e., $q_s \agt 1$; $2k_F d < 1$) agrees quantitatively with our theoretical results given in Figs. \[fig:three\], \[fig:four\], and \[fig:eight\], with background 2D and 3D unintentional charged impurities being the main disorder mechanism, as expected for an undoped 2D system. \(2) In a similar gated undoped 2D p-GaAs sample, Manfra [*et al.*]{} [@twentyone] find $\alpha_h \sim 0.7$ for density $\agt 10^{10}$ cm$^{-2}$, again agreeing with our results given in Figs. \[fig:three\] and \[fig:four\] for background scattering. \(3) In the gated undoped n-GaAs 2D samples of Harrell [*et al.*]{} [@twentytwo], $\alpha \approx 0.6$ was found for $n \agt 10^{11}$ cm$^{-2}$ and $\alpha \approx 0.33$ for $n < 5 \times 10^{10}$ cm$^{-2}$. This is both quantitatively and qualitatively consistent with our numerical findings in Figs. \[fig:three\], \[fig:four\], and \[fig:eight\], where scattering by background charged impurities in the layer leads to $\alpha \approx 0.3-0.7$ in the $n=10^{10} - 10^{11}$ cm$^{-2}$ density range, with $\alpha(n)$ decreasing with decreasing carrier density. \(4) Melloch [*et al*]{}. [@twentythree] found $\alpha \approx 0.6 - 0.7$ for $n >10^{11}$ cm$^{-2}$, which is consistent with our background impurity scattering results. \(5) Pfeiffer [*et al.*]{} [@twentyfour] studied modulation-doped high-mobility 2D GaAs electron systems, obtaining $\alpha \sim 0.7$ around $n\sim 3 \times 10^{11}$ cm$^{-2}$ for modulation-doped structures ($d=1000-2000$ Å) with $\mu \agt 10^7$ cm$^2$/Vs. Again, remote impurity scattering is completely ineffective here because $k_F d \gg 1$, rendering the mobility limited by remote impurity scattering alone to be around $10^8$ cm$^2$/Vs according to our numerical calculations.
The dominant scattering mechanism in this sample is by background unintentional charged impurities, leading to $\alpha \approx 0.7$ around $n \sim 3 \times 10^{11}$ cm$^{-2}$ according to our Fig. \[fig:four\], which is in precise agreement with the data of Pfeiffer [*et al.*]{} [@twentyfour]. \(6) In a similar high-mobility modulation-doped 2D n-GaAs sample, Shayegan [*et al*]{}. [@twentyfive] found $\alpha \approx 0.6$ in samples with $\mu \approx 10^6$ cm$^2$/Vs for $n \alt 10^{11}$ cm$^{-2}$ with the spacer thickness $d = 1000-2000$ Å. Again, remote scattering by the intentional dopants is ineffective as a resistive scattering mechanism here, the dominant scattering being by unintentional background impurities in the GaAs quantum well. From Fig. \[fig:four\] of our presented results, we find $\alpha \approx 0.6$ for $n \alt 10^{11}$ cm$^{-2}$, in agreement with the experimental finding of Shayegan [*et al.*]{} [@twentyfive]. \(7) Most of the high-mobility experimental samples discussed above are dominated by the background unintentional charged impurities in the 2D layer itself, leading to $\alpha < 1$ by virtue of the fact that the intentional dopants introduced for modulation doping are rather far away in these high quality samples (this is a generic feature of all high-mobility 2D samples with $\mu > 10^6$ cm$^2$/Vs, where $\alpha <1$ prevails because the background disorder is dominant). By contrast, early work on modulation-doped 2D samples invariably had lower values of $d$ and achieved much lower mobility, $\mu < 10^6$ cm$^2$/Vs. Such samples are almost always dominated by remote scattering by the intentionally introduced dopants, leading to $\alpha$ values typically exceeding unity, as our theory predicts.
As a typical example, we consider the work of Hirakawa and Sakaki [@twentysix] on modulation-doped 2D n-GaAs samples with $\mu \sim 10^4 - 5 \times 10^5$ cm$^2$/Vs for $d\approx 0-180$ Å in the $n\approx 10^{11} - 5 \times 10^{11}$ cm$^{-2}$ density range. Assuming transport to be limited entirely by the intentional ionized dopants in the modulation layer (i.e., no background disorder scattering), our results of Fig. \[fig:one\] predict $\alpha \approx 1$ for $d=0$ and $\alpha \approx 1.1 - 1.3$ for $d=100$ Å. Hirakawa and Sakaki reported [@twentysix] $\alpha \approx 1.1-1.3$ for $d\approx 0-100$ Å, in essential agreement with our theory. In addition, these authors reported $\alpha \approx 1.7$ for $d\approx 200$ Å, an anomalous mobility exponent (i.e., $\alpha > 3/2$) which has remained unexplained in the literature for more than 25 years. Our current work provides a definitive explanation for $\alpha \approx 1.7$ as arising from the maximum in $\alpha(n)$ for remote scattering, as is apparent in Fig. \[fig:one\]. We note that $k_F d \sim 1-2$ for $n \sim 10^{11}$ cm$^{-2}$ and $d \sim 100 - 200$ Å, and thus our theory predicts $\alpha \sim 1.7$ for $d \approx 200$ Å in the Hirakawa-Sakaki experiment [@twentysix]. We believe that the experimental finding of $\alpha \sim 1.7$ by Hirakawa and Sakaki is a direct verification of our intriguing prediction of $\alpha > 3/2$ for $k_F d \agt 1$ in transport dominated by remote scattering. \(8) Finally, we discuss some recent unpublished experimental work by Pfeiffer and West [@twentyseven] who, motivated by our theoretical work, carried out low-temperature transport measurements in a series of high-quality (i.e., ultrapure GaAs with very little background disorder due to unintentional impurities) MBE-grown 2D n-GaAs samples with variable values of $d$.
Since these experiments were performed with the specific goal of checking our low-temperature 2D transport theory predictions, Pfeiffer and West made undoped gated samples of the highest quality with little background disorder and a nominal low-temperature mobility of $\mu > 10^7$ cm$^2$/Vs. Then, they systematically introduced charged impurities at a specific separation ($d$) from the 2D layer by inserting carbon atoms in the GaAs layer ($d=0$) or in the AlGaAs barrier layer ($d\neq 0$). First they explicitly verified that introducing different amounts of impurity centers without changing $d$ only affects the 2D mobility through the expected $n_i \mu$ scaling behavior (i.e., $\mu \propto n_i^{-1}$) without changing $\alpha(n)$: changing the density of inserted carbon atoms by a factor of 5 at fixed $d$ changed the 2D mobility by a factor of 5 without changing the exponent $\alpha(n)$ in the same carrier density range. They then measured $\alpha(n)$ for $d=0$ and $d=150$ Å, finding $\alpha(n) \approx 0.8$ and 1.8 respectively for $n \sim 10^{11}$ cm$^{-2}$. Our calculated $\alpha$($n \approx 10^{11}$ cm$^{-2}$)$\approx 0.8$ for $d=0$ in Fig. \[fig:two\] is in perfect agreement with the experimental data. For $d=150$ Å, $k_Fd \approx 1$ for $n \approx 10^{11}$ cm$^{-2}$, and we predict $\alpha(n) \approx \alpha_{\rm max} = 1.7$ in this situation, which compares well with the experimental exponent of 1.8. Thus, this experimental investigation of our theoretical predictions appears to have strikingly verified our finding that $\alpha > 3/2$ in the $k_F d \sim 1$ intermediate density regime where the shallow maximum occurs in $\alpha(n)$ in our theory. \(9) Before concluding our discussion of the experimental implications of our theory, we describe the very recent 2D hole transport data in high-mobility p-GaAs systems from the Manfra group [@twentyeight; @twentynine].
These 2D p-GaAs samples with hole mobility $>2\times 10^6$ cm$^2$/Vs are the highest-mobility 2D hole samples reported to date and, taking into account the effective mass difference ($\sim$ a factor of 5) between GaAs electrons and holes, compare favorably with the best ($\sim 15\times 10^6$ cm$^2$/Vs) available 2D electron mobilities. The main finding of the work [@twentyeight] is that the mobility exponent $\alpha(n)$ increases from 0.7 at high hole density ($\sim 10^{11}$ cm$^{-2}$) to 1.7 at low density ($\agt 10^{10}$ cm$^{-2}$) for a modulation-doped sample with $d=800$ Å. Since the mobility remains high throughout ($\gg 10^5 $ cm$^2$/Vs), localization effects should not be playing a role. We therefore believe that the experimental finding of Watson [*et al.*]{} [@twentyeight] is a direct confirmation of the $\alpha(n)$ behavior for 2D holes presented in our Figs. \[fig:five\] and \[fig:six\], where the total calculated $\alpha(n)$ increases monotonically as the 2D hole density decreases from $n\sim 10^{11}$ cm$^{-2}$ to $n \sim 10^{10}$ cm$^{-2}$. In fact, even the explicit $\alpha$-values measured by Watson [*et al.*]{} agree well with our 2D hole theoretical results presented in Figs. \[fig:five\] and \[fig:six\], with $\alpha$ ($n \sim 10^{10}$ cm$^{-2}$) increasing above the unscreened $\alpha=3/2$ value, reaching essentially the measured value of $\alpha \sim 1.7$ around $n \agt 10^{10}$ cm$^{-2}$. Thus, our theory provides a qualitative explanation of the Watson [*et al*]{}. experimental results, including the surprising finding of the intermediate-density $\alpha$ being around 1.7 ($> 1.5$). \(10) We conclude this section on the experimental relevance of our theory by discussing graphene briefly. There has been substantial research activity studying the density-dependent graphene conductivity [@nine; @thirty], which is well beyond the scope of our current work and has already been covered elsewhere.
It is known that scattering by near random charged impurities located on the surface of the graphene layer or at the graphene-substrate interface leads to $\sigma(n) \propto n$ (i.e., $\mu(n) \sim$ constant) in the intermediate density ($k_F d < 1$) region, and in the high-density regime $\sigma(n)$ becomes sublinear, most likely because of short-range defect scattering, which gives $\sigma(n)\sim$ constant (i.e., $\mu \sim 1/n$) – see Table I for details. Theory predicts (Table I) that for $k_F d \gg 1$, i.e., remote Coulomb disorder, $\mu(n)$ and $\sigma(n)$ should cross over to $\mu(n) \sim \sqrt{n}$ and $\sigma(n) \sim n^{3/2}$ in graphene. This clear prediction could be verified by putting an impurity layer (e.g., a SiO$_2$ film) at various values of $d$ from the graphene layer and measuring $\sigma(n)$ to check whether the low-density ($2k_F d \ll 1$) $\sigma(n) \sim n$ behavior indeed crosses over to the high density $\sigma(n) \sim n^{3/2}$ behavior as we predict theoretically. In graphene, with a valley degeneracy of 2, $2k_F d \approx 2.5 \tilde{d} \sqrt{\tilde{n}}$, where $\tilde{d} = d/1000$ Å and $\tilde{n} = n/10^{10}$ cm$^{-2}$. Hence, for $n=10^{12}$ cm$^{-2}$ and $d=100$ Å, $2k_F d \approx 2.5$. Thus, the condition $2k_F d \gg 1$ would require an impurity layer at $d \approx 1000$ Å (with consequent very weak Coulomb disorder scattering), which may lead to the complication that the graphene resistivity will then be entirely dominated by any underlying short-range disorder (with $\sigma \sim n^0$), masking any $\sigma \sim n^{3/2}$ behavior arising from charged impurity scattering. One possibility would be to put suspended graphene near a thick layer (of thickness $L$) of disordered substrate with a 3D charged impurity distribution, which would lead to \[see Eqs.
(\[eq:eq41\])–(\[eq:eq43\])\]: $$\tau^{-1} = \frac{N_i}{\pi \hbar^2 v_0 k_F^2} \left ( \frac{2\pi e^2}{\kappa} \right )^2 \int_0^1 dx \frac{x\sqrt{1-x^2}}{(x+q_s)^2} \left [ 1-e^{-2L_0 x} \right ]$$ with $N_i$ being the 3D impurity density in the impurity layer and $L_0 = 2k_F L$, giving $\sigma(n) \sim n^{3/2}$ for $k_F L \gg 1$ and $\sigma(n) \sim n$ for $k_F L \ll 1$. Such a 3D impurity layer underneath graphene may manifest the superlinear $\sigma(n) \sim n^{3/2}$ conductivity behavior predicted by the theory, but observing the shallow maximum in the exponent $\beta$/$\alpha$ for $k_F d \sim 1$ may still be difficult. other effects ============= In this section, just before our conclusion in the next section, we discuss “other effects” left out of our theoretical considerations entirely, which may complicate direct quantitative comparisons between our theory and experiment, although we believe that our theoretical conclusions should apply generically to transport in high-mobility 2D semiconductor systems at low enough temperatures. First, phonon effects are neglected in the theory since we explicitly consider the $T=0$ situation (in practice, $T=50-300$ mK is the typical low-temperature experimental situation mimicking the $T=0$ theoretical situation). For consistency between theory and experiment, the transport data must therefore be taken at a fixed temperature $T<T_{BG}$, where $T_{BG}$ is the so-called Bloch-Grüneisen temperature, so that the acoustic phonon scattering contribution to the resistivity is negligible compared with disorder scattering even in the high-mobility 2D semiconductor structures under consideration in this work. (Optical phonon scattering is of no relevance for low-temperature transport since $k_BT \ll \hbar \omega_{LO}$ is explicitly satisfied, as $\hbar \omega_{LO} > 100$ K typically.)
$T_{BG}$ is given either by the Debye temperature (for 3D metals) or by the energy of the acoustic phonons with $2k_F$ wave vector (for 2D semiconductors), whichever is lower. We therefore have $k_B T_{BG} = 2 \hbar k_F v_{ph}$, where $v_{ph}$ is the relevant acoustic phonon velocity in the material. Putting in the appropriate sound velocity ($v_{ph}$), we get $T_{BG} \approx 2 \sqrt{\tilde{n}}$ K and $10 \sqrt{\tilde{n}}$ K in 2D GaAs and graphene, respectively, where $\tilde{n}$ is measured in units of $10^{10}$ cm$^{-2}$. Thus, down to carrier density $n \sim 10^9$ cm$^{-2}$, it is reasonable to ignore phonon effects in transport at $T \approx 100$ mK. Acoustic phonon scattering has been considered elsewhere in the literature [@eight]. Second, we have used the Born approximation in calculating the scattering time and the RPA in calculating the screened Coulomb disorder throughout. Both of these approximations surely become increasingly quantitatively inaccurate at lower carrier density, although they should remain qualitatively valid unless there is a metal-insulator transition (obviously, our Drude-Boltzmann transport theory would not apply at or below any metal-insulator transition density). RPA screening theory becomes increasingly quantitatively inaccurate as the carrier density (or the corresponding $r_s \sim n^{-1/2}$ parameter) decreases (increases), but there is no well-accepted systematic method for incorporating low-density electronic correlation effects going beyond RPA screening. Including low-density correlation effects through Hubbard-type local-field corrections does not change any of our qualitative conclusions. As for multiple scattering effects [@gold] beyond the Born approximation, they typically lead to higher-order corrections to the resistivity, so that $\rho \propto n_i$, where $n_i$ is the impurity density, is no longer valid and one must incorporate higher-order nonlinear corrections to the resistivity going as $O(n_i^2)$ and higher.
These nonlinear multiscattering corrections become quantitatively important for $n \alt n_i$, and can be neglected in the $n > n_i$ regime, which is our main interest in this paper. Our neglect of multiscattering corrections beyond the Born approximation is consistent with our neglect of strong localization effects, both of which become important in the very low carrier density regime ($n<n_i$) where the Drude-Boltzmann theory becomes manifestly inapplicable. Third, we ignore all nonlinear screening effects, which have been much discussed in the recent graphene literature [@adam], where charged-impurity-induced inhomogeneous electron-hole puddle formation plays an important role at low carrier density; such effects are important only at very low carrier density ($n<n_i$), where our whole Boltzmann theoretical approach becomes suspect anyway. For the same reason we also do not take into account any scattering-screening self-consistency effect [@dassarma], which may also become important at very low carrier density (again, for $n<n_i$). Fourth, we ignore any possible spatial correlation effects among impurity locations, assuming the disorder to arise from completely uncorrelated random impurity configurations. If the impurities are spatially correlated, it is straightforward to include the correlation effect in the Boltzmann transport calculation by simply multiplying the disorder potential term in Eq. (\[eq:eq7\]) by the corresponding structure factor $s(q)$ for the impurity distribution, i.e., by writing $|V_q|^2 \rightarrow |V_q|^2s(q)$ in Eq. (\[eq:eq7\]), where $$s(q) = \frac{1}{n_i} \left | \sum_{i=1}^{n_i} e^{-i {\bf q \cdot r_{i}}} \right |^2 - n_i \delta_{q0},$$ where ${\bf r_i}$ denotes the position of each impurity and the sum runs over all the impurities.
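A small Monte Carlo sketch (hypothetical box geometry, purely illustrative parameters) shows that for completely uncorrelated random impurity positions the configuration-averaged $s(q)$ reduces to unity for $q \neq 0$, recovering the standard random-disorder result; spatial correlations among impurities would instead suppress $s(q)$ below 1 at small $q$:

```python
import numpy as np

rng = np.random.default_rng(0)
N_imp, L = 200, 1.0                             # impurities in an L x L box (arb. units)
q = 2.0 * np.pi * np.array([8.0, 0.0]) / L      # a q != 0 commensurate with the box

s_vals = []
for _ in range(300):                            # average over random configurations
    r = rng.uniform(0.0, L, size=(N_imp, 2))    # uncorrelated impurity positions
    amp = np.exp(-1j * (r @ q)).sum()
    s_vals.append(np.abs(amp) ** 2 / N_imp)
s_mean = np.mean(s_vals)

print(s_mean)     # ~1: uncorrelated impurities give <s(q)> = 1, no suppression of scattering
```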
Thus, if experimental information about impurity correlations exists, it is straightforward to include such spatial correlation effects in the Boltzmann transport calculations, as has indeed been done in both 2D GaAs [@cor_gaas] and graphene [@correlation]. Since intrinsic impurity correlation information is typically unavailable, our model of uncorrelated random disorder seems to be the obvious choice from a theoretical perspective, since it involves only one (the impurity density $n_i$) or two ($n_i$ and the impurity location $d$) unknown parameters (and most often $d$ is known from the modulation doping setback distance and is thus not a free parameter), whereas including impurity spatial correlations would invariably involve the introduction of more unknown free parameters, making the theory of dubious predictive value. Since our interest in this paper is not an absolute calculation of the conductivity or mobility (and hence $n_i$ typically drops out of our theory if only one dominant scattering mechanism is operational), but obtaining the universal density dependence of conductivity, it is important to mention that the main effect of impurity correlation is to suppress the effect of $n_i$ on the resistivity without much affecting the carrier density dependence, particularly in the $n>n_i$ regime of our interest. We mention that any intrinsic impurity correlations actually increase the value of the mobility relative to our uncorrelated random disorder theory and thus compensate to some extent for the suppression of mobility arising from some of the effects discussed above. Fifth, for our $T=0$ theory to be strictly applicable to the experimental data, the experimental temperature $T$ should satisfy the condition $T \ll T_F$, where $T_F \equiv E_F/k_B$ is the corresponding Fermi temperature of the system.
Using the known dependence of $E_F$ on the 2D carrier density $n$, we see that this implies $T (K) \ll 4.2 \tilde{n}$ for 2D n-GaAs, $T(K) \ll 1 \tilde{n}$ for 2D p-GaAs, and $T(K) \ll 150 \sqrt{\tilde{n}}$ for graphene, where $\tilde{n} = n/10^{10}$cm$^{-2}$. Thus, low-temperature experiments carried out at $T=100$ mK satisfy our $T=0$ theoretical constraint very well down to $10^9$ cm$^{-2}$ carrier density (except for the 2D p-GaAs hole system, where the density cutoff is perhaps $5\times 10^9$ cm$^{-2}$). Our direct numerical calculations at finite temperatures (not shown in this paper) show that our calculated exponents $\alpha(n)$ and $\beta(n) = \alpha +1$ at $T=0$ continue to be quantitatively accurate up to $T \alt T_F$, as long as the experimental data for the density dependence are taken at a fixed temperature. Thus our theory and numerics for $\alpha(n)$ and $\beta(n)$ are quite robust against thermal effects as long as phonons are unimportant (i.e., $T < T_{BG}$ is satisfied). We conclude this section of “other effects” by noting that the most important drawback of our RPA-Drude-Boltzmann theory is that it may fail systematically at low carrier density ($n \alt n_i$), where important physical effects (which are difficult to treat theoretically) such as strong localization, the metal-insulator transition, nonlinear screening, inhomogeneous puddle formation, multiscattering corrections, screening-scattering self-consistency, etc. may all come into play, making our theory inapplicable to the experimental situation. We do, however, anticipate our theory to be applicable down to very low carrier densities ($n \agt 10^9$ cm$^{-2}$) in ultra-high mobility 2D GaAs and graphene systems, where $n_i \alt 10^8$ cm$^{-2}$ typically.
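The characteristic temperatures used in this section, $T_{BG}$ and $T_F$, follow from $k_B T_{BG} = 2\hbar k_F v_{ph}$ and $T_F = E_F/k_B$. A quick numerical sketch, where the phonon velocity, effective masses, and graphene $v_F$ are our assumed textbook inputs rather than fitted values:

```python
import numpy as np

hbar, kB, me = 1.0546e-34, 1.381e-23, 9.109e-31   # SI units

def T_BG_2d(n_cm2, v_ph):
    """Bloch-Grueneisen temperature, k_B T_BG = 2 hbar k_F v_ph,
    with k_F = sqrt(2 pi n) for a spin-degenerate 2D parabolic band."""
    kF = np.sqrt(2.0 * np.pi * n_cm2 * 1e4)       # cm^-2 -> m^-2
    return 2.0 * hbar * kF * v_ph / kB

def T_F_2d(n_cm2, m_eff):
    """Fermi temperature T_F = E_F/k_B with E_F = pi hbar^2 n / m (2D, g_s = 2)."""
    return np.pi * hbar**2 * (n_cm2 * 1e4) / (m_eff * kB)

def T_F_graphene(n_cm2, v_F=1.0e6):
    """Graphene: E_F = hbar v_F k_F with k_F = sqrt(pi n) (spin and valley degeneracy)."""
    return hbar * v_F * np.sqrt(np.pi * n_cm2 * 1e4) / kB

# assumed parameters: v_ph ~ 5e3 m/s (GaAs LA phonons),
# m* = 0.067 m_e (GaAs electrons), ~0.3 m_e (GaAs holes); n = 10^10 cm^-2
print(T_BG_2d(1e10, 5.0e3))        # ~2 K, consistent with T_BG ~ 2 sqrt(n~) K
print(T_F_2d(1e10, 0.067 * me))    # ~4 K, consistent with T_F ~ 4.2 n~ K
print(T_F_2d(1e10, 0.30 * me))     # ~0.9 K for 2D holes
print(T_F_graphene(1e10))          # ~135 K, consistent with the ~150 sqrt(n~) K estimate
```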
A convenient experimental measure of the applicability of our theory is the dimensionless quantity $k_F l$, where $l$ is the elastic mean free path defined by $l \equiv v_F \tau$, with $\tau$ and $v_F$ respectively the transport relaxation time and the Fermi velocity. We find that $k_F l = 4.14 \tilde{n} \tilde{\mu}$ for 2D GaAs systems, where $\tilde{n} = n/10^{10}$cm$^{-2}$ and $\tilde{\mu} = \mu/(10^6$ cm$^2$/Vs), whereas $k_Fl=0.2 \tilde{n} \tilde{\mu}$ for graphene. As long as $k_F l > 1$, our Drude-Boltzmann theory should be valid qualitatively, and we therefore conclude that the theory remains quantitatively accurate for $n \agt 10^{10}$ cm$^{-2}$ ($10^{11}$ cm$^{-2}$) for 2D GaAs (graphene) systems in high-quality/high-mobility samples. Thus, there is a large range of carrier density ($10^{10}-10^{12}$ cm$^{-2}$ for 2D GaAs and $10^{11} - 10^{13}$ cm$^{-2}$ for graphene) where our predictions for universal density dependence can be experimentally tested through low-temperature transport measurements. The fact that all the “other effects” left out of our theory affect transport at low carrier densities indicates that our predicted density scaling of conductivity will systematically disagree with the experimental data at lower carrier densities. Of course, “lower” density is a relative term, and dimensionless quantities such as $n/n_i$ and $k_F l$ are the appropriate quantities to define the regime of validity of our theory. As $n/n_i$ and/or $k_F l$ (or even $T_F/T$ or $T_{BG}/T$) become smaller, the Drude-Boltzmann theoretical predictions become increasingly unreliable. Nevertheless, the theory remains predictive down to $10^{10}$ cm$^{-2}$ carrier density (or lower) in high-quality (i.e., low-$n_i$) GaAs samples at low temperatures ($T \approx 100$ mK).
discussion and conclusion ========================= We have developed a detailed quantitative theory for the density-dependence of the zero-temperature conductivity (or equivalently mobility) of (mainly) 2D and 3D electron and hole metallic systems assuming transport to be limited by (mainly Coulomb) disorder scattering within the semiclassical Drude-Boltzmann transport theory. We neglect all quantum interference (hence localization) effects as well as interaction effects (except for the carrier screening of the bare impurity Coulomb disorder, which is an essential qualitative and quantitative ingredient of our theory) assuming them to be small since our interest is the density dependence (rather than the temperature dependence) of transport properties at low fixed temperatures in high-mobility ($k_F l \gg 1$ where $l$ is the elastic mean free path) samples. We have systematically considered 3D and 2D doped (n- and p-) semiconductor systems as well as 2D graphene but the primary focus has been on n-GaAs and p-GaAs quantum well based 2D electron or hole systems, mainly because these systems continue to be of great interest in physics and because the carrier density can easily be tuned in such high-mobility 2D semiconductor systems, and the mobility is dominated by Coulomb disorder at low temperatures. We have taken into account both long-range Coulomb disorder from charged impurities and zero-range disorder arising from possible non-Coulombic short range scatterers. The primary focus has been the Coulombic disorder since this is the main low-temperature resistive scattering mechanism in semiconductors. Instead of discussing the nonuniversal values of $\mu$ (and $\sigma$), which depend [@seventeen] on the actual impurity content, we focus on the universal power-law density scaling of transport properties: $\sigma(n) \sim n^{\beta(n)}$ and $\mu(n) \sim n^{\alpha(n)}$ with $\beta = \alpha + 1$. 
These exponents $\alpha$ and $\beta$ are sample-independent and depend only on the nature of the dominant disorder. We provide asymptotic theoretical analyses of $\alpha$ and $\beta$ (for various types of underlying disorder) in the high- and low-density regimes and for near and far impurities. We have then verified our analytical results with direct numerical calculations based on the full solution of the Boltzmann transport theory in the presence of disorder scattering. Although our work is primarily theoretical, we provide a critical comparison with various experimental results in the literature (on 2D n-GaAs electrons and p-GaAs holes), finding generally good agreement between our theoretically predicted exponents $\alpha$ and $\beta$ and low-temperature experimental findings in high-mobility 2D electron and hole systems. In particular, a truly exciting prediction, that $\alpha(n)$ has a maximum universal value $\alpha_{\rm max} \sim 1.7$ for all 2D systems at an intermediate carrier density (approximately defined by $k_F d \sim 1$), seems to be consistent with recent (and old) experimental results from several different groups (and for both 2D electrons and holes), as discussed in detail in Sec. VI. Our theory predicts $\alpha(n)$ to vary from $\alpha=0$ in the low-density strong-screening regime ($q_{TF} \gg 2k_F$, $n\rightarrow 0$) to $\alpha = 3/2$ in the high-density weak-screening regime ($q_{TF} \ll 2k_F$, $n \rightarrow \infty$), with a shallow maximum of $\alpha \sim 1.7$ at intermediate carrier density for $k_F d \sim 1$, where $d$ is the impurity location.
Although our work presented in this article is purely theoretical, describing the density-dependent and disorder-limited $T=0$ conductivity of 2D/3D carriers using the semiclassical Boltzmann theory approach, it is worthwhile to speculate about the prospects for the experimental observation of our asymptotic low-density ($\alpha \rightarrow 0$; $\beta \rightarrow 1$) and high-density ($\alpha \rightarrow 3/2$; $\beta \rightarrow 5/2$) behavior of Coulomb disorder limited 2D semiconductor transport. These limiting exponents are theoretically universal. We first summarize the current experimental status in the context of our theory. For scattering by background Coulomb disorder (i.e., near impurities with $2k_F d <1$), $\alpha(n) \approx 0.5 -0.8$ typically, and for scattering by remote impurities ($2k_F d >1$) in the modulation doping layer $\alpha(n) \sim 1-1.3$ typically, with a few atypical cases showing $\alpha(n) \sim 1.7$ ($>3/2$) around $k_F d \sim 1$. All of these are intermediate-density results in our theory, where $q_s$ ($=q_{TF}/2k_F$) and $2k_Fd$ are neither extremely large nor extremely small. Thus the basic experimental situation is in excellent agreement with our theory, as it should be, because 2D doped semiconductor transport (as well as graphene transport) is known to be dominated by screened Coulomb disorder, with near or far charged impurities being the dominant scattering mechanism depending on the sample and carrier density. First, we discuss the high-density situation, which is theoretically more straightforward and where the Boltzmann theory is almost exact. As the carrier density increases, the semiclassical Boltzmann theory becomes increasingly valid for Coulomb disorder limited transport properties since the conductivity itself, and consequently $k_F l$, increases, thus making the system progressively more metallic.
In addition, increasing density decreases the metallic $r_s$ parameter, the dimensionless Wigner-Seitz radius, given by $r_s = me^2/(\kappa \hbar^2 \sqrt{\pi n})$ for 2D semiconductor systems. Since $r_s \sim n^{-1/2}$, at high carrier density (e.g., $r_s = 0.5$, 2.5 for 2D n-GaAs, p-GaAs respectively at $n=10^{12}$ cm$^{-2}$) $r_s$ is small, making our theory using RPA-screened effective Coulomb disorder systematically more valid at higher carrier density, as RPA becomes exact at low $r_s$. Thus, it appears that our theory should apply best to 2D systems at high density. This is indeed true, except that new physical (rather than theoretical) complications arise, making a direct comparison between our theory and experiment on 2D systems at high carrier density problematic. Two new elements of physics come into play at high carrier density, both contributing to the suppression (enhancement) of mobility (scattering rate): intersubband scattering becomes important as the Fermi level moves up and comes close to (or crosses over into) the higher confined subbands of the quasi-2D quantum well, thus opening up a new scattering channel; and short-range scattering at the interface and by alloy disorder (in AlGaAs) becomes important as the self-consistent electric field created by the electrons themselves pushes the carriers close to the interface at high carrier density. Both of these physical effects eventually suppress the monotonic growth of $\mu (n)$ and $\sigma(n)$ with increasing density, and eventually $\mu(n)$ starts decreasing with increasing carrier density at high enough density (for $n > 3 \times 10^{11}$ cm$^{-2}$ in GaAs-AlGaAs systems) instead of continuing as $\mu(n) \sim n^{3/2}$, as it would in the high-density regime if Coulomb disorder were the only dominant scattering mechanism with no other complications.
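The quoted $r_s$ values follow directly from the definition above; a quick numerical check with assumed GaAs parameters ($\kappa = 12.9$, $m^* = 0.067\,m_e$ for electrons, and an assumed $\sim 0.3\,m_e$ for holes):

```python
import numpy as np

hbar, me, e, eps0 = 1.0546e-34, 9.109e-31, 1.602e-19, 8.854e-12  # SI units

def r_s_2d(n_cm2, m_eff, kappa):
    """r_s = m e^2/(kappa hbar^2 sqrt(pi n)) (Gaussian units), evaluated in SI
    as r_s = 1/(a_B* sqrt(pi n)) with a_B* the effective Bohr radius."""
    a_B = 4.0 * np.pi * eps0 * kappa * hbar**2 / (m_eff * e**2)  # effective Bohr radius (m)
    return 1.0 / (a_B * np.sqrt(np.pi * n_cm2 * 1e4))            # cm^-2 -> m^-2

print(r_s_2d(1e12, 0.067 * me, 12.9))   # ~0.55 for 2D n-GaAs at n = 10^12 cm^-2
print(r_s_2d(1e12, 0.30 * me, 12.9))    # ~2.5 for 2D p-GaAs
```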
Since at high density $\tau^{-1}(n) \sim n^{-\alpha}$ with $\alpha > 1$ for Coulomb disorder, eventually at some (nonuniversal, sample-dependent) high density Coulomb disorder becomes insignificant compared with the short-range scattering effects. Obviously, the density at which this happens is nonuniversal and depends on all the details of each sample. But in all 2D systems and samples, eventually, when the Coulomb disorder induced scattering rate is sufficiently small, a high-density regime is reached where the mobility stops increasing (and even starts decreasing). In Si MOSFETs [@ten] this effect is already very strong around $\sim 10^{12}$ cm$^{-2}$ because of considerable surface roughness scattering at the Si-SiO$_2$ interface, and $\mu(n)$ decreases with increasing $n$ at higher density. Even in high-mobility GaAs systems, $\mu(n)$ saturates (i.e., $\alpha=0$) and eventually starts decreasing at a nonuniversal density around $n \agt 3\times 10^{11}$ cm$^{-2}$. Boltzmann transport theory can easily be generalized to incorporate intersubband scattering and surface scattering, but the physics is nonuniversal and beyond the scope of our current work. The low-density ($n \rightarrow 0$) situation is fundamentally inaccessible to our Boltzmann transport theory since all doped semiconductor systems (3D or 2D) eventually undergo an effective metal-insulator transition at a nonuniversal low critical density, with the semiclassical Boltzmann theory eventually becoming invalid as a matter of principle at a sufficiently low, sample-dependent carrier density. In 3D, this transition may be a true Anderson localization transition as $k_F l \rightarrow 1$ with decreasing density, making the Boltzmann theory inapplicable.
In 2D semiconductors, which are the systems of our main interest, the observed metal-insulator transition at a nonuniversal sample dependent critical density $n_c$ is likely to be a crossover phenomenon [@five; @nine] since the scaling theory of Anderson localization predicts 2D to be the critical dimension with no localization transition. There is considerable experimental support for the observed low-density 2D metal-insulator transition to be a density inhomogeneity-driven percolation transition at a nonuniversal critical density $n_c$ where charged impurity induced Coulomb disorder drives the system into an inhomogeneous collection of “puddles” with a mountain and lake potential landscape where semiclassical metallic transport becomes impossible for $n < n_c$ with $n_c$ being the disorder-dependent percolation transition density [@twenty; @twentyone; @leturcq; @tracy2009; @adam2]. The critical density $n_c$ depends crucially on the sample quality and is typically below $10^9$ cm$^{-2}$ in high-mobility GaAs systems — in Si MOSFETs, where disorder is very strong, $n_c \approx 10^{11}$ cm$^{-2}$, and this is why we have left out 2D Si systems from our consideration in this work although our basic theory applies well to 2D Si-based systems for $n > 10^{11}$ cm$^{-2}$. Our semiclassical Boltzmann theory works for $n \gg n_c$, but for $n \rightarrow n_c$, one must include the inhomogeneous puddle formation and the associated percolation transition even in the semiclassical theory [@hwang]. This is the reason our theory fails for real 2D systems in the $n \rightarrow 0$ limit unless the level of disorder is extremely low (even then our Boltzmann theory is valid only for $n \gg n_c$ and the $n \rightarrow 0$ limit is fundamentally inaccessible). In spite of this fundamental difficulty in accessing the low-density (i.e. 
$n \rightarrow 0$) limit using our theory directly, it turns out that we can approximately include the effect of a semiclassical percolation transition in our theory by an indirect technique in the density regime $n > n_c$ above the effective metal-insulator transition (but still in a reasonably low density regime for high-quality samples, where $n_c$ can be very low). Since the percolation picture essentially eliminates a certain fraction of the carriers from being metallic, we can assume that the effective conductivity (mobility) for $n \gg n_c$ is given by the same exponent $\beta$ ($\alpha$) calculated in our theory, with the only caveat that $\beta$ ($\alpha$) is now the exponent only for the actual “metallic” free mobile carrier fraction of the whole system. We can then write: $$\mu = A (n-n_c)^{\alpha} = B n^{\alpha'}, \label{eq:eq68}$$ where $\alpha(n)$ is the real exponent we calculate theoretically from the Boltzmann theory and $\alpha'(n)$ is the effective (apparent) exponent obtained experimentally from $$\alpha'(n) = \frac{d \ln \mu(n)}{d \ln n} \label{eq:eq69}$$ by using the actual data for $\mu(n)$ without taking into account any complications arising from the existence of $n_c$. Using Eqs. (\[eq:eq68\]) and (\[eq:eq69\]), we immediately get the following relationship connecting the effective exponent $\alpha'$ with the real exponent $\alpha$: $$\alpha' = \alpha (1-n_c/n)^{-1}, \label{eq:eq70}$$ valid for $n \gg n_c$. We note that $\alpha' \approx 2 \alpha$ for $n = 2n_c$. Eq. (\[eq:eq70\]) is valid within logarithmic accuracy, and connects the measured low-density ($n$ small, but with $n \gg n_c$ still valid) exponent $\alpha'(n)$ with the real Boltzmann theory exponent (as in Table I) $\alpha(n)$.
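The content of Eqs. (\[eq:eq68\])–(\[eq:eq70\]) can be checked numerically: generate $\mu = A(n-n_c)^{\alpha}$ with a constant $\alpha$, extract the apparent exponent $\alpha'(n)$ as the logarithmic derivative of Eq. (\[eq:eq69\]), and compare with the closed form of Eq. (\[eq:eq70\]). A sketch in dimensionless units ($n_c = 1$, $\alpha = 3/2$ chosen for illustration):

```python
import numpy as np

n_c, alpha = 1.0, 1.5                      # critical density (arb. units), true exponent
n = np.logspace(0.05, 3, 400) * n_c        # densities above n_c
mu = (n - n_c) ** alpha                    # Eq. (68) with A = 1

# apparent exponent, Eq. (69): alpha'(n) = d ln mu / d ln n
alpha_eff = np.gradient(np.log(mu), np.log(n))

# closed form, Eq. (70): alpha' = alpha (1 - n_c/n)^(-1)
alpha_closed = alpha / (1.0 - n_c / n)

# the two agree away from the grid endpoints; alpha' ~ 2 alpha at n = 2 n_c
i = np.argmin(np.abs(n - 2.0 * n_c))
print(alpha_eff[i])                        # ~3.0 = 2 * alpha
```

The numerical $\alpha'(n)$ diverges as $n \rightarrow n_c^+$ and approaches $\alpha$ for $n \gg n_c$, exactly as stated above.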
Since $n_c$ can be measured experimentally by checking where $\sigma(n)$ vanishes, i.e., defining $n_c$ through $\sigma(n_c) = 0$ at $T=0$, we immediately see that the measured low-density effective exponent always satisfies $\alpha'(n) > \alpha(n)$, and, in fact, $\alpha'(n) \rightarrow \infty$ as $n \rightarrow n_c^+$. Thus, we conclude that the experimentally measured mobility exponent will eventually start increasing as $n$ decreases toward $n_c$. For $n \gg n_c$, we get $\alpha' \approx \alpha$, but the leading correction to $\alpha$ goes as $\alpha' = \alpha (1+n_c/n)$ for $n \gg n_c$. We note that the existence of $n_c$ enhances $\alpha$ over its nominal value of Table I even for $n \gg n_c$! We have checked existing experimental results in the literature (to the extent that $n_c$, $\alpha'$, etc. are known experimentally), finding that our prediction of Eq. (\[eq:eq70\]) seems to apply quite well. We note that since the pristine calculated $\alpha(n)$ decreases with decreasing $n$ (see Figs. \[fig:one\]–\[fig:eight\]) for [ *all*]{} models of Coulomb disorder at low carrier density, Eq. (\[eq:eq70\]) predicts that the apparent exponent $\alpha'$ would be close to $\alpha$ for $n \gg n_c$, but would then manifest a minimum at a density $n_0$ ($>n_c$) defined by the equation: $$\frac{d\alpha}{dn} = \frac{\alpha (n_c/n)}{(n-n_c)}$$ at $n=n_0>n_c$. For $n <n_0$, $\alpha'$ will increase as $n$ decreases (with $\alpha'$ being a minimum at $n=n_0$), eventually diverging as $(1-n_c/n)^{-1}$ as $n \rightarrow n_c^+$. We find this behavior to be qualitatively consistent with all the existing data in the literature, although a precise quantitative comparison necessitates more low-temperature data showing $\mu(n)$ all the way down to $n =n_c$, where $\mu(n) = 0$. Careful measurements of $\sigma(n)$ close to $n_c$ are lacking in the literature for us to form a definitive conclusion on this matter at this stage.
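The minimum of $\alpha'(n)$ and its defining stationarity condition can also be checked with a simple model $\alpha(n)$ that decreases with decreasing $n$ (the functional form below is purely illustrative, not our calculated $\alpha(n)$):

```python
import numpy as np

n_c, n_s = 1.0, 20.0                        # arb. units; n_s sets where alpha(n) rolls off
n = np.linspace(1.05, 100.0, 20000)

alpha = 1.5 * n / (n + n_s)                 # model alpha(n): decreases as n decreases
alpha_eff = alpha / (1.0 - n_c / n)         # apparent exponent, Eq. (70)

i0 = np.argmin(alpha_eff)                   # minimum of alpha'(n) at n_0 > n_c
n0 = n[i0]

# stationarity condition at n_0:  d alpha/dn = alpha (n_c/n) / (n - n_c)
lhs = 1.5 * n_s / (n0 + n_s) ** 2           # analytic d alpha/dn for this model
rhs = alpha[i0] * (n_c / n0) / (n0 - n_c)

print(n0)                                   # ~2.1 for these illustrative parameters
```

For $n < n_0$ the apparent exponent rises as $n$ decreases, diverging as $n \rightarrow n_c^+$, reproducing the behavior described in the text.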
We conclude by pointing out that very close to the percolation transition, where $n-n_c \ll n_c$, i.e., $n \ll 2 n_c$, we expect the critical behavior of the 2D percolation transition to possibly come into play, where the conductivity may have a completely different universal 2D percolation exponent $\delta$ (totally distinct from $\alpha$ or $\alpha'$) which has nothing to do with our Boltzmann theory: $$\sigma \sim (n-n_c)^{\delta}$$ for $(n-n_c) \ll n_c$. This percolation critical exponent $\delta$ for $\sigma(n)$ can only manifest itself very close to $n_c$ (i.e., $n \ll 2n_c$), and for $ n \agt 2 n_c$ we believe that our effective theory predicts an effective conductivity exponent $\beta' = \alpha' + 1$ given by $$\beta' \approx \frac{\alpha}{1-n_c/n} + 1 = \frac{\beta - n_c/n}{1-n_c/n}, \label{eq:eq72}$$ for $n \agt 2 n_c$. Again, as for $\alpha'$, $\beta' \approx \beta = \alpha + 1$ for $n \gg n_c$. The conductivity exponents $\delta$ and $\beta'$ arise from completely different physics (from percolation critical theory near $n_c$ and from Boltzmann theory far above $n_c$, respectively) and have nothing to do with each other. How the transition or crossover occurs, even within the semiclassical theory, from $\beta'$ (for $n \agt 2n_c$) to $\delta$ (for $n \ll 2n_c$) is a very interesting question which is beyond the scope of the current work. In our current work we have, however, solved \[Eqs. (\[eq:eq68\]) – (\[eq:eq72\]) above\] the problem of the crossover from the effective exponent $\alpha'$ (or $\beta'$) for $n \agt 2 n_c$ to the true Boltzmann exponent $\alpha$ (or $\beta$) for $n \gg 2n_c$. We give one possible experimental example of the observed crossover from the Boltzmann exponent $\alpha$ (for $ n \gg n_c$) to $\alpha'$ (for $n \agt n_c$), arising in Jiang [*et al.*]{} [@jiang].
In this work [@jiang], the conductivity was measured in a high-mobility modulation-doped 2D GaAs electron system, finding the effective mobility exponent $\alpha \approx 0.9-1.1$ in the high density range ($n \sim 5 \times 10^{10} - 3 \times 10^{11}$ cm$^{-2}$) for $d=350 - 750$ Å. This range of $n$ and $d$ converts to $2k_F d = 1-5$ and $q_s \approx 1$, which is the intermediate density range for all our Coulomb disorder mechanisms in Table I. Given that the measured mobility in Ref. [@jiang] was relatively modest, $\mu \sim 10^5 - 10^6$ cm$^2$/Vs, it is reasonable to expect, based on our numerical mobility calculations, that both remote and background Coulomb scattering are comparably effective, leading to $\alpha \sim 0.9-1.2$ according to our numerical results of Sec. IV. Thus, the “high-density” exponent ($\alpha \sim 1$) in the Jiang sample agrees well with our theory. The interesting point to note in the current context is that Jiang [*et al.*]{} [@jiang] found a large increase of the measured exponent $\alpha$ as $n$ decreases, which is in apparent disagreement with our analytical theory, which always gives $\alpha(n)$ decreasing with decreasing $n$. Although other possibilities (e.g., multiscattering) cannot be ruled out [@gold], we believe that the Jiang [*et al.*]{} data indicate a crossover from $\alpha$ to $\alpha'$ as $n$ decreases. In particular, the measured mobility exponent increased from $\alpha \sim 1$ to $\alpha \sim 4$ as $n$ decreased from $\sim 10^{11}$ cm$^{-2}$ to $\sim 2 \times 10^{10}$ cm$^{-2}$ (with also a factor of 50 decrease in mobility). Assuming $n_c \sim 1.5 \times 10^{10}$ cm$^{-2}$ (which is consistent with the data), we get \[from Eq. (\[eq:eq70\])\] $\alpha' \approx 4 \alpha \approx 4$ at $n \approx 2 \times 10^{10}$ cm$^{-2}$, which is in excellent agreement with the data [@jiang].
We therefore believe that the Jiang [*et al.*]{} experiment manifests our predicted crossover behavior from $\alpha(n)$ to $\alpha'(n)$ as $n$ approaches $n_c$ from above. We note that $k_F l \approx 1$ in the Jiang [*et al.*]{} sample for $n \approx 3 \times 10^{10}$ cm$^{-2}$ (where $\mu \approx 10^5$ cm$^2$/Vs seems to have been reached, starting from $\mu \sim 10^6$ cm$^2$/Vs for $n \sim 3 \times 10^{11}$ cm$^{-2}$), and thus the identification of $n_c \approx 1.5 \times 10^{10}$ cm$^{-2}$ is meaningful. Clearly, our simple Drude-Boltzmann theory applies for $n \gg 3 \times 10^{10}$ cm$^{-2}$, but not for $n < 3 \times 10^{10}$ cm$^{-2}$, where $k_F l \sim 1$. What is encouraging is that the simple modification of the theory introducing the crossover exponent $\alpha' = \alpha (1-n_c/n)^{-1}$ seems to describe the observed density-scaling exponent of the experimental mobility at low carrier densities. We conclude by mentioning that direct numerical percolation calculations indicate $\delta \approx 1.32$ in 2D systems, which is unfortunately too close to our calculated value of $\beta$ in the low-density Coulomb disorder-dominated Boltzmann theory, where (see Figs. \[fig:one\] – \[fig:six\] and \[fig:eight\]) the low-density ($n \alt 10^{10}$ cm$^{-2}$) $\alpha$-value is $\alpha \approx 0.3-0.5$, implying low-density $\beta \approx 1.3-1.5$. Since high-mobility 2D GaAs samples typically have $n_c \approx 10^9$ cm$^{-2}$, it is unclear whether the existing conductivity exponent measurements for the putative metal-insulator transition [@twenty; @twentyone; @leturcq; @tracy2009] in the 2D density-driven case are really obtaining $\delta$, or are just measuring our calculated $\beta=\alpha + 1$, which at low values of $n$ would be rather close to the experimentally measured percolation exponent of $1.3-1.5$.
The current experiments do not really quantitatively measure $\sigma(n)$ in the $n < 2 n_c$ regime necessary for obtaining $\delta$, and we feel that much more work will be needed to establish the nature of the $\sigma(n \rightarrow n_c) \rightarrow 0$ transition observed in the laboratory. It is possible, even likely, that the existing measurements have only measured the low density (but still $n \gg n_c$) value of our calculated (non-critical) Boltzmann exponent $\beta \approx 1.3-1.5$ in n- and p- 2D GaAs systems. In conclusion, we have developed a comprehensive theory for the universal density scaling of the low-temperature transport properties of 2D and 3D doped semiconductors and graphene, concentrating on the role of background Coulomb disorder and obtaining, both theoretically and numerically, the power law exponents for the density-dependent mobility and conductivity within the Boltzmann transport theory. acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by Microsoft Q, JQI-NSF-PFC, DARPA QuEST, LPS-CMTC, and US-ONR. [99]{} We cite here some representative papers for the theoretical calculation of the impurity resistivity in 3D metals in the context of metallic residual resistivity. See, for example, V. U. Nazarov, G. Vignale, and Y.-C. Chang, arXiv:1302.1660 (2013) and references therein; M. J. Puska and R. M. Nieminen, Phys. Rev. B [**27**]{}, 6121 (1983); Yu. Yu. Tsiovkin, A. N. Voloshinskii, V. V. Gapontsev, and V. V. Ustinov, Phys. Rev. B [**71**]{}, 184206 (2005); J. P. Dekker, A. Lodder, and J. van Ek, Phys. Rev. B [**57**]{}, 12719 (1998); Raju P. Gupta, Phys. Rev. B [**35**]{}, 5431 (1987); John H. Tripp and David E. Farrell, Phys. Rev. B [**7**]{}, 571 (1973); P. T. Coleridge, N. A. W. Holzwarth, and Martin J. G. Lee, Phys. Rev. B [**10**]{}, 1213 (1974); A. R. DuCharme and L. R. Edwards, Phys. Rev. B [**2**]{}, 2940 (1970). Neil W. Ashcroft and N. 
David Mermin, [*Solid State Physics*]{} (Saunders, Orlando, 1976); N. Mott and H. Jones, [*The Theory of the Properties of Metals and Alloys*]{} (Clarendon, Oxford, 1936). Y. Hanein, U. Meirav, D. Shahar, C. C. Li, D. C. Tsui, and Hadas Shtrikman, Phys. Rev. Lett. [**80**]{}, 1288 (1998); M. Y. Simmons, A. R. Hamilton, M. Pepper, E. H. Linfield, P. D. Rose, D. A. Ritchie, A. K. Savchenko, and T. G. Griffiths, Phys. Rev. Lett. [**80**]{}, 1292 (1998); A. P. Mills, Jr., A. P. Ramirez, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. [**83**]{}, 2805 (1999); X. P. A. Gao, G. S. Boebinger, A. P. Mills, Jr., A. P. Ramirez, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. [**94**]{}, 086402 (2005). Y. Hanein, D. Shahar, J. Yoon, C. C. Li, D. C. Tsui, and Hadas Shtrikman, Phys. Rev. B [**58**]{}, R13338 (1998); M. P. Lilly, J. L. Reno, J. A. Simmons, I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, K. W. West, E. H. Hwang, and S. Das Sarma, Phys. Rev. Lett. [**90**]{}, 056806 (2003); J. Zhu, H. L. Stormer, L. N. Pfeiffer, K. W. Baldwin, and K. W. West, Phys. Rev. Lett. [**90**]{}, 056805 (2003); Xiaoqing Zhou, B. Schmidt, L. W. Engel, G. Gervais, L. N. Pfeiffer, K. W. West, and S. Das Sarma, Phys. Rev. B [**85**]{}, 041310(R) (2012). S. Das Sarma and E. H. Hwang, Phys. Rev. Lett. [**83**]{}, 164 (1999); Phys. Rev. B [**69**]{}, 195305 (2004); Phys. Rev. B [**61**]{}, R7838 (2000); Solid State Commun. [**135**]{}, 579 (2005). G. Zala, B. N. Narozhny, and I. L. Aleiner, Phys. Rev. B [**64**]{}, 214204 (2001); B. Altshuler and D. Maslov, Phys. Rev. Lett. [**82**]{}, 145 (1999). S. Das Sarma, , 5401 (1986); A. Gold and V. Dolgopolov, , 1076 (1986); F. Stern, Phys. Rev. Lett. [**44**]{}, 1469 (1980). T. Kawamura and S. Das Sarma, Phys. Rev. B [**42**]{}, 3725 (1990); Phys. Rev. B [**45**]{}, 3612 (1992); E. H. Hwang and S. Das Sarma, Phys. Rev. B [**77**]{}, 115449 (2008); Hongki Min, E. H. Hwang, and S. Das Sarma, , 085307 (2012). See, for example, S. Das Sarma, S.
Adam, E. H. Hwang, and E. Rossi, , 407 (2011), and references therein. T. Ando, A. B. Fowler, and F. Stern, , 437 (1982). E. Conwell and V. F. Weisskopf, Phys. Rev. [**77**]{}, 388 (1950); R. Dingle, Phil. Mag. [**46**]{}, 831 (1955). D. Chattopadhyay and H. J. Queisser, , 745 (1981); K. Seeger, [*Semiconductor Physics*]{} (Springer, Berlin, 1986); B. Nag, [*Electron in Compound Semiconductor*]{} (Springer, Berlin, 1986); F. Blatt, [*Physics of Electronic Conduction in Solids*]{} (McGraw-Hill, New York, 1968); B. Ridley, [*Quantum Processes in Semiconductors*]{} (Oxford, Oxford, 1982). Frank Stern, Phys. Rev. Lett. [**18**]{}, 546 (1967). J. Lindhard, Kgl. Danske Videnskab. Selskab, Mat.-Fys. Medd. [**28**]{}, No. 8 (1954). E. H. Hwang and S. Das Sarma, Phys. Rev. B [**75**]{}, 205418 (2007); E. H. Hwang, S. Adam, and S. Das Sarma, Phys. Rev. Lett. [**98**]{}, 186806 (2007). E. H. Hwang and S. Das Sarma, Phys. Rev. B [**77**]{}, 235437 (2008). H. L. Stormer, Z. Schlesinger, A. Chang, D. C. Tsui, A. C. Gossard, and W. Wiegmann, Phys. Rev. Lett. [**51**]{}, 126 (1983); R. C. Miller, D. A. Kleinman, and A. C. Gossard, Phys. Rev. B [**29**]{}, 7085 (1984); D. A. Broido and L. J. Sham, Phys. Rev. B [**31**]{}, 888 (1985); Y.-Ch. Chang and R. B. James, Phys. Rev. B [**39**]{}, 12672 (1989); R. Ferreira and G. Bastard, , 9687 (1991). Y.-W. Tan, Y. Zhang, K. Bolotin, Y. Zhao, S. Adam, E. H. Hwang, S. Das Sarma, H. L. Stormer, and P. Kim, Phys. Rev. Lett. [**99**]{}, 246803 (2007); J.-H. Chen, C. Jang, S. Adam, M. S. Fuhrer, E. D. Williams, and M. Ishigami, Nature Physics [**4**]{}, 377 (2008). S. Das Sarma, M. P. Lilly, E. H. Hwang, L. N. Pfeiffer, K. W. West, and J. L. Reno, Phys. Rev. Lett. [**94**]{}, 136401 (2005). M. J. Manfra, E. H. Hwang, S. Das Sarma, L. N. Pfeiffer, K. W. West, and A. M. Sergent, Phys. Rev. Lett. [**99**]{}, 236402 (2007). R. H. Harrell, K. S. Pyshkin, M. Y. Simmons, D. A. Ritchie, C. J. B. Ford, G. A. C. Jones, and M. Pepper, Appl. Phys. Lett.
[**74**]{}, 2328 (1999). M. R. Melloch, Thin Solid Films, [**231**]{}, 74 (1993). Loren Pfeiffer, K. W. West, H. L. Stormer, and K. W. Baldwin, Appl. Phys. Lett. [**55**]{}, 1888 (1989). M. Shayegan, V. J. Goldman, C. Jiang, T. Sajoto, and M. Santos, Appl. Phys. Lett. [**52**]{}, 1086 (1988). Kazuhiko Hirakawa and Hiroyuki Sakaki, Phys. Rev. B [**33**]{}, 8291 (1986). L. Pfeiffer and K. West, private communication and unpublished. J. D. Watson, S. Mondal, G. A. Csáthy, M. J. Manfra, E. H. Hwang, S. Das Sarma, L. N. Pfeiffer, and K. W. West, Phys. Rev. B [**83**]{}, 241305 (2011). J. D. Watson, S. Mondal, G. Gardner, G. A. Csathy, and M. J. Manfra, Phys. Rev. B [**85**]{}, 165301 (2012). S. Das Sarma and E. H. Hwang, Phys. Rev. B [**87**]{}, 035415 (2013); S. Das Sarma, E. H. Hwang, and Qiuzi Li, Phys. Rev. B [**85**]{}, 195451 (2012). A. Gold, Appl. Phys. Lett. [**54**]{}, 2100 (1989). S. Adam, E. H. Hwang, V. M. Galitski, and S. Das Sarma, Proc. Natl. Acad. Sci. U.S.A. [**104**]{}, 18392 (2007); E. Rossi and S. Das Sarma, Phys. Rev. Lett. [**107**]{}, 155502 (2011). S. Das Sarma, Phys. Rev. Lett. [**50**]{}, 211 (1983). T. Kawamura and S. Das Sarma, Solid State Commun. [**100**]{}, 411 (1996); E. Buks, M. Heiblum, and Hadas Shtrikman, Phys. Rev. B [**49**]{}, 14790 (1994); S. Das Sarma and S. Kodiyalam, Sem. Sci. Tech. [**13**]{}, A59-A62 (1988). Qiuzi Li, E. H. Hwang, E. Rossi, and S. Das Sarma, Phys. Rev. Lett. [**107**]{}, 156601 (2011); Jun Yan and Michael S. Fuhrer, Phys. Rev. Lett. [**107**]{}, 206601 (2011). R. Leturcq, D. L'Hote, R. Tourbot, C. J. Mellor, and M. Henini, Phys. Rev. Lett. [**90**]{}, 076402 (2003); Y. Meir, [*ibid.*]{} [**83**]{}, 3506 (1999). L. A. Tracy, E. H. Hwang, K. Eng, G. A. Ten Eyck, E. P. Nordberg, K. Childs, M. S. Carroll, M. P. Lilly, and S. Das Sarma, Phys. Rev. B [**79**]{}, 235307 (2009). S. Adam, S. Cho, M. S. Fuhrer, and S. Das Sarma, Phys. Rev. Lett. [**101**]{}, 046404 (2008). E. H. Hwang and S. Das Sarma, Phys. Rev.
B [**82**]{}, 081409 (2010); Q. Li, E. H. Hwang, and S. Das Sarma, Phys. Rev. B [**84**]{}, 115442 (2011). C. Jiang, D. C. Tsui, and G. Weimann, Appl. Phys. Lett. [**53**]{}, 1533 (1988).
--- abstract: | We report a combined theoretical and experimental study of the spectral and polarization dependence of near resonant radiation coherently backscattered from an ultracold gas of $^{85}$Rb atoms. Measurements in a $\pm 6$ $MHz$ range about the $5s^{2}S_{1/2}\rightarrow 5p^{2}P_{3/2}$ $F=3\rightarrow F^{\prime }=4$ hyperfine transition are compared with simulations based on a realistic model of the experimental atomic density distribution.  In the simulations, the influence of heating of the atoms in the vapor, magnetization of the vapor, finite spectral bandwidth, and other nonresonant hyperfine transitions is considered.  Good agreement is found between the simulations and measurements. author: - 'D.V. Kupriyanov, I.M. Sokolov, and N.V. Larionov' - 'P. Kulatunga, C.I. Sukenik, S. Balik, and M.D. Havey' title: 'Spectral Dependence of Coherent Backscattering of Light in a Narrow-Resonance Atomic System' --- Introduction ============ Coherent wave scattering effects in disordered media display an extraordinary variety of phenomena which are of both fundamental and practical concern.  Of particular interest is that coherent wave scattering shows a broad universality which makes possible a qualitatively similar description for different types of wave excitation in a variety of media. These range, as an illustration, from enhancement of light scattering off the lunar regolith and the rings of Saturn on the one hand [@Mish], to explanation of peculiarities in propagation of waves in the solid earth on the other [@POAN]. In addition, coherent wave scattering is a useful technique for diagnosing the average properties of scatterers in turbid media, and for assessing relatively thin surface layers in biological and mechanical materials [@Sheng; @LagTig; @POAN]. The propagation of light waves in natural photonic materials such as opal gives the semiprecious gemstone its highly valued beauty.
Of fundamental scientific importance, coherent wave scattering was first recognized by Anderson [@Anderson] in the context of interference of electron wave scattering in conductors. As the scattering mean free path decreases and becomes shorter than a characteristic length on the order of the wavelength, wave diffusion slows as a result of wave interference. The limiting case where diffusion ceases is called strong localization, where the propagating wave becomes spatially localized inside the medium. For electromagnetic radiation [@Sheng; @LagTig], two recent reports of strong localization have been made, one in the optical regime [@Wiersma1], and the other for microwave radiation [@Chabanov1]. A major long-term and fundamental goal of the research presented here, and of other researchers in the field, is to attain strong localization of light, but in an ultracold atomic vapor. Quite recently, coherent multiple light scattering has been observed in ultracold atomic gases, which form a unique and flexible medium for fundamental studies and practical applications [@Labeyrie1; @Labeyrie2; @Kulatunga1; @Bidel]. In all cases, the essential physical mechanisms are due to interferences in multiple wave scattering from the components of the medium; under certain not very stringent conditions the interferences survive configuration averaging, thus generating macroscopic observables. First observations and initial explanations for electromagnetic radiation were of the so-called coherent backscattering (CBS) cone in disordered media [@Ishimaru; @Wolf; @Albeda]. For radiation incident on a diffusive medium, the effect manifests itself as a spatially narrow ($\sim $1 mrad) cusp-shaped intensity enhancement in the nearly backwards direction [@Sheng; @LagTig]. As electromagnetic waves are not scalar, the detailed shape and size of the enhancement depends on the polarization of the incident and the detected light.
Nevertheless, for classical radiation scattering from a $^{1}S_{0}\rightarrow $ $^{1}P_{1}$ atomic transition, the largest possible interferometric enhancement is an increase of the intensity by a factor of two. Atomic gases, because they have exceptionally high-Q resonances, and because the light scattering properties may be readily modified by light polarization or intensity, atomic density, and applied external fields, represent an interesting and flexible medium in which to study the role of multiple scattering. However, to achieve the full potential of atomic scatterers as a practical medium for such studies, it is necessary to significantly cool the atoms, in order to suppress the dephasing effects of atomic motion. Coherent backscattering interference has, in fact, been measured in $^{85}$Rb [@Labeyrie1; @Kulatunga1] and Sr [@Bidel], and quite successfully modeled for resonant and near-resonant scattering as well [@CBSth1; @KCBSth1; @KCBSth2; @Jonckheere]. Measurements have also been made of the magnetic field dependence of the coherent backscattering line shape [@Labeyrie3], and of the time-dependence, for a particular geometry, of light scattered in the coherent scattering regime [@Kaiser]. However, there remains a significant range of physical parameters associated with the various processes which have not yet been fully explored.  Among these are the influence of light intensity, nonzero ground state multipoles such as alignment or orientation, cooperative multi-atom scattering associated with higher atomic density, and more general geometries for time-dependent studies.  In the present report we concentrate attention on another variable, that being the dependence of the coherent backscattering enhancement on detuning of the probe beam from exact resonance.  It is clear that non-resonant excitation of the atomic sample results in a smaller optical depth (and associated larger transport mean free path) of the medium [@CBSth1; @KCBSth1].  
However, theoretical and experimental results presented here reveal that other more subtle effects, including far-off-resonance optical transitions, heating of the vapor by multiple light scattering and self-magnetization of the vapor during the CBS phase, can have significant effects on the spectral variation of the CBS enhancement. In the following sections we first present an overview of the physical system, including how atomic samples are prepared and characterized and a brief review of measurements of coherent backscattering from an atomic vapor.  This is followed by a summary of the approach to simulate coherent multiple scattering in an ultracold atomic gas.  We then present our experimental and theoretical results, with focus on various mechanisms that can influence the spectral variation of the coherent backscattering enhancement factor. Overview of Physical System =========================== Preparation and description of ultracold atomic sample ------------------------------------------------------ Preparation of the ultracold atomic $^{85}$Rb sample used in the measurements described in this paper has been described in detail elsewhere [@Kulatunga1], but for completeness will be briefly reviewed here.  The samples are formed in a vapor-loaded magneto-optical trap (MOT) which is operated in a standard six-beam configuration. The trapping laser is detuned by $-2.7\gamma$ from resonance, where $\gamma \sim 5.9$ $MHz$ is the natural linewidth of the $F=3\rightarrow F^{\prime }=4$ hyperfine transition in $^{85}$Rb. Laser light for the MOT is derived from an injection locked diode laser (Sanyo DL7140-201) which is slaved to a master laser (Hitachi HG7851G). The master laser is locked to a crossover peak produced in a Doppler-free saturated absorption spectrometer. Laser locking is achieved by dithering the master laser current and demodulating the saturation absorption spectrum with a lock-in amplifier.
In order to produce the required light for hyperfine repumping, the slave laser is microwave modulated to produce a sideband at the wavelength corresponding to the $F=2\rightarrow F^{\prime }=3$ hyperfine transition. Light exiting the slave passes through an acousto-optic modulator (AOM), which is used as an optical switch, and is subsequently coupled into a single mode fiber optic patchcord. The combination of the AOM switching and fiber coupling results in an $\sim $65 dB attenuation of the trapping laser light. After exiting the fiber, the trapping light is split into three beams and sent to the MOT. Each beam contains $\sim $3.3 mW of light and is retroreflected, generating an average $\sim $19 mW in the center of the chamber. In order to ascertain the number and density of confined atoms, we employ absorption and fluorescence imaging. We find that the MOT is not completely spherical [@Kulatunga1; @CBSth1], but rather is somewhat ‘cigar-shaped’ having $1/e^{2}$ Gaussian radii of 1.1 mm and 1.38 mm, where the radius is defined according to the density distribution $n(r)=n_{0}\exp (-r^{2}/2r_{0}^{2})$, $n_{0}$ being the peak density. This distribution results in an optical depth through the center of the MOT of about 6, where the optical depth $b$ is defined as resulting in an attenuation of the incident intensity by a factor $e^{-b}$. We determine the peak optical depth by direct measurement of the transmitted CBS light intensity through the central region of the MOT. In these measurements, probing of the density takes place when the MOT lasers are off, for they result in a significant excited state fraction, decreasing the measured optical depth. For a Gaussian atom distribution in the MOT, the optical depth is given by $b=\sqrt{2\pi }\sigma _{0}n_{0}r_{0}$, where $\sigma _{0}$ is the cross-section for light scattering [@Metcalf].
With the values given above and an average Gaussian radius $r_{0}=1.2$ $mm$, we calculate that the MOT contains approximately $4.3\times 10^{8}$ atoms and has a peak density $n_{0}=1.6\times 10^{10}$ atoms-cm$^{-3}$.  Note that these parameters are large enough to ensure an optical depth sufficient for coherent multiple scattering, but that the density is not so large as to necessitate consideration of cooperative pair scattering in the vapor. ![\[Figure1\]Schematic diagram of the coherent backscattering apparatus. Shown in the figure is an acousto optic modulator (AOM), magneto optic trap (MOT), linear polarizer (LP), quarter wave plate (QWP), and a charge coupled device (CCD) camera.](josafig1a1.eps "fig:"){width="3.0"}\ The vapor-loaded MOT is formed in a custom-made stainless steel ultrahigh vacuum (UHV) chamber that is pumped by both an ion and titanium sublimation pump. The UHV chamber is fitted with a stainless-steel sidearm containing a valved and heated Rb reservoir.  Because we are observing light which is backscattered from our sample, it is critical that all other backscattered reflections are suppressed. A major source of unwanted back-scattered light is from the vacuum viewports on the MOT chamber. In order to minimize this light, we installed wedged optical quality windows having a “V”-type antireflection (AR) coating at 780 nm on the probe-laser (described in the following section) entrance and exit ports. The AR coating results in less than 0.25% reflectivity at 780 nm. Further, the window through which the probe laser beam enters is mounted on a UHV bellows, allowing us to better direct unwanted reflections from entering the charge-coupled device (CCD) detector. We also found it necessary to replace the standard window on the CCD camera with a wedged and near-infrared AR coated window in order to suppress interference fringe formation in the CCD images.
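The quoted atom number and optical depth can be checked with a short numerical estimate. The densities and radii below are taken from the text; the effective scattering cross section $\sigma_0$ is our assumed value, chosen to reproduce the measured optical depth, not a number from the paper:

```python
import math

# Back-of-the-envelope check of the MOT parameters (n0 and r0 from the text;
# sigma_0 is our assumed effective cross section, not a value from the paper).
n0 = 1.6e10          # peak density (cm^-3)
r0 = 0.12            # average Gaussian radius (cm)

# Total atom number for n(r) = n0 * exp(-r^2 / 2 r0^2): N = (2*pi)^(3/2) * n0 * r0^3
N = (2 * math.pi) ** 1.5 * n0 * r0 ** 3
print(f"N ~ {N:.1e} atoms")          # ~4.4e8, consistent with the quoted 4.3e8

# Optical depth through the center: b = sqrt(2*pi) * sigma_0 * n0 * r0
sigma_0 = 1.25e-9    # assumed effective cross section (cm^2)
b = math.sqrt(2 * math.pi) * sigma_0 * n0 * r0
print(f"b ~ {b:.1f}")                # ~6, matching the measured optical depth
```

The required $\sigma_0 \sim 10^{-9}$ cm$^2$ is of the order of the resonant cross section at 780 nm, as expected for near-resonant probing.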
Measurement of atomic coherent backscattering --------------------------------------------- We present in this section a brief overview of the coherent backscattering apparatus used to obtain the experimental results reported here. Further details may be found in Kulatunga *et al.* [@Kulatunga1], where the experimental apparatus used to study coherent radiative transfer in an ultracold gas of $^{85}$Rb is described. A schematic diagram of the arrangement is shown in Figure 1.  The external light source used in the experiment is provided by an external cavity diode laser that is stabilized by saturated absorption to a crossover resonance associated with hyperfine components of the 5s $^{2}$S$_{1/2}$ $\rightarrow$ 5p $^{2}$P$_{3/2}$ transition. With reference to Figure 2, which shows relevant hyperfine transitions in $^{85}$Rb, the laser may be tuned several hundred MHz from nearly any hyperfine resonance in $^{85}$Rb by a standard offset locking technique using an acousto-optic modulator. Detuning from resonance is defined by $\Delta =\omega _{L}-\omega _{0}$, where $\omega _{L}$ is the CBS laser frequency and $\omega _{0}$ is the $F=3\rightarrow F^{\prime }=4$ resonance frequency. The laser bandwidth is a few hundred kHz, and the typical output power is $\sim $5 mW. The laser output is launched into a single mode polarization preserving fiber and then expanded and collimated to a $1/e^{2}$ diameter of about 8 mm. The polarization of the resulting beam is selected, and the beam is then passed through a nonpolarizing and wedged 50-50 beam splitter that passes approximately half of the laser power to the atomic sample. The backscattered radiation is directed by the same beam splitter to a field lens of 45 cm focal length, which condenses the light on the focal plane of a liquid nitrogen cooled CCD camera.
The diffraction limited spatial resolution is about 100 $\mu rad$, while the polarization analyzing power is greater than 2000 at 780 nm. There are four polarization channels that are customarily studied in coherent backscattering. For linearly polarized input radiation, two of these correspond to measuring the backscattered light in two mutually orthogonal output channels. This is readily achieved by removing the quarter wave plate, as shown in Figure 1, and rotating the linear polarization analyzer located before the field lens. For input radiation of definite helicity, that being generated by the linearly polarized input and the quarter wave plate, the other two channels correspond to the helicity of the backscattered radiation.  This is similarly measured by rotation of the linear polarizer just before the field lens. The instrumentation as described, with some modifications to suppress the intense trapping beam fluorescence, has been previously used to study coherent backscattering in ultracold atomic gases and in solid and liquid samples as well [@Kulatunga1]. ![\[Figure2\]Hyperfine energy levels of relevant transitions in atomic $^{85}$Rb.](josafig2a2.eps "fig:"){width="3.0"}\ Measurements of the backscattered light are made by exposing the ultracold atoms to the CBS laser light for an interval of 0.25 ms temporally centered in a 5 ms dark interval during which the MOT lasers are turned off.  The MOT lasers are then turned back on for 20 ms, which is sufficiently long that the cold atom sample is reconstituted.  This procedure is repeated for 300 s, which constitutes a single experimental run.  A run of 300 s with the MOT absent allows measurement of the background, which is principally due to hot atom fluorescence excited by the CBS laser.  
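A rough bookkeeping of the measurement sequence just described can be sketched as follows (the interval durations are from the text; the 25 ms total cycle is our own inference from the quoted 5 ms dark and 20 ms MOT-on intervals):

```python
# Duty-cycle estimate for one experimental run (intervals from the text;
# the 25 ms total cycle period is our assumption: 5 ms dark + 20 ms MOT on).
probe_ms = 0.25      # CBS laser exposure per cycle
dark_ms = 5.0        # MOT-off interval containing the probe pulse
mot_ms = 20.0        # MOT-on recovery interval
run_s = 300.0        # duration of one experimental run

cycle_ms = dark_ms + mot_ms                  # 25 ms per cycle (assumed)
cycles = run_s * 1000.0 / cycle_ms           # cycles per run
exposure_s = cycles * probe_ms / 1000.0      # total accumulated probe exposure

print(cycles, exposure_s)   # 12000.0 cycles, 3.0 s of accumulated signal
```

Each 300 s run thus accumulates only a few seconds of actual probe exposure, which is why the background run of equal length is taken under the same timing sequence.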
Attenuation of the CBS laser by the MOT during the data taking phase, which reduces the amount of background during the backscattering run, in comparison with the background phase, is accounted for by auxiliary measurements of the MOT attenuation of the CBS laser intensity.  Finally, the saturation parameter for the CBS laser is less than s = 0.08 on resonance, which with the 0.25 ms measurement interval is sufficient to minimize mechanical action of the CBS laser beam on the atomic sample. Brief overview of the theoretical treatment ------------------------------------------- A general theory of the coherent backscattering process in an ultracold atomic gas has been developed recently by several groups [@CBSth1; @KCBSth1; @KCBSth2]. The theoretical development essentially maintains the earlier conceptions of weak localization in the atomic scattering problem [@Shlyap], and takes into account the influence of the optical depth and sample size on the character of the coherent backscattering cone. In spite of the fact that the basic ladder and interference terms, describing the process, have a similar structure in all the theoretical approaches, there are certain types of accompanying physical phenomena which can become more important as more detailed experimental or theoretical spectral analysis is considered. In our earlier theoretical approach [@CBSth1], the general analytical development was realized by a Monte-Carlo simulation of coherent multiple scattering in an ultracold ($T<50$ $\mu K$) gas of $^{85}$Rb atoms confined in a magneto optical trap.  The simulation was closely matched to the experimental density distribution and temperature conditions as described in the previous paragraphs.  The radiation field frequency was selected to be in the vicinity of the $F=3\rightarrow F^{\prime }=4$ hyperfine transition, and to have polarization states and a weak-field intensity corresponding to the experimental realization.
The effects of sample size, and the spatial and polarization dependence of the coherent backscattering cone, were considered in detail.  Some aspects of the spectral variation of the coherent backscattering enhancement factor were also considered, including the surprisingly strong influence of the far-off-resonance $F=3\rightarrow F^{\prime }=3$ and $F=3\rightarrow F^{\prime }=2$ hyperfine transitions. However, other physical effects can have a profound influence on the spectral variations, and we consider some of those in the present report. Among these, for currently achievable laboratory conditions, are (i) heating of the atomic gas by multiple scattering of the probing light source, (ii) optical pumping effects initiated by the probe or MOT lasers, and (iii) the influence of the finite bandwidth of the probe laser. Each of these effects has been ignored in earlier quantitative studies, since their role is not crucial to calculations of the basic characteristics of the CBS process. Of particular interest is the influence of atomic motion and internal polarization variables on the spectral variations of the CBS enhancement. We point out that to properly account for these factors, it is necessary to consider the influence of the mean field on both attenuation and dispersion of the multiply scattered light, and to include also the anisotropic Green’s function for light propagating along a chain of scatterers.  As is seen in the following section, inclusion of some such effects, in isolation or combination, may well be essential to better agreement between experimental and theoretical results. Finally we emphasize that the simulations are made for conditions quite close to those in the experiment. These conditions include sample size, temperature, shape and density, and the characteristic intensity of the CBS laser beam.  These conditions are such that cooperative scattering may be neglected, and such that saturation of the atomic transition is also negligible.
In simulations of thermal effects, and of the influence of atomic magnetization on the coherent backscattering enhancement, more severe conditions are used in order to illustrate the possible range of influence of these effects. Experimental and Theoretical Results ==================================== In this section we present experimental and theoretical results associated with backscattering of near-resonance radiation from ultracold atomic $^{85}$Rb.  First we present experimental measurements of the spectral variation of the coherent backscattering enhancement, in a range of approximately $\pm 6MHz$, as a function of detuning from the $F=3\rightarrow F^{\prime }=4$ hyperfine transition.  These results are directly compared to theoretical simulations, made with inclusion of the influence of off-resonant hyperfine transitions and considering an ultracold sample not at absolute zero. Second, we present simulations of several effects which should generally be considered when modelling coherent backscattering from ultracold atomic vapors. Spectral variation of the CBS enhancement: experimental results --------------------------------------------------------------- ![\[Figure3\]Comparison of experimental and theoretical enhancement spectra in the *l* $\|$ *l* polarization channel.  Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm$.](josafig3a3.eps "fig:")\ ![\[Figure4\]Comparison of experimental and theoretical enhancement spectra in the *l* $\perp $ *l* polarization channel.  
Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm$.](josafig4a4.eps "fig:")\ ![\[Figure5\]Comparison of experimental and theoretical enhancement spectra in the helicity preserving ($h||h$) polarization channel.  Theoretical spectra show modification by Doppler broadening, which is varied from $kv_{0}=0$ to $kv_{0}=0.25\protect\gamma $, in an ensemble of ${}^{85}$Rb atoms having a peak density of $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and a Gaussian radius $r_{0}=1\;mm$.](josafig5a5.eps "fig:")\ Measurements of the variation of the coherent backscattering enhancement with detuning of the CBS laser, in a $\pm 6MHz$ range around the $F=3\rightarrow F^{\prime }=4$ hyperfine transition, are shown in Figures 3-6.  The measurements have a typical uncertainty on the order of 2%, due to a combination of statistical uncertainty from counting statistics in the spatial intensity measurements and an estimated uncertainty from the cone fitting procedure, as described previously [@Kulatunga1].  In addition, there is residual noise in the spatial distribution of backscattered light due to speckle in the $l$ $||$ $l$ and $h\perp h$ channels; slight variations in speckle appearing in background scattered light from run to run do not completely average to zero.  This effect is responsible for the somewhat larger fluctuations in the extracted enhancement factors for these two polarization channels.  There is a small systematic reduction of the peak enhancement due to the finite spatial resolution of the backscattering polarimeter, an effect which arises from smoothing of the nearly cusp-shaped CBS cone near its peak by the finite spatial resolution of the instrument.  
This is accounted for by using a Lorentzian model of the spatial response and the CBS cone, which allows an estimate of the amount of reduction.  Justification for this is made by the fact that the spatial variation of the simulated cones is to a good approximation described by a Lorentzian, as is the measured spatial response of the experimental apparatus.  In our case, accounting for this effect amounts to a maximum decrease in the peak enhancement of $\sim 0.01$ for the narrowest cones, which appear in the $h$ $||$ $h$ data. This estimated correction is not made to the data in Figs. 3-6. On the scale of the figures, there is negligible uncertainty in the detuning measurements. \ Also shown in the figures are simulations of the enhancement for several different values of the average Doppler shift of the atoms, measured in units of the natural spectral width $\gamma $.  It appears, from comparison of the experimental data and the simulations, that the data are better described by inclusion of some nonzero average heating of the vapor on the order of a few hundred kHz.  Note that in a following section we model the influence of the finite bandwidth of the CBS laser; it is seen there that the decrease in enhancement on resonance, as seen in Figures 3-6, cannot be explained by that mechanism. Influence of dynamical heating ------------------------------ The initial temperature of the atomic ensemble is $\sim $ $50\;\mu K$, which makes negligible any possible spectral manifestations caused by atomic motion. However, during the interaction time, the probe light produces a certain mechanical action on the atoms. The radiation force associated with the probe light can accelerate the atoms and heat them to temperatures where the Doppler broadening and shift can become comparable with the natural linewidth.
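A rough numerical sketch (using our own standard constants for $^{85}$Rb and the 780 nm line, not values from the paper) shows the relevant scales: at the initial 50 $\mu K$ the Doppler parameter is only about $0.02\gamma$, while heating to roughly 1 mK brings it to the $0.1\gamma$ level at which the simulated spectra change noticeably:

```python
import math

# Doppler parameter k*v0 in units of the natural width gamma, for the MOT
# conditions quoted in the text (physical constants are our own inputs).
kB = 1.381e-23                  # Boltzmann constant (J/K)
m = 85 * 1.661e-27              # 85Rb atomic mass (kg)
lam = 780e-9                    # transition wavelength (m)
gamma_Hz = 5.9e6                # natural linewidth gamma (Hz)

k = 2 * math.pi / lam
ratios = {}
for T in (50e-6, 1e-3):         # initial MOT temperature, and ~1 mK after heating
    v0 = math.sqrt(2 * kB * T / m)                 # most probable speed
    ratios[T] = k * v0 / (2 * math.pi * gamma_Hz)  # k*v0 in units of gamma
    print(f"T = {T:.0e} K: k v0 ~ {ratios[T]:.3f} gamma")
```

Since $kv_0 \propto \sqrt{T}$, heating by a factor of twenty (50 $\mu K$ to 1 mK) raises the Doppler parameter by a factor of $\sqrt{20}\approx 4.5$, from a negligible value to one that visibly modifies the enhancement spectra.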
It is important to recognize that the initial scattering event transfers momentum from the CBS laser to the atomic ensemble, but that subsequent scattering of the light deep within the sample is more nearly isotropic, resulting in some effective heating of the atoms during the CBS data taking phase.  Although the dynamical process is complex, and is currently under study, we present here a short discussion of this process by comparing several scanning spectra averaged over an equilibrium Maxwell distribution of atom velocities, that is, for different temperatures and correspondingly different Doppler widths. The drift velocity of the atomic cloud, which also exists but was ignored in our calculations, leads (to a good approximation) to a Doppler shift of all the spectral dependencies into the blue wing with respect to the laser frequency $\omega _{L}$. In addition to the experimental data, there are also shown in Figures 3-6 calculated spectral variations of the enhancement factor for all four polarization channels. In the graphs, the velocity is indicated as a fraction of the natural width of the atomic transition, $kv_{0}$ ($v_{0}=\sqrt{2k_{B}T/m}$ is the most probable velocity in the atomic ensemble).  It is seen from these graphs that the shape of the spectra becomes significantly modified, even for an average Doppler breadth of $0.1\gamma $, where $\gamma $ is the natural atomic width.  Similar results are also obtained in the linear polarization channels.  The overall trend suggests that dynamical heating, or some other mechanism which modifies the spectrum in a similar way, will be required to describe the experimental results. Of particular interest is the helicity preserving channel (Figure 5). Unique to this case is an increase in the enhancement, even for no atomic motion, for moderate detunings away from exact resonance. As described in a previous report [@CBSth1], this is due to the suppressed role of Raman-type single scattering.
The asymmetry is due to the non-negligible influence of far off-resonance hyperfine transitions on the coherent backscattering enhancement. This effect is suggested in the overall spectral trend of the data, although more precise measurements are clearly in order. In the following section, we see that a much larger enhancement increase at greater detunings can also arise from dynamical (or static) magnetization of the vapor along the direction of propagation of the CBS laser beam.

![\[Figure7\]Scanning spectra of CBS enhancement for (a) *h* $||$ *h* and (b) *h* $\perp $ *h* polarization channels. Spectra are shown for an average Doppler broadening varying from $kv_{0}=0$ to $kv_{0}=\protect\gamma $, in the ensemble of ${}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.](josafig7a7.eps "fig:")\

We finish this section by presenting for completeness theoretical results over a wider spectral range for the detuning dependence of the CBS enhancement. These are shown in Figure 7 (a) for the helicity preserving channel and in Figure 7 (b) for the helicity non-preserving channel. The results show that there is some persistence of the CBS enhancement, even for an average Doppler broadening on the order of the natural linewidth of the atomic transition, and that this enhancement increases at larger detunings, before falling off at the largest offsets, when single scattering becomes dominant.

Optical pumping effects
-----------------------

Effects on coherent radiative transport in an atomic vapor will generally depend on the polarization of the incident light and on the nonzero ground state multipoles in the atomic vapor. In our experimental arrangement, the atoms are confined to a magneto-optic trap, in which there exist generally spatially varying hyperfine multipoles.
However, the MOT lasers are typically turned off for several ms before taking data in a coherent backscattering experiment, and residual macroscopic atomic polarization should be largely dissipated on that time scale. Nevertheless, there can be hyperfine multipoles generated by the CBS laser itself, dynamically polarizing the vapor. The main argument for why optical pumping should not manifest itself in the CBS process comes from the reasonable assumption that, under typical experimental conditions, the probe radiation is weak and characterized by a small saturation parameter. However, it is quite clear that in an ensemble of cold atoms the relaxation mechanisms in the ground state, which are mainly collisional, play a reduced role and become even negligible. Then, after each cycle of interaction with the polarized CBS light, the atomic ensemble can accumulate a certain degree of polarization, which may be either of an orientation or alignment type. Of particular interest to us here is the case in which the incident radiation has definite helicity, which can magnetize the vapor along the CBS propagation direction. It is quite difficult to estimate precisely the actual dynamical spatial distribution, within the atomic cloud, of the polarization generated during the whole interaction cycle. Therefore in this section we only qualitatively illustrate how optical pumping effects can change a basic characteristic of the CBS process such as the enhancement factor.

![\[Figure8\]Diagram explaining the CBS phenomenon for double scattering in an ensemble of oriented ${}^{85}$Rb atoms. There is only one transition amplitude in the helicity preserving channel, which leads to maximal enhancement of backward scattered light. The direct and reciprocal transitions and photon paths are shown by solid and dashed arrows respectively.](josafig8a8.eps "fig:"){width="2.5in"}\

Consider probing the atomic sample with positive helicity circularly polarized radiation.
Let us consider further that, due to optical pumping, the atoms become oriented only along the propagation direction of the light beam. In steady state, following a sufficiently long pumping time, if there is no relaxation in the ground state, the atoms should be concentrated in the Zeeman sublevel $F=3,M=3$; see Figures 2 and 8. Of course, this is an idealized case which can never be precisely attained in reality, but such a model situation is convenient for illustrative purposes. The spectral variations of the enhancement factor for such an oriented ensemble are shown in Figure 9 for the case of monochromatic probe radiation and for a Gaussian-type cloud with the peak density $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and with a radius $r_{0}=1\;mm$. The spectral variation of the enhancement in the helicity preserving channel shows quite unusual behavior, in that there is no reduction of the CBS enhancement in the spectral wings. On the contrary, the enhancement factor approaches its maximal possible value of two. The limiting factor of two is normally associated with Rayleigh-type scattering on classical objects, but here we deal with Rayleigh-type scattering under quantum conditions that are approximately attainable, though not typical. This result may be explained by the simple but fundamental property that in such a coherent atomic ensemble there is no single-atom scattering of Raman-type radiation in the backward direction, which potentially could also be a source of backscattered light in the helicity-preserving channel. Moreover, the partial contribution of only double scattering on oriented atoms in an optically thin sample causes the enhancement factor to take the maximum possible numerical value of two. This can be understood by turning to Figure 8, where it is shown that there is only one channel, or one product of the transition matrix elements, contributing to the scattering amplitude of double Rayleigh scattering, that is, in the helicity preserving channel.
These are ideal conditions for observing maximal enhancement in the CBS process. In higher orders there are several partial contributions and not all of them can interfere. This, as usual, leads to a substantial reduction of the interference contribution to the total intensity of scattered light. Due to this reduction the enhancement factor considerably decreases in the spectral domain near resonance scattering, as shown in Figure 9. Thus, in the wings of the helicity preserving curve in the graph of Figure 9, a unique situation is revealed in which *in an optically thin medium under special conditions the enhancement factor can increase to its maximal value*.

![\[Figure9\]Scanning spectra of enhancement for circularly polarized probe light in an ensemble of ${}^{85}$Rb atoms with 100% orientation along the direction of light propagation. The spectra were calculated for a Gaussian-type atomic cloud with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.](josafig9a9.eps "fig:"){width="2.5in"}\

![\[Figure10\]The output spectral response of the CBS light when the input circularly polarized laser radiation, modeled by a Lorentzian spectrum (\[t1\]) with $\protect\omega _{L}=\protect\omega _{0}+1.5\protect\gamma $ and $\protect\gamma _{L}=\protect\gamma /6$, is tuned in the blue wing of the $F=3\rightarrow F^{\prime }=4$ optical transition of ${}^{85}$Rb. The first graph (a) shows the distortion of the input Lorentzian profile (dotted curve) for the total ladder and interference contribution; it is normalized to the total output intensity of the CBS light. The second graph (b) shows the distortion for the interference term only; it is normalized according to the corresponding enhancement factor. Both dependencies relate to the same helicity preserving channel.](josafig10a10.eps "fig:"){width="2.5in"}\

The multiple scattering in the helicity non-preserving channel shows more ordinary behavior.
The disappearance of CBS in the wings is caused by the dominant contribution of single scattering events as the sample becomes optically thin. We point out that there is, for this polarization channel also, a certain increase of the maximal value of the enhancement in comparison with a non-oriented atomic ensemble. Here, as in the linear polarization channels, we see that the optical pumping phenomenon leads to some quantitative but not qualitative changes in the observation of the CBS process. However, the combined results of the numerical simulations presented in this and the previous section suggest that the experimental results may be essentially modified by the combined influence of thermal and optical pumping effects.

The finite bandwidth of the probe light spectrum
------------------------------------------------

In an experiment, the CBS probe laser operates ideally in a single-mode regime and its spectral bandwidth is much less than the natural relaxation rate of the atoms. In reality, however, the difference is not necessarily so great that the spectral distribution of the laser radiation can be completely ignored. Typically, in our experiments on spectral scanning of the sample of rubidium atoms, which have a resonance-line natural decay rate $\gamma \sim 5.9\;MHz$, the laser radiation has a bandwidth of less than $1\;MHz$. For the multiple scattering process in higher orders, the scanned spectral profile of the sample is formed as a successive overlap of individual profiles per scattering event, so the effective output shape reveals a much narrower spectral variance than $\gamma $. Thus the bandwidth of the laser mode can become comparable with the spectral inhomogeneity in the sample spectrum associated with partial contributions of the higher scattering orders. This can be quantitatively discussed with the following model of quasi-monochromatic single-mode laser radiation.
To define the basic parameters we approximated the assumed homogeneously broadened spectrum of the CBS laser by a Lorentzian profile $$I(\omega )=I\,\frac{\gamma _{L}}{\left( \omega -\omega _{L}\right) ^{2}+\left( \gamma _{L}/2\right) ^{2}} \label{t1}$$ where $\omega _{L}$ and $\gamma _{L}$ are the carrier frequency and the spectral bandwidth of the laser radiation respectively, and $I$ is the total intensity of the incident laser radiation. The spectrum obeys the following normalization condition $$I\;=\;\int_{-\infty }^{\infty }\frac{d\omega }{2\pi }\,I(\omega ) \label{t2}$$ where in the quasi-monochromatic approximation there is no difference between $0$ and $-\infty $ in the lower limit of this integral. The basic idea in this case is that, in comparison with purely monochromatic radiation, the spectral response of the initially symmetric but broadened input spectral profile (\[t1\]) should be significantly distorted by the sample due to effects of multiple scattering. In Figure 10 we show, in the helicity preserving channel, the output spectral response in the backward direction when the input circularly polarized laser radiation, modeled by the spectrum of Eq. (\[t1\]) with $\omega _{L}=\omega _{0}+1.5\gamma $ and $\gamma _{L}=\gamma /6$, is tuned in the blue wing of the $F=3\rightarrow F^{\prime }=4$ optical transition of ${}^{85}$Rb with resonant frequency $\omega _{0}$. The first graph, plotted in Figure 10 (a), shows the distortion of the input Lorentzian profile for the total ladder and interference contribution, the intensity being normalized to the total output intensity of the CBS light. In turn, the second graph, depicted in Figure 10 (b), shows the distortion of the interference term only. This term is normalized according to the corresponding enhancement factor $X_{EF}$ \[to $(X_{EF}-1)/X_{EF}$\]. Both dependencies relate to the same helicity preserving channel.
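The normalization condition (\[t2\]) indeed holds for the profile (\[t1\]): analytically, $\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\frac{\gamma_L}{(\omega-\omega_L)^2+(\gamma_L/2)^2}=\frac{\gamma_L}{2\pi}\cdot\frac{2\pi}{\gamma_L}=1$, so the integral of $I(\omega)/2\pi$ returns the total intensity $I$. A short numerical check (our sketch, in units with $\gamma=1$) confirms this:

```python
import numpy as np

def laser_spectrum(omega, I0, omega_L, gamma_L):
    """Lorentzian model of the CBS laser spectrum, Eq. (t1)."""
    return I0 * gamma_L / ((omega - omega_L)**2 + (gamma_L / 2.0)**2)

# gamma = 1 sets the frequency unit; gamma_L = gamma/6 as in Figure 10
I0, omega_L, gamma_L = 1.0, 0.0, 1.0 / 6.0
omega = np.linspace(omega_L - 500.0 * gamma_L, omega_L + 500.0 * gamma_L, 1_000_001)
d_omega = omega[1] - omega[0]
total = laser_spectrum(omega, I0, omega_L, gamma_L).sum() * d_omega / (2.0 * np.pi)
# total recovers I0 up to the truncated Lorentzian tails (~0.1% here),
# confirming the normalization condition of Eq. (t2)
```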
It is clearly seen that the output spectral profile becomes asymmetric because of the influence of resonance scattering near the atomic transition in higher orders of multiple scattering. It may be less obvious, but there is also a small but not negligible difference between the two spectral dependencies, which explains why in our experiment the spectral probe of the sample with scanning carrier frequency $\omega _{L}$ near the resonance can be sensitive to the spectral bandwidth of the CBS laser.

![\[Figure11\]Scanning spectra of the intensity (a) and of the enhancement factor (b) in the helicity non-preserving channel for quasi-monochromatic laser radiation, with $\protect\gamma _{L}=\protect\gamma /4,\,\protect\gamma /6,\,0$. The spectra were calculated for a Gaussian-type atomic cloud of ${}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.](josafig11a11.eps "fig:")\

![\[Figure12\]Scanning spectra of the intensity (a) and of the enhancement factor (b) in the helicity preserving channel for quasi-monochromatic laser radiation with $\protect\gamma _{L}=\protect\gamma /4,\,\protect\gamma /6,\,0$. The spectra were calculated for a Gaussian-type atomic cloud of ${}^{85}$Rb atoms with $n_{0}=1.6\times 10^{10}\;cm^{-3}$ and $r_{0}=1\;mm$.](josafig12a12.eps "fig:")\

The influence of this effect is illustrated, for the helicity preserving and non-preserving channels, in Figures 11 and 12. There it is shown that the spectra of the total intensity and of the enhancement factor, generated by scanning of the frequency $\omega _{L}$, reveal different spectral behavior, particularly in the wings of the scanned profiles. These are calculated results for the helicity polarization channels, but similar behavior takes place for the linear polarization channels. At first sight this spectral difference appears to be a rather weak effect, but we believe it should not be ignored in precise comparison of the experimental data with numerical simulations.
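The origin of the spectral distortion discussed in this section can be conveyed by a toy model (our simplification, not the full multiple-scattering calculation of this paper): if the sample response near the transition is crudely modeled by a single resonance Lorentzian of width $\gamma$ centered at $\omega_0$, the backscattered spectrum is roughly the product of the detuned input profile (\[t1\]) with that response. The red side of the laser line, being closer to resonance, is weighted more strongly:

```python
import numpy as np

gamma, gamma_L = 1.0, 1.0 / 6.0
omega_0, omega_L = 0.0, 1.5          # laser tuned 1.5*gamma into the blue wing

omega = np.linspace(omega_L - 2.0, omega_L + 2.0, 40001)
d_omega = omega[1] - omega[0]
# input laser profile, Eq. (t1), and a single-resonance toy response
laser = gamma_L / ((omega - omega_L)**2 + (gamma_L / 2.0)**2)
atom = (gamma / 2.0)**2 / ((omega - omega_0)**2 + (gamma / 2.0)**2)
output = laser * atom

peak = omega[np.argmax(output)]                 # pulled toward omega_0
red = output[omega < omega_L].sum() * d_omega   # side closer to resonance
blue = output[omega >= omega_L].sum() * d_omega
```

In this caricature the output peak is shifted from $\omega_L$ toward resonance and the red half of the line carries more weight than the blue half, mimicking the asymmetry of the output profile; the sensitivity of the scanned spectra to $\gamma_L$ has the same origin.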
In particular, it can be important for a realistic estimation of the background, since such spectral smearing of the probe radiation response affects the interpolation of the CBS cone to its wings. Indeed, the higher orders of multiple scattering contribute to the formation of the central portion of the CBS cone, but the role of second order scattering is more important in its wings. As we have seen, for large spectral detunings the correct estimation of the enhancement factor in higher orders of multiple scattering is rather sensitive to the spectral distribution of the probe radiation.

Summary
=======

A combined theoretical and experimental study of spectral variations in the coherent backscattering enhancement factor, for a very narrow band resonance system, has been reported. Experimental data taken over a range of two atomic natural widths about direct atomic resonance suggest spectral variations in the peak value of the CBS enhancement. Simulations indicate that the combined influence of heating of the atomic ensemble, and optical pumping of the Zeeman sublevels in the $F=3$ ground level during the coherent backscattering data taking phase, can qualitatively account for the effects. The simulations of the CBS process examined the influence of atomic motion, in a thermal equilibrium model, on the spectral variation of the enhancement factor. A model case of magnetization of the vapor due to optical pumping was also considered. It was found that these two factors could explain variations in the CBS enhancement observed in the experiments. The simulations which considered the influence of atomic magnetization predicted a remarkable result: the classical CBS maximum enhancement of two could be closely approached for a strongly magnetized atomic sample.
Finally, it was shown, by simulation of the influence of the spectral bandwidth of the CBS probe laser, that even a quite small laser linewidth, in comparison to the natural width of the atomic transition, can significantly influence the CBS enhancement in the wings of the atomic resonance line.

We acknowledge informative discussions with Robin Kaiser. Financial support for this research was provided by the National Science Foundation (NSF-PHY-0099587, NSF-INT-0233292), by the North Atlantic Treaty Organization (PST-CLG-978468), by the Russian Foundation for Basic Research (01-02-17059), and by INTAS (INFO 00-479). D.V.K. would like to acknowledge financial support from the Delzell Foundation, Inc.

[99]{}

M.I. Mishchenko, Astrophys. J. 411, 351 (1993).

*New Aspects of Electromagnetic and Acoustic Wave Diffusion*, ed. POAN Research Group, Springer Tracts in Modern Physics, Vol. 44 (Springer-Verlag, New York, 1998).

Ping Sheng, *Introduction to Wave Scattering, Localization, and Mesoscopic Phenomena* (Academic Press, San Diego, 1995).

Ad Lagendijk and B.A. van Tiggelen, *Resonant Multiple Scattering of Light*, Physics Reports 270, 143 (1996).

P.W. Anderson, Phys. Rev. 109, 1492 (1958).

D.S. Wiersma, P. Bartolini, Ad Lagendijk, and R. Righini, Nature 390, 671 (1997).

A.A. Chabanov, M. Stoytchev, and A.Z. Genack, Nature 404, 850 (2000).

G. Labeyrie, F. De Tomasi, J.-C. Bernard, C.A. Müller, C. Miniatura, and R. Kaiser, Phys. Rev. Lett. 83, 5266 (1999).

G. Labeyrie, C.A. Müller, D.S. Wiersma, Ch. Miniatura, and R. Kaiser, J. Opt. B: Quantum Semiclass. Opt. 2, 672 (2000).

P. Kulatunga, C.I. Sukenik, S. Balik, M.D. Havey, D.V. Kupriyanov, and I.M. Sokolov, to appear in Phys. Rev. A (2003).

Y. Bidel, B. Klappauf, J.C. Bernard, D. Delande, G. Labeyrie, C. Miniatura, D. Wilkowski, and R. Kaiser, Phys. Rev. Lett. 88, 203902 (2002).

J. Ishimaru and Yu. Kuga, J. Opt. Soc. Am. A 1, 813 (1984).

P.E. Wolf and G. Maret, Phys. Rev. Lett. 55, 2696 (1985).

M.P. van Albada and A. Lagendijk, Phys. Rev. Lett. 55, 2692 (1985).

D.V. Kupriyanov, I.M. Sokolov, P. Kulatunga, C.I. Sukenik, and M.D. Havey, Phys. Rev. A 67, 013814 (2003).

G. Labeyrie, D. Delande, C.A. Müller, Ch. Miniatura, and R. Kaiser, Europhys. Lett. 61, 327 (2003).

G. Labeyrie, D. Delande, C.A. Müller, Ch. Miniatura, and R. Kaiser, Phys. Rev. A 67, 033814 (2003).

T. Jonckheere, C.A. Müller, R. Kaiser, Ch. Miniatura, and D. Delande, Phys. Rev. Lett. 85, 4269 (2000).

G. Labeyrie, C. Miniatura, C.A. Müller, O. Sigwarth, D. Delande, and R. Kaiser, Phys. Rev. Lett. 89, 163901 (2002).

Robin Kaiser, private communication.

H.J. Metcalf and P. van der Straten, *Laser Cooling and Trapping* (Springer, New York, 1999).

Th.M. Nieuwenhuizen, A.L. Burin, Yu. Kagan, and G.V. Shlyapnikov, Phys. Lett. A 184, 360 (1994).
---
abstract: 'Macaulay’s inverse system is an effective method to construct Artinian $K$-algebras with additional properties such as Gorenstein, level, and more generally any socle type. Recently, Elias and Rossi [@ER17] gave the structure of the inverse system of $d$-dimensional Gorenstein $K$-algebras for any $d>0$. In this paper we extend their result by establishing a one-to-one correspondence between $d$-dimensional level $K$-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.'
address:
- 'Dipartimento di Matematica, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy'
- 'Department of Mathematics, University of Kaiserslautern, 67663 Kaiserslautern, Germany'
author:
- 'Shreedevi K. Masuti'
- Laura Tozzo
title: 'The Structure of the Inverse System of Level $K$-Algebras'
---

Introduction
============

Level rings have been studied in several different contexts. These rings lie in between Cohen-Macaulay and Gorenstein rings. Their study was initiated by Stanley in [@Stanley77]. Since then they have been widely investigated, especially in the Artinian case; see for instance [@Boij], [@Boi99], [@Boi00], [@Boi09], [@Bertella], [@GHMS], [@Iar84], [@Stefani]. However, there are also many examples of level rings in positive dimension: Stanley-Reisner rings of matroid simplicial complexes [@Stanley77], associated graded rings of semigroup rings corresponding to arithmetic sequences [@MT95], determinantal rings corresponding to generic matrices [@BV88] or generic symmetric matrices [@Con94], [@Goto]. Although a substantial theory has been developed for level rings, they are not as well understood as Gorenstein rings. One of the reasons for the lack of knowledge of level $K$-algebras of positive dimension is the absence of an effective method to construct level $K$-algebras.
Macaulay’s inverse system (see [@Mac1916]) allows one to construct level rings in the zero-dimensional case, as was shown by Emsalem [@Ems78] and Iarrobino [@Iar94]. Recently, Elias and Rossi [@ER17] gave the structure of the inverse system of Gorenstein $K$-algebras of any dimension $d>0.$ In this paper we extend their result and give the structure of the inverse system of level $K$-algebras of positive dimension. We define local level rings of positive dimension and study their properties in Section \[Section:LevelAlgebras\]. The definition of level rings is well-known in the graded case. Recall that a homogeneous $K$-algebra $A$ is *level* if the canonical module $\omega_A$ of $A$ has a minimal set of generators of the same degree (see Definition \[Def:GradedLevel\]). For the local case we take inspiration from [@EI87], and say that a local $K$-algebra $A$ is *level* if $A/J$ is Artinian level for some general minimal reduction $J$ of the maximal ideal (see Definition \[Def:LocalLevel\]). Notice that defining local level $K$-algebras of positive dimension is non-trivial. One of the reasons is that, if the associated graded ring of $A$ is not Cohen-Macaulay, then Artinian reductions of $A$ by minimal reductions of the maximal ideal may have different socle types (see Example \[Example:NotLocalLevel\] and Proposition \[Prop:construction of level\]). In Section \[Section:DividedRing\] we recall some classical results about divided power rings and the inverse system. Let $R=K[x_1,\ldots,x_m]$ be a standard graded polynomial ring or $R=K[\![x_1,\ldots,x_m]\!]$ a power series ring in $m$ variables over the field $K$, and let $I$ be an ideal of $R$ (homogeneous if $R=K[x_1,\ldots,x_m]$). It is well-known that the injective hull of $K$ is isomorphic to the *divided power ring* $\operatorname{\mathcal D}:=K_{DP}[X_1,\ldots,X_m]$ as an $R$-module, where $\operatorname{\mathcal D}$ has a structure of $R$-module via the contraction action (see Section \[Section:DividedRing\]).
Therefore, the dual module $(R/I)^\vee:=\operatorname{Hom}_R(R/I,E_R(K))=\operatorname{Hom}_R(R/I,\operatorname{\mathcal D})$ is isomorphic to an $R$-submodule of $\operatorname{\mathcal D}$, called the *inverse system of $I$* and denoted by $I^\perp$. By Matlis duality it is clear that if $R/I$ has positive Krull dimension, then $I^\perp$ is not finitely generated. In Section \[Section:MainResult\] we investigate the structure of $I^\perp$ when $R/I$ is a positive dimensional level $K$-algebra. In [@ER17] the authors proved that $I^\perp$ is *$G_d$-admissible* if $R/I$ is Gorenstein of dimension $d$ and, vice versa, for any $G_d$-admissible submodule $W$ of $\operatorname{\mathcal D},$ $W^\vee$ is a $d$-dimensional Gorenstein $K$-algebra. Generalizing this result, in Theorem \[thm:CharOfLocalLevel\], we establish a one-to-one correspondence between level $K$-algebras $R/I$ of dimension $d>0$ and *$L_d^\tau$-admissible* submodules of $\operatorname{\mathcal D}$ (see Definition \[Def:LdAdmissible\]). Observe that the conditions given for $L_d^\tau$-admissibility are not merely the “union” of the conditions given for $G_d$-admissible submodules (see Discussion \[Disc:LdAdmissibleDefinition\]). This is a symptom of the intrinsic complexity of level $K$-algebras in positive dimension, and it is one of the reasons why constructing examples of level algebras is hard. For example, in the Artinian case, as a consequence of Macaulay’s inverse system, it is known that the intersection of Gorenstein ideals of the same socle degree is always level. The analogous statement is not true in positive dimension (see Example \[Example:intersection\]). For this reason Theorem \[thm:CharOfLocalLevel\] is an important tool, as it gives an effective method to construct level $K$-algebras. In Section \[Section:Examples\], we construct several examples of level $K$-algebras.
We have used the computer algebra systems [@Macaulay2], [@Singular] and the library [@Elias] for various computations in this paper.

Level $K$-algebras {#Section:LevelAlgebras}
==================

Throughout this paper let $R=K[x_1,\ldots,x_m]$ be a standard graded polynomial ring or $R=K[\![x_1,\ldots,x_m]\!]$ a power series ring in $m$ variables over an infinite field $K$, let $\operatorname{\mathcal M}=(x_1,\ldots,x_m)$ be the unique maximal (homogeneous) ideal and $I$ an ideal of $R.$ We write $A=R/I,$ ${{\mathfrak m}}=\operatorname{\mathcal M}/I$, and assume that $A$ is Cohen-Macaulay of dimension $d>0.$ If $R=K[x_1,\ldots,x_m]$ and $I$ is a homogeneous ideal in $R$, we say that $A$ is *graded* (or *homogeneous*). We write $\operatorname{Soc}(A):=(0:_A{{\mathfrak m}})$ for the *socle* of $A$ and set $\tau:=\mathrm{type}(A).$ In this section we introduce the concept of local level $K$-algebras and derive some properties of these algebras that we need to construct the inverse system. In the literature level $K$-algebras have been defined for zero-dimensional rings in the local case and for rings of arbitrary dimension in the graded case. We define local level $K$-algebras of positive dimension using general minimal reductions (Definition \[Def:LocalLevel\]). This definition, in particular, ensures that if $A$ is level and $({\underline{\textit{a}}})$ is a general minimal reduction of ${{\mathfrak m}},$ then $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is level for every ${\mathbf{n}}\in {\mathbb{N}}^d_+$ (Proposition \[Prop:QuotientLevel\]). This is a necessary property for a ring to be level, as we will see in Section \[Section:MainResult\].

\[Def:GradedLevel\] A homogeneous Cohen-Macaulay $K$-algebra $A$ is called [*level*]{} if the canonical module of $A$, $\omega_A$, has a minimal set of generators of the same degree.
Equivalently, there exists an integer $c$ such that $\beta_{m-d,j}^R(A) = 0$ if and only if $j \neq c$, where $\beta_{i,j}:=\beta_{i,j}^R(A)$ denote the graded Betti numbers of $A.$ In other words, a minimal $R$-free resolution of $A$ is of the form: $$0 \to R(-c)^{\beta_{m-d,c}} \to \cdots \to \bigoplus_{j} R(-j)^{\beta_{1,j}} \to R \to 0.$$ In the following proposition we recall the well-known fact that a homogeneous $K$-algebra $A$ is level if and only if $A/(a)$ is level for any homogeneous $A$-regular element $a$ in ${{\mathfrak m}}.$

\[Prop:GLQutotientByHomEle\] Let $A=R/I$ be a homogeneous $K$-algebra and let $a \in A$ be a homogeneous $A$-regular element. Then $A$ is level if and only if $A/({a})$ is level.

For a graded $A$-module $M$ and an integer $b,$ let $M(b)$ denote the $A$-module $M$ with grading given by $M(b)_i=M_{b+i}.$ Let $b:=\deg(a).$ By [@BH98 Corollary 3.6.14] $$\omega_{A/(a)} \simeq (\omega_A/(a)\omega_A)\left(b\right).$$ Hence it follows that $A$ is level if and only if $A/(a)$ is level.

From Definition \[Def:GradedLevel\] it is clear that we cannot define level $K$-algebras analogously in the local case. However, the definition of Artinian local level $K$-algebras is quite well-known. Let us recall this definition. The *socle degree* of an Artinian local ring, denoted by $\operatorname{socdeg}(A),$ is the maximum integer $j$ such that ${{\mathfrak m}}^j \neq 0.$ An Artinian local $K$-algebra $A=R/I$ is said to be *level* if $\operatorname{Soc}(A)={{\mathfrak m}}^s$ where $s:=\operatorname{socdeg}(A).$ Let $(A,{{\mathfrak m}})$ be a local ring of positive dimension.
Recall that an ideal $J \subseteq {{\mathfrak m}}$ is said to be a *reduction* of ${{\mathfrak m}}$ if there exists a non-negative integer $k$ such that ${{\mathfrak m}}^{k+1}=J {{\mathfrak m}}^k.$ If, furthermore, $J$ does not properly contain any other reduction of ${{\mathfrak m}},$ then we say that $J$ is a *minimal reduction* of ${{\mathfrak m}}.$ We refer to [@RV10 Section 1.2] and [@HS06 Chapter 8] for the basic properties of reductions and superficial elements. The following example illustrates that in the non-graded case it is possible that $A/J$ is level for some minimal reduction $J$ of ${{\mathfrak m}}$, but $A/J'$ is not level for another minimal reduction $J'\ne J.$ This is one of the reasons why defining local level $K$-algebras of positive dimension is not trivial.

\[Example:NotLocalLevel\] Let $A=\mathbb{Q}[\![t^6,t^7,t^{11},t^{15}]\!]$. Then $A$ has type $2$, and $J=(t^6)$ and $J'=(t^6+t^7)$ are two minimal reductions of ${{\mathfrak m}}=(t^6,t^7,t^{11},t^{15})$. Using a computer algebra system, it can be verified that $A/J$ has Hilbert function $(1,3,2)$ and hence is level, but $A/J'$ has Hilbert function $(1,3,1,1)$ and hence is not level. Notice that here both $t^6$ and $t^6+t^7$ are superficial elements for ${{\mathfrak m}}.$

We define level $K$-algebras using general minimal reductions. General minimal reductions have been introduced by several authors. Here we recall the definition due to P. Mantero and Y. Xie [@MX16].
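As an aside, the Hilbert function $(1,3,2)$ of $A/J$ in Example \[Example:NotLocalLevel\] can be checked by hand through the Apéry set of $6$ in the numerical semigroup $S=\langle 6,7,11,15\rangle$: the classes of $t^w$ with $w$ in the Apéry set form a $K$-basis of $A/(t^6)$, and in this example counting these basis elements by their order in $S$ (the maximal number of generators in a factorization of $w$) reproduces the Hilbert function. The sketch below is our own illustration in Python, not the computer algebra verification referred to in the example, and the agreement of the order count with the Hilbert function is only checked here for the example at hand:

```python
def semigroup_orders(gens, bound):
    """All elements of the numerical semigroup <gens> up to `bound`,
    with the maximal number of generators in a factorization (the order)."""
    order = {0: 0}
    changed = True
    while changed:
        changed = False
        for s, o in list(order.items()):
            for g in gens:
                t = s + g
                if t <= bound and order.get(t, -1) < o + 1:
                    order[t] = o + 1
                    changed = True
    return order

def apery_hilbert(gens):
    """Count the Apery set of e = min(gens) by order; for the semigroup
    of Example [Example:NotLocalLevel] this matches the Hilbert function
    of A/(t^e) (not claimed in general)."""
    e = min(gens)
    order = semigroup_orders(gens, bound=10 * e)  # bound ample for small examples
    apery = [min(s for s in order if s % e == r) for r in range(e)]
    top = max(order[w] for w in apery)
    return [sum(1 for w in apery if order[w] == i) for i in range(top + 1)]

hf = apery_hilbert([6, 7, 11, 15])  # S = <6, 7, 11, 15>, Apery set of 6
```

Here the Apéry set is $\{0,7,11,14,15,22\}$ with orders $0,1,1,2,1,2$, giving the count $(1,3,2)$.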
\[Def:genelem\] Let $a_i=\sum_{j=1}^m a_{ij}\overline{x_j} \in {{\mathfrak m}}$ for $i=1,\ldots,t$, where $(a_{ij}) \in A^{tm}.$ We say that the elements $a_1,\ldots, a_t$ are [*general*]{} if there exists a proper Zariski-dense open subset $U$ of $(A/{{\mathfrak m}})^{tm}=K^{tm}$ such that $(\overline{a_{ij}})\in U$, where $\overline{(\cdot)}$ denotes the image in $K.$

We recall the following fact on general elements:

\[Fact:GeneralElements\] Since $K$ is infinite, a sequence ${\underline{\textit{a}}}:=a_1,\ldots,a_d $ of general elements in ${{\mathfrak m}}$ always forms a superficial sequence for ${{\mathfrak m}}$ [@Xie12 Corollary 2.5]. In particular, if $a_1,\ldots,a_d $ are general elements in ${{\mathfrak m}},$ then $({\underline{\textit{a}}})$ is a minimal reduction of ${{\mathfrak m}}.$

\[Def:genminred\] We say that $J=(a_1,\ldots,a_d) \subseteq {{\mathfrak m}}$ is a [*general minimal reduction*]{} of ${{\mathfrak m}}$ if $a_1,\ldots,a_d$ forms a sequence of general elements in ${{\mathfrak m}}.$

We recall the following lemma from [@MX16], which guarantees that the socle degree and the Hilbert function of an Artinian reduction of $A$ are the same for every general minimal reduction of ${{\mathfrak m}}$:

[@MX16 Proposition 3.2] \[Lemma:IndMinRed\] Let $i\geq 0$. Then the socle degree of $A/J$ and $\dim_K (({{\mathfrak m}}^i+J)/J)$ are independent of the general minimal reduction $J$ of ${{\mathfrak m}}$.

In [@EI87 §1] the authors defined general minimal reductions in a different way. It is not clear whether their definition and Definition \[Def:genminred\] are the same. More generally, they proved that the socle type of $A/J$ is the same for every general minimal reduction $J$ of ${{\mathfrak m}}$ and defined the socle type of $A$ as the socle type of $A/J$ for a general minimal reduction $J$ of ${{\mathfrak m}}$ (see [@Iar84] for the definition of socle type in the Artinian case).
Motivated by this, we define level local $K$-algebras as follows:

\[Def:LocalLevel\] A Cohen–Macaulay $K$-algebra $A=R/I$, where $R=K[\![x_1,\ldots,x_m]\!]$, is said to be [*level*]{} if $A/J$ is an Artinian level $K$-algebra for some (equiv. every) general minimal reduction $J$ of ${{\mathfrak m}}.$

Clearly, both in the graded and in the local case, if $A$ is Gorenstein, then $A$ is level. We point out that we cannot remove “general” in the definition of a level local $K$-algebra. In fact, in Example \[Example:NotLocalLevel\] $A/(t^6)$ is level, but $A$ is not level. Notice that in this example $(t^6)$ is a minimal reduction of ${{\mathfrak m}},$ but $(t^6)$ is not a general minimal reduction of ${{\mathfrak m}}.$ One of the main reasons why the property of $A/J$ being level depends on the minimal reduction $J$ is that the associated graded ring $gr_{{\mathfrak m}}(A):=\oplus_{i \geq 0}{{\mathfrak m}}^i/{{\mathfrak m}}^{i+1}$ of a Cohen-Macaulay local ring $A$ need not be Cohen-Macaulay. The following lemma shows that if $gr_{{\mathfrak m}}(A)$ is Cohen-Macaulay and $A/J$ is level for some minimal reduction, then $A/J'$ is level for every minimal reduction $J'$ of ${{\mathfrak m}}$. This gives an effective method to examine whether a given local ring is level.

\[Prop:construction of level\] Assume that $G:=gr_{{\mathfrak m}}(A)$ is Cohen-Macaulay. Then the following assertions hold true:

1.  The socle degree of $A/J$ is the same for every minimal reduction $J$ of ${{\mathfrak m}}.$

2.  If $A/J$ is level for some minimal reduction $J$ of ${{\mathfrak m}}$, then $A/J'$ is level for every minimal reduction $J'$ of ${{\mathfrak m}}.$ In particular, if $A$ is level, then $A/J$ is level for every minimal reduction $J$ of ${{\mathfrak m}}.$

Let $J \subseteq {{\mathfrak m}}$ be a minimal reduction of ${{\mathfrak m}}$.
Then there exist ${\underline{\textit{a}}}:=a_1,\ldots,a_d \in {{\mathfrak m}}$ such that $J=({\underline{\textit{a}}})$ and ${\underline{\textit{a}}}$ is a superficial sequence for ${{\mathfrak m}}.$ Since $G$ is Cohen-Macaulay, $a_1^*,\ldots,a_d^*$ is $G$-regular. For a graded ring $S,$ let $\operatorname{HS}_S(t)$ denote the Hilbert series of $S.$ Hence if $\operatorname{HS}_G(t)=\frac{1+h_1t+\cdots+h_s t^s}{(1-t)^{d}}$ with $h_s \neq 0$ and $\sum_{i=1}^s h_i\ne 0$, then $$\operatorname{HS}_{gr_{{{\mathfrak m}}/J}(A/J)}(t)=1+h_1t+\cdots+h_s t^s .$$ This implies that $s$ is the socle degree of $A/J$ and $h_s=\dim_K (({{\mathfrak m}}^s+J)/J).$ This proves that the socle degree and $\dim_K (({{\mathfrak m}}^s+J)/J)$ are the same for every minimal reduction $J$ of ${{\mathfrak m}}.$ Hence the result follows. However, if $G$ is not Cohen-Macaulay, then it is not clear whether $A/J$ being level for a general minimal reduction $J$ implies that $A/J'$ is level for every minimal reduction $J'$ of ${{\mathfrak m}}.$ [@RV00 Example 3, p. 125] \[Example:Rossi-Valla\] Consider the semigroup ring $$A=K[\![t^6,t^8,t^{10},t^{13}]\!]\cong K[\![x,y,z,w]\!]/(y^2-xz,yz-x^3,z^2-x^2y,w^2-x^3y).$$ Then $A$ is Cohen-Macaulay of type $2$ and $gr_{{\mathfrak m}}(A)$ is Cohen-Macaulay. Moreover, $A/(t^6)$ has the Hilbert function $(1,3,2)$ and hence is level. Therefore by Proposition \[Prop:construction of level\] $A$ is level. Let $({\underline{\textit{a}}}),$ where ${\underline{\textit{a}}}:=a_1,\ldots,a_d$ (a regular linear sequence if $A$ is homogeneous), be a minimal reduction of ${{\mathfrak m}}$. For ${\mathbf{n}}\in {\mathbb{N}}^d_+$ we set ${\underline{\textit{a}}}^{\mathbf{n}}=a_1^{n_1},\ldots,a_d^{n_d}.$ One of the important properties that a graded ring satisfies is: if $A/({\underline{\textit{a}}})$ is level (equiv. $A$ is level), then $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is level for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$. This is no longer true in the local case.
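The Hilbert function $(1,3,2)$ in Example \[Example:Rossi-Valla\] can be verified mechanically. The following Python sketch (our own illustration; the truncation bound $120$ and all names are ad hoc) computes the Apéry set of $6$ in the numerical semigroup $\langle 6,8,10,13\rangle$, whose elements index a monomial $K$-basis of $A/(t^6)$, and reads off the Hilbert function from the ${{\mathfrak m}}$-adic order of each basis element:

```python
# Sketch: Hilbert function of A/(t^6) for A = K[[t^6, t^8, t^10, t^13]].
# A K-basis of A/(t^6) is {t^s : s in the Apery set of 6}, and the value
# of the Hilbert function in degree i counts basis elements of m-adic
# order exactly i.  The bound 120 is an ad hoc truncation, safely beyond
# the Frobenius number 17 of <6, 8, 10, 13>.
GENS = [6, 8, 10, 13]
BOUND = 120

def semigroup(bound):
    """Elements of the numerical semigroup <GENS> up to bound."""
    S, changed = {0}, True
    while changed:
        changed = False
        for s in list(S):
            for g in GENS:
                if s + g <= bound and s + g not in S:
                    S.add(s + g)
                    changed = True
    return S

S = semigroup(BOUND)

# Apery set of 6: elements s of S with s - 6 not in S.
apery = sorted(s for s in S if s - 6 not in S)

def ideal_power(i):
    """Exponents s with t^s in m^i, up to BOUND (m^0 = A)."""
    P = {0}
    for _ in range(i):
        P = {p + g for p in P for g in GENS if p + g <= BOUND}
    return {p + s for p in P for s in S if p + s <= BOUND}

def order(s):
    """m-adic order of t^s in A/(t^6); for s in the Apery set the ideal
    (t^6) contributes nothing, so m^i + (t^6) may be replaced by m^i."""
    i = 0
    while s in ideal_power(i + 1):
        i += 1
    return i

orders = [order(s) for s in apery]
hf = [orders.count(i) for i in range(max(orders) + 1)]
print(apery)  # [0, 8, 10, 13, 21, 23]
print(hf)     # [1, 3, 2]
```

The output agrees with the example: the socle degree is $2$ and the top-degree dimension equals the Cohen-Macaulay type $2$, consistent with $A/(t^6)$ being level.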
In fact, in Example \[Example:NotLocalLevel\] $A/(t^6)$ is level, but $A/(t^6)^2$ is not level. However, in Proposition \[Prop:QuotientLevel\] we will show that if $({\underline{\textit{a}}})$ is a general minimal reduction of ${{\mathfrak m}}$ and $A/({\underline{\textit{a}}})$ is level (equiv. $A$ is level) with $\mathrm{char}(K)=0$, then $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is level for all ${\mathbf{n}}\in {\mathbb{N}}^d_+.$ This is another reason why we need “general” in Definition \[Def:LocalLevel\]. We recall a few definitions and results needed to prove this. Following [@MX16] we define the [*index of nilpotency of $A$ with respect to a reduction $J$ of ${{\mathfrak m}}$*]{} as $$s_J(A):=\operatorname{socdeg}(A/J).$$ [@MX16 Proposition 3.2] [@Fouli Proposition 5.3.3] \[Prop:IndOfMult\] If $J$ is a general minimal reduction of ${{\mathfrak m}},$ then $s_J(A)$ does not depend on the choice of $J.$ Moreover, if $R$ is equicharacteristic and $J$ is a general minimal reduction of ${{\mathfrak m}}$, then $$s_{J'}(A) \leq s_J(A)$$ for every minimal reduction $J'$ of ${{\mathfrak m}}.$ In [@EI87 Page 346] the authors obtained an analogous result which does not depend upon the characteristic of $R.$ Since it is not clear whether the definition of general elements given in [@EI87] and Definition \[Def:genelem\] are the same, it is not known whether Proposition \[Prop:IndOfMult\] is true if $R$ is not equicharacteristic. Following [@MX16] we call the number $$s(A):=s_J(A) \mbox{ where $J$ is a general minimal reduction of ${{\mathfrak m}}$},$$ the [*index of nilpotency of $A$*]{} which is well-defined by Proposition \[Prop:IndOfMult\]. We need the following theorem from [@HT05] on the core of an ideal. Recall that the *core* of an ideal $I$ is the intersection of all (minimal) reductions of $I.$ The core is difficult to compute, and there have been many efforts to find explicit formulas, see [@HT05], [@PU05] and references therein.
We recall the following theorem which gives an explicit formula to compute the core. The theorem is true more generally for equimultiple ideals in any Cohen-Macaulay local ring. Since we need this for ${{\mathfrak m}},$ we state the result only for ${{\mathfrak m}}.$ Recall that for a reduction $J$ of ${{\mathfrak m}}$, the *reduction number of ${{\mathfrak m}}$ with respect to $J$*, denoted as $r_J({{\mathfrak m}})$, is the smallest nonnegative integer $k$ such that the equality ${{\mathfrak m}}^{k+1}=J{{\mathfrak m}}^k$ holds true. The *reduction number of ${{\mathfrak m}}$* is $$r({{\mathfrak m}}):=\min\{r_J({{\mathfrak m}})\mid J \text{ is a minimal reduction of }{{\mathfrak m}}\}.$$ [@HT05 Theorem 3.7] \[thm:HunekeTrung\] Let $R=K[\![x_1,\dots,x_m]\!]$ with $\mathrm{char}(K)=0$, and let $A=R/I$ be Cohen–Macaulay of dimension $d>0$. Let $J$ be a minimal reduction of ${{\mathfrak m}}$ and $r:=r({{\mathfrak m}})$ the reduction number of ${{\mathfrak m}}$. Then $$\operatorname{core}({{\mathfrak m}})= J^{r+1}:{{\mathfrak m}}^r.$$ Equivalently, $\operatorname{core}({{\mathfrak m}})= J^{n+1}:{{\mathfrak m}}^n$ for all $ n \geq r.$ We remark that Theorem \[thm:HunekeTrung\] is not true in positive characteristic (see [@PU05 Example 4.9]). Therefore from now onwards we assume that the characteristic of $K$ is zero in the case $R=K[\![x_1,\ldots,x_m]\!].$ We use the following notations: $$\begin{aligned} |{\mathbf{n}}|&:=n_1+\cdots+n_d \hspace*{0.5cm} \mbox{ for } {\mathbf{n}}\in {\mathbb{N}}^d_+\\ {\mathbf{e}}_i &:=(0,\ldots,1,\ldots,0) \in {\mathbb{N}}^d \mbox{ (with $1$ in the $i$-th position)}, \\ {\bf t}_d&:=(t,\ldots,t) \in {\mathbb{N}}^d \hspace*{0.5cm} \mbox{ for } t \in {\mathbb{N}}. \end{aligned}$$ In the following proposition we prove that $A/({\underline{\textit{a}}}^{\mathbf{n}})$ has the “expected” socle degree. This is an important result that we need in constructing the inverse system of level algebras. We thank A.
De Stefani for providing a proof of this (through private communication) in the one-dimensional case. The same idea works for higher dimension. \[Prop:SocDegOfR\_nn\] Let $A=R/I$ and ${\underline{\textit{a}}}:=a_1,\ldots,a_d \in {{\mathfrak m}}$ be one of the following: 1. \[Prop:SocDegOfR\_nnGraded\] $ R=K[x_1,\ldots,x_m],$ ${\underline{\textit{a}}}$ is a regular linear sequence in ${{\mathfrak m}}$ and $s$ the Castelnuovo-Mumford regularity of $A;$ 2. \[Prop:SocDegOfR\_nnLocal\] $R=K[\![x_1,\ldots,x_m]\!]$ with $\mathrm{char}(K)=0,$ $({\underline{\textit{a}}})$ a general minimal reduction of ${{\mathfrak m}}$ and $s$ the index of nilpotency of $A.$ Then for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$, $$\label{Eqn:socledegree} \operatorname{socdeg}(A/({\underline{\textit{a}}}^{\mathbf{n}}))=s+|{\mathbf{n}}|-d.$$ Suppose $R=K[x_1,\ldots,x_m] $ and ${\underline{\textit{a}}}\in {{\mathfrak m}}$ is a regular linear sequence in $A.$ We prove by induction on $|{\mathbf{n}}|$. Notice that the Castelnuovo-Mumford regularity of an Artinian graded ring coincides with its socle degree. Since the Castelnuovo-Mumford regularity of $A,$ $\operatorname{reg}(A),$ and regularity of the Artinian reduction $A/({\underline{\textit{a}}})A$ are the same, we have $$\operatorname{socdeg}(A/({\underline{\textit{a}}}))=\operatorname{reg}(A/({\underline{\textit{a}}}))=\operatorname{reg}(A)=s.$$ Hence the assertion is clear for $|{\mathbf{n}}|=d$. Let $|{\mathbf{n}}| > d$. After a permutation, we may assume that ${\mathbf{n}}=(n_1,\ldots,n_d)$ with $n_1 \ge 2$. By induction $A/({\underline{\textit{a}}}^{{\mathbf{n}}-{\mathbf{e}}_1})$ has socle degree $s+|{\mathbf{n}}|-d-1$. Let $f \in {{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1}$ be a homogeneous polynomial. Since ${{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1} \subseteq ({\underline{\textit{a}}}^{{\mathbf{n}}-{\mathbf{e}}_1})$, $$f=a_1^{n_1-1}f_1+a_2^{n_2}f_2+\cdots+a_d^{n_d}f_d$$ where $f_i \in A$ are homogeneous polynomials. 
Thus $\deg(f_1) \geq s+|{\mathbf{n}}|-d+1 -(n_1-1).$ By the induction hypothesis applied to $(1,n_2,\ldots,n_d),$ we get that $$f_1 \in {{\mathfrak m}}^{s+|{\mathbf{n}}|-(n_1-1)-d+1} \subseteq (a_1,a_2^{n_2},\ldots,a_d^{n_d}).$$ Therefore $f \in ({\underline{\textit{a}}}^{\mathbf{n}})$. This gives that ${{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1} \subseteq ({\underline{\textit{a}}}^{\mathbf{n}})$. On the other hand, if $f\in {{\mathfrak m}}^{s+|{\mathbf{n}}|-d-1} \setminus ({\underline{\textit{a}}}^{{\mathbf{n}}-{\mathbf{e}}_1})$ is a homogeneous polynomial, then $a_1 f \in {{\mathfrak m}}^{s+|{\mathbf{n}}|-d} \setminus ({\underline{\textit{a}}}^{\mathbf{n}})$. Hence ${{\mathfrak m}}^{s+|{\mathbf{n}}|-d} \nsubseteq ({\underline{\textit{a}}}^{\mathbf{n}})$. This proves \eqref{Eqn:socledegree}. Let $R=K[\![x_1,\ldots,x_m]\!]$ with $\mathrm{char}(K)=0$ and $({{\underline{\textit{a}}}})$ a general minimal reduction of ${{\mathfrak m}}.$ Let $V$ be the set of all minimal reductions of ${{\mathfrak m}}.$ As $\mathrm{char}(K)=0,$ $R$ is equicharacteristic. Hence by Proposition \[Prop:IndOfMult\], $\operatorname{socdeg}(A/J) \leq s(A)=s$ for any minimal reduction $J$ of ${{\mathfrak m}}.$ Therefore $${{\mathfrak m}}^{s+1}\subseteq \bigcap_{J\in V}J=\operatorname{core}({{\mathfrak m}}).$$ We prove \eqref{Eqn:socledegree} by induction on $|{\mathbf{n}}|.$ Let $J:=({\underline{\textit{a}}}).$ Since $s$ is the socle degree of $A/({\underline{\textit{a}}}),$ the assertion is clear if $|{\mathbf{n}}|=d.$ Let $|{\mathbf{n}}|\geq d+1$.
Set $k:=\max\{0,r-|{\mathbf{n}}|+d\}$ where $r:=r({{\mathfrak m}})$ is the reduction number of ${{\mathfrak m}}.$ Since $k+|{\mathbf{n}}|-d\ge r,$ by Theorem \[thm:HunekeTrung\] $\operatorname{core}({{\mathfrak m}})=J^{k+|{\mathbf{n}}|-d+1}:{{\mathfrak m}}^{k+|{\mathbf{n}}|-d}.$ Therefore $$\begin{aligned} \label{Eqn:ContainmentForallk} J^k{{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1}&\subseteq {{\mathfrak m}}^{k+|{\mathbf{n}}|-d}{{\mathfrak m}}^{s+1}\subseteq {{\mathfrak m}}^{k+|{\mathbf{n}}|-d}\operatorname{core}({{\mathfrak m}}) \subseteq J^{k+|{\mathbf{n}}|-d+1}. \end{aligned}$$ Note that $J^{|{\mathbf{n}}|-d+1}\subseteq ({\underline{\textit{a}}}^{{\mathbf{n}}}).$ Indeed, suppose that $a_1^{n_1'}\cdots a_d^{n_d'} \in J^{|{\mathbf{n}}|-d+1}$ where ${\mathbf{n}}'=(n'_1,\ldots,n'_d)\in {\mathbb{N}}^d$ and $|{\mathbf{n}}'|\ge |{\mathbf{n}}|-d+1.$ Then $n'_i\ge n_i$ for some $i\in\{1,\dots,d\}$ and hence $a_1^{n_1'}\cdots a_d^{n_d'}\in ({\underline{\textit{a}}}^{{\mathbf{n}}}).$ Therefore if $k=0,$ then by \eqref{Eqn:ContainmentForallk} $${{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1} \subseteq J^{|{\mathbf{n}}|-d+1} \subseteq ({\underline{\textit{a}}}^{{\mathbf{n}}}).$$ Assume that $k \geq 1.$ Since $\oplus_{i \geq 0} J^i/J^{i+1}$ is Cohen-Macaulay, $J^{i+1}:(a_1)=J^{i}$ for all $i \geq 0.$ Hence by \eqref{Eqn:ContainmentForallk} $${{\mathfrak m}}^{s+|{\mathbf{n}}|-d+1}\subseteq J^{k+|{\mathbf{n}}|-d+1}:J^k \subseteq J^{k+|{\mathbf{n}}|-d+1}:(a_1^k)= J^{|{\mathbf{n}}|-d+1}\subseteq({\underline{\textit{a}}}^{{\mathbf{n}}}).$$ Thus $\operatorname{socdeg}(A/({\underline{\textit{a}}}^{\mathbf{n}}))\le s+|{\mathbf{n}}|-d.$ On the other hand, if $u\in {{\mathfrak m}}^{s}\setminus ({\underline{\textit{a}}})$, then $(a_1^{n_1-1}\cdots a_d^{n_d-1})u\in {{\mathfrak m}}^{s+|{\mathbf{n}}|-d}\setminus ({\underline{\textit{a}}}^{\mathbf{n}})$.
Thus ${{\mathfrak m}}^{s+|{\mathbf{n}}|-d} \nsubseteq ({\underline{\textit{a}}}^{\mathbf{n}}).$ Hence $\operatorname{socdeg}(A/({\underline{\textit{a}}}^{\mathbf{n}})) =s+|{\mathbf{n}}|-d.$ \[Prop:QuotientLevel\] 1. \[Prop:QuotientLevelGraded\] Let $R=K[x_1,\dots,x_m]$ and ${{\underline{\textit{a}}}}:={a_1},\ldots,{a_d}$ be a regular linear sequence in ${{\mathfrak m}}$. If $A$ is graded level of type $\tau$, then $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is an Artinian graded level $K$-algebra of type $\tau$ and socle degree $s+|{\mathbf{n}}|-d$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ where $s$ is the Castelnuovo-Mumford regularity of $A.$ 2. \[Prop:QuotientLevelLocal\] Let $R=K[\![ x_1,\dots, x_m ]\!]$ with $\mathrm{char}(K)=0$ and $({{\underline{\textit{a}}}}) \subseteq {{\mathfrak m}}$ be a general minimal reduction of ${{\mathfrak m}}$ where ${\underline{\textit{a}}}:=a_1,\ldots,a_d.$ If $A$ is local level ring of type $\tau$, then $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is an Artinian local level $K$-algebra of type $\tau$ and socle degree $s+|{\mathbf{n}}|-d$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ where $s$ is the index of nilpotency of $A.$ Follows from Propositions \[Prop:GLQutotientByHomEle\] and \[Prop:SocDegOfR\_nn\]. Since $A$ is level, by definition $A/({\underline{\textit{a}}})$ is an Artinian local level $K$-algebra of type $\tau.$ Let $s_{{\mathbf{n}}}:=\operatorname{socdeg}(A/({\underline{\textit{a}}}^{\mathbf{n}})).$ By Proposition \[Prop:SocDegOfR\_nn\], $s_{\mathbf{n}}=s+|{\mathbf{n}}|-d.$ Consider $$\mu: \frac{{{\mathfrak m}}^s+({\underline{\textit{a}}})}{({\underline{\textit{a}}})} \xrightarrow{\cdot a_1^{n_1-1}\cdots a_d^{n_d-1}} \frac{{{\mathfrak m}}^{s_{\mathbf{n}}}+({\underline{\textit{a}}}^{\mathbf{n}})}{({\underline{\textit{a}}}^{\mathbf{n}})}.$$ Clearly $\mu$ is well-defined. Since ${\underline{\textit{a}}}$ is a regular sequence in $A,$ it is easy to verify that $\mu$ is injective. 
Hence $$\dim_K \frac{{{\mathfrak m}}^{s_{\mathbf{n}}} + ({\underline{\textit{a}}}^{\mathbf{n}})}{({\underline{\textit{a}}}^{\mathbf{n}})} \geq \dim_K \frac{{{\mathfrak m}}^s+({\underline{\textit{a}}})}{({\underline{\textit{a}}})}=\dim_K \operatorname{Soc}(A/({\underline{\textit{a}}}))=\tau.$$ On the other hand, one always has $$\dim_K \frac{{{\mathfrak m}}^{s_{\mathbf{n}}} + ({\underline{\textit{a}}}^{\mathbf{n}})}{({\underline{\textit{a}}}^{\mathbf{n}})} \leq \tau$$ as $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is an Artinian local ring of type $\tau$ and $({{\mathfrak m}}^{s_{\mathbf{n}}} + ({\underline{\textit{a}}}^{\mathbf{n}}))/({\underline{\textit{a}}}^{\mathbf{n}}) \subseteq \operatorname{Soc}(A/({\underline{\textit{a}}}^{\mathbf{n}})). $ Therefore $\dim_ K ({{\mathfrak m}}^{s_{\mathbf{n}}} + ({\underline{\textit{a}}}^{\mathbf{n}}))/({\underline{\textit{a}}}^{\mathbf{n}})=\tau$ and hence $({{\mathfrak m}}^{s_{\mathbf{n}}} + ({\underline{\textit{a}}}^{\mathbf{n}}))/({\underline{\textit{a}}}^{\mathbf{n}})=\operatorname{Soc}(A/({\underline{\textit{a}}}^{\mathbf{n}})).$ Thus $A/({\underline{\textit{a}}}^{\mathbf{n}})$ is level. Divided Power rings and Inverse System {#Section:DividedRing} ====================================== In this section we recall some basic facts about the divided power rings and Macaulay’s inverse system for Artinian level $K$-algebras. These play an important role in constructing the inverse system of level $K$-algebras of dimension $d>0$ in the next section. Let $R=K[\![x_1,\ldots,x_m]\!]$ or $R=K[x_1,\ldots,x_m]$ and let $R_i$ denote the $K$-vector space spanned by homogeneous polynomials of degree $i.$ It is known that the injective hull of $K$ as an $R$-module is isomorphic to the following ring (see [@Eis95 Exercise A3.4(b)] and [@Gab59]): $$\displaystyle E_R(K) \cong \operatorname{\mathcal D}:=K_{DP}[X_1,\ldots,X_m]=\bigoplus_{i \geq 0}\operatorname{Hom}_K(R_i,K),$$ called the *divided power ring*. 
The ring $\operatorname{\mathcal D}$ has a structure of $R$-module via the following contraction action: $$\label{Eqn:DividedPower} f \circ F=\sum_{\substack{{\mathbf{n}},{\mathbf{n}}' \in {\mathbb{N}}^m \\ {\mathbf{n}}' \geq {\mathbf{n}}}}\alpha_{{\mathbf{n}}}\beta_{{\mathbf{n}}'}\, X_1^{n'_1-n_1}\cdots X_m^{n'_m-n_m}$$ where $f=\sum_{{\mathbf{n}}\in {\mathbb{N}}^m} \alpha_{{\mathbf{n}}} x_1^{n_1}\cdots x_m^{n_m}$ and $F=\sum_{{\mathbf{n}}' \in {\mathbb{N}}^m} \beta_{{\mathbf{n}}'} X_1^{n_1'}\cdots X_m^{n_m'}$ (pairs with ${\mathbf{n}}' \not\geq {\mathbf{n}}$ contribute zero). Moreover, if $z_1,\ldots,z_m$ is a regular system of parameters of $R$ and $Z_1,\ldots,Z_m$ are the corresponding dual elements, that is, $z_i \circ Z_j=\delta_{ij},$ then $\operatorname{\mathcal D}=K_{DP}[Z_1,\ldots,Z_m]$ (see [@Gab59]). The ring $\operatorname{\mathcal D}$ can also be equipped with a commutative internal multiplication, but throughout this paper, whenever we write $X^{\mathbf{n}}F$ for $F \in \operatorname{\mathcal D},$ we mean the formal multiplication and not the internal multiplication of $\operatorname{\mathcal D}.$ If $I \subseteq R$ is an ideal of $R,$ then $(R/I)^{\vee}:=\operatorname{Hom}_R(R/I,E_R(K))$ can be identified with the $R$-submodule $I^\perp$ of $\operatorname{\mathcal D}$ which is defined as $$I^\perp:=\{F \in \operatorname{\mathcal D}:f \circ F=0 \mbox{ for all }f\in I\}.$$ This submodule is called the *Macaulay inverse system of $I$*. On the other hand, given an $R$-submodule $W$ of $\operatorname{\mathcal D},$ $W^\vee:=\operatorname{Hom}_R(W,E_R(K))$ is the ring $R/\operatorname{Ann}_R(W)$ where $$\operatorname{Ann}_R(W):=\{f \in R:f \circ F=0 \mbox{ for all }F \in W\}$$ is an ideal of $R.$ If $I$ is a homogeneous ideal (resp. $W$ is generated by homogeneous polynomials) then $I^\perp$ is generated by homogeneous polynomials of $\operatorname{\mathcal D}$ (resp. $\operatorname{Ann}_R(W)$ is a homogeneous ideal of $R$). By Matlis duality, $R/I$ is Artinian (resp.
an $R$-submodule $W$ of $\operatorname{\mathcal D}$ is finitely generated) if and only if $ (R/I)^\vee \cong I^\perp $ is finitely generated (resp. $W^\vee \cong R/\operatorname{Ann}_R(W) $ is Artinian) (see [@BH98 Section 3.2]). Moreover, $\operatorname{type}(R/I)=\mu((R/I)^\vee)$ where $\mu(-)$ denotes the number of minimal generators as $R$-module. Emsalem in [@Ems78 Proposition 2] and Iarrobino in [@Iar94], based on the work of Macaulay in [@Mac1916], gave a more precise description of the inverse system of Artinian level $K$-algebras: \[Prop:Emsalem-Iarrobino\] There is a one-to-one correspondence between ideals $I$ such that $R/I$ is an Artinian level local (resp. graded) $K$-algebra of socle degree $s$ and type $\tau$ and $R$-submodules of $\operatorname{\mathcal D}$ generated by $\tau$ polynomials (resp. homogeneous polynomials) of degree $s$ having linearly independent forms of degree $s.$ The correspondence is defined as follows: $$\begin{aligned} \left\{ \begin{array}{cc} I \subseteq R \mbox{ such that } R/I \mbox{ is an Artinian } \\ \mbox{level (resp. graded) local $K$-algebra of }\\ \mbox{ socle degree $s$ and type } \tau \end{array} \right\} \ &\stackrel{1 - 1}{\longleftrightarrow}& \ \left\{ \begin{array}{cc} W \subseteq \operatorname{\mathcal D}\mbox{ submodule generated by } \tau \mbox{ polynomials }\\ \mbox{(resp. homogeneous polynomials) of degree} \\ s \mbox{ with l.i. 
forms of degree } s \end{array} \right\} \\ R/I \ \ \ \ \ &\longrightarrow& \ \ \ \ \ I^\bot \ \ \ \ \ \ \ \ \qquad \ \ \ \ \ \ \ \qquad \ \ \ \ \ \ \ \\ R/\operatorname{Ann}_R(W) \ \ \ \ \ &\longleftarrow& \ \ \ \ \ W \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \qquad \qquad \ \ \ \ \ \ \ \end{aligned}$$ In particular, an Artinian Gorenstein $K$-algebra of socle degree $s$ corresponds to a cyclic $R$-submodule generated by a non-zero polynomial $F$ of degree $s$ in $\operatorname{\mathcal D}.$ Consider the following exact pairing of $K$-vector spaces induced by the contraction action: $$\xymatrix@R=.3pc{ \langle\ , \ \rangle: R \times \operatorname{\mathcal D}\ar[r] & K \\ \qquad (f,F) \ar@{|->}[r]& (f\circ F)(0). }$$ Since $I$ is an ideal, $(I \circ F)(0)=0$ if and only if $I \circ F=0,$ and it follows that $$I^\perp=\{F \in \operatorname{\mathcal D}:\langle f,F \rangle=0 \mbox{ for all }f \in I\}.$$ Using this bilinear pairing we can deduce the following property of $\operatorname{Ann}_R(W)$ (see [@Cil94 Proposition 12.9]). \[Prop:BasicPropertiesOfAnn\] Let $W_1, ~W_2$ be finitely generated $R$-submodules of $\operatorname{\mathcal D}.$ Then $$\operatorname{Ann}_R(W_1 \cap W_2)=\operatorname{Ann}_R(W_1) + \operatorname{Ann}_R(W_2).$$ If $\{F_1,\ldots,F_t\} \subseteq \operatorname{\mathcal D}$ is a set of polynomials, we will denote by $\langle F_1,\ldots,F_t \rangle$ the $R$-submodule of $\operatorname{\mathcal D}$ generated by $F_1,\ldots,F_t$, i.e., the $K$-vector space spanned by $F_1,\ldots,F_t$ and by the corresponding derivatives of all orders.
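To make the contraction action \eqref{Eqn:DividedPower} and the annihilator $\operatorname{Ann}_R(W)$ concrete, here is a small Python sketch (our own illustration; the dictionary encoding of polynomials by exponent tuples is an ad hoc choice). It checks that $F=X^2+Y^2$ is annihilated by $xy$ and $x^2-y^2$ in $R=K[x,y]$, so that $\langle F\rangle$ corresponds to the Artinian Gorenstein algebra $K[x,y]/(xy,x^2-y^2)$ of socle degree $2$ and Hilbert function $(1,2,1)$:

```python
# Contraction action of R = K[x1,...,xm] on D = K_DP[X1,...,Xm]:
# a monomial x^a acts on X^b by x^a . X^b = X^(b-a) if b >= a
# componentwise, and by 0 otherwise; extend bilinearly.
def contract(f, F):
    """f, F are polynomials encoded as {exponent tuple: coefficient}."""
    out = {}
    for a, ca in f.items():
        for b, cb in F.items():
            if all(bi >= ai for ai, bi in zip(a, b)):
                e = tuple(bi - ai for ai, bi in zip(a, b))
                out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c != 0}

F = {(2, 0): 1, (0, 2): 1}             # F = X^2 + Y^2
xy = {(1, 1): 1}                       # f = xy
x2_minus_y2 = {(2, 0): 1, (0, 2): -1}  # f = x^2 - y^2

print(contract(xy, F))              # {}          : xy annihilates F
print(contract(x2_minus_y2, F))     # {}          : x^2 - y^2 annihilates F
print(contract({(1, 0): 1}, F))     # {(1, 0): 1} : x . F = X
```

The cyclic module $\langle F\rangle$ is spanned by $F$ and its contractions $X$, $Y$, $1$, so $\dim_K\langle F\rangle=4=\dim_K R/\operatorname{Ann}_R\langle F\rangle$, matching the correspondence of Proposition \[Prop:Emsalem-Iarrobino\] in the case $\tau=1$.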
If $W \subseteq \operatorname{\mathcal D}$ is generated by polynomials $F_1,\ldots,F_t,$ with $F_i \in \operatorname{\mathcal D}, $ then we write $\operatorname{Ann}_R(W)= \operatorname{Ann}_R\langle F_1,\ldots,F_t\rangle .$ Inverse System of Level $K$-algebras {#Section:MainResult} ==================================== In [@ER17] the authors established a one-to-one correspondence between Gorenstein $K$-algebras of dimension $d>0$ and $G_d$-admissible submodules of $\operatorname{\mathcal D}.$ The main goal of this section is to extend their result and give the structure of the inverse system of level $K$-algebras of positive dimension. First we construct the inverse system of a $d$-dimensional level $K$-algebra. Then, using the properties of this inverse system, we define $L^\tau_d$–admissible submodules of $\operatorname{\mathcal D}$. Finally, we show that the Matlis dual of an $L^\tau_d$-admissible submodule is indeed a level ring. We fix some notations. Let $R=K[x_1,\ldots,x_m]$ and ${\underline{z}}$ a regular linear sequence for $A=R/I$, or $R=K[\![x_1,\ldots,x_m]\!]$ and ${\underline{z}}:=z_1,\ldots,z_d $ a sequence of general elements in $ \mathcal{M}$. If $R=K[\![x_1,\ldots,x_m]\!]$, then by Fact \[Fact:GeneralElements\] $({\underline{z}})A$ is a minimal reduction of ${{\mathfrak m}}.$ Therefore from [@HS06 Corollary 8.3.6] it follows that ${\underline{z}}$ is part of a regular system of parameters for $R.$ Hence in both cases ${\underline{z}}$ can be extended to a minimal system of generators of $\mathcal{M},$ say $z_1,\ldots,z_d,\ldots,z_m.$ We write $Z_1,\ldots,Z_m$ for the dual elements in $\operatorname{\mathcal D}$ corresponding to $z_1,\ldots,z_m,$ that is, $z_i \circ Z_j=\delta_{ij}.$ Hence $\operatorname{\mathcal D}=K_{DP}[Z_1,\ldots,Z_m]$ (see [@Gab59]). We recall the following proposition from [@ER17] that is needed to construct the inverse system of level $K$-algebras.
Notice that the proof of this given in [@ER17] does not depend on the ring being Gorenstein, and hence we state it without proof. [@ER17 Proposition 3.2] \[Prop:DualExactSeq\] Let $R=K[\![x_1,\ldots,x_m]\!]$ (resp. $R=K[x_1,\ldots,x_m]$) and $A=R/I$ be Cohen-Macaulay of dimension $d>0.$ Let ${\underline{z}}:=z_1,\ldots,z_d \in \mathcal{M}$ be a sequence of general elements (resp. regular linear sequence for $A$). For any ${\mathbf{n}}\in {\mathbb{N}}^d_+, $ let $T_{{\mathbf{n}}}=(I+({\underline{z}}^{\mathbf{n}}))^{\perp}.$ 1. \[Prop:DualExactSeq:d=1\] Assume $d=1.$ Then for all $n \geq 2,$ there is an exact sequence of finitely generated $R$-submodules of $\operatorname{\mathcal D}$ $$0 \longrightarrow T_{1} \longrightarrow T_{n} \xrightarrow{z_1\circ} T_{{n-1}} \longrightarrow 0.$$ 2. \[Prop:DualExactSeq:d&gt;1\] Assume $d \geq 2.$ For all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ such that ${\mathbf{n}}\geq {\bf 2}_d$ there is an exact sequence of finitely generated $R$-submodules of $\operatorname{\mathcal D}$ $$\displaystyle 0 \to T_{{\bf 1}_d} \to T_{{\mathbf{n}}} \to \bigoplus_{k=1}^d T_{{{\mathbf{n}}-{\mathbf{e}}_k}} \to \bigoplus_{1 \leq i < j \leq d } T_{{{\mathbf{n}}-{\mathbf{e}}_i-{\mathbf{e}}_j}}.$$ We recall the following basic lemma, which will be used later. \[Lemma:Generates\] Let $f:M\to N$ be an epimorphism between two non-zero $R$-modules $M$ and $N$ that are minimally generated by $\nu$ elements. Let $m_1,\ldots,m_\nu$ be such that $f(m_1),\ldots,f(m_\nu)$ generate $N$. Then $m_1,\ldots,m_\nu$ generate $M$. As $f: M \to N$ is surjective, $\bar{f}:M/\operatorname{\mathcal M}M \to N/\operatorname{\mathcal M}N$ is also surjective. Since $N/\operatorname{\mathcal M}N$ is generated by $f(m_1)+\operatorname{\mathcal M}N, \ldots,f(m_\nu)+\operatorname{\mathcal M}N$ as a $K$-vector space and $\dim_K N/\operatorname{\mathcal M}N=\nu,$ $f(m_1)+\operatorname{\mathcal M}N, \ldots,f(m_\nu)+\operatorname{\mathcal M}N$ are $K$-linearly independent. 
This implies that $m_1+\operatorname{\mathcal M}M,\ldots,m_\nu + \operatorname{\mathcal M}M$ are also $K$-linearly independent and hence $m_1+\operatorname{\mathcal M}M,\ldots,m_\nu + \operatorname{\mathcal M}M$ generate $M/\operatorname{\mathcal M}M. $ Therefore by Nakayama’s lemma $m_1,\ldots,m_\nu$ generate $M.$ In the following proposition we construct the inverse system of level $K$-algebras satisfying certain conditions. \[Prop:ConstructionOfAdmissible\] Let $R=K[\![x_1,\ldots,x_m]\!]$ with $\mathrm{char}(K)=0$ (resp. $R=K[x_1,\ldots,x_m]$), $A=R/I$ and ${\underline{z}}:=z_1,\ldots,z_d \in \mathcal{M}$ a sequence of general elements (resp. regular linear sequence for $A=R/I$). Let $s$ be the index of nilpotency (resp. Castelnuovo-Mumford regularity) of $A.$ Suppose that $A$ is a level $K$-algebra of dimension $d>0$ and type $\tau.$ Then there exists a system of (resp. homogeneous) generators $\{H_{\mathbf{n}}^j:j=1,\dots,\tau,{\mathbf{n}}\in {\mathbb{N}}_+^d\}$ of $I^\perp$ such that for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ $$W_{{\mathbf{n}}}:=(I+({\underline{z}}^{\mathbf{n}}))^\perp=\langle H_{\mathbf{n}}^j : j=1,\dots,\tau\rangle$$ where $\deg H_{\mathbf{n}}^1=\deg H_{\mathbf{n}}^2=\cdots=\deg H_{\mathbf{n}}^\tau=s+|{\mathbf{n}}|-d$ and the forms of degree $s+|{\mathbf{n}}|-d$ of $H_{\mathbf{n}}^1,\ldots,H_{\mathbf{n}}^\tau$ are linearly independent; for all ${\mathbf{n}}\in {\mathbb{N}_+^d}$ and $j=1,\ldots,\tau$ and $i=1,\ldots,d$ $$z_i \circ H_{\mathbf{n}}^j =\begin{cases} H_{{\mathbf{n}}-{\mathbf{e}}_i}^j & \mbox{ if } {\mathbf{n}}-{\mathbf{e}}_i >{\mathbf{0}_d}\\ 0 & \mbox{ otherwise}. \end{cases}$$ We prove proposition for the local case. The proof is similar for the graded case. 
For all ${\mathbf{n}}\in {\mathbb{N}}^d_+,$ let $I_{{\mathbf{n}}}:=I+({\underline{z}}^{\mathbf{n}}),$ $R_{\mathbf{n}}:=R/I_{{\mathbf{n}}}=A/({\underline{z}}^{\mathbf{n}})$ and $W_{{\mathbf{n}}}:=I_{{\mathbf{n}}}^\perp.$ Since ${\underline{z}}$ is a sequence of general elements in $\mathcal{M},$ ${\underline{z}}+I:=z_1+I,\ldots,z_d+I$ is a sequence of general elements in ${{\mathfrak m}}.$ Therefore by Fact \[Fact:GeneralElements\], $({\underline{z}})A$ is a general minimal reduction of ${{\mathfrak m}}.$ Since $A$ is level, $R_{\mathbf{1}_d}$ is an Artinian local level $K$-algebra of type $\tau.$ Note that $\operatorname{socdeg}(R_{\mathbf{1}_d})=s.$ Hence by Proposition \[Prop:Emsalem-Iarrobino\] there exist polynomials $H_{{\bf 1}_d}^1, H_{{\bf 1}_d}^2,\ldots,H_{{\bf 1}_d}^\tau$ of degree $s$ such that the forms of degree $s$ of $H_{{\bf 1}_d}^1,\ldots, H_{{\bf 1}_d}^\tau$ are linearly independent and $$W_{{\bf 1}_d}=(I+({\underline{z}}))^\perp=\langle H_{{\bf 1}_d}^j: j=1,\ldots, \tau \rangle.$$ For ${\mathbf{n}}=(n_1,\ldots,n_d)\in {\mathbb{N}}^d_+, $ let $|{\mathbf{n}}|_+$ denote the number of indices $i$ such that $n_i \geq 2.$ We put the lexicographic order on the set $ \{1,\ldots,d\} \times {\mathbb{N}}_+,$ that is, $(i_1,j_1) < (i_2,j_2)$ if $i_1 < i_2 $ or if $i_1=i_2$ and $ j_1 < j_2.$ We construct $H_{\mathbf{n}}^j,$ ${\mathbf{n}}\in {\mathbb{N}}^d_+,~j=1,\ldots,\tau,$ as required using induction on the pair $(|{\mathbf{n}}|_+,|{\mathbf{n}}|-d+|{\mathbf{n}}|_+) \in \{1,\ldots,d\} \times {\mathbb{N}}_+$.
Assume that $|{\mathbf{n}}|_+=1.$ After a permutation, we may assume that ${\mathbf{n}}=(n,1,\ldots,1)$ with $n \geq 2.$ Since $|{\mathbf{n}}-{\mathbf{e}}_1|_+\leq 1,$ we have $$(|{\mathbf{n}}-{\mathbf{e}}_1|_+,|{\mathbf{n}}-{\mathbf{e}}_1|-d+|{\mathbf{n}}-{\mathbf{e}}_1|_+) < (|{\mathbf{n}}|_+,|{\mathbf{n}}|-d+|{\mathbf{n}}|_+).$$ Hence by induction, for all ${\mathbf{n}}' \leq {\mathbf{n}}-{\mathbf{e}}_1$ and $j=1,\dots,\tau$ there exist $H_{{\mathbf{n}}'}^j \in \operatorname{\mathcal D}$ such that $\{H_{{\mathbf{n}}'}^j:{{\mathbf{n}}' \in \mathbb{N}_+^d,~ j=1,\ldots,\tau,{\mathbf{n}}' \leq {\mathbf{n}}-{\mathbf{e}}_1}\}$ satisfy the required conditions. Let $J=I+(z_2,\ldots,z_d).$ Now, by Proposition \[Prop:DualExactSeq\], we get an exact sequence $$0 \longrightarrow W_{{\bf 1}_d}= (J+(z_1))^\perp\longrightarrow W_{{\mathbf{n}}}=(J+(z_1^n))^\perp \overset{z_1 \circ}{\longrightarrow} W_{{\mathbf{n}}-{\mathbf{e}}_1}= (J+(z_1^{n-1}))^\perp\longrightarrow 0.$$ Therefore for each $j=1,\ldots,\tau,$ there exist polynomials $H_{\mathbf{n}}^j \in W_{{\mathbf{n}}}$ such that $$\label{Eqn:Induction1} z_1 \circ H_{\mathbf{n}}^j = H_{{\mathbf{n}}-{\mathbf{e}}_1}^{j}.$$ As $z_i \in I_{{\mathbf{n}}}$ for $i \neq 1,$ $z_i \circ H_{{\mathbf{n}}}^j=0$ if $i \neq 1.$ Since $\operatorname{socdeg}(R_{\mathbf{n}})=s+|{\mathbf{n}}|-d$ by Proposition \[Prop:SocDegOfR\_nn\], $\deg H_{{\mathbf{n}}}^j \leq s+|{\mathbf{n}}|-d$ for all $j=1,\ldots,\tau.$ Moreover, as $\deg H_{{\mathbf{n}}-{\mathbf{e}}_1}^j=s+|{\mathbf{n}}|-d-1,$ by \eqref{Eqn:Induction1} we get $\deg H_{{\mathbf{n}}}^j \geq s+|{\mathbf{n}}|-d.$ Hence for all $j=1,\ldots,\tau$ $$\deg H_{{\mathbf{n}}}^j = s+|{\mathbf{n}}|-d.$$ Also, since the forms of degree $s+|{\mathbf{n}}|-d-1$ of $H_{{\mathbf{n}}-{\mathbf{e}}_1}^1,\ldots,H_{{\mathbf{n}}-{\mathbf{e}}_1}^\tau$ are linearly independent, the forms of degree $s+|{\mathbf{n}}|-d$ of $H_{\mathbf{n}}^1,\ldots,H_{\mathbf{n}}^\tau$ are also linearly independent.
By Proposition \[Prop:Emsalem-Iarrobino\] $W_{\mathbf{n}}$ and $W_{{\mathbf{n}}-{\mathbf{e}}_1}$ are minimally generated by $\tau$ elements. Hence by Lemma \[Lemma:Generates\] $W_{\mathbf{n}}=\langle H_{\mathbf{n}}^j:j=1,\ldots,\tau \rangle.$ Let $l:=|{\mathbf{n}}|_+ \geq 2.$ After a permutation, we may assume that ${\mathbf{n}}=(n_1,\ldots,n_l,1,\ldots,1)$ with $n_i \geq 2$ for $i=1,\ldots,l.$ We set ${\underline{z}}'=z_1,\ldots,z_l$ and ${\mathbf{n}}'=(n_1,\ldots,n_l)\in {\mathbb{N}}^l_+$ and $J=I+(z_{l+1},\ldots,z_d).$ By Proposition \[Prop:DualExactSeq\] we get an exact sequence $$\label{Eqn:exactSequence} 0 \longrightarrow T_{{\bf 1}_l} \longrightarrow T_{{\mathbf{n}}'} \longrightarrow \bigoplus_{k=1}^l T_{{\mathbf{n}}'-{\mathbf{e}}_k} \overset{\phi^*_{{\mathbf{n}}'}}{\longrightarrow} \bigoplus_{1 \leq i < j \leq l } T_{{{\mathbf{n}}'-{\mathbf{e}}_i-{\mathbf{e}}_j}}.$$ where $T_{{\bf 1}_l}=W_{{\bf 1}_d}, T_{{\mathbf{n}}'}=(J+({\underline{z}}'^{{\mathbf{n}}'}))^\perp=W_{\mathbf{n}},T_{{\mathbf{n}}'-{\mathbf{e}}_k}=(J+({\underline{z}}'^{{\mathbf{n}}'-{\mathbf{e}}_k}))^\perp=W_{{\mathbf{n}}-{\mathbf{e}}_k}$ and $T_{{\mathbf{n}}'-{\mathbf{e}}_i-{\mathbf{e}}_j}=(J+({\underline{z}}'^{{\mathbf{n}}'-{\mathbf{e}}_i-{\mathbf{e}}_j}))^\perp=W_{{\mathbf{n}}-{\mathbf{e}}_i-{\mathbf{e}}_j}$ and $\phi^*_{{\mathbf{n}}'}$ is defined as $$\phi^*_{{\mathbf{n}}'}(v_1,\ldots,v_l)=(z_k \circ v_i-z_i \circ v_k: 1\leq i<k\leq l).$$ Since for all $k=1,\ldots,l$ $$(|{\mathbf{n}}-{\mathbf{e}}_k|_+,|{\mathbf{n}}-{\mathbf{e}}_k|-d+|{\mathbf{n}}-{\mathbf{e}}_k|_+) < (|{\mathbf{n}}|_+,|{\mathbf{n}}|-d+|{\mathbf{n}}|_+),$$ by induction there exist $H_{{\mathbf{n}}-{\mathbf{e}}_k}^j \in \operatorname{\mathcal D},$ $j =1,\ldots, \tau,$ such that - $W_{{\mathbf{n}}-{\mathbf{e}}_k}=\langle H_{{\mathbf{n}}-{\mathbf{e}}_k}^j:j=1,\ldots,\tau\rangle, $ $\deg H_{{\mathbf{n}}-{\mathbf{e}}_k}^j=s+|{\mathbf{n}}|-d-1$ for all $j=1,\ldots,\tau$ and the forms of degree $s+|{\mathbf{n}}|-d-1$ of 
$H_{{\mathbf{n}}-{\mathbf{e}}_k}^1,\ldots,H_{{\mathbf{n}}-{\mathbf{e}}_k}^\tau$ are linearly independent; - $z_i \circ H_{{\mathbf{n}}-{\mathbf{e}}_k}^j=H_{{\mathbf{n}}-{\mathbf{e}}_k-{\mathbf{e}}_i}^j $ for all $i=1,\ldots,l$ and $i \neq k.$ Therefore for each $j=1,\ldots,\tau,$ $$\begin{aligned} \phi^*_{{\mathbf{n}}'}(H_{{\mathbf{n}}-{\mathbf{e}}_1}^j,\ldots,H_{{\mathbf{n}}-{\mathbf{e}}_l}^j)&=&(z_k \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}^j - z_i \circ H_{{\mathbf{n}}-{\mathbf{e}}_k}^j: 1\leq i<k\leq l)\\ &=&(H_{{\mathbf{n}}-{\mathbf{e}}_i-{\mathbf{e}}_k}^j - H_{{\mathbf{n}}-{\mathbf{e}}_k-{\mathbf{e}}_i}^j:1\leq i<k\leq l)=0.\end{aligned}$$ Hence $$(H_{{\mathbf{n}}-{\mathbf{e}}_1}^j,\ldots,H_{{\mathbf{n}}-{\mathbf{e}}_l}^j) \in \ker (\phi^*_{{\mathbf{n}}'}).$$ Therefore by exactness of the sequence \eqref{Eqn:exactSequence}, we conclude that there exist $H_{\mathbf{n}}^j \in W_{\mathbf{n}}$ such that for all $k=1,\ldots,l,$ $$\label{Eqn:Induction2} z_k \circ H_{\mathbf{n}}^j=H_{{\mathbf{n}}-{\mathbf{e}}_k}^j.$$ Moreover, as $z_k \in I_{{\mathbf{n}}}$ for $k>l,$ $z_k \circ H_{{\mathbf{n}}}^j=0$ if $k >l.$ Since $\operatorname{socdeg}(R_{\mathbf{n}})=s+|{\mathbf{n}}|-d$ by Proposition \[Prop:SocDegOfR\_nn\], $\deg H_{{\mathbf{n}}}^j\leq s+|{\mathbf{n}}|-d.$ Also, as $\deg H_{{\mathbf{n}}-{\mathbf{e}}_k}^j=s+|{\mathbf{n}}|-d-1$, by \eqref{Eqn:Induction2} we get $\deg H_{\mathbf{n}}^j\geq s+|{\mathbf{n}}|-d.$ Hence $\deg H_{{\mathbf{n}}}^j=s+|{\mathbf{n}}|-d$ for all $j=1,\ldots,\tau.$ As the forms of degree $s+|{\mathbf{n}}|-d-1$ of $H_{{\mathbf{n}}-{\mathbf{e}}_1}^1,\ldots,H_{{\mathbf{n}}-{\mathbf{e}}_1}^{\tau}$ are linearly independent, the forms of degree $s+|{\mathbf{n}}|-d$ of $H_{\mathbf{n}}^1,\ldots,H_{\mathbf{n}}^\tau$ are also linearly independent.
Applying Lemma \[Lemma:Generates\] to the map $$W_{{\mathbf{n}}} \overset{z_1 \circ}{\longrightarrow} W_{{\mathbf{n}}-{\mathbf{e}}_1}$$ we conclude that $W_{\mathbf{n}}=\langle H_{\mathbf{n}}^1,\ldots,H_{\mathbf{n}}^\tau\rangle.$ Thus we have constructed $H_{\mathbf{n}}^j,$ for all $j=1,\ldots,\tau$ and ${\mathbf{n}}\in {\mathbb{N}}^d_+$ satisfying the required conditions. Moreover, $I^\perp=\langle W_{{\mathbf{n}}} :{\mathbf{n}}\in {\mathbb{N}}^d_+ \rangle=:W.$ Indeed, if $F \in I^\perp,$ then $(I_{{\bf t}_d}) \circ F=0$ where $t>\deg F.$ Hence $F \in W_{{\bf t}_d} \subseteq W.$ The other containment is trivial. Motivated by Proposition \[Prop:ConstructionOfAdmissible\] we define $L_d^\tau$-admissible $R$-submodules of $\operatorname{\mathcal D}$ as follows: \[Def:LdAdmissible\] Let $d$ and $\tau$ be positive integers. An $R$-submodule $W$ of $\operatorname{\mathcal D}$ is called [*local*]{} (resp. [*graded*]{}) [*$L_d^\tau$-admissible*]{} if it admits a system of (resp. homogeneous) generators $\{H_{\mathbf{n}}^j:{{\mathbf{n}}\in \mathbb{N}_+^d, j =1,\ldots, \tau}\}$ satisfying the following conditions: (1) \[Def:Cond1\] for all ${\mathbf{n}}\in {\mathbb{N}}_+^d,$ $s_{{\mathbf{n}}}:=\deg H_{\mathbf{n}}^1=\deg H_{\mathbf{n}}^2=\cdots=\deg H_{\mathbf{n}}^\tau$ and the forms of degree $s_{{\mathbf{n}}}$ of $H_{\mathbf{n}}^1,\ldots,H_{\mathbf{n}}^\tau$ are linearly independent; (2) \[Def:Cond2\] there exists a sequence of general elements (resp. 
regular linear sequence) $z_1,\ldots,z_d $ in $\mathcal{M}$ such that $$\label{Eqn:Cond2} z_i \circ H_{\mathbf{n}}^j =\begin{cases} H_{{\mathbf{n}}-{\mathbf{e}}_i}^j & \mbox{ if } {\mathbf{n}}-{\mathbf{e}}_i >\mathbf{0}_d\\ 0 & \mbox{ otherwise} \end{cases}$$ for all ${\mathbf{n}}\in {\mathbb{N}_+^d},$ $j=1,\ldots,\tau$ and $i=1,\ldots,d;$ (3) \[Def:Cond3\] for all $i=1,\ldots,d$ and ${{\mathbf{n}}\in \mathbb{N}_+^d} $ such that ${\mathbf{n}}-{\mathbf{e}}_i > \mathbf{0}_d$ $$\label{Eqn:Cond3} W_{{\mathbf{n}}} \cap V_{\mathbf{n}}^i \subseteq W_{{\mathbf{n}}-{\mathbf{e}}_i}$$ where $W_{\mathbf{n}}=\langle H_{\mathbf{n}}^j:j=1,\ldots,\tau\rangle$ and $V_{\mathbf{n}}^i:= \langle Z_1^{k_1}\cdots Z_m^{k_m} : {\mathbf{k}}=(k_1,\ldots,k_m) \in {\mathbb{N}}^m \mbox{ with } k_i\leq n_i-2 \mbox{ and }|{\mathbf{k}}| \leq s_{{\mathbf{n}}} \rangle.$ If this is the case, we also say that $W$ is $L_d^\tau$-admissible with respect to the sequence ${\underline{z}}:=z_1,\ldots,z_d.$ \[Remark:Definition\] We remark that if an $R$-submodule $W$ of $\operatorname{\mathcal D}$ satisfies , then $$W_{{\mathbf{n}}-{\mathbf{e}}_i} \subseteq W_{{\mathbf{n}}} \cap V_{\mathbf{n}}^i$$ for all $i=1,\ldots,d$ and ${\mathbf{n}}-{\mathbf{e}}_i > \mathbf{0}_d.$ Indeed, by $W_{{\mathbf{n}}-{\mathbf{e}}_i} \subseteq W_{{\mathbf{n}}}$ and $$z_i^{n_i-1} \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}^j=0$$ for all $j=1,\ldots,\tau$ which implies that $W_{{\mathbf{n}}-{\mathbf{e}}_i} \subseteq V_{{\mathbf{n}}}^i.$ Thus $W_{{\mathbf{n}}-{\mathbf{e}}_i} \subseteq W_{{\mathbf{n}}} \cap V_{\mathbf{n}}^i.$ Hence if $W$ is $L_d^\tau$-admissible, then equality holds in . 
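Condition (2) of the definition can be checked mechanically on simple families. The sketch below is our own illustration, not taken from the paper: in characteristic zero the contraction action on monomials is $x^a \circ X^b = X^{b-a}$ when $b \geq a$ componentwise and $0$ otherwise, and we verify condition (2) for the monomial family $H_{\mathbf{n}} = X_1^{n_1-1}X_2^{n_2-1}$ (so $d=2$, $\tau=1$, with $z_i = x_i$):

```python
# Contraction action of R = K[x1, x2] on the divided-power algebra:
# on monomials, x^a o X^b = X^(b - a) if b >= a componentwise, else 0.
# Polynomials are encoded as dicts {exponent tuple: coefficient}.

def contract(a, F):
    out = {}
    for b, c in F.items():
        if all(bi >= ai for ai, bi in zip(a, b)):
            e = tuple(bi - ai for ai, bi in zip(a, b))
            out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

# The family H_n = X1^(n1-1) X2^(n2-1) satisfies condition (2):
# x_i o H_n = H_(n - e_i) if n - e_i > 0_d, and 0 otherwise.
def H(n1, n2):
    return {(n1 - 1, n2 - 1): 1}

x1, x2 = (1, 0), (0, 1)
assert contract(x1, H(3, 2)) == H(2, 2)
assert contract(x2, H(3, 2)) == H(3, 1)
assert contract(x1, H(1, 2)) == {}      # n - e_1 is not > 0_d
```

Since every contraction of a monomial is again a monomial (or zero), this encoding suffices for the monomial families used in the examples below.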
\[Remark:vanishing\] Let $W=\langle H_{\mathbf{n}}^j: 1 \le j \le \tau, {\mathbf{n}}\in {\mathbb{N}}_+^d\rangle$ be a local or graded $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D}.$ Put $W_{\mathbf{n}}:=\langle H_{\mathbf{n}}^j:j=1,\ldots,\tau\rangle.$ Then $W_{{\bf 1}_d}=0$ if and only if $W_{\mathbf{n}}=0$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+.$\ [**Proof:**]{} Suppose $W_{{\bf 1}_d}=0.$ We use induction on $t=|{\mathbf{n}}|$ to show that $W_{\mathbf{n}}=0$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+.$ If $t=d,$ then ${\mathbf{n}}={\bf 1}_d$ and by assumption $W_{{\bf 1}_d}=0.$ Assume that $t \geq d$ and $W_{{\mathbf{n}}}=0$ for all ${\mathbf{n}}$ with $|{\mathbf{n}}| \leq t.$ It suffices to show that $W_{{\mathbf{n}}+{\mathbf{e}}_i}=0$ for all $i=1,\ldots,d$ whenever $|{\mathbf{n}}|=t.$ From it follows that $W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap V_{{\mathbf{n}}+{\mathbf{e}}_i}^i \subseteq W_{\mathbf{n}}=0$. If $W_{{\mathbf{n}}+{\mathbf{e}}_i} \neq 0,$ then contracting a nonzero element of $W_{{\mathbf{n}}+{\mathbf{e}}_i}$ down to degree zero gives a nonzero constant in $W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap V_{{\mathbf{n}}+{\mathbf{e}}_i}^i,$ which is a contradiction. Hence $W_{{\mathbf{n}}+{\mathbf{e}}_i}=0$ for all $i=1,\ldots,d.$ The converse is trivial. In the following proposition we prove that if $W$ is an $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D},$ then $R/\operatorname{Ann}_R(W)$ is a level $K$-algebra. We do not need assumptions on the characteristic for this. \[Prop:AdmissibleGivesLevel\] Let $W=\langle H_{\mathbf{n}}^j:j=1,\dots,\tau, {\mathbf{n}}\in {\mathbb{N}}_+^d \rangle$ be a non-zero local or graded $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D}.$ Then $R/\operatorname{Ann}_R(W)$ is a level $K$-algebra of dimension $d>0$ and type $\tau.$ We prove the result for the local case; the proof is similar for the graded case, and we indicate the changes needed there where necessary.
Let $W$ be $L_d^\tau$-admissible with respect to a sequence of general elements ${\underline{z}}:=z_1,\ldots,z_d.$ We set $W_{\mathbf{n}}:=\langle H_{\mathbf{n}}^j:j=1,\ldots,\tau\rangle,$ $I_{\mathbf{n}}:=\operatorname{Ann}_R(W_{\mathbf{n}})$ and $I:=\operatorname{Ann}_R(W).$ Hence $I=\cap_{{\mathbf{n}}\in {\mathbb{N}}^d_+} I_{\mathbf{n}}.$\ [**Claim 1:**]{} $I_{\mathbf{n}}=I+({{\underline{z}}^{\mathbf{n}}})$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+.$\ [**Proof of Claim 1:**]{} First we prove that for all ${\mathbf{n}}\in {\mathbb{N}}^d_+,$ $$\label{Eqn:containment} I_{\mathbf{n}}\subseteq I_{{\mathbf{n}}+{\bf 1}_d}+({\underline{z}}^{\mathbf{n}}).$$ Note that for all $i=1,\ldots,d$ and ${\mathbf{n}}\in {\mathbb{N}}^d_+,$ $$\operatorname{Ann}_R(V_{{\mathbf{n}}}^i)=(z_i^{n_i-1})+\mathcal{M}^{s_{{\mathbf{n}}}+1}.$$ Indeed, suppose $f \in \operatorname{Ann}_R(V_{{\mathbf{n}}}^i).$ Let $f=\sum_{{\mathbf{k}}\in {\mathbb{N}}^m} \alpha_{{\mathbf{k}}} x^{\mathbf{k}}.$ Since $x^{{\mathbf{k}}} \in \mathcal{M}^{s_{{\mathbf{n}}}+1}$ if $|{\mathbf{k}}| > s_{{\mathbf{n}}},$ we may assume that $f$ is a polynomial of degree at most $s_{{\mathbf{n}}}.$ Write $$f=z_i^{n_i-1} f_1+f_2$$ for some $f_1,f_2\in R$ such that $\deg_{z_i} f_2 \leq n_i-2.$ As $\deg f \leq s_{{\mathbf{n}}},$ $\deg f_2 \leq s_{{\mathbf{n}}}.$ Assume that $f_2 \neq 0.$ Then there exists $G \in V_{{\mathbf{n}}}^i$ such that $f_2 \circ G \neq 0.$ But as $f,z_i^{n_i-1} \in \operatorname{Ann}_R(V_{{\mathbf{n}}}^i),$ $f_2 \in \operatorname{Ann}_R(V_{{\mathbf{n}}}^i)$ and hence $f_2 \circ G=0,$ which leads to a contradiction. Therefore $f_2=0$ and hence $f \in (z_i^{n_i-1}).$ Thus $\operatorname{Ann}_R(V_{{\mathbf{n}}}^i)\subseteq(z_i^{n_i-1})+\mathcal{M}^{s_{{\mathbf{n}}}+1}.$ The other containment is clear.
Now, since $W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap V_{{\mathbf{n}}+{\mathbf{e}}_i}^i \subseteq W_{{\mathbf{n}}},$ by using Proposition \[Prop:BasicPropertiesOfAnn\] we get $$\begin{aligned} I_{{\mathbf{n}}}= \operatorname{Ann}_R(W_{{\mathbf{n}}}) \subseteq \operatorname{Ann}_R(W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap V_{{\mathbf{n}}+{\mathbf{e}}_i}^i)=\operatorname{Ann}_R(W_{{\mathbf{n}}+{\mathbf{e}}_i})+\operatorname{Ann}_R(V_{{\mathbf{n}}+{\mathbf{e}}_i}^i)=I_{{\mathbf{n}}+{\mathbf{e}}_i}+(z_i^{n_{i}}).\end{aligned}$$ The last equality follows because $\mathcal{M}^{s_{{\mathbf{n}}+{\mathbf{e}}_i}+1} \subseteq I_{{\mathbf{n}}+{\mathbf{e}}_i}.$ Therefore for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ $$\begin{aligned} \nonumber I_{{\mathbf{n}}} \subseteq I_{{\mathbf{n}}+{\mathbf{e}}_1}+(z_1^{n_1})\subseteq I_{{\mathbf{n}}+{\mathbf{e}}_1+{\mathbf{e}}_2}+(z_1^{n_1},z_2^{n_2}) \subseteq \cdots\\ \cdots \subseteq I_{{\mathbf{n}}+{\mathbf{e}}_1\cdots+{\mathbf{e}}_d}+(z_1^{n_1},\ldots,z_d^{n_d})=I_{{\mathbf{n}}+{\bf 1}_d}+ ({\underline{z}}^{{\mathbf{n}}}). 
\end{aligned}$$ Fix ${\mathbf{n}}\in {\mathbb{N}}^d_+$ and consider $f \in I_{\mathbf{n}}.$ By there are $f_{{\mathbf{n}}+{\bf 1}_d} \in I_{{\mathbf{n}}+{\bf 1}_d}$ and $g_0 \in ({\underline{z}}^{\mathbf{n}})$ such that $$f=f_{{\mathbf{n}}+{\bf 1}_d}+g_0.$$ Since $f_{{\mathbf{n}}+{\bf 1}_d} \in I_{{\mathbf{n}}+{\bf 1}_d},$ again by there are $f_{{\mathbf{n}}+{\bf 2}_d} \in I_{{\mathbf{n}}+{\bf 2}_d}$ and $g_1 \in ({\underline{z}}^{{\mathbf{n}}+{\bf 1}_d})$ such that $$f_{{\mathbf{n}}+{\bf 1}_d}=f_{{\mathbf{n}}+{\bf 2}_d}+g_1.$$ Thus $f=f_{{\mathbf{n}}+{\bf 2}_d}+g_0+g_1.$ Iterating, there are sequences $\{f_{{\mathbf{n}}+{\bf t}_d}\}_{t \geq 0}$ and $\{g_t\}_{t \geq 0}$ such that $f_{{\mathbf{n}}+{\bf t}_d} \in I_{{\mathbf{n}}+{\bf t}_d},$ $g_t \in ({\underline{z}}^{{\mathbf{n}}+{\bf t}_d})$ and $f_{{\mathbf{n}}+({\bf t-1})_d}=f_{{\mathbf{n}}+{\bf t}_d}+g_{t-1}.$ For all $t \geq 0$ it holds that $$\label{Eqn:FormulaForf} f=f_{{\mathbf{n}}+{\bf t}_d}+\sum_{i=0}^{t-1}g_i.$$ Let ${g'}=\sum_{i \geq 0}g_i \in K[\![x_1,\ldots,x_m]\!].$ Let ${f'}=\lim_{t \to \infty} f_{{\mathbf{n}}+{\bf t}_d} \in K[\![x_1,\ldots,x_m]\!].$ Taking the limit as $t \to \infty$ in , we get $$\begin{aligned} f={f'}+{g'}.\end{aligned}$$ Since $g_t \in ({\underline{z}}^{{\mathbf{n}}+{\bf t}_d})$ for all $t\geq 0,$ ${g'} \in ({\underline{z}}^{\mathbf{n}}).$ Since for every ${\bf k} \in {\mathbb{N}}^d_+,$ $W_{{\mathbf{k}}}$ is a finitely generated $R$-submodule of $\operatorname{\mathcal D}$, $R/I_{{\mathbf{k}}}$ is Artinian.
Hence there exists a positive integer $N({\mathbf{k}})$ such that $\operatorname{\mathcal M}^{N({\mathbf{k}})} \subseteq I_{{{\mathbf{k}}}}.$ Note that as $W_{{\mathbf{k}}} \subseteq W_{{\mathbf{k}}+{\bf t}_d}$ by , $I_{{\bf k}+{\bf t}_d} \subseteq I_{{\bf k}}$ for all $t \geq 0.$ Since $f_{{\bf k}+{\bf t}_d} - {f'} \in \operatorname{\mathcal M}^{N({\mathbf{k}})} \subseteq I_{{\bf k}}$ for all $t \gg 0$ and $f_{{\bf k}+{\bf t}_d}\in I_{{\bf k}+{\bf t}_d} \subseteq I_{{\bf k}}$ for all $t \geq 0,$ we get that ${f'} \in I_{{\bf k}}$ for all ${\bf k} \in {\mathbb{N}}^d_+.$ Thus ${f'} \in I$ and hence $f \in I+({\underline{z}}^{\mathbf{n}}).$ This gives that $I_{\mathbf{n}}\subseteq I+({\underline{z}}^{\mathbf{n}}).$ If $R=K[x_1,\ldots,x_m], $ then ${f'} \in I \subseteq R.$ Since $f \in R$ we get that ${g'}=\sum_{i \geq 0}g_i \in R.$ On the other hand, by $z_i^{n_i} \circ H_{\mathbf{n}}^j =0$ for all $j=1,\ldots,\tau$ and $i=1,\ldots,d.$ Hence $({\underline{z}}^{\mathbf{n}}) \subseteq I_{\mathbf{n}}.$ Clearly, $I \subseteq I_{\mathbf{n}}.$ Therefore $I+({\underline{z}}^{\mathbf{n}}) \subseteq I_{\mathbf{n}}.$ This proves Claim 1. [**Claim 2:**]{} ${\underline{z}}$ is a regular sequence modulo $I$ and $\dim(R/I)=d.$\ [**Proof of Claim 2:**]{} By Claim 1 $\operatorname{Ann}_R(W_{{\bf 1}_d})=I_{{\bf 1}_d}= I+({\underline{z}}).$ Since $W \neq 0,$ by Remark \[Remark:Definition\] $W_{{\bf 1}_d} \neq 0.$ Since $W_{{\bf 1}_d}$ is a non-zero finitely generated $R$-submodule of $\operatorname{\mathcal D},$ $R/(I+({\underline{z}}))$ is Artinian. 
This implies that $\dim (R/I) \leq d.$ Next we prove that ${\underline{z}}$ is a regular sequence modulo $I$ and hence $\dim(R/I)=d.$ First we prove that $z_1$ is a nonzero-divisor on $A=R/I.$ By the derivation by $z_1$ defines an epimorphism of $R$-modules $$W_{{\mathbf{n}}+ {\mathbf{e}}_1} \overset{z_1\circ}{\longrightarrow} W_{{\mathbf{n}}} \longrightarrow 0$$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+ .$ Since $\operatorname{Ann}_R(W_{{\mathbf{n}}})= I_{{\mathbf{n}}}= I+({\underline{z}}^{\mathbf{n}})$ by Claim 1, this sequence induces an exact sequence of $R$-modules $$0\longrightarrow \frac{R}{I+(\uz^{\nn})} \overset{\bigcdot z_1}{\longrightarrow} \frac{R}{I+(\uz^{\nn+\ee_1})}$$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$. Let $f \in R$ be such that $z_1f \in I.$ Since $z_1 f \in I+({\underline{z}}^{{\mathbf{n}}+{\mathbf{e}}_1}),$ we deduce that $f \in I+({\underline{z}}^{{\mathbf{n}}})=I_{{\mathbf{n}}}$ for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ and hence we conclude that $f \in I.$\ Assume that $z_1,\ldots,z_l$ is a regular sequence of $R/I$, with $l < d$. 
Given ${\mathbf{n}}'=(n_{l+1},\ldots,n_d) \in {\mathbb{N}}^{d-l}_+,$ we take ${\mathbf{n}}=(1,\ldots,1,n_{l+1},\ldots,n_d) \in {\mathbb{N}}^d_+.$ By the derivation by $z_{l+1}$ defines an epimorphism of $R$-modules for all $n_{l+1} \geq 2$ $$W_{\mathbf{n}}\overset{z_{l+1}\circ}{\longrightarrow} W_{{\mathbf{n}}-{\mathbf{e}}_{l+1}} \longrightarrow 0.$$ Since $\operatorname{Ann}_R(W_{{\mathbf{n}}})= I_{{\mathbf{n}}}= I+({\underline{z}}^{\mathbf{n}}) $ by Claim 1, this sequence induces an exact sequence of $R$-modules $$0 \longrightarrow \frac{R}{I+(z_1,\ldots,z_l)+(z_{l+1}^{n_{l+1}-1},\ldots,z_d^{n_d})} \overset{\bigcdot z_{l+1}}{\longrightarrow} \frac{R}{I+(z_1,\ldots,z_l)+(z_{l+1}^{n_{l+1}},\ldots,z_d^{n_d})}.$$ Let $f \in R$ be such that $z_{l+1} f \in I+(z_1,\ldots,z_l).$ Since $z_{l+1} f \in I+(z_1,\ldots,z_l)+(z_{l+1}^{n_{l+1}},\ldots,z_d^{n_d}),$ we deduce that $f \in I+(z_1,\ldots,z_l)+(z_{l+1}^{n_{l+1}-1},\ldots,z_d^{n_d})$ for all $n_{l+1} \geq 2.$ Hence $\bar f\in R/(I+(z_1,\dots,z_l))$ is such that $\bar f\in {L_t}:=\overline{(z_{l+1}^{t},\ldots,z_d^{t})}$ for all $t \geq 1.$ Here $\overline{(\cdot)}$ denotes the image in $R/(I+(z_1,\dots,z_l)).$ Since by Krull intersection theorem $\cap_{t \geq 1} L_t=0,$ $\bar f=0$ and hence $f \in I+(z_1,\ldots,z_l).$ Thus ${\underline{z}}$ is a regular sequence and we have the claim. 
[**Claim 3:**]{} $R/I$ is a level $K$-algebra of type $\tau$.\ [**Proof of Claim 3:**]{} Since ${\underline{z}}$ is a sequence of general elements in $\mathcal{M},$ ${\underline{z}}+I:=z_1+I,\ldots,z_d+I$ is a sequence of general elements in ${{\mathfrak m}}.$ Therefore by Fact \[Fact:GeneralElements\], $({{\underline{z}}})A \subseteq {{\mathfrak m}}$ is a general minimal reduction of ${{\mathfrak m}}.$ Since $(I+({\underline{z}}))^\perp=(I_{{\bf 1}_d})^\perp=W_{{\bf 1}_d}$ and $W_{{\bf 1}_d}$ is generated by polynomials $H_{{\bf 1}_d}^1,\ldots,H_{{\bf 1}_d}^\tau$ of the same degree whose forms of degree $s_{{\bf 1}_d}$ are linearly independent, by Proposition \[Prop:Emsalem-Iarrobino\] $R/(I+({\underline{z}}))$ is an Artinian level $K$-algebra of type $\tau.$ Since $R/I$ is Cohen-Macaulay by Claim 2, $R/I$ is a $d$-dimensional level $K$-algebra of type $\tau$ according to Definition \[Def:LocalLevel\]. Summarizing, we can now give a complete characterization of the inverse system of level $K$-algebras. \[thm:CharOfLocalLevel\] Let $R=K[\![x_1,\ldots,x_m]\!]$ with $\mathrm{char}(K)=0$ (resp. $R=K[x_1,\ldots,x_m]$) and let $0< d \leq m$. There is a one-to-one correspondence between the following sets: (1) $d$-dimensional local (resp. graded) level $K$-algebras of type $\tau;$ (2) non-zero local (resp. graded) $L_d^\tau$-admissible $R$-submodules $W=\langle H_{\mathbf{n}}^j: j=1,\dots,\tau,{\mathbf{n}}\in {\mathbb{N}}^d_+\rangle$ of $\operatorname{\mathcal D};$ given by $$\begin{aligned} \left\{ \begin{array}{cc} I \subseteq R \mbox{ such that } R/I \mbox{ is a local } \\ \mbox{ (resp. graded) level $K$-algebra of }\\ \mbox{ dimension $d$ and type }\tau \end{array} \right\} \ &\stackrel{1 - 1}{\longleftrightarrow}& \ \left\{ \begin{array}{cc} \mbox{ non-zero local (resp.
graded)} L_d^\tau\mbox{-admissible }\\ R\mbox{-submodule } W \mbox{ of } \operatorname{\mathcal D}\\ \end{array} \right\} \\ R/I \ \ \ \ \ &\longrightarrow& \ \ \ \ \ I^\perp \\ R/\operatorname{Ann}_R(W) \ \ \ \ \ &\longleftarrow& \ \ \ \ \ W \end{aligned}$$ Let $A=R/I$ be a local level $K$-algebra of dimension $d$ and type $\tau.$ Let ${\underline{z}}=z_1,\ldots,z_d $ be a sequence of general elements in $\mathcal{M}.$ Let $\{H^j_{\mathbf{n}}:{\mathbf{n}}\in {\mathbb{N}}^d_+,~ j =1,\ldots, \tau\}$ be a generating set of $I^\perp$ as in Proposition \[Prop:ConstructionOfAdmissible\]. To prove that $I^\perp$ is $L_d^\tau$-admissible it is enough to prove . For fixed $i=1,\ldots,d$ and ${\mathbf{n}}\in N^d_+,$ let $ F \in W_{\mathbf{n}}\cap V_{\mathbf{n}}^i$ where $W_{{\mathbf{n}}}:=(I+({\underline{z}}^{{\mathbf{n}}}))^\perp$ and $V_{\mathbf{n}}^i:= \langle Z_1^{k_1} \cdots Z_m^{k_m} : {\mathbf{k}}=(k_1,\ldots,k_m)\in {\mathbb{N}}^m \mbox{ with } k_i \leq n_i-2 \mbox{ and }|{\mathbf{k}}| \leq s+|{\mathbf{n}}|-d \rangle .$ We have $$\begin{aligned} (I+({\underline{z}}^{{\mathbf{n}}-{\mathbf{e}}_i})) \circ F& =& I \circ F + ({\underline{z}}^{{\mathbf{n}}-{\mathbf{e}}_i}) \circ F\\ &=& z_i^{n_i-1} \circ F \hspace*{4cm} (\mbox{as } F \in W_{\mathbf{n}})\\ &=& 0 \hspace*{5.1cm} (\mbox{as } F \in V_{\mathbf{n}}^i). \end{aligned}$$ Hence $F \in (I+({\underline{z}}^{{\mathbf{n}}-{\mathbf{e}}_i}))^\perp= W_{{\mathbf{n}}-{\mathbf{e}}_i}$. This proves . Hence $I^\perp$ is a local $L_d^\tau$-admissible submodule of $\operatorname{\mathcal D}$. 
As $d \geq 1,$ $W_{{\bf 1}_d}\neq 0$ which gives that $I^\perp \neq 0.$ Let $W=\langle H_{\mathbf{n}}^j:1 \le j \le \tau, {\mathbf{n}}\in {\mathbb{N}}_+^d \rangle$ be a non-zero local $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D}.$ By Proposition \[Prop:AdmissibleGivesLevel\], $R/\operatorname{Ann}_R(W)$ is a level $K$-algebra of dimension $d >0$ and type $\tau.$ Let $$\begin{aligned} \mathcal{C}&:=&\left\{I \subseteq R \mbox{ such that } R/I \mbox{ is a local (or graded) level $K$-algebra of } \mbox{dimension $d$ and type }\tau \right\} \mbox{ and }\\ \mathcal{C}'&:=&\left\{ \mbox{ local (or graded) } L_d^\tau\mbox{-admissible $R$-submodule } W= \langle H_{\mathbf{n}}^j :j=1,\dots,\tau, {\mathbf{n}}\in {\mathbb{N}}^d_+ \rangle \mbox{ of } \operatorname{\mathcal D}\right\}. \end{aligned}$$ Consider $\theta: \mathcal{C} \to \mathcal{C}'$ defined as $$\theta(R/I)=I^\perp$$ and $\theta':\mathcal{C}' \to \mathcal{C}$ defined as $$\theta'(W)=R/\operatorname{Ann}_R(W).$$ We prove that $\theta$ and $\theta'$ are inverses of each other. Let $A=R/I$ be a $d$-dimensional level $K$-algebra. Then $$\theta'\theta(R/I)=R/\operatorname{Ann}_R(I^\perp).$$ Let ${\underline{z}}:=z_1,\ldots,z_d$ be a sequence of general elements in $\mathcal{M}.$ Let $\{H_{{\mathbf{n}}}^j:{\mathbf{n}}\in {\mathbb{N}}^d_+, ~j =1,\ldots,\tau\}$ be a system of generators of $I^\perp$ as in Proposition \[Prop:ConstructionOfAdmissible\]. Then $W_{{\mathbf{n}}}:=(I+({\underline{z}}^{\mathbf{n}}))^\perp=\langle H_{{\mathbf{n}}}^j:j=1,\ldots,\tau\rangle.$ Therefore $$\operatorname{Ann}_R(I^\perp)=\bigcap_{{\mathbf{n}}\in {\mathbb{N}}^d_+} \operatorname{Ann}_R(W_{\mathbf{n}})= \bigcap_{{\mathbf{n}}\in {\mathbb{N}}^d_+} \operatorname{Ann}_R((I+({\underline{z}}^{\mathbf{n}}))^\perp)=\bigcap_{{\mathbf{n}}\in {\mathbb{N}}^d_+} (I+({\underline{z}}^{\mathbf{n}}))=I$$ where $\operatorname{Ann}_R((I+({\underline{z}}^{\mathbf{n}}))^\perp)=I+({\underline{z}}^{\mathbf{n}})$ by Matlis duality. 
The last equality follows by the Krull intersection theorem applied to $R/I.$ Hence $\theta'\theta$ is the identity. Let $W= \langle H_{\mathbf{n}}^j :j=1,\dots,\tau, {\mathbf{n}}\in {\mathbb{N}}^d_+\rangle$ be a local $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D}$ with respect to the sequence ${\underline{z}}:=z_1,\ldots,z_d.$ Then $$\theta \theta'(W)=\operatorname{Ann}_R(W)^\perp.$$ Let $I:=\operatorname{Ann}_R(W),$ $I_{{\mathbf{n}}}:=\operatorname{Ann}_R \langle H_{\mathbf{n}}^j :j=1,\dots,\tau \rangle.$ Then by Claim 1 in the proof of Proposition \[Prop:AdmissibleGivesLevel\] we get $ I_{{\mathbf{n}}}=I+({\underline{z}}^{\mathbf{n}}).$ Hence by Proposition \[Prop:ConstructionOfAdmissible\] it follows that $$I^\perp=\langle (I+({\underline{z}}^{\mathbf{n}}))^\perp:{\mathbf{n}}\in {\mathbb{N}}^d_+\rangle =\langle I_{{\mathbf{n}}}^\perp : {\mathbf{n}}\in {\mathbb{N}}^d_+\rangle.$$ Since $I_{{\mathbf{n}}}^\perp=\langle H_{\mathbf{n}}^j :j=1,\dots,\tau \rangle$ by Matlis duality, $$\operatorname{Ann}_R(W)^\perp = I^\perp=\langle I_{{\mathbf{n}}}^\perp:{\mathbf{n}}\in {\mathbb{N}}^d_+\rangle=\langle H_{\mathbf{n}}^j :j=1,\dots,\tau, {\mathbf{n}}\in {\mathbb{N}}^d_+\rangle =W.$$ Hence $\theta \theta'$ is the identity. We do not know whether we can remove the assumption on the characteristic in Theorem \[thm:CharOfLocalLevel\]. With a few modifications in Theorem \[thm:CharOfLocalLevel\] it is not difficult to obtain the structure of the inverse system of any graded Cohen-Macaulay $K$-algebra. But it would be interesting to understand the inverse system of a Cohen-Macaulay $K$-algebra of positive dimension in the local case. Let $A=R/I$ be a $d$-dimensional local level ring.
By Theorem \[thm:CharOfLocalLevel\] the dual module $I^\perp=W=\langle H^j_{\mathbf{n}}: j=1,\dots,\tau,{\mathbf{n}}\in{\mathbb{N}}^d_+\rangle$ is an $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D},$ say with respect to ${\underline{z}}=z_1,\dots,z_d.$ By Theorem \[thm:CharOfLocalLevel\] $\operatorname{Ann}_R(W)=I.$ Hence by using Claim 1 in Proposition \[Prop:AdmissibleGivesLevel\] we get $$(I+({\underline{z}}))^\perp=I_{{\bf 1}_d}^\perp= \langle H^j_{{\bf 1}_d}:j=1,\ldots,\tau\rangle.$$ Therefore by Proposition \[Prop:Emsalem-Iarrobino\] the index of nilpotency $s(A)=\operatorname{socdeg}(A/({\underline{z}})A)$ coincides with $\deg(H^1_{\bf 1_d})$ ($= \deg(H^j_{\bf 1_d})$ for any $1\leq j \leq \tau$). Since $({\underline{z}})$ is a general minimal reduction of ${{\mathfrak m}},$ by Fact \[Fact:GeneralElements\] ${\underline{z}}$ forms a superficial sequence for ${{\mathfrak m}}.$ Hence the *multiplicity* of $A$ is $$e(A)=\dim_K(A/({\underline{z}})A)=\dim_K\langle H^j_{\bf 1_d}: j=1,\dots,\tau\rangle.$$ If $A$ is a homogeneous level $K$-algebra, then for any regular linear sequence ${\underline{z}}:=z_1,\ldots,z_d$ for $A$ $$e(A)=\dim_K\langle H^j_{\bf 1_d}:j=1,\dots,\tau\rangle$$ and by Proposition \[Prop:SocDegOfR\_nn\] $$\operatorname{reg}(A)=\deg(H^1_{\bf 1_d})=\cdots=\deg(H^\tau_{\bf 1_d}).$$ Therefore important information about a level $K$-algebra is encoded in the first $\tau$ polynomials of the dual module. \[thm:ExtraProperties\] Let $R=K[\![x_1,\ldots,x_m]\!]$ with $\mathrm{char}(K)=0$ or $R=K[x_1,\ldots,x_m]$ and let $d \leq m$ be a positive integer. There is a one-to-one correspondence between the following sets: (i) $d$-dimensional local or graded level $K$-algebras $A=R/I$ of multiplicity $e$ (resp.
index of nilpotency in the local case or Castelnuovo-Mumford regularity in the graded case, say $s$); (ii) non-zero local or graded $L^\tau_d$-admissible $R$-submodules $W=\langle H^j_{\mathbf{n}}: j=1,\dots,\tau,{\mathbf{n}}\in{\mathbb{N}}^d_+\rangle$ of $\operatorname{\mathcal D}$ such that $\dim_K\langle H^j_{\bf 1_d}:j=1,\dots,\tau\rangle=e$ (resp. $\deg H^1_{\bf 1_d}=\cdots=\deg H^\tau_{\bf 1_d}=s$ in both local and graded cases). We now compare the definition of $G_d$-admissible and $L_d^\tau$-admissible $R$-submodules of $\operatorname{\mathcal D}.$ \[Disc:LdAdmissibleDefinition\] According to [@ER17 Definition 3.6], a submodule $W$ of $\operatorname{\mathcal D}$ is said to be $G_d$-admissible if it admits a system of generators $\{H_{\mathbf{n}}:{{\mathbf{n}}\in {\mathbb{N}}^d_+}\}$ such that for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ and $i=1,\ldots,d$ $$z_i \circ H_{{\mathbf{n}}}=\begin{cases} H_{{\mathbf{n}}-{\mathbf{e}}_i} & \mbox{ if } {\mathbf{n}}- {\mathbf{e}}_i > \mathbf{0}_d\\ 0 & \mbox{ otherwise }; \end{cases}$$ for all $i=1,\ldots,d$ and ${\mathbf{n}}-{\mathbf{e}}_i >\mathbf{0}_d$ $$\begin{aligned} \label{Eqn:GdAdmissible} \operatorname{Ann}_R\langle H_{{\mathbf{n}}-{\mathbf{e}}_i}\rangle \circ H_{{\mathbf{n}}}=\langle H_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i} \rangle.\end{aligned}$$ Inspired by the definition of $G_d$-admissible submodule, one could define an $L_d^\tau$-admissible submodule $W:=\langle H_{\mathbf{n}}^j: j=1,\ldots, \tau,{\mathbf{n}}\in {\mathbb{N}}^d_+\rangle$ of $\operatorname{\mathcal D}$ using the following condition instead of : $$\begin{aligned} \label{Eqn:WeakCondition} \operatorname{Ann}_R(W_{{\mathbf{n}}-{\mathbf{e}}_i}) \circ W_{{\mathbf{n}}}=W_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i} \mbox{ for all }i=1,\ldots,d \mbox{ and } {\mathbf{n}}-{\mathbf{e}}_i >\mathbf{0}_d\end{aligned}$$ where $W_{\mathbf{n}}=\langle H_{\mathbf{n}}^j:j=1,\ldots,\tau \rangle.$ However, as Example \[Example:intersection\] shows, if $\tau>1$ an 
$R$-submodule $W$ of $\operatorname{\mathcal D}$ satisfying need not satisfy and therefore $R/\operatorname{Ann}_R(W)$ would not be level with this definition. We now prove that our condition is stronger, as it implies . For $1 \leq j \leq \tau$ consider $f \circ H_{\mathbf{n}}^j$ where $f \in \operatorname{Ann}_R(W_{{\mathbf{n}}-{\mathbf{e}}_i}).$ Then $$z_i \circ (f \circ H_{\mathbf{n}}^j) = f \circ (z_i \circ H_{\mathbf{n}}^j)=f \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}^j=0.$$ Hence $$f \circ H_{\mathbf{n}}^j \in W_{\mathbf{n}}\cap V_{\mathbf{n}}^i \subseteq W_{{\mathbf{n}}-{\mathbf{e}}_i}.$$ Therefore $f \circ H_{\mathbf{n}}^j \in W_{{\mathbf{n}}-{\mathbf{e}}_i} \cap V_{{\mathbf{n}}-{\mathbf{e}}_i}^i $ as $z_i \circ (f \circ H_{\mathbf{n}}^j)=0.$ Hence by $ f \circ H_{\mathbf{n}}^j \in W_{{\mathbf{n}}-2{\mathbf{e}}_i}. $ Repeating the same argument, we get $f \circ H_{\mathbf{n}}^j \in W_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i}.$ As $z_i^{n_i-1} \in \operatorname{Ann}_R(W_{{\mathbf{n}}-{\mathbf{e}}_i})$ and $z_i^{n_i-1} \circ H_{{\mathbf{n}}}^j=H_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i}^j,$ the other containment in follows. Moreover, an $R$-submodule $W$ of $\operatorname{\mathcal D}$ is $L_d^1$-admissible if and only if $W$ is $G_d$-admissible. Indeed, by the previous discussion it follows that if $W$ is $L_d^1$-admissible then $W$ satisfies which is equivalent to as $\tau=1.$ Hence $W$ is $G_d$-admissible. Now, suppose that $W=\langle H_{\mathbf{n}}:{\mathbf{n}}\in {\mathbb{N}}^d_+ \rangle$ is $G_d$-admissible.
Let $W_{{\mathbf{n}}}:=\langle H_{\mathbf{n}}\rangle.$ We claim that for all ${\mathbf{n}}\in {\mathbb{N}}^d_+$ and $i=1,\ldots,d,$ $$\label{Eqn:RossiCondition} W_{\mathbf{n}}\cap K[Z_1,\ldots,\widehat{Z_i},\ldots,Z_m] \subseteq W_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i}.$$ Consider $f \circ H_{\mathbf{n}}\in W_{{\mathbf{n}}} \cap K[Z_1,\ldots,\widehat{Z_i},\ldots,Z_m].$ Then $z_i \circ (f \circ H_{\mathbf{n}})=0.$ Hence $$\begin{aligned} f \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}=f \circ (z_i \circ H_{{\mathbf{n}}})=z_i \circ (f \circ H_{\mathbf{n}})=0.\end{aligned}$$ This implies that $f \in \operatorname{Ann}_R\langle H_{{\mathbf{n}}-{\mathbf{e}}_i} \rangle.$ Therefore by $f \circ H_{\mathbf{n}}\in \langle H_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i} \rangle=W_{{\mathbf{n}}-(n_i-1){\mathbf{e}}_i}.$ To prove that $W$ is $L_d^1$-admissible it is enough to prove . We prove by induction on $n_i.$ Suppose $n_i=2$ and $f \circ H_{{\mathbf{n}}} \in W_{{\mathbf{n}}} \cap V_{{\mathbf{n}}}^i.$ As $f \circ H_{{\mathbf{n}}} \in V_{{\mathbf{n}}}^i, $ $z_i \circ (f \circ H_{{\mathbf{n}}})=0.$ Hence $$\begin{aligned} f \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}=f \circ (z_i \circ H_{{\mathbf{n}}})= z_i \circ (f \circ H_{{\mathbf{n}}})=0. \end{aligned}$$ Therefore $f \in \operatorname{Ann}_R(W_{{\mathbf{n}}-{\mathbf{e}}_i})$ and hence by $f \circ H_{{\mathbf{n}}} \in W_{{\mathbf{n}}-{\mathbf{e}}_i}.$ Now, assume that is true for $n_i.$ Let $f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i} \in W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap V_{{\mathbf{n}}+{\mathbf{e}}_i}^i.$ Then $$\begin{aligned} z_i^{n_i-1} \circ (z_i \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i}))= z_i^{n_i} \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i})=0. 
\hspace*{1cm}\end{aligned}$$ Also, $$z_i \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i})=f \circ (z_i \circ H_{{\mathbf{n}}+{\mathbf{e}}_i})=f \circ H_{\mathbf{n}}.$$ Hence $ z_i \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i}) \in W_{\mathbf{n}}\cap V_{\mathbf{n}}^i \subseteq W_{{\mathbf{n}}-{\mathbf{e}}_i}$ by induction. Thus there exists $g \in R$ such that $$z_i \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i})=g \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}=g\circ (z_i \circ H_{\mathbf{n}})= z_i \circ (g \circ H_{{\mathbf{n}}}).$$ This gives $z_i \circ (f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i} - g \circ H_{{\mathbf{n}}})=0$ and hence $$f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i} - g \circ H_{{\mathbf{n}}} = (f-gz_i) \circ H_{{\mathbf{n}}+{\mathbf{e}}_i}\in W_{{\mathbf{n}}+{\mathbf{e}}_i} \cap K[Z_1,\ldots,\widehat{Z_i},\ldots,Z_m] \subseteq W_{({\mathbf{n}}+{\mathbf{e}}_i)-n_i{\mathbf{e}}_i}$$ where the last containment follows from . As $W_{({\mathbf{n}}+{\mathbf{e}}_i)-n_i{\mathbf{e}}_i} \subseteq W_{\mathbf{n}},$ we get that $f \circ H_{{\mathbf{n}}+{\mathbf{e}}_i} \in W_{\mathbf{n}}$ as required. Examples and Effective Constructions {#Section:Examples} ==================================== In this section we give a few examples that illustrate Theorem \[thm:CharOfLocalLevel\]. First we construct level $K$-algebras by constructing $L_d^\tau$-admissible submodules of $\operatorname{\mathcal D}.$ In principle, we need an infinite number of polynomials to construct an $L_d^\tau$-admissible submodule. However, in the graded case we give an effective method to construct a level $K$-algebra starting from a finite $L_d^\tau$-admissible set (Proposition \[Prop:EffectiveConstruction\]). We give an example to illustrate that if $d>0,$ then the intersection of Gorenstein ideals need not be level (Example \[Example:intersection\]), which is true in the Artinian case if the Gorenstein ideals have the same socle degree.
Then we compute the inverse system of level $K$-algebras corresponding to certain semigroup rings (Example \[Example:semigroupring\]) and Stanley-Reisner rings (Example \[Example:matroid\]). The following is a tautological example. Recall that an ideal $I \subseteq R =K[\![x_1,\ldots,x_m]\!]$ is a cone with respect to an ideal $J \subseteq K[\![x_{d+1},\ldots,x_m]\!]$ if $I=JR.$ \[Prop:cone\] Let $H^1,\ldots,H^\tau \in K_{DP}[X_{d+1},\ldots,X_m]$ be (resp. homogeneous) polynomials of the same degree, say $b$, whose forms of degree $b$ are linearly independent. Consider the following $R$-submodule of $\operatorname{\mathcal D}$ $$W:=\langle H_{\mathbf{n}}^j=X_1^{n_1-1}\ldots X_d^{n_d-1}H^j:j=1,\ldots,\tau,{\mathbf{n}}\in {\mathbb{N}}^d_+\rangle.$$ Then $R/\operatorname{Ann}_R(W)$ is a $d$-dimensional (resp. homogeneous) level $K$-algebra of type $\tau.$ Moreover, $\operatorname{Ann}_R(W)$ is a cone with respect to $\operatorname{Ann}_S\langle H^1,\ldots,H^\tau\rangle$ where $S=K[\![x_{d+1},\ldots, x_m]\!]$ (resp. $S=K[x_{d+1},\ldots,x_m]$). First we show that $W$ is an $L_d^\tau$-admissible $R$-submodule of $\operatorname{\mathcal D}$ with respect to the sequence ${\underline{x}}:=x_1,\ldots,x_d.$ Notice that ${\underline{x}}$ is a sequence of general elements in $\mathcal{M}.$ It is clear that for each ${\mathbf{n}}\in {\mathbb{N}}^d_+,$ $$\deg H_{\mathbf{n}}^1=\cdots=\deg H_{{\mathbf{n}}}^\tau=b+|{\mathbf{n}}|-d$$ and the forms of degree $b+|{\mathbf{n}}|-d$ of $H_{{\mathbf{n}}}^1,\ldots,H_{{\mathbf{n}}}^\tau$ are linearly independent. Also, for each $j=1,\ldots,\tau,$ $i=1,\ldots,d$ and ${\mathbf{n}}\in {\mathbb{N}}^d_+$ $$x_i \circ H_{{\mathbf{n}}}^j=\begin{cases} X_1^{n_1-1}\cdots X_i^{n_i-2} \cdots X_d^{n_d-1} H^j=H_{{\mathbf{n}}-{\mathbf{e}}_i}^j & \mbox{ if }{\mathbf{n}}-{\mathbf{e}}_i > \mathbf{0}_d\\ 0 & \mbox{ otherwise } \end{cases}$$ and hence $W$ satisfies . Next we prove in Definition \[Def:LdAdmissible\].
Fix $1 \leq i \leq d.$ Let $W_{\mathbf{n}}=\langle H_{{\mathbf{n}}}^j:j=1,\ldots,\tau\rangle.$ Suppose that ${\mathbf{n}}-{\mathbf{e}}_i > {\bf 0}_d$ and $f \circ H_{{\mathbf{n}}}^j \in W_{{\mathbf{n}}} \cap V_{\mathbf{n}}^i.$ As $f \circ H_{{\mathbf{n}}}^j \in V_{{\mathbf{n}}}^i,$ $x_i^{n_i-1} \circ (f \circ H_{{\mathbf{n}}}^j)=0.$ This implies that $x_i$ divides $f.$ Let $f=x_ig$ for some $g \in R.$ Then $$f \circ H_{{\mathbf{n}}}^j=(x_ig) \circ H_{{\mathbf{n}}}^j=g\circ (x_i \circ H_{{\mathbf{n}}}^j)=g \circ H_{{\mathbf{n}}-{\mathbf{e}}_i}^j \in W_{{\mathbf{n}}-{\mathbf{e}}_i}.$$ Thus $W_{{\mathbf{n}}} \cap V_{\mathbf{n}}^i \subseteq W_{{\mathbf{n}}-{\mathbf{e}}_i}$ for all ${\mathbf{n}}-{\mathbf{e}}_i > {\bf 0}_d.$ Hence $W$ is $L_d^\tau$-admissible. Therefore by Proposition \[Prop:AdmissibleGivesLevel\] $R/\operatorname{Ann}_R(W)$ is a level $K$-algebra of dimension $d$ and type $\tau.$ Hence we have the first part of the statement. It is easy to verify that $$\operatorname{Ann}_R(W_{\mathbf{n}})=({\underline{x}}^{\mathbf{n}})+(\operatorname{Ann}_S\langle H^1,\ldots,H^\tau \rangle)R =({\underline{x}}^{\mathbf{n}})+(\operatorname{Ann}_S(W_{{\bf 1}_d}))R.$$ Therefore $$\begin{aligned} \operatorname{Ann}_R(W)&=&\cap_{{\mathbf{n}}\in {\mathbb{N}}^d_+}\operatorname{Ann}_R(W_{\mathbf{n}})\\ &=&\cap_{{\mathbf{n}}\in {\mathbb{N}}^d_+} \left(({\underline{x}}^{\mathbf{n}})+(\operatorname{Ann}_S(W_{{\bf 1}_d}))R\right)\\ &=&(\operatorname{Ann}_S(W_{{\bf 1}_d}))R \hspace*{1cm} \end{aligned}$$ where the last equality follows from the Krull intersection theorem applied to $R/(\operatorname{Ann}_S(W_{{\bf 1}_d}))R$.
Hence $\operatorname{Ann}_R(W)$ is a cone with respect to $\operatorname{Ann}_S(W_{{\bf 1}_d}).$ Let $t_0 \in {\mathbb{N}}_+.$ We say that a family $\mathcal{H}=\{H_{\mathbf{n}}^j:j=1,\ldots,\tau,~{\mathbf{n}}\in {\mathbb{N}}^d_+, ~|{\mathbf{n}}| \leq t_0\}$ of polynomials of $\operatorname{\mathcal D}$ is $L_d^\tau$-admissible if the elements $H_{\mathbf{n}}^j$ satisfy conditions \[Def:LdAdmissible\], \[Def:LdAdmissible\] and \[Def:LdAdmissible\] of the definition of $L_d^\tau$-admissibility up to ${\mathbf{n}}$ such that $|{\mathbf{n}}| \leq t_0.$ A proof similar to that of [@ER17 Proposition 4.2] shows that in the graded case finitely many admissible polynomials $\mathcal{H}$ suffice to obtain an ideal $I$ such that $R/I$ is level. \[Prop:EffectiveConstruction\] Let $\mathcal{H}=\{H_{\mathbf{n}}^j:j=1,\ldots,\tau,~{\mathbf{n}}\in {\mathbb{N}}^d_+, ~|{\mathbf{n}}| \leq t_0\}$ be an $L_d^\tau$-admissible set of homogeneous polynomials with respect to a regular linear sequence ${\underline{z}}=z_1,\ldots,z_d$ with $t_0 \geq (s+2)d$ where $s=\deg H_{{\bf 1}_d}^1.$ Suppose that $\mathcal{H}$ can be extended to an $L_d^\tau$-admissible submodule $W=\langle G_{{\mathbf{n}}}^j:{\mathbf{n}}\in {\mathbb{N}}^d_+,j=1,\ldots,\tau \rangle$ of $\operatorname{\mathcal D}$ such that $G^j_{{\mathbf{n}}}=H^j_{{\mathbf{n}}}$ for all $j=1,\dots,\tau$ and $|{\mathbf{n}}|\le t_0.$ Let $I:=\operatorname{Ann}_R(W).$ Then $$I=(\operatorname{Ann}_R\langle H_{({\bf s}+{\bf 2})_d}^j:j=1,\ldots,\tau \rangle)_{\leq {s+1}} R.$$ By Theorem \[thm:ExtraProperties\] $R/I$ is a level $K$-algebra and $$\operatorname{reg}(R/I)=\deg H_{{\bf 1}_d}^1=s.$$ It is well known that the maximum degree of a minimal system of generators of $I$ is at most $\operatorname{reg}(R/I)+1.$ By using the Claim 1 in the proof of Proposition \[Prop:AdmissibleGivesLevel\] we get $$\operatorname{Ann}_R \langle H_{({\bf s}+{\bf 2})_d}^j:j=1,\ldots,\tau \rangle= \operatorname{Ann}_R(W_{({\bf s}+{\bf 
2})_d})=I+({\underline{z}}^{({\bf s}+{\bf 2})_d}).$$ This gives the result. Notice that Proposition \[Prop:EffectiveConstruction\] provides an effective method to construct a level graded $K$-algebra starting with a finite $L_d^\tau$-admissible set. Indeed, suppose that $\mathcal{H}$ is a finite $L_d^\tau$-admissible set as in Proposition \[Prop:EffectiveConstruction\]. Check whether the ideal $I:=\operatorname{Ann}_R(W_{({\bf s}+{\bf 2})_d})_{\leq {s+1}}$ is level. If so, then we have already constructed a level ring. Otherwise, by Proposition \[Prop:EffectiveConstruction\] $\mathcal{H}$ is not extendable to an $L_d^\tau$-admissible submodule. We give an explicit example to demonstrate Propositions \[Prop:cone\] and \[Prop:EffectiveConstruction\]. Let $R=\mathbb{Q}[x,y,z]$ and $\operatorname{\mathcal D}=\mathbb{Q}_{DP}[X,Y,Z].$ Let $d=1,~\tau=2$ and $$\begin{aligned} H_1^1&=Y^3 &H_1^2&=Z^3\\ H_2^1&=XH_1^1 & H_2^2&=XH_1^2\\ H_3^1&=X^2H_1^1 &H_3^2&=X^2H_1^2\\ H_4^1&=X^3H_1^1 & H_4^2&=X^3H_1^2\\ H_5^1&=X^4H_1^1 & H_5^2&=X^4H_1^2. \end{aligned}$$ By Proposition \[Prop:cone\] the set $\mathcal{H}=\{H_1^1,H_1^2,H_2^1,H_2^2,H_3^1,H_3^2,H_4^1,H_4^2\}$ is $L_1^2$-admissible. Suppose that $\mathcal{H}$ can be extended to an $L_1^2$-admissible submodule $W$ of $\operatorname{\mathcal D}.$ Then by Proposition \[Prop:EffectiveConstruction\] $R/I$ is level where $I:=(\operatorname{Ann}_R\langle H^1_5,H^2_5\rangle)_{\le 4}.$ Using a computer algebra system it can be verified that $$I=(y^4,yz,z^4)$$ and $R/I$ is a $1$-dimensional level $K$-algebra of type $2$. Thus we have constructed a level $K$-algebra starting from a finite $L_1^2$-admissible set. From Proposition \[Prop:Emsalem-Iarrobino\] it follows that a finite intersection of Gorenstein Artinian ideals of the same socle degree is level, where by a Gorenstein (resp. level) ideal $I$ we mean that $R/I$ is Gorenstein (resp. level). This is no longer true if the ideals have positive dimension, as the following example illustrates. 
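The annihilator in the explicit example above can be sanity-checked directly: the generators $y^4$, $yz$, $z^4$ of $I$ must kill $H_5^1=X^4Y^3$ and $H_5^2=X^4Z^3$ under the contraction action. A minimal Python/SymPy sketch (the `contract` helper is our own implementation of the divided-power contraction $x^{\mathbf a}\circ X^{\mathbf b}=X^{\mathbf b-\mathbf a}$, with monomials of insufficient exponents mapped to zero):

```python
from sympy import symbols, Poly

X, Y, Z = symbols('X Y Z')

def contract(mono_exp, H):
    """Contraction x^a . H: shift every monomial of H down by the exponent
    vector a; monomials with some exponent smaller than a are dropped."""
    out = {}
    for exp, c in Poly(H, X, Y, Z).terms():
        new = tuple(b - a for b, a in zip(exp, mono_exp))
        if all(e >= 0 for e in new):
            out[new] = out.get(new, 0) + c
    return Poly(out or {(0, 0, 0): 0}, X, Y, Z).as_expr()

H51, H52 = X**4 * Y**3, X**4 * Z**3
# generators of I = (y^4, yz, z^4) as exponent tuples (a_x, a_y, a_z)
gens = [(0, 4, 0), (0, 1, 1), (0, 0, 4)]
print(all(contract(a, H) == 0 for a in gens for H in (H51, H52)))  # True
```

A nonzero contraction behaves as in the proof of Proposition \[Prop:cone\], e.g. $x\circ X^4Y^3=X^3Y^3$.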
\[Example:intersection\] Let $R=\mathbb{Q}[x,y,z]$ and $\operatorname{\mathcal D}=\mathbb{Q}_{DP}[X,Y,Z].$ Let $$\begin{aligned} H_1^1&=Y^3-Z^3 &H_1^2&=Y^2Z\\ H_2^1&=XH_1^1+YZ^3 & H_2^2&=XH_1^2\\ H_3^1&=XH_2^1-Y^2Z^3 &H_3^2&=XH_2^2\\ H_4^1&=XH_3^1+Y^3Z^3-4Z^6 & H_4^2&=XH_3^2 \\ H_5^1&=XH_4^1+Y^7-Y^4Z^3+4YZ^6 & H_5^2&=XH_4^2. \end{aligned}$$ By using Singular or Macaulay2 one can verify that the set $\mathcal{H}_1=\{H_1^1,H_2^1,H_3^1,H_4^1,H_5^1\}$ is $G_1$-admissible and the ideal $$I:=(\operatorname{Ann}_R\langle H_5^1\rangle)_{\leq 4}=(yz+xz,y^3+z^3-xy^2+x^2y-x^3)$$ is a $1$-dimensional Gorenstein ideal (see [@ER17 Example 4.4]). Similarly, the set $\mathcal{H}_2=\{H_1^2,H_2^2,H_3^2,H_4^2,H_5^2\}$ is $G_1$-admissible and the corresponding $1$-dimensional Gorenstein ideal is $$J:=(\operatorname{Ann}_R\langle H_5^2\rangle)_{\leq 4}=(z^2,y^3).$$ In fact, both $I$ and $J$ are complete intersections. It is easy to check that $$I \cap J = (xz^2+yz^2,4y^3z+z^4,x^3y^3-x^2y^4+xy^5-y^6-y^3z^3)$$ and $R/(I \cap J)$ has a graded minimal $R$-free resolution as follows: $$0 \to R(-6) \oplus R(-7) \to R(-3) \oplus R(-4) \oplus R(-6) \to R \to 0.$$ This shows that $R/(I \cap J)$ is not level. Here $$Z^3 \in \langle H_2^1,H_2^2 \rangle \cap V_2^1 \setminus \langle H_1^1, H_1^2\rangle$$ and hence $W:=\{H_n^j:n \leq 2 \mbox{ and } j=1,2 \}$ does not satisfy . However, as $\mathcal{H}_1$ and $\mathcal{H}_2$ are $G_1$-admissible, it is easy to verify that $W$ satisfies . We now construct a one-dimensional level ring of type $2$ in the non-graded case. \[Example:semigroupring\] Let $R=\mathbb{Q}[x,y,z,w]$ and $\operatorname{\mathcal D}=\mathbb{Q}_{DP}[X,Y,Z,W].$ Let $d=1, ~\tau=2$ and $$\begin{aligned} H_1^1&=YW &H_1^2&=ZW\\ H_2^1&=XH_1^1 & H_2^2&=XH_1^2+Y^2W\\ H_3^1&=X^2H_1^1+Z^2W &H_3^2&=X^2H_1^2+XY^2W=XH_2^2\\ H_4^1&=X^3H_1^1+XZ^2W+Y^2ZW+W^3 & H_4^2&=X^3H_1^2+X^2Y^2W+YZ^2W\\ &=XH_3^1+Y^2ZW+W^3 & & =XH_3^2+YZ^2W. 
\end{aligned}$$ By using Singular or Macaulay2 one can verify that the set $\mathcal{H}_1=\{H_1^1,H_1^2,H_2^1,H_2^2,H_3^1,H_3^2,H_4^1,H_4^2\}$ is $L_1^2$-admissible. The polynomials involved are not homogeneous, and hence they correspond to a local ring. Therefore we cannot use Proposition \[Prop:EffectiveConstruction\]. However, $$\operatorname{Ann}_R(W_4)=\operatorname{Ann}_R\langle H_4^1,H_4^2\rangle=(x^4,y^2-xz,x^3-yz,x^2y-z^2,w^2-x^3y)$$ and can be written as $I+(x^4)$, where $$I=(y^2-xz,x^3-yz,x^2y-z^2,w^2-x^3y).$$ Moreover, $R/I$ is a $1$-dimensional level ring of type $2$ (see Example \[Example:Rossi-Valla\]). Notice that $R/I\cong k[\![t^6,t^8,t^{10},t^{13}]\!].$ Example \[Example:semigroupring\] raises the natural question whether it is possible to find an analogue of Proposition \[Prop:EffectiveConstruction\] in the local case, at least for some special classes. We now give a couple of examples of inverse systems of special families of level algebras. (Semigroup rings) Let $n_1,\dots,n_l$ be an arithmetic sequence, i.e. there exists an integer $q \geq 1$ such that $$n_i=n_{i-1}+q=n_1+(i-1)q$$ for $i=2,\dots,l$. Then the ring $A=K[t^{n_1},\dots,t^{n_l}]$ is a semigroup ring whose associated graded ring $G$ is level (see [@MT95 Proposition 1.12]). By [@Fro87 Example 1(b)] the type of $G$ is always greater than or equal to the type of $A$. If $G$ is level, then the two types coincide. Hence we can deduce that for any minimal reduction $J$ of ${{\mathfrak m}},$ $A/J$ is level. Therefore by Proposition \[Prop:construction of level\] the local ring $A$ is also level. We give an explicit example. Let $A=\mathbb{Q}[\![t^6,t^{10},t^{14},t^{18}]\!]$. Then $A$ is a semigroup ring associated to an arithmetic sequence, and we know from the previous discussion that $A$ is level. 
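The isomorphism $R/I\cong k[\![t^6,t^8,t^{10},t^{13}]\!]$ noted in Example \[Example:semigroupring\] above can be sanity-checked by substituting the monomial parametrization into the generators of $I$; a short Python/SymPy sketch:

```python
from sympy import symbols, expand

t = symbols('t')
# parametrization of the semigroup ring: x -> t^6, y -> t^8, z -> t^10, w -> t^13
x, y, z, w = t**6, t**8, t**10, t**13
gens = [y**2 - x*z, x**3 - y*z, x**2*y - z**2, w**2 - x**3*y]
print(all(expand(g) == 0 for g in gens))  # True
```

Each generator of $I$ vanishes under the substitution, so $I$ is contained in the defining ideal of the semigroup ring.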
It is easy to check that $A=R/I$ where $R=\mathbb{Q}[\![x,y,z,w]\!]$ and $$I=(x^3-w,x^4-yz, xz-y^2,x^3y-z^2).$$ Then $(x)$ is a general minimal reduction of $\mathcal{M}.$ Consider the following polynomials in $\operatorname{\mathcal D}=\mathbb{Q}_{DP}[X,Y,Z,W]$ corresponding to the inverse system of $I+(x^n)$ for $n \leq 5$: $$\begin{aligned} H_1^1&=Y & H_1^2&=Z \\ H_2^1&=XH_1^1 & H_2^2&=XH_1^2+Y^2 \\ H_3^1&=X^2H_1^1 & H_3^2&=X^2H_1^2+XY^2=XH_2^2\\ H_4^1&=X^3H_1^1+YW+Z^2 & H_4^2&=X^3H_1^2+X^2Y^2+ZW=XH_3^2+ZW \\ H_5^1&=X^4H_1^1+XYW+XZ^2+Y^2Z & H_5^2&=X^4H_1^2+X^3Y^2+XZW+YZ^2+Y^2W \\ &=XH_4^1+Y^2Z &&=XH_4^2+YZ^2+Y^2W\end{aligned}$$ In principle, by Theorem \[thm:CharOfLocalLevel\] we have an infinite number of polynomials in the inverse system. However, in this case, to recover the ideal we need only a finite number of polynomials. Let $W_{(5,5)}=\langle H_5^1,H_5^2\rangle$. Using Singular one can verify that $$\begin{aligned} &\operatorname{Ann}_R(W_{(5,5)})_{\le 4}=(x^3-w,xz-y^2,xw-yz,z^2-yw,x^5 )_{\le 4}\\ &=( x^3-w,xz-y^2,xw-yz,z^2-yw)=I. \qedhere\end{aligned}$$ In the following example we construct the inverse system of a Stanley-Reisner ring (of dimension two) associated to a matroid simplicial complex. (Stanley-Reisner rings) \[Example:matroid\] By [@Sta96 Theorem 3.4] the Stanley-Reisner rings associated to matroid simplicial complexes are level. We describe a particular type of matroid that arises from matrices. Let $X\in K^{m\times n}$ where $K$ is a field and $m\le n.$ We write $[i_1,\dots,i_m]$ for the $m\times m$ minor of $X$ corresponding to columns $i_1,\ldots,i_m$ of $X$ where $1\le i_1<\cdots<i_m\le n.$ Consider the simplicial complex $\Delta(X)$ on vertices $\{1,\dots,n\}$ that is generated by the faces $\{i_1,\dots,i_m\},$ $1\le i_1<\cdots<i_m\le n,$ such that $[i_1,\dots,i_m]\ne 0,$ that is, $$\Delta(X)=\langle \{i_1,\dots,i_m\}\mid [i_1,\dots,i_m]\ne 0 \rangle.$$ Then $\Delta(X)$ is a matroid. 
Stanley’s result implies that $R/I_{\Delta(X)}$ is a graded level algebra, where $I_{\Delta(X)}$ is the Stanley-Reisner ideal associated to $\Delta(X)$. Let us consider an explicit example. Let $$X=\left(\begin{matrix} 1 & 0 & 2 & 0 & 3 \\ 0 & 1 & 0 & 2 & 0 \end{matrix}\right)$$ Then the matroid associated to $X$ is $$\Delta:=\Delta(X) =\langle \{1,2\},\{2,3\},\{3,4\},\{4,5\},\{1,4\},\{2,5\}\rangle.$$ (Figure: the graph on the vertices $1,\dots,5$ whose edges are the facets $\{1,2\},\{2,3\},\{3,4\},\{4,5\},\{1,4\},\{2,5\}$ listed above.) Let $R=\mathbb{Q}[x_1,x_2,x_3,x_4,x_5].$ The Stanley-Reisner ideal corresponding to $\Delta$ is $$I_{\Delta}=(x_1x_3,x_2x_4,x_1x_5,x_3x_5).$$ By Stanley’s result, the ring $R/I_{\Delta}$ is a graded level ring of dimension $2$. Observe that $x_2+x_4$ and $x_1+x_3+x_5$ form a regular sequence for $A=R/I_{\Delta}$. In order to find its inverse system, we first perform a change of coordinates: $$\begin{aligned} \varphi: R & \to S=\mathbb{Q}[y_1,\dots,y_5] \\ x_2+x_4 & \mapsto y_1 \\ x_1+x_3+x_5 &\mapsto y_2 \\ x_3 &\mapsto y_3 \\ x_4 &\mapsto y_4 \\ x_5 &\mapsto y_5 \end{aligned}$$ Under this change of coordinates we get the ideal $$I:=\varphi(I_{\Delta})=((y_2-y_3-y_5)y_3,(y_1-y_4)y_4,(y_2-y_3-y_5)y_5, y_3y_5) \subseteq S.$$ Now, the ring $A=S/I$ is a graded level ring of dimension $2$ and type $2$, and $y_1,y_2$ form a regular sequence for $A$. Consider $\operatorname{\mathcal D}=\mathbb{Q}_{DP}[Y_1,\dots,Y_5]$ the divided power ring dual to $S$. 
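The list of facets of $\Delta(X)$ above can be reproduced by computing the $2\times 2$ minors of $X$ directly; a small Python sketch:

```python
from itertools import combinations

# columns of the 2x5 matrix X, indexed 1..5
cols = {1: (1, 0), 2: (0, 1), 3: (2, 0), 4: (0, 2), 5: (3, 0)}

def minor(i, j):
    """2x2 minor [i, j] of X (determinant of the chosen column pair)."""
    (a, c), (b, d) = cols[i], cols[j]
    return a * d - b * c

facets = [list(p) for p in combinations(range(1, 6), 2) if minor(*p) != 0]
print(facets)  # [[1, 2], [1, 4], [2, 3], [2, 5], [3, 4], [4, 5]]
```

These six pairs are exactly the facets $\{1,2\},\{1,4\},\{2,3\},\{2,5\},\{3,4\},\{4,5\}$ generating $\Delta$, and the complementary pairs give the generators of $I_\Delta$.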
Using Singular or Macaulay2, we can compute the first generators of $I^\perp\subseteq \operatorname{\mathcal D}$: $$\begin{aligned} H_{(1,1)}^1=&Y_4Y_5 &H_{(1,1)}^2=&Y_3Y_4\\ H_{(1,2)}^1=&Y_2H_{(1,1)}^1+Y_4Y_5^2& H_{(1,2)}^2=&Y_2H_{(1,1)}^2+Y_3^2Y_4\\ H_{(2,2)}^1=&Y_1H_{(1,2)}^1+Y_2Y_4^2Y_5+Y_4^2Y_5^2 & H_{(2,2)}^2=&Y_1H_{(1,2)}^2+Y_2Y_3Y_4^2+Y_3^2Y_4^2 \\ & \vdots & & \vdots \\ H_{(4,4)}^1=&Y_1^2Y_2^2H_{(2,2)}^1 +Y_1Y_2^3Y_4^3Y_5+Y_2^3Y_4^4Y_5+ &H_{(4,4)}^2=&Y_1^2Y_2^2H_{(2,2)}^2+Y_1^3Y_2Y_3^3Y_4+Y_1^3Y_3^4Y_4+ \\ &Y_1Y_2^2Y_4^3Y_5^2+Y_2^2Y_4^4Y_5^2+Y_1^3Y_2Y_4Y_5^3+ & & Y_1^2Y_2Y_3^3Y_4^2+Y_1^2Y_3^4Y_4^2+Y_1Y_2^3Y_3Y_4^3+\\ &Y_1^2Y_2Y_4^2Y_5^3+Y_1Y_2Y_4^3Y_5^3+Y_2Y_4^4Y_5^3+ & & Y_1Y_2^2Y_3^2Y_4^3+Y_1Y_2Y_3^3Y_4^3+Y_1Y_3^4Y_4^3+ \\ &Y_1^3Y_4Y_5^4+Y_1^2Y_4^2Y_5^4+Y_1Y_4^3Y_5^4+ & &Y_2^3Y_3Y_4^4+Y_2^2Y_3^2Y_4^4+Y_2Y_3^3Y_4^4+ \\ & Y_4^4Y_5^4 &&Y_3^4Y_4^4. \end{aligned}$$ The set $$\mathcal{H}=\{H_{(1,1)}^1,H_{(1,1)}^2,H_{(1,2)}^1,H_{(1,2)}^2,H_{(2,1)}^1,H_{(2,1)}^2,H_{(2,2)}^1,H_{(2,2)}^2,\dots,H_{(4,4)}^1,H_{(4,4)}^2\}$$ is $L_2^2$-admissible. By Proposition \[Prop:EffectiveConstruction\] we know that $I=\operatorname{Ann}_R(W_{(4,4)})_{\le 3}$ where $W_{(4,4)}=\langle H_{(4,4)}^1,H_{(4,4)}^2 \rangle.$ In fact, it can be verified that $$\begin{aligned} &\operatorname{Ann}_R(W_{(4,4)})_{\le 3}=(y_3y_5, y_2y_5-y_5^2, y_1y_4-y_4^2, y_2y_3-y_3^2, y_2^4, y_1^4, y_5^5, y_4^5, y_3^5)_{\le 3} \\ =& (y_3y_5, y_2y_5-y_5^2, y_1y_4-y_4^2, y_2y_3-y_3^2)=I. \qedhere \end{aligned}$$ Acknowledgements {#acknowledgements .unnumbered} ================ We thank M. E. Rossi for suggesting the problem and providing many useful ideas throughout the preparation of this manuscript. We thank Juan Elias for providing us the updated version of [Inverse-syst.lib]{} and clarifying our doubts about Singular. 
We would also like to thank Alessandro De Stefani for providing us the proof of Proposition \[Prop:SocDegOfR\_nn\] in the one-dimensional case, and Aldo Conca and Matteo Varbaro for useful discussions on the examples of level rings. V. Bertella, [*Hilbert function of local Artinian level rings in codimension two*]{}, J. Algebra [**321**]{} (2009), 1429-1442. M. Boij, [*Artin level algebras*]{}, Ph. D. thesis, Royal Institute of Technology, Stockholm, 1994. M. Boij, [*Betti numbers of compressed level algebras*]{}, J. Pure Appl. Algebra [**134**]{} (1999), no. 2, 111-131. M. Boij, [*Artin level modules*]{}, J. Algebra [**226**]{} (2000), no. 1, 361-374. M. Boij, [*Reducible family of height three level algebras*]{}, J. Algebra [**321**]{} (2009), no. 1, 86-104. W. Bruns and J. Herzog, [*Cohen-Macaulay rings*]{}, Revised Edition, Cambridge University Press, 1998. W. Bruns and U. Vetter, [*Determinantal rings*]{}, Lecture Notes in Mathematics, vol. 1327, Springer-Verlag, Berlin, 1988. C. Ciliberto, [*Algebra lineare*]{}, Bollati Boringhieri, 1994. A. Conca, [*Divisor class group and canonical class of determinantal rings defined by ideals of minors of a symmetric matrix*]{}, Arch. Math. [**63**]{} (1994), no. 3, 216-224. W. Decker, G.-M. Greuel, G. Pfister and H. Schönemann, [*Singular: A computer algebra system for polynomial computations*]{} (2016). A. De Stefani, [*Artinian level algebras of low socle degree*]{}, Comm. Algebra [**42**]{} (2014), 729-754. D. Eisenbud, [*Commutative algebra with a view toward algebraic geometry*]{}, Graduate Texts in Mathematics, vol. 150, Springer-Verlag, New York, 1995. J. Elias and A. Iarrobino, [*The Hilbert function of a Cohen-Macaulay local algebra: extremal Gorenstein algebras*]{}, J. Algebra [**110**]{} (1987), no. 2, 344-356. J. 
Elias, *[Inverse-syst.lib]{}–[S]{}ingular library for computing [M]{}acaulay’s inverse systems*, http://www.ub.edu/C3A/elias/inverse-syst-v.5.2.lib, available at arXiv:1501.01786, 2015. J. Elias and M. E. Rossi, [*The structure of the inverse system of Gorenstein $K$-Algebras*]{}, Adv. Math. [**314**]{} (2017), 306-327. J. Emsalem, [*Géométrie des points épais*]{}, Bull. Soc. Math. France **106** (1978), no. 4, 399-416. L. Fouli, [*A study on the core of ideals*]{}, Ph. D. thesis, Purdue University, 2006. R. Fr[ö]{}berg, [*Connections between a local ring and its associated graded ring*]{}, J. Algebra [**111**]{} (1987), no. 2, 300-305. P. Gabriel, [*Objets injectifs dans les catégories abéliennes*]{}, Séminaire P. Dubreil 1958-1959, 1959, no. 17, 1-32. A. V. Geramita, T. Harima, J. C. Migliore and Y. S. Shin, [*The Hilbert function of a level algebra*]{}, Mem. Amer. Math. Soc. [**186**]{} (2007), no. 872, vi+139 pp. S. Goto, [*On the Gorensteinness of determinantal loci*]{}, J. Math. Kyoto Univ. [**19**]{} (1979), no. 2, 371-374. D. Grayson and M. Stillman, [*Macaulay2, a software system for research in algebraic geometry*]{}, available at <http://www.math.uiuc.edu/Macaulay2/>. C. Huneke and I. Swanson, [*Integral closure of ideals, rings, and modules*]{}, London Mathematical Society Lecture Note Series, vol. 336, Cambridge University Press, Cambridge, 2006. C. Huneke and N. V. Trung, [*On the core of ideals*]{}, Compos. Math. [**141**]{} (2005), no. 1, 1-18. A. Iarrobino, [*Compressed algebras: [A]{}rtin algebras having given socle degrees and maximal length*]{}, Trans. Amer. Math. Soc. [**285**]{} (1984), no. 1, 337-378. A. Iarrobino, [*Associated graded algebra of a [G]{}orenstein [A]{}rtin algebra*]{}, Mem. Amer. Math. Soc. **107** (1994), no. 514, viii+115 pp. A. Iarrobino and V. Kanev, [*Power sums, [G]{}orenstein algebras, and determinantal loci*]{}, Appendix C by Iarrobino and Steven L. Kleiman, Lecture Notes in Mathematics, vol. 
1721, Springer-Verlag, Berlin, 1999. F. S. Macaulay, [*The algebraic theory of modular systems*]{}, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1994. Revised reprint of the 1916 original; with an introduction by Paul Roberts. P. Mantero and Y. Xie, [*Generalized stretched ideals and Sally’s conjecture*]{}, J. Pure Appl. Algebra [**220**]{} (2016), no. 3, 1157-1177. T. Marley, [*Hilbert functions of ideals in Cohen-Macaulay rings*]{}, Ph. D. thesis, Purdue University, 1989. S. Molinelli and G. Tamone, [*On the [H]{}ilbert function of certain rings of monomial curves*]{}, J. Pure Appl. Algebra [**101**]{} (1995), no. 2, 191-206. C. Polini and B. Ulrich, [*A formula for the core of an ideal*]{}, Math. Ann. [**331**]{} (2005), no. 3, 487-503. M. E. Rossi and G. Valla, [*Cohen-Macaulay local rings of embedding dimension $e+d-3$*]{}, Proc. London Math. Soc. (3) [**80**]{} (2000), no. 1, 107-126. M. E. Rossi and G. Valla, [*Hilbert functions of filtered modules*]{}, Lecture Notes of the Unione Matematica Italiana, vol. 9, Berlin, 2010. R. Stanley, [*Cohen-Macaulay complexes*]{}, Higher combinatorics (Proc. NATO Advanced Study Inst., Berlin, 1976), pp. 51-62, NATO Adv. Study Inst. Ser., Ser. C: Math. and Phys. Sci., 31, Reidel, Dordrecht, 1977. R. Stanley, [*Combinatorics and commutative algebra*]{}, Progress in Mathematics, vol. 41, Second Edition, Birkhäuser Boston, Inc., Boston, MA, 1996. Y. Xie, [*Formulas for the multiplicity of graded algebras*]{}, Trans. Amer. Math. Soc. [**364**]{} (2012), no. 8, 4085-4106.
--- abstract: 'A double-folding method is used to calculate the nuclear and Coulomb interaction between two deformed nuclei with arbitrary orientations. A simplified Skyrme-type interaction is adopted. The contributions of the nuclear interaction and of the Coulomb interaction due to the deformation and orientation of the nuclei are evaluated for the driving potential used in the description of heavy-ion fusion reactions. So far there is no satisfactory theory to describe the evolution of the dynamical nuclear deformation and orientations during the heavy-ion fusion process. Our results estimate the magnitude of the above effects.' address: | 1) Institute of Theoretical Physics, Chinese Academy of Sciences, P. O. Box 2735, Beijing 100080, P. R. China\ 2)Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, P.R. China\ 3) Research Center of Nuclear Theory of National Laboratory of Heavy Ion Accelerator of Lanzhou, Lanzhou 730000, P. R. China\ 4) College of Science, Shenzhen University, Shenzhen 518060, P.R. China\ 5)Department of Physics, Tsinghua University, Beijing 100084, China\ 6) Institut für Theoretische Physik, Justus-Liebig-Universität, Giessen 35392, Germany author: - 'Qingfeng Li$^{1)}$, Wei Zuo$^{2,3)}$, Wenfei Li$^{2,3)}$, Nan Wang$^{4)}$, Enguang Zhao$^{1,2,5)}$, Junqing Li$^{1,2,3)}$[^1], and W. Scheid$^{6)}$' title: Deformation and orientation effects in the driving potential of the dinuclear model --- Introduction ============ The study of the synthesis of super-heavy elements remains active both experimentally and theoretically. On the experimental side, S. Hofmann et al. [@Hof02] at GSI, Darmstadt, performed experiments on the synthesis and identification of the nuclei $^{272}111$ and $^{277}112$ in order to confirm their previous results obtained in the mid-1990s [@Hof95; @Hof96]. 
Furthermore, several additional decay chains from the reactions $^{64}Ni+^{208}Bi\rightarrow ^{273}111^*$ and $^{70}Zn+^{208}Pb\rightarrow ^{278}112^*$ were also measured. The joint IUPAC-IUPAP Working Party (IWP) has confirmed the discovery of the element with atomic number 110, which is named darmstadtium (Ds); recently the IWP has also proposed that the new element with atomic number 111 be named roentgenium (Rg). Experiments on the synthesis of the new elements with atomic numbers 115 as well as 113 in the reaction $^{243}Am+^{48}Ca$ were carried out at the U400 cyclotron in Dubna [@Oga04]. Recently they also reported the results of excitation-function measurements for the $^{244}Pu+^{48}Ca$ fusion-evaporation reactions for element 114 and the synthesis of new isotopes of element 116 with the $^{245}Cm+^{48}Ca$ reaction [@Oga042]. On the theoretical side, more attention has been paid to the physics of the complicated dynamical process leading to super-heavy elements [@Ber01], which has been investigated by several groups within different mechanisms, for example, the di-nuclear concept (see the recent works in [@Ada04; @LWF03] and the references therein), the fluctuation-dissipation model [@Shen02; @Abe03], the concept of nucleon collectivization [@Zag01; @Zag04], as well as the macroscopic dynamical model [@Swi81; @Blo86]. In the di-nuclear system (DNS) concept [@Vol86; @LWF03; @Ant93; @Ada96; @Ada97; @Ada04], the fusion process is considered as the evolution of a di-nuclear system caused by the transfer of nucleons from the light nucleus to the heavy one. The nucleon transfer process is described in Ref. [@LWF03] by solving the master equation numerically. It is found that the fusion probability of the compound nucleus is very sensitive to the specific form of the driving potential. In Ref. [@LWF03], the Coulomb interaction potential of deformed nuclei with a tip-tip orientation is considered. 
However, spherical nuclei were adopted in calculating the nuclear interactions, since it is thought that the nuclear interaction does not depend as strongly on the deformation of the nuclei as the Coulomb interaction does, owing to the short-range character of the nuclear force. Although some reasonable results, such as the optimal excitation energies and the residue cross sections of super-heavy compound nuclei, were obtained for different heavy-ion fusion reactions, their reliability has to be checked. In the present work a double-folding method is developed to calculate the nuclear and Coulomb interactions between two deformed nuclei with arbitrary orientations. Here we consider the ground-state deformations of the nuclei for all possible combinations of the DNS of a certain reaction. In principle, the deformed nuclei can have different relative orientations, which supply quite different conditions for fusion. Some averaging over the orientations of the nuclei has to be carried out, at least in the entrance channel. The evolution of the deformations and orientations is difficult to describe and has not yet been investigated well by any model. Nevertheless, it is important to know how much the deformation of the nuclei contributes to the nuclear and Coulomb interactions, respectively, and to explore how and to what extent the orientations contribute. These investigations will give a direction for further improvement. The paper is arranged as follows. In the next section, the treatment of the nuclear and Coulomb potentials is introduced. We present the calculated results and the corresponding discussions in Section III, where the interaction potentials between different deformed nuclei, their dependence on the orientations, and the driving potentials used in the DNS model for different fragmentations in reactions leading to $^{272}Ds$ are shown. Finally, Section IV gives a brief conclusion and outlook. 
Treatment of driving potentials for orientated deformed nuclei of the DNS ====================================================================== For a dinuclear system, the local excitation energy is defined as follows, $$\epsilon^*=E^*-U(A_1,A_2,R)-\frac{(J-M)^2}{2\mathcal{J}_{rel}}-\frac{M^2}{2\mathcal{J}_{int}} , \label{excitEn}$$ where $E^*$ is the intrinsic excitation energy of the dinuclear system converted from the relative kinetic energy loss, and $M$ is the corresponding intrinsic spin due to the dissipation of the relative angular momentum $J$. $\mathcal{J}_{rel}$ and $\mathcal{J}_{int}$ are the relative and intrinsic moments of inertia, respectively. $U(A_1,A_2,R)$ is the driving potential responsible for the nucleon transfer in the DNS model and is written as $$U(A_1,A_2,R)=U_{LD+SC}(A_1)+U_{LD+SC}(A_2)-U_{LD+SC}(A_{CN})+U_C(A_1,A_2,R)+U_N(A_1,A_2,R) , \label{DrivPot}$$ where $A_1$, $A_2$, and $A_{CN}$ represent the mass numbers of the two nuclei and of the corresponding compound nucleus, respectively; we have $A_1+A_2=A_{CN}$. In the DNS model, the driving potential is normally given as a function of $\eta=(A_1-A_2)/A_{CN}$. The first three terms on the right-hand side of the equation are calculated from the liquid-drop model plus the shell and pairing corrections [@Moe95; @Mye65]. $U_C(A_1,A_2,R)$ and $U_N(A_1,A_2,R)$ are the corresponding Coulomb and nuclear potential energies between the nuclei and depend on the fragmentation of the dinuclear system, on the internuclear distance $R$, and on the orientation and deformation of the nuclei. They can be calculated by different methods. 
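As an illustration of Eq. (\[excitEn\]), the local excitation energy is a simple function of the listed ingredients; the numbers in the following Python sketch are purely illustrative placeholders, not values from any calculation in this paper:

```python
def local_excitation(E_star, U, J, M, J_rel, J_int):
    """Eq. (excitEn): eps* = E* - U - (J-M)^2/(2*J_rel) - M^2/(2*J_int).
    All quantities are assumed to be given in consistent (schematic) units."""
    return E_star - U - (J - M)**2 / (2.0 * J_rel) - M**2 / (2.0 * J_int)

# purely illustrative numbers: 30 - 5 - 16^2/160 - 16/80
eps = local_excitation(E_star=30.0, U=5.0, J=20.0, M=4.0, J_rel=80.0, J_int=40.0)
print(round(eps, 3))  # 23.2
```

The two rotational terms reduce the energy available for intrinsic excitation, as the formula indicates.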
In the present work, we calculate them by using the double-folding method $$U(\mathbf{r}_1-\mathbf{r}_2)=\int{\rho_1(\mathbf{r}_1)\rho_2(\mathbf{r}_2)\upsilon(\mathbf{r}_1-\mathbf{r}_2)d\mathbf{r}_1d\mathbf{r}_2} \label{DoubFold}$$ where $\rho_1(\mathbf{r}_1)$ and $\rho_2(\mathbf{r}_2)$ are the nucleon density distributions of nuclei 1 and 2 of the dinuclear system, and $\upsilon(\mathbf{r}_1-\mathbf{r}_2)$ is the corresponding interaction between the two points. For the nuclear part $U_N$ we use densities with a smooth fall-off at the surface (see later) and constant densities for the Coulomb interaction. The long-range Coulomb interaction is not sensitive to the density at the surface, which allows us to simplify the calculations. Therefore, we write the Coulomb interaction as follows, ![Schematic presentation of the orientation of two axially symmetric quadrupole-deformed nuclei.[]{data-label="fig1"}](fig1.eps){width="90.00000%"} $$U_C(R)=\rho_1^0\rho_2^0\int{\frac{d\mathbf{r}_1d\mathbf{r}_2}{|\mathbf{r}_1-\mathbf{r}_2-\mathbf{R}|}}, \label{CoulPot}$$ where $\mathbf{R}$ is the vector between the two centers of the nuclei (“T” and “P”) as illustrated in Fig.\[fig1\]. The charge densities are set as $\rho_1^0=\frac{Z_1 e}{\Omega_1}$ and $\rho_2^0=\frac{Z_2 e}{\Omega_2}$, where $Z_{1,2}$ and $\Omega_{1,2}$ are the proton numbers and the volumes of the two nuclei, respectively. The symmetry axes ($\vec{a}_1$ and $\vec{a}_2$) of the two deformed nuclei and the $\vec{z}$-axis are assumed to be in the same plane. $\gamma_1$ and $\gamma_2$ are the corresponding angles between the symmetry axes and the $\vec{z}$-axis, which represent the different orientations of the two nuclei, while $\alpha_1$ and $\alpha_2$ are the angles between the arbitrary vectors $\mathbf{r}_{1,2}$ and the symmetry axes $\vec{a}_1$ and $\vec{a}_2$, respectively. 
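Equation (\[CoulPot\]) is a six-dimensional integral; as an illustration it can be estimated by Monte Carlo sampling. The following Python sketch (our own illustrative code, in units with $e=1$) checks the estimate against the exact point-charge limit, since for two non-overlapping uniformly charged spheres the folding integral reduces exactly to $q_1q_2/R$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, radius):
    """Uniform points inside a ball of the given radius (rejection sampling)."""
    pts = rng.uniform(-radius, radius, size=(3 * n, 3))
    pts = pts[np.einsum('ij,ij->i', pts, pts) <= radius**2]
    return pts[:n]

def coulomb_folding(q1, q2, R1, R2, R, n=100000):
    """Monte Carlo estimate of Eq. (CoulPot) for two uniformly charged
    spheres whose centers are separated by R along the z-axis."""
    r1 = sample_ball(n, R1)
    r2 = sample_ball(n, R2) + np.array([0.0, 0.0, R])
    return q1 * q2 * np.mean(1.0 / np.linalg.norm(r1 - r2, axis=1))

U = coulomb_folding(1.0, 1.0, 6.0, 7.0, 100.0)
print(abs(U - 1.0 / 100.0) < 1e-4)  # True: matches the point-charge value
```

For deformed, oriented density distributions the sampling region changes but the estimator is the same, which is why the deformation enters only through the integration limits in Eq. (\[limits\]).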
The distance between the two points is given by $$|\mathbf{r}_1-\mathbf{r}_2-\mathbf{R}|=\sqrt{(\mathbf{r}_1-\mathbf{r}_2)^2+R^2-2(\mathbf{r}_1-\mathbf{r}_2)\cdot\mathbf{R}} \label{r12R}$$ It is easy to find the following relations, $$(\mathbf{r}_1-\mathbf{r}_2)^2=r_1^2+r_2^2-2r_1r_2(\sin\theta_1\sin\theta_2\cos(\phi_1-\phi_2)+\cos\theta_1\cos\theta_2), \label{r122}$$ $$(\mathbf{r}_1-\mathbf{r}_2)\cdot\mathbf{R}=(r_1\cos\theta_1-r_2\cos\theta_2)R, \label{r12R2}$$ where $\theta_{1,2}$ and $\phi_{1,2}$ are the angles of $\mathbf{r}_{1,2}$ with respect to the coordinates ($\vec{x}$, $\vec{y}$, $\vec{z}$) and ($\vec{x}\prime$, $\vec{y}\prime$, $\vec{z}\prime$), respectively. The upper and lower limits of $r_{1,2}$, $\theta_{1,2}$, and $\phi_{1,2}$ are $$r_{1,2}:(0,\Re(\alpha_{1,2}));\hspace{0.5cm}\theta_{1,2}:(0,\pi);\hspace{0.5cm}\phi_{1,2}:(0,2\pi), \label{limits}$$ where $\Re(\alpha_{1})$ and $\Re(\alpha_{2})$ describe the nuclear surfaces with quadrupole deformations, $$\Re(\alpha_{i})=R_{0i}(1+\beta_2^iY_{20}(\alpha_{i})). \label{sphhar}$$ Here $R_{0i}$ are the spherical radii of the two nuclei, which preserve their fixed volumes. $Y_{20}(\alpha)=(5/4\pi)^{1/2}P_2(\cos\alpha)=(5/4\pi)^{1/2}(3\cos^2\alpha-1)/2$ is the spherical harmonic, and axial symmetry is preserved. $\beta_2^{i}$ is the quadrupole deformation parameter of nucleus $i$, taken from Ref. [@Moe95]. It is easy to write down the expressions for $\alpha_1$ and $\alpha_2$ as $$\cos\alpha_{1}=\hat{\vec{a}}_{1}\cdot\hat{\mathbf{\Re}}(\alpha_1)=\sin\theta_{1}\cos\phi_{1}\sin\gamma_{1}+\cos\theta_{1}\cos\gamma_{1}, \label{cosa1}$$ and $$\cos\alpha_{2}=\hat{\vec{a}}_{2}\cdot\hat{\mathbf{\Re}}(\alpha_2)=\sin\theta_{2}\cos\phi_{2}\sin\gamma_{2}+\cos\theta_{2}\cos\gamma_{2}. \label{cosa2}$$ For the nuclear potential, following the work by Adamian et al. 
[@Ada96], we adopt a Skyrme-type interaction without momentum and spin dependence, in which a zero-range form of the effective interaction, $\delta(\mathbf{r}_1-\mathbf{r}_2)$, is assumed. The nuclear potential is obtained in the sudden approximation [@Ada96], $$\begin{aligned} U_N(R)&=&C_0\{\frac{F_{in}-F_{ex}}{\rho_{00}}(\int{\rho_1^2(\mathbf{r})\rho_2(\mathbf{r}-\mathbf{R})d\mathbf{r}} \nonumber\\ &+&\int{\rho_1(\mathbf{r})\rho_2^2(\mathbf{r}-\mathbf{R})d\mathbf{r}})+\int{\rho_1(\mathbf{r})\rho_2(\mathbf{r}-\mathbf{R})d\mathbf{r}}\} \label{Un}\end{aligned}$$ with $$F_{in,ex}=f_{in,ex}+f'_{in,ex}\frac{N_1-Z_1}{A_1}\frac{N_2-Z_2}{A_2}. \label{finex}$$ Here $N_{1,2}$ and $Z_{1,2}$ are the neutron and proton numbers of the two nuclei, respectively. The isospin dependence of the nucleon-nucleon interaction is thus taken into account, although its relative influence is small. The parameters $C_0=300$ MeV$\cdot$fm$^3$, $f_{in}=0.09$, $f_{ex}=-2.59$, $f'_{in}=0.42$, $f'_{ex}=0.54$, and $\rho_{00}=0.17$fm$^{-3}$ are used in this work. The functions $\rho_{1}$ and $\rho_2$ are two-parameter Woods-Saxon density distributions (now we set the center of the “P”-nucleus at the coordinate origin and $\mathbf{r}_1=\mathbf{r}$) $$\rho_1(\mathbf{r})=\frac{\rho_{00}}{1+\exp((r-\Re_1(\alpha_1))/a_{\rho_1})} \label{rho1}$$ and $$\rho_2(\mathbf{r})=\frac{\rho_{00}}{1+\exp((|\mathbf{r}-\mathbf{R}|-\Re_2(\alpha_2))/a_{\rho_2})}. \label{rho2}$$ The parameters $a_{\rho_{1}}$ and $a_{\rho_{2}}$ represent the surface diffuseness of the two nuclei, respectively. Whereas $\cos\alpha_1$ is given in Eq. 
(\[cosa1\]), we use the following formula with $|\mathbf{r}-\mathbf{R}|=\sqrt{r^2+R^2-2rR\cos\theta}$ $$\begin{aligned} \cos\alpha_2&=&\frac{(\mathbf{r}-\mathbf{R})\cdot{\hat{\vec{a_2}}}}{|\mathbf{r}-\mathbf{R}|}\\ &=&\frac{r(\sin\theta\cos\phi\sin\gamma_2+\cos\theta\cos\gamma_2)-R\cos\gamma_2}{\sqrt{r^2+R^2-2rR\cos\theta}}\nonumber.\end{aligned}$$ We directly calculate the six- and three-dimensional integrals in Eqs. (\[CoulPot\]) and (\[Un\]) numerically. For Eq. (\[Un\]), a truncation parameter $r_{cut}$ for the upper limit of $r$ is introduced because of the long tails of the nuclear densities given in Eqs. (\[rho1\]) and (\[rho2\]). For each mass asymmetry we calculated the sum of the Coulomb and nuclear potential energies as a function of the internuclear distance $R$ and took the value of the potential at the minimum in $R$, which lies at smaller $R$ than $R_{CB}$ (the Coulomb-barrier saddle point), as the driving potential of the DNS model. Numerical results ================= In this paper, the nuclear and Coulomb interactions for the DNS of the reaction $^{64}Ni+^{208}Pb\rightarrow ^{272}Ds$ are studied by taking the nuclear deformations and the corresponding orientations into account. For simplicity, the diffuseness parameters $a_{\rho_1}$ and $a_{\rho_2}$ are chosen as $a_{\rho_1}=a_{\rho_2}=0.6$fm, which is slightly larger than the values in Ref. [@Ada96]. Furthermore, $r_{01}=r_{02}=1.2$fm is used. The parameter $r_{cut}=25$fm for the radial integration of the nuclear potential of the deformed nucleus in Eq. (\[Un\]) is taken to ensure adequate precision. Fig.\[fig2\] (a) and (b) show the nuclear interaction potentials of two sets of projectile-target combinations, namely $^{28}Na+^{244}Es$ and $^{74}Zn+^{198}Hg$, which form the same compound nucleus $^{272}Ds$, as a function of the distance $R$ between the centers of the two nuclei. The corresponding nucleus-nucleus potentials including both the nuclear and Coulomb interactions are given in Fig.\[fig2\] (c) and (d). 
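The last step, scanning $U_N+U_C$ as a function of $R$ and taking its value at the minimum inside the Coulomb barrier, can be sketched with a toy potential consisting of a point-charge Coulomb tail plus a Woods-Saxon-shaped attraction; the parameters in the following Python sketch are illustrative only and are not those used in the actual calculation:

```python
import numpy as np

def U_tot(R, Z1=28, Z2=82, V0=-120.0, Rn=11.5, a=0.65):
    """Toy nucleus-nucleus potential: point-charge Coulomb tail plus a
    Woods-Saxon-shaped nuclear attraction (illustrative parameters)."""
    e2 = 1.44  # MeV fm
    return Z1 * Z2 * e2 / R + V0 / (1.0 + np.exp((R - Rn) / a))

R = np.linspace(10.0, 16.0, 1201)
U = U_tot(R)
i_barrier = np.argmax(U)             # Coulomb-barrier position R_CB
i_pocket = np.argmin(U[:i_barrier])  # minimum at R < R_CB: the pocket
print(R[i_pocket] < R[i_barrier])    # True: the pocket lies inside the barrier
```

The value `U[i_pocket]` plays the role of the driving-potential entry for the given fragmentation; in the full calculation $U_N$ and $U_C$ are of course the folded, orientation-dependent potentials described above.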
In Fig.\[fig2\] (a) and (c), both nuclei are prolately deformed, $^{28}Na$ with $\beta_2=0.257$ and $^{244}Es$ with $\beta_2=0.234$, respectively, while in Fig.\[fig2\] (b) and (d), $^{74}Zn$ is prolate and $^{198}Hg$ oblate, with $\beta_2=0.125$ and $-0.112$, respectively. The system $^{74}Zn+^{198}Hg$ is more mass-symmetric, [*i.e.*]{}, it has a smaller $|\eta|$ than the system $^{28}Na+^{244}Es$, and thus a higher Coulomb potential energy. In each panel, different orientations for the two systems, [*i.e.*]{}, the tip-tip, tip-belly and belly-belly orientations, are investigated; an illustration is shown in panel (c). When both $\beta_2$ values are positive, as in (a) and (c), the angles $(\gamma_1,\gamma_2)=(0^\circ,180^\circ)$, $(0^\circ,90^\circ)$, and $(90^\circ,90^\circ)$ correspond to the tip-tip, tip-belly, and belly-belly cases, respectively, while for $\beta_2^1>0$ and $\beta_2^2<0$, as in (b) and (d), the corresponding angles are $(\gamma_1,\gamma_2)=(0^\circ,90^\circ)$, $(0^\circ,0^\circ)$, and $(90^\circ,0^\circ)$, respectively. The two nuclei become more compact in the belly-belly orientation than in the tip-tip one, and the minimum of the potential energy for the belly-belly orientation lies at a smaller $R$ than that of the tip-tip case. When the orientation changes from the tip-tip type to the belly-belly one, the minima of the nuclear potentials in (a) and (b) behave differently from those of the total potentials shown in (c) and (d), [*i.e.*]{}, the minima of the nuclear interaction go down while the minima of the total interaction increase. The reason for the decrease from tip-belly to belly-belly in (c) is that the increase of the Coulomb interaction energy is smaller than the decrease of the nuclear interaction energy.
Defining a distance between the surfaces of the two nuclei, e.g., $\Delta R=R_{min}-(\Re_{1}^{long}+\Re_{2}^{long})$ for the tip-tip case and $\Delta R=R_{min}-(\Re_{1}^{short}+\Re_{2}^{short})$ for the belly-belly case (where $\Re_{i}^{long,short}$ denote the long and short semi-axes of the deformed nucleus $i$, respectively), we find that $\Delta R$ changes only slightly with orientation. When $|\eta|$ decreases from $1$ to $0$, the value of $\Delta R$ increases due to the larger repulsive Coulomb force, which can be seen more clearly in Fig.\[fig5\]. The effect of the mass asymmetry and the orientation of the DNS on the driving potential can therefore be analyzed from these results. ![The nuclear (in (a) and (b)) and the nuclear+Coulomb potentials (in (c) and (d)) for two projectile-target combinations leading to the same compound nucleus $^{272}Ds$, shown as a function of $R$ for different orientations of the two nuclei.[]{data-label="fig2"}](fig2.eps){width="90.00000%"} Fig.\[fig3\] shows the potentials at the minimum of $U_N+U_C$ illustrated in Fig.\[fig2\] for the above two combinations as a function of the orientation. The orientation is chosen such that $\gamma_1+\gamma_2=180^\circ$ for the system $^{28}Na+^{244}Es$ and $\gamma_1+\gamma_2=90^\circ$ for $^{74}Zn+^{198}Hg$. On the left hand side, $\gamma_1$ goes from $180^\circ$ to $0^\circ$ and $\gamma_2$ from $0^\circ$ to $180^\circ$; on the right hand side, $\gamma_1$ is chosen from $0^\circ$ to $-180^\circ$ and $\gamma_2$ from $90^\circ$ to $270^\circ$ in order to obtain trends of the potentials as a function of the orientation similar to those on the left hand side. In both cases, the orientation changes from tip-tip to belly-belly and finally back to tip-tip (the orientation of the nuclei is shown in the lower-left plot of Fig.\[fig3\]).
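The surface separation $\Delta R$ introduced above can be illustrated with a small sketch. The snippet below is only illustrative: it assumes the standard quadrupole parametrization $\Re(\alpha)=r_0A^{1/3}\left(1+\beta_2 Y_{20}(\alpha)\right)$ for the nuclear radius (with $r_0=1.2$ fm, as in the text), and the value of $R_{min}$ is a placeholder, whereas in the paper it is obtained from the computed potential minimum.

```python
import math

# Illustrative sketch of Delta R, assuming the standard quadrupole
# parametrization R(alpha) = r0 * A^(1/3) * (1 + beta2 * Y20(alpha)).
# R_MIN below is a hypothetical placeholder, not a computed value.
Y20 = lambda a: math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * math.cos(a) ** 2 - 1.0)

def axes(A, beta2, r0=1.2):
    """Long (alpha = 0) and short (alpha = pi/2) semi-axes in fm."""
    R0 = r0 * A ** (1.0 / 3.0)
    return R0 * (1.0 + beta2 * Y20(0.0)), R0 * (1.0 + beta2 * Y20(math.pi / 2.0))

# 28Na + 244Es with beta2 = 0.257 and 0.234, as quoted in the text.
long1, short1 = axes(28, 0.257)
long2, short2 = axes(244, 0.234)
R_MIN = 12.0  # placeholder minimum position, fm
dR_tip_tip = R_MIN - (long1 + long2)
dR_belly_belly = R_MIN - (short1 + short2)
print(dR_tip_tip, dR_belly_belly)
```

For a fixed $R_{min}$ the belly-belly surface separation is larger than the tip-tip one, which is consistent with the observation in the text that the belly-belly potential minimum sits at a smaller $R$.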
As the orientations of the two nuclei change, the nuclear potentials (upper panels) form a valley while the Coulomb potentials (middle panels) attain their peak value at the tip-tip orientation. The sum of the two contributions, shown in the bottom panels, is similar in shape to the Coulomb potential, but its variation with angle is gentler. ![The potentials with $\gamma_1+\gamma_2=180^\circ$ for $^{28}Na+^{244}Es$ (left panel) and $\gamma_1+\gamma_2=90^\circ$ for $^{74}Zn+^{198}Hg$ (right panel). See text for details.[]{data-label="fig3"}](fig3.eps){width="90.00000%"} Fig.\[fig4\] displays the driving potentials in Eq. (\[DrivPot\]) for different orientations. In the upper panel of the figure we fixed $\gamma_1$ and $\gamma_2$ to $0^\circ$ or $90^\circ$, while in the lower panel, the results for the tip-tip and belly-belly orientations are shown. The curves in Fig.\[fig4\] were calculated by starting with the initial fragmentation $^{64}Ni+^{208}Pb$ ($\eta_i$) and transferring nucleons in steps of one proton or one neutron, searching for the minimum of the potential energy at each step. Therefore, the potentials are only approximately symmetric with respect to $\eta=0$ for the tip-tip and belly-belly cases, while for the orientations $(0^\circ, 0^\circ)$ and $(0^\circ, 90^\circ)$ in the upper panel, this symmetry is obviously lost. From Fig.\[fig4\], we find that the driving potential is quite sensitive to the choice of orientations of the two nuclei. The driving potential for the tip-tip configuration is smaller than that for the belly-belly configuration over the whole range of $\eta$. This result disagrees with that obtained in Ref. [@Mis02]; the discrepancy might be associated with the different treatment of the heavy-ion fusion process.
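The construction of the driving potential curves described above, transferring one proton or one neutron at a time and following the minimum of the potential energy, amounts to a greedy walk over fragmentations. The sketch below illustrates this idea with a purely hypothetical toy potential $U(Z_1,N_1)$; in the paper the actual potential is $U_N+U_C$ evaluated at its minimum in $R$ for each fragmentation.

```python
# Illustrative greedy nucleon-transfer search: starting from the initial
# fragmentation, move one proton or one neutron to whichever neighboring
# partition lowers the potential most. The toy potential U is hypothetical;
# in the paper it is the computed U_N + U_C at the minimum in R.
Z_TOT, N_TOT = 110, 162  # 272Ds

def U(Z1, N1):
    """Toy driving potential with minima near |eta| = 0.3 (illustrative only)."""
    A1 = Z1 + N1
    eta = (2 * A1 - (Z_TOT + N_TOT)) / (Z_TOT + N_TOT)
    return (eta**2 - 0.3**2) ** 2

def greedy_path(Z1, N1):
    path = [(Z1, N1)]
    while True:
        steps = [(Z1 + dZ, N1 + dN) for dZ, dN in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        steps = [(z, n) for z, n in steps if 0 < z < Z_TOT and 0 < n < N_TOT]
        best = min(steps, key=lambda zn: U(*zn))
        if U(*best) >= U(Z1, N1):
            return path  # local minimum of the toy potential reached
        Z1, N1 = best
        path.append(best)

path = greedy_path(28, 36)  # 64Ni as the initial fragment
print(path[-1], len(path))
```

Because the search is local, the resulting curve depends on the starting fragmentation, which is why the driving potentials in the text are only approximately symmetric with respect to $\eta=0$.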
To evaluate the differences between the orientations, the upper half of Fig.\[fig5\] shows the differences between the potential energies of the various cases and that of the tip-tip case, where $U^{belly-belly}-U^{tip-tip}$ is shown with a line, while the other two cases are shown with different scattered symbols. The differences are peaked in two regions, one in $|\eta|<0.5$ and the other in $|\eta|>0.5$. In each region there exist large deformations of the nuclei, especially when $|\eta|$ is $0.1\sim0.4$. However, the detailed deformations of the two nuclei in the two regions are different: when $|\eta|>0.5$, the smaller nucleus is almost spherical while the larger counterpart is prolately deformed, whereas when $|\eta|<0.5$, both prolate and oblate deformations of the two nuclei occur. For example, for $\eta=-0.243$, the corresponding configuration is $^{103}Mo+^{169}Er$ with a pair of large prolate deformations $(\beta_2^1, \beta_2^2)=(0.358, 0.304)$, while for $\eta=-0.169$, the corresponding $^{113}Pd+^{159}Gd$ consists of an oblately ($-0.25$) deformed $^{113}Pd$ and a prolately ($0.28$) deformed $^{159}Gd$. The separation distance $\Delta R$ between the surfaces of the two nuclei of the DNS is shown in the lower graph of Fig.\[fig5\]. Because of the relatively large Coulomb potential, $\Delta R$ is stretched when the masses of the two participating nuclei become more equal, as was also seen in Fig.\[fig2\]. ![(a): the difference between the driving potential of the tip-tip case and those of the other cases. (b): $\Delta R$ as a function of $\eta$ for the tip-tip orientation. []{data-label="fig5"}](fig4.eps){width="90.00000%"} ![(a): the difference between the driving potential of the tip-tip case and those of the other cases. (b): $\Delta R$ as a function of $\eta$ for the tip-tip orientation.
[]{data-label="fig5"}](fig5.eps){width="90.00000%"} For the di-nuclear system $^{64}Ni+^{208}Pb\rightarrow ^{272}Ds$, Fig.\[fig6\] shows the comparison between the present driving potential, shown by dots, and that calculated in Ref. [@LWF03], shown by a fine line, for the tip-tip orientation. In the present calculations, the ground-state deformation has been taken into account for both the nuclear and Coulomb interactions, whereas in Ref. [@LWF03] a parameterized Morse formula [@Ada97], which does not consider the deformation of the nuclei, was adopted for the nuclear potential. We find that the two potentials are basically very close to each other; however, some obvious deviations appear in the regions of relatively large deformation, for example around $|\eta|\sim0.2$ and $|\eta|\sim0.8$. It should be pointed out that a deviation also occurs at $|\eta|\sim0$. After checking the detailed evolution path, we find that the configurations of the DNS in the two cases are different at this point. For the case with the nuclear interaction of spherical nuclei, the combination $^{136}La+^{136}I$ is preferred, while for the one with deformed nuclei, the more charge-symmetric combination $^{136}Ba+^{136}Xe$ is taken. Obviously, the effect of the large deformation in the region $|\eta|\sim0.2$ changes the final evolution path near $\eta=0$. ![The comparison of the driving potential obtained using the ground-state deformation and the tip-tip orientation for both the nuclear and Coulomb interactions with previous calculations which did not take the deformation into account in the nuclear part of the interaction.[]{data-label="fig6"}](fig6.eps){width="90.00000%"} Conclusion and outlook ====================== A double-folding method for calculating the nucleus-nucleus potential between deformed nuclei is further developed to improve the driving potential of nuclear fusion in the DNS model.
By taking the nuclear deformation into account in the nuclear interaction, together with the Coulomb interaction, the formalism for calculating the driving potential of heavy-ion fusion becomes more realistic. The deformations and orientations of the interacting nuclei contributing to the nuclear and Coulomb interactions are investigated for every fragmentation of the DNS considered. It is natural that the tip-tip orientation has the lowest interaction energy and may therefore be preferred during the nucleon exchange process. The minima of the nucleus-nucleus interaction along the distance between the centers of the two nuclei appear at larger distances when the mass asymmetry $|\eta|$ changes from unity to zero, which is due to the larger Coulomb force and favors the quasi-fission process. To our knowledge, the dynamical evolution of the deformation and orientation during the heavy-ion fusion process is not yet reasonably treated by the present models; our results estimate the effects of the deformation and orientation of the nuclei on the driving potential. Hopefully this will give a direction for further investigation and improvement. In the next step, we will calculate the fusion probability of various projectile-target combinations with deformed nuclei. Furthermore, when the distance between the surfaces of the two nuclei is elongated, the effect of quasi-fission is expected to be more pronounced. We will therefore consider a two-dimensional potential as a function of the mass asymmetry $\eta$ and the internuclear distance $R$ in order to investigate the effect of quasi-fission in subsequent work. Acknowledgment ============== The authors (Q. Li, W. Zuo, E. Zhao, J. Li, and W. Li) acknowledge the warm hospitality of the Institut für Theoretische Physik, Universität Giessen, Germany. We are also grateful to Dr. A. D. Torres for valuable discussions.
The work is supported by the National Natural Science Foundation of China under Grants No. 10175082, 10235020, 10375001, and 10311130175; the Major Basic Research Development Program under Grant No. G2000-0774-07; the Knowledge Innovation Project of the Chinese Academy of Sciences under Grant No. KJCX2-SW-N02; the One Hundred Person Project of CAS; the CAS K. C. Wong Post-doctoral Research Award Fund; the National Key Program for Basic Research of the Ministry of Science and Technology (2001CCB01200, 2002CCB00200); and financial support from the DFG of Germany. [99]{} S. Hofmann, et al., Eur. Phys. J. A [**14**]{}, 147(2002).\ S. Hofmann, et al., Z. Phys. A [**350**]{}, 277(1995).\ S. Hofmann, et al., Z. Phys. A [**354**]{}, 229(1996).\ Yu. Ts. Oganessian, et al., Phys. Rev. C [**69**]{}, 021601R(2004); Erratum ibid. C [**69**]{}, 029902E(2004).\ Yu. Ts. Oganessian, et al., Phys. Rev. C [**69**]{}, 054607(2004).\ A.C. Berriman, D.J. Hinde, M. Dasgupta, C.R. Morton, R.D. Butt, and J.O. Newton, Nature [**413**]{}, 144(2001).\ G.G. Adamian, N.V. Antonenko, and W. Scheid, Phys. Rev. C [**69**]{}, 011601R(2004); ibidem 014607(2004).\ W.F. Li, N. Wang, J.F. Li, H.S. Xu, W. Zuo, E.G. Zhao, J.Q. Li, and W. Scheid, Europhys. Lett. [**64**]{}, 750(2003).\ G.G. Adamian, N.V. Antonenko, and W. Scheid, Nucl. Phys. A [**618**]{}, 176(1997).\ C.W. Shen, G. Kosenko, and Y. Abe, Phys. Rev. C [**66**]{}, 061602(2002).\ Y. Abe and B. Bouriquet, Nucl. Phys. A [**722**]{}, 241(2003); Erratum ibid. A [**733**]{}, 321(2004).\ V.I. Zagrebaev, Phys. Rev. C [**64**]{}, 034606(2001).\ V.I. Zagrebaev, Nucl. Phys. A [**734**]{}, 164(2004).\ W.J. Swiatecki, Phys. Scr. [**24**]{}, 113(1981).\ J.P. Blocki, H. Feldmeier, and W.J. Swiatecki, Nucl. Phys. A [**459**]{}, 145(1986).\ V.V. Volkov, Izv. Akad. Nauk SSSR, Ser. Fiz. [**50**]{}, 1879(1986).\ N.V. Antonenko, E.A. Cherepanov, A.K. Nasirov, V.B. Permjakov, and V.V. Volkov, Phys. Lett. B [**319**]{}, 425(1993).\ G.G. Adamian, N.V. Antonenko, R.V. Jolos, S.P.
Ivanova, and O.I. Melnikova, Int. J. Mod. Phys. E [**5**]{}, 191(1996).\ W.D. Myers, W.J. Swiatecki, LBL Report, UCRL-11980, (1965).\ P. Möller, J.R. Nix, W.D. Myers, and W.J. Swiatecki, At. Data Nucl. Data Tables [**59**]{}, 185(1995).\ Ş. Mişicu, W. Greiner, Phys. Rev. C [**66**]{}, 044606(2002).\ [^1]: Corresponding author. E-mail address: jqli@impcas.ac.cn
--- abstract: 'We construct a one-to-one continuous map from the Morse boundary of a hierarchically hyperbolic group to its Martin boundary. This construction is based on deviation inequalities generalizing Ancona’s work on hyperbolic groups [@AnconaMartinhyperbolic]. This provides a possibly new metrizable topology on the Morse boundary of such groups. We also prove that the Morse boundary has measure 0 with respect to the harmonic measure unless the group is hyperbolic.' author: - Matthew Cordes and Matthieu Dussaule and Ilya Gekhtman bibliography: - 'MorseMartin.bib' title: An immersion of the Morse boundary in the Martin boundary --- Introduction ============ To any Gromov hyperbolic space can be associated a compactification obtained by gluing asymptotic equivalence classes of geodesic rays [@Gromov]. This topological space, called the Gromov boundary, is a quasi-isometry invariant and has therefore been invaluable in studying algebraic, geometric, probabilistic and dynamical properties of hyperbolic groups, that is groups which act properly and cocompactly on Gromov hyperbolic spaces. A major theme in recent research in metric geometry and geometric group theory has been studying various generalizations of hyperbolic groups which nevertheless admit interesting actions on Gromov hyperbolic spaces. These include (a) Weakly hyperbolic groups: groups which admit non-elementary actions on (possibly non proper) Gromov hyperbolic spaces. (b) Acylindrically hyperbolic groups [@Sela], [@Bowditchcurvecomplex], [@Osinacylindrical]: weakly hyperbolic groups which admit a non-elementary action on a hyperbolic space satisfying a weakened discontinuity assumption called acylindricity. These include outer automorphism groups of free groups, as well as the classes listed below. (c) Hierarchically hyperbolic groups [@BehrstockHagenSisto1], [@Sistosurvey]: a special class of weakly hyperbolic groups admitting a nice combinatorial description. 
These include mapping class groups, graph products such as right-angled Artin and Coxeter groups, and finite covolume Kleinian groups. (d) Relatively hyperbolic groups [@Farb], [@Bowditch], [@Osinrelatively], [@DrutuSapir]: groups admitting geometrically finite actions on proper geodesic Gromov hyperbolic spaces. These include fundamental groups of finite volume negatively curved manifolds and free products of arbitrary finite collections of groups. To better study these examples, one would like to associate to non-hyperbolic spaces a quasi-isometry invariant bordification analogous to the Gromov boundary. One such object is given by the Morse boundary, which roughly encodes all the hyperbolic directions. More precisely, following Cordes [@Cordes], given a function $N:[1,+\infty)\times [0,\infty) \to [0,\infty)$, a geodesic $\alpha$ in a metric space $X$ is called $N$-Morse if for every pair of points $x,y$ on $\alpha$, any $(\lambda,c)$-quasi-geodesic joining $x$ to $y$ stays within $N(\lambda,c)$ of $\alpha$. The function $N$ is called a Morse gauge. The $N$-Morse boundary of a metric space $X$, $\partial_M^NX$, is the set of equivalence classes of $N$-Morse geodesic rays, where two $N$-Morse geodesic rays are declared to be equivalent if they stay within bounded distance of each other. One can endow the $N$-Morse boundary with a topology mimicking the definition of the topology on the Gromov boundary of a hyperbolic space $X$: two equivalence classes of geodesic rays are close in this topology if they fellow travel for a long time. The Morse boundary of $X$, $\partial_MX$, as a set, is the union of all the $N$-Morse boundaries. For $\mathrm{CAT}(0)$ spaces, it coincides with the contracting boundary introduced by Charney and Sultan [@CharneySultan]. It is topologized with the direct limit topology over all Morse gauges. The resulting topological space is a visibility space (i.e.
every pair of points in the Morse boundary can be joined by a bi-infinite Morse geodesic) and it is invariant under quasi-isometry; see [@Cordes] for more details on all this. Moreover, it is compact if and only if $X$ is hyperbolic, see [@Murray]. This topology is, however, not metrizable in general. One can also endow the Morse boundary with a metrizable topology, called the Cashen–Mackay topology, see [@CashenMacKay]. Whenever the Morse boundary embeds as a set into a metrizable space $Z$, such as the visual boundary of a CAT(0) space, it seems interesting to compare the Cashen–Mackay topology with the topology induced from $Z$, see [@Incerti-Medici] for instance. We also refer to [@Cordessurvey] for many more properties of the Morse boundary. A major stream in geometric group theory is devoted to studying to what extent the algebraic and geometric properties of a group $G$ determine the properties of Markov chains on it, and conversely. One way to do this is to relate asymptotic properties of random walks on a group to the dynamics of its action on some geometric boundary $Z$. The goal of such an endeavor is often to show that typical paths of the random walk generated by a probability measure $\mu$ on $G$ converge to a point in the geometric boundary $Z$ and, if possible, that $G\cup Z$ is in a measure theoretic sense the maximal bordification with this property. This provides a geometric realization of (some quotient of) the Poisson boundary of the random walk, and this identification can in turn provide geometric information, see [@KaimanovichVershik], [@Erschlerannals], [@Erschlersurvey]. Our goal is to study the Morse boundary of Cayley graphs in this framework. Unfortunately, for non-hyperbolic groups it is too small to be a model for the Poisson boundary: indeed, as we will show, typical paths of the random walk do not converge to points in the Morse boundary.
Nevertheless, in this paper we connect geometry and probability in a different way, by showing that the Morse boundary embeds into a probabilistically defined topological space called the Martin boundary. The Martin boundary is defined as follows. Consider a transient random walk on a finitely generated group $\Gamma$. Let $F(g,h)$ be the probability that a random path starting at $g$ ever reaches $h$. The expression $d_{G}(g,h)=-\log F(g,h)$ defines a (possibly asymmetric) metric on $\Gamma$ called the Green metric. Its horofunction boundary is called the Martin boundary, see [@Sawyer]. We will give more details on Poisson boundaries and Martin boundaries in Section \[SectionPoissonMartin\]. Identifying the precise homeomorphism type of the Martin boundary is a difficult problem: in general, different random walks on the same group can have wildly different Martin boundaries, see [@Gouezelannalsproba]. It is therefore interesting to relate geometric properties of a group $\Gamma$ with the Martin boundaries of large classes of random walks on $\Gamma$. Ancona proved that for finitely supported random walks on hyperbolic groups, the Martin boundary is equivariantly homeomorphic to the Gromov boundary. We prove a weaker result in a more general context. Recall that a relatively hyperbolic group is called non-elementary if its Bowditch boundary, which is the limit set in the Gromov boundary of a hyperbolic space $X$ on which the group acts geometrically finitely by isometries, is infinite. Equivalently, the action on $X$ contains infinitely many independent loxodromic elements. We also fix the following terminology for hierarchically hyperbolic groups. Recall that such a group $\Gamma$ is equipped with an index set $\mathfrak{S}$ together with $\delta$-hyperbolic spaces $(CW,d_W)$, $W\in \mathfrak{S}$ (the constant $\delta$ is fixed). It is also equipped with projection maps $\pi_W:\Gamma \to CW$. Elements of $\mathfrak{S}$ are called domains.
The set of domains $\mathfrak{S}$ is endowed with a partial order with respect to which $\mathfrak{S}$ is either empty or has a unique maximal element $\mathbf{S}$. The group $\Gamma$ acts acylindrically on the maximal space $C\mathbf{S}$. Whenever $C\mathbf{S}$ has unbounded diameter, this action is non-elementary, so in particular $\Gamma$ is acylindrically hyperbolic. We will say in this situation that $\Gamma$ is *a non-elementary hierarchically hyperbolic group*. \[maintheoremMorsetoMartin\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Let $\mu$ be a probability measure on $\Gamma$ whose finite support generates $\Gamma$ as a semi-group. Then, the identity map on $\Gamma$ extends to an injective map $\Phi$ from the Morse boundary to the Martin boundary which is continuous with respect to the direct limit topology. Indeed, in the proof of this theorem, we show that for each Morse gauge $N$, $\partial_M^N \Gamma$ topologically embeds in the Martin boundary. Using some results from [@Cordes] we get a corollary that sheds some light on the topology of the Martin boundary of the mapping class group: for any $n \geq 2$ there exists a surface of finite type $S$ such that $\partial_\mu \mathrm{MCG}(S)$ contains a topologically embedded $(n-1)$-sphere, see Corollary \[corollaryMCGs\]. Thus, the Morse boundary can be identified with a certain “canonical” subset of the Martin boundary. For non-elementary relatively hyperbolic groups the corresponding result follows from a stronger result of [@GGPY], but we provide another proof in this paper. In Ancona’s identification of the Martin boundary of a random walk on a hyperbolic group with the Gromov boundary, the main technical step is a certain deviation inequality asserting that the Green metric is roughly (up to an additive constant) additive along word geodesics.
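This additivity phenomenon can be illustrated on a toy example. The sketch below (purely illustrative; $\mathbb{Z}$ is of course not one of the groups covered by the theorems of this paper) considers the transient nearest-neighbour walk on $\mathbb{Z}$ with $P(+1)=p>q=P(-1)$: reaching $-n$ from $0$ forces passage through every intermediate point, so $F(0,-n)=(q/p)^n$ and the Green metric $d_G(0,-n)=n\log(p/q)$ is exactly additive along the geodesic towards $-\infty$, i.e. one can take $C=0$. A seeded Monte Carlo estimate is compared with the exact value.

```python
import math
import random

# Toy biased nearest-neighbour walk on Z with P(+1) = p > q = P(-1).
# Reaching -n from 0 forces passage through -1, ..., -(n-1), so
# F(0,-n) = (q/p)^n and d_G(0,-n) = n * log(p/q) is exactly additive.
p, q = 0.7, 0.3

def F_hat(n, trials=5000, max_steps=400, seed=0):
    """Monte Carlo estimate of F(0,-n), the probability of ever reaching -n.
    Paths are truncated at max_steps; the drift makes the truncation bias tiny."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = 0
        for _ in range(max_steps):
            x += 1 if rng.random() < p else -1
            if x == -n:
                hits += 1
                break
    return hits / trials

exact = (q / p) ** 2
print(F_hat(2), exact, -math.log(exact))  # estimate vs F(0,-2) and d_G(0,-2)
```

The content of the Ancona-type estimates is that a comparable two-sided multiplicativity, with a uniform additive error, persists along word geodesics in hyperbolic groups, and, by Theorem \[maintheoremMorseAncona\], along Morse geodesics in the groups considered here.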
Namely, if $g,h,w\in \Gamma$ lie on a word geodesic, aligned in this order, then $|d_{G}(g,w)-d_{G}(g,h)-d_{G}(h,w)|<C$ for a constant $C$ depending only on $\Gamma$ and the random walk. In order to prove Theorem \[maintheoremMorsetoMartin\] we show that a similar inequality holds along Morse geodesics. \[maintheoremMorseAncona\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Let $\mu$ be a probability measure on $\Gamma$ whose finite support generates $\Gamma$ as a semi-group. Then, for any Morse gauge $N$, there exists $C$ such that for any $N$-Morse geodesic $\alpha$ and for any points $g,h,w \in \alpha$ aligned in this order, $$|d_{G}(g,w)-d_{G}(g,h)-d_{G}(h,w)|<C.$$ We do not know in general if the map constructed in Theorem \[maintheoremMorsetoMartin\] is continuous for the Cashen-Mackay topology. However, for relatively hyperbolic groups whose parabolic subgroups have empty Morse boundary, it is a homeomorphism onto its image for this topology. Indeed, [@CashenMacKay Theorem 7.6] shows that the Morse boundary endowed with this topology embeds in the set of conical limit points in the Bowditch boundary, which in turn embeds in the Martin boundary by results of [@GGPY]. This leads to the following question. Let $\Gamma$ be a non-elementary hierarchically hyperbolic group. Is the map $\Phi$ a homeomorphism onto its image for the Cashen–Mackay topology on the Morse boundary? If this question has a negative answer, then we get a new metrizable topology on the Morse boundary, coming from the Martin boundary, which can be interesting in its own right, at least from the perspective of random walks. We also investigate the connection between the Morse boundary and the Poisson boundary. We prove that the image of the Morse boundary can be seen as a Borel subset of the Martin boundary.
We can thus measure the Morse boundary with respect to the harmonic measure $\nu$. We then prove the following. \[maintheoremmeasureMorse\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Let $\mu$ be a probability measure on $\Gamma$ whose finite support generates $\Gamma$ as a semi-group and let $\nu$ be the corresponding harmonic measure on the Martin boundary. Then $\nu(\partial_M\Gamma)=0$ unless $\Gamma$ is hyperbolic, in which case $\nu(\partial_M\Gamma)=1$. Using results of Maher and Tiozzo [@MaherTiozzo], one could prove that the Morse boundary can be embedded in another realization of the Poisson boundary, namely the Gromov boundary of a space on which the group acts acylindrically. We want to emphasize that the Poisson boundary is a measure theoretical object, so we could not a priori deduce from this the existence of a continuous map from the Morse boundary to the Martin boundary. Let us also mention the following. An acylindrically hyperbolic group $\Gamma$ may admit various non-elementary acylindrical actions on hyperbolic spaces. A fruitful line of research initiated by Abbott [@Abbott] is to find the best possible one. Such an action $\Gamma \curvearrowright X$ is called universal if every element $g$ of $\Gamma$ which is loxodromic for some acylindrical action of $\Gamma$ also acts loxodromically on $X$. One can also introduce a partial order on cobounded acylindrical actions, see [@AbbottBalasubramanyaOsin]. When it exists, a maximal action for this partial order is called a largest acylindrical action. Any largest action is necessarily a universal action and is unique. Abbott, Behrstock and Durham [@AbbottBehrstockDurham] proved that any non-elementary hierarchically hyperbolic group admits a largest acylindrical action.
More precisely, they proposed a way to modify the hierarchical structure of the group so that the action of $\Gamma$ on $C\mathbf{S}$ is a largest acylindrical action, where $\mathbf{S}\in \mathfrak{S}$ is the maximal domain. We will use this modified hierarchical structure in the following, see in particular the discussion after Proposition \[propMorselinearprogressrelativelyhyperbolic\]. Organization of the paper {#organization-of-the-paper .unnumbered} ------------------------- In Section \[SectionPoissonMartin\], we recall the precise definitions of the Poisson boundary and the Martin boundary and review known results about their identifications with geometric boundaries. Section \[Sectiondeviation\] is devoted to the proof of Theorem \[maintheoremMorseAncona\]. We first prove an enhanced version of the deviation inequalities in acylindrically hyperbolic groups obtained in [@MathieuSisto]. These inequalities basically state that the conclusions of Theorem \[maintheoremMorseAncona\] hold, provided that $x,y,z$ are well aligned in the hyperbolic space $X$ on which the group acylindrically acts. We then show that this condition is satisfied for any points $x,y,z$ on a Morse geodesic in a hierarchically hyperbolic group. In Section \[Sectionconstructionmap\], we use these inequalities to construct the map from the Morse boundary to the Martin boundary and we prove Theorem \[maintheoremMorsetoMartin\]. The construction, adapted from [@KaimanovichErgodicity], uses a bit of potential theory. Once the map is constructed, injectivity is proved exactly as in hyperbolic groups. On the other hand, the proof of continuity is new and different, since the constant $C$ in Theorem \[maintheoremMorseAncona\] depends on the Morse gauge, while it is fixed for hyperbolic groups. Finally, in Section \[Sectionmeasurezero\], we prove Theorem \[maintheoremmeasureMorse\]. We actually give two proofs.
The first one basically only uses ergodicity of the harmonic measure, and it seems it could be adapted to other contexts. However, it only works for symmetric random walks, so we give a second proof which is a bit more specific but does not need such an assumption. Acknowledgements {#acknowledgements .unnumbered} ---------------- The authors would like to thank the organizers of Young Geometric Group Theory VII in Les Diablerets, Switzerland and the 3-manifolds and Geometric Group Theory conference in Luminy, France, where part of this work was accomplished. The first author was supported by the ETH Zurich Postdoctoral Fellowship Program, cofunded by a Marie Curie Actions for People COFUND Program. The first author was also supported at the Technion by a Zuckerman STEM Leadership Fellowship and the Israel Science Foundation (Grant 1026/15). The third author was partially supported by the National Science and Engineering Research Council of Canada (NSERC). Random walks and probabilistic boundaries {#SectionPoissonMartin} ========================================= The Poisson boundary and the Martin boundary -------------------------------------------- Consider a finitely generated group $\Gamma$ and a probability measure $\mu$ on $\Gamma$. The random walk driven by $\mu$ is defined as $X_n=g_1\cdots g_n$, where the $g_k$ are independent random variables distributed according to $\mu$. We consider two probabilistic boundaries in this paper, the Poisson boundary and the Martin boundary. The Poisson boundary of a group $\Gamma$ endowed with a probability measure $\mu$ is the space of ergodic components for the time shift in the path-space of the associated random walk [@Erschlersurvey], [@KaimanovichVershik].
It is also isomorphic to a maximal measurable space endowed with a stationary probability measure $\lambda$ such that the random walk almost surely converges in the measure theoretical sense to a point in the boundary, that is, $X_n\cdot \lambda$ almost surely converges to a Dirac measure, see [@KaimanovichPoissonhyperbolic] and [@Furstenberg73]. We emphasize that the Poisson boundary is a purely measure theoretical space, unlike the topological Martin boundary, which we now define. We introduced the Green metric in the introduction. Let us give more details now. The Green function associated with $\mu$ is defined as $$G(g,h)=\sum_{n\geq 0}\mu^{*n}(g^{-1}h),$$ where $\mu^{*n}$ is the $n$-fold convolution power of $\mu$. Let $F(g,h)=\frac{G(g,h)}{G(e,e)}$. Then, $F(g,h)$ is the probability of ever reaching $h$ when starting the random walk at $g$, see for example [@Woess Lemma 1.13 (b)]. The Green metric $d_G$ is then defined as $$d_G(g,h)=-\log F(g,h).$$ When the measure $\mu$ is symmetric, this is indeed a distance; we refer to [@BlachereBrofferio], where this metric was first introduced, for more details. The triangle inequality can be reformulated as $$\label{triangleGreen} F(g_1,g_2)F(g_2,g_3)\leq F(g_1,g_3).$$ In particular, for any $g_1,g_2,g_3$, we have $$\label{triangleGreen'} G(g_1,g_2)G(g_2,g_3)\leq CG(g_1,g_3)$$ for some uniform constant $C$. Note that this inequality always holds, whether $\mu$ is symmetric or not, since it only states that the probability of reaching $g_3$ starting at $g_1$ is at least the probability of first reaching $g_2$ from $g_1$ and then $g_3$ from $g_2$. The Martin boundary is then the horofunction boundary associated with the Green metric $d_G$ on $\Gamma$. More precisely, introduce the Martin kernel $K(\cdot,\cdot)$ as $$K(g,h)=\frac{G(g,h)}{G(e,h)}.$$ Then, the Martin compactification is a topological space $\overline{\Gamma}^{\mu}$ such that 1.
the space $\Gamma$ endowed with the discrete topology is a dense open subset of $\overline{\Gamma}^{\mu}$, 2. letting $\partial_\mu\Gamma=\overline{\Gamma}^{\mu}\setminus \Gamma$, a sequence $g_n$ of elements of $\Gamma$ converges to a point in $\overline{\Gamma}^{\mu}$ if and only if $K(\cdot,g_n)$ converges pointwise to a function. If $\xi$ is the corresponding limit in $\partial_{\mu}\Gamma$, we will write $K_{\xi}$ for the corresponding limit function. The complement $\partial_\mu\Gamma$ of $\Gamma$ in the Martin compactification $\overline{\Gamma}^{\mu}$ is called the Martin boundary. Both the Martin compactification and the Martin boundary are unique up to homeomorphism. Moreover, they are both metrizable spaces. This definition makes sense whether $\mu$ is symmetric or not and whether $d_G$ is a true distance or not. We refer to [@Sawyer] for a detailed construction. One important aspect of the Poisson boundary and the Martin boundary is their connection with harmonic functions. Recall that a function $f:\Gamma \to {\mathbb{R}}$ is called harmonic (with respect to $\mu$) if for every $g\in \Gamma$, $$f(g)=\sum_{h\in \Gamma}\mu(g^{-1}h)f(h).$$ The following key theorem states that every positive harmonic function can be represented as an integral on the Martin boundary. [@Sawyer Theorem 4.1] Let $\Gamma$ be a finitely generated group with a probability measure $\mu$ and assume that the random walk driven by $\mu$ is transient. For every positive harmonic function $f$ on $(\Gamma,\mu)$, there exists a Borel measure $\nu_f$ on $\partial_\mu\Gamma$ such that for every $g\in \Gamma$, $$f(g)=\int K_\xi(g)d\nu_f(\xi).$$ In general, the measure $\nu_f$ is not unique. We restrict ourselves to the minimal boundary to obtain uniqueness. A positive harmonic function $f$ on $(\Gamma,\mu)$ is called minimal if for every positive harmonic function $\tilde{f}$ satisfying $\tilde{f}\leq Cf$ for some constant $C$, we have $\tilde{f}=C'f$ for some constant $C'$.
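To make the Martin kernel and the limit functions $K_\xi$ concrete, here is a small illustration of ours, not taken from the paper: for the simple random walk on the free group $F_2=\langle a,b\rangle$, whose Cayley graph is the $4$-regular tree, one has $F(g,h)=(1/3)^{d(g,h)}$, hence $K(g,h)=3^{d(e,h)-d(g,h)}$. Along $h_n=a^n$ the kernels stabilize, and the limit function is positive harmonic.

```python
# Illustration (ours, not from the paper): simple random walk on F_2 = <a, b>.
# Elements are reduced words over {'a', 'A', 'b', 'B'} (capitals = inverses).
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def mul(u, v):
    """Concatenate two reduced words and freely reduce the result."""
    out = list(u)
    for c in v:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inv(u):
    return ''.join(INV[c] for c in reversed(u))

def dist(u, v):
    """Word distance = length of the reduced word u^{-1} v."""
    return len(mul(inv(u), v))

def K(g, h):
    """Martin kernel K(g,h) = G(g,h)/G(e,h); on the 4-regular tree this
    equals 3^{d(e,h) - d(g,h)}, since F(g,h) = (1/3)^{d(g,h)}."""
    return 3.0 ** (dist('', h) - dist(g, h))

# K(., a^n) stabilizes once n exceeds the length of the argument.
vals = [K('bA', 'a' * n) for n in range(1, 12)]
assert len(set(vals)) == 1  # already constant: the kernel has converged

# The limit K_xi (xi = a^infinity) is harmonic: K_xi(g) equals the average
# of K_xi over the four neighbors g s, s in {a, A, b, B}.
K_xi = lambda g: K(g, 'a' * 40)  # n = 40 is far past stabilization here
for w in ['', 'a', 'ab', 'bA', 'BBa']:
    avg = sum(K_xi(mul(w, s)) for s in 'aAbB') / 4.0
    assert abs(avg - K_xi(w)) < 1e-9
```

On the tree, each such $K_\xi$ is in fact minimal, and the Martin boundary coincides with the space of ends of the tree.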
The minimal boundary is defined as $$\partial_\mu^{\min}\Gamma=\left \{\xi \in \partial_\mu\Gamma, K_\xi \text{ is harmonic and minimal}\right \}.$$ Then, for every positive harmonic function $f$, one can choose the measure $\nu_f$ giving full measure to $\partial_\mu^{\min}\Gamma$ and in this case, $\nu_f$ is unique, see [@Sawyer] for more details. The main relation between the Poisson boundary and the Martin boundary is as follows. Let $\Gamma$ be a finitely generated group with a probability measure $\mu$ and assume that the random walk $X_n$ driven by $\mu$ is transient. Then, the random walk $X_n$ almost surely converges to a point in the Martin boundary. Letting $X_{\infty}$ be the corresponding limit, denote by $\nu$ the law of $X_{\infty}$ on $\partial_\mu\Gamma$. Then, $(\partial_\mu\Gamma,\nu)$ is isomorphic as a measured space to the Poisson boundary. For more details on the connection between the two boundaries, we refer to the survey [@Kaimanovichsurvey], and in particular to [@Kaimanovichsurvey Section 2.2]. Comparing the boundaries ------------------------ Trying to identify the Poisson or the Martin boundary with a geometric boundary has been a fruitful line of research, initiated by the work of Furstenberg [@Furstenberg63], [@Furstenberg73]. A landmark result in this direction is the Kaimanovich criterion, stated in [@KaimanovichPoissonhyperbolic] which allows one to identify the Poisson boundary for some classes of measures $\mu$, see [@KaimanovichPoissonhyperbolic Theorem 6.4] for more details. This criterion applies in many situations. For any hyperbolic group $\Gamma$ and any probability measure $\mu$, whose support generates $\Gamma$ as a semi-group, the random walk almost surely converges to a point in the Gromov boundary of $\Gamma$. 
When $\mu$ has finite entropy and finite logarithmic first moment, one can use the Kaimanovich criterion to prove that the Gromov boundary endowed with the corresponding limit measure is a model for the Poisson boundary. More generally, Maher and Tiozzo [@MaherTiozzo] proved that for any group $\Gamma$ acting on a hyperbolic space $X$, for any non-elementary probability measure $\mu$ on $\Gamma$, the image of the random walk in $X$ almost surely converges to a point in the Gromov boundary $\partial X$ of $X$. Moreover, if the action is acylindrical, they used the Kaimanovich criterion to prove that whenever $\mu$ has finite entropy and finite logarithmic first moment, $\partial X$ endowed with the corresponding limit measure is a model for the Poisson boundary. In particular, for all the groups we consider in this paper, the Poisson boundary is identified with a geometric boundary, namely the Gromov boundary of any hyperbolic space $X$ on which the group acts acylindrically. On the other hand, as explained in the introduction, identifying the Martin boundary is a much more difficult task. Ancona proved in [@AnconaMartinhyperbolic] that for every hyperbolic group $\Gamma$ and every probability measure $\mu$ whose finite support generates $\Gamma$ as a semi-group, the Martin boundary is homeomorphic to the Gromov boundary. Recently, Gekhtman, Gerasimov, Potyagailo and Yang proved that for finitely supported measures on a relatively hyperbolic group, the Martin boundary always covers the Bowditch boundary [@GGPY]. The preimage of a conical limit point is always reduced to a point; in fact, conical limit points embed into the Martin boundary. Note that whenever the parabolic subgroups have empty Morse boundary, the Morse boundary of the group can be seen as a subset of conical limit points, so results of [@GGPY] provide a more direct proof of our result in this context.
It is expected that the Martin boundary is bigger than the Bowditch boundary and that the preimage of a parabolic limit point is the Martin boundary of the induced walk on the corresponding parabolic subgroup. This is proved for groups hyperbolic relative to virtually abelian subgroups in [@DGGP]. To the authors’ knowledge, not much is known in general about the Martin boundary of a hierarchically hyperbolic group, even for mapping class groups. Note however that Kaimanovich and Masur [@KaimanovichMasur] used the Kaimanovich criterion to show that Thurston’s PMF boundary of Teichmüller space is a model for the Poisson boundary of the mapping class group, and the stationary measure therein gives full weight to endpoints inside the PMF boundary of Teichmüller geodesics recurring to a fixed subset of Teichmüller space. It seems reasonable to conjecture that this set of recurrent foliations (which may be considered the direct analogue of conical limit points for relatively hyperbolic groups) can be embedded into the Martin boundary, which would provide a direct proof that the Morse boundary embeds into the Martin boundary in this context. Our result can be viewed as a small step in this direction. Deviation inequalities {#Sectiondeviation} ====================== Global-Ancona inequalities in acylindrically hyperbolic groups -------------------------------------------------------------- Consider a finitely generated group $\Gamma$ acting acylindrically on a hyperbolic space $X$. If $\mathcal{S}$ is a finite generating set for $\Gamma$, write $d_{\mathcal{S}}$ for the word distance associated with $\mathcal{S}$ or simply $d$ whenever $\mathcal{S}$ is fixed. Also, given the choice of a fixed point $o\in X$, write $d_X$ for the induced distance in $\Gamma$, that is $d_X(g,h)=d_X(g\cdot o,h\cdot o)$. A finite sequence $\alpha=g_1,...,g_n$ of points in $\Gamma$ is called a path if $g^{-1}_{i}g_{i+1}\in \mathcal{S}$ for each $i$.
Its length in the word metric induced by $(\Gamma, \mathcal{S})$ will be denoted $l_{\Gamma}(\alpha)$. Recall the following definition from [@MathieuSisto]. Fix a finite generating set $\mathcal{S}$ for $\Gamma$ and let $g,h\in \Gamma$ and $\alpha$ be a $d_\mathcal{S}$ word geodesic from $g$ to $h$. Let $T,S\geq 1$. Then, a point $p$ on $\alpha$ is called a $(T,S)$-linear progress point if for every $p_1,p_2$ on $\alpha$ such that $p_1,p,p_2$ are aligned in this order and such that $d(p,p_1)\geq S$, $d(p,p_2)\geq S$, we have $$d(p_1,p_2)\leq Td_X(p_1,p_2).$$ Whenever $f$ and $g$ are two functions such that there exists $C$ such that $\frac{1}{C}f\leq g \leq C f$, we write $f\asymp g$. When the implied constant $C$ depends on some parameters, we will avoid this notation, except if the dependency is clear from the context. Also, whenever there exists $C$ such that $f\leq C g$, we will write $f\lesssim g$. Ancona inequalities were stated using the Green metric in the introduction, but notice that one can reformulate them as follows. If $\Gamma$ is Gromov hyperbolic, then for any $x,y,z$ aligned on a geodesic, we have $$\label{Ancona} G(x,z)\asymp G(x,y)G(y,z).$$ Our goal in this section is to prove Ancona-type inequalities for linear progress points on a word-geodesic. We first introduce some notation. We consider a trajectory for the random walk $\beta=(\beta_0,...,\beta_n)$ of length $n$. We write $$W(\beta)=\mu(\beta_0^{-1}\beta_1)...\mu(\beta_{n-1}^{-1}\beta_n)$$ and we call $W(\beta)$ the weight of the trajectory $\beta$.
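These weights can be made concrete on an example of ours (not from the paper): for the simple random walk on the free group $F_2$, the total weight of all trajectories from $g$ to $h$ recovers the Green function, $G(g,h)=\sum_n\mu^{*n}(g^{-1}h)$, and the distance to the identity performs a birth-death chain, so $G$ can be computed numerically. The exact values for this walk are $G(e,e)=3/2$ and $G(e,a)=F(e,a)G(e,e)=(1/3)(3/2)=1/2$, which serve as a check.

```python
# Numerical sketch (ours, not from the paper): for the simple random walk on
# F_2, the distance to e is a birth-death chain on {0, 1, 2, ...}: from 0 the
# walk moves to 1 with probability 1, from k >= 1 it moves to k-1 with
# probability 1/4 and to k+1 with probability 3/4.  Summing mu^{*n} over n
# approximates the Green function; closed forms: G(e,e) = 3/2, G(e,a) = 1/2.
N_STEPS, MAX_DIST = 400, 410

def green_estimates():
    p = [0.0] * (MAX_DIST + 2)
    p[0] = 1.0                      # the walk starts at e
    g_ee, g_ea = 0.0, 0.0
    for _ in range(N_STEPS + 1):
        g_ee += p[0]                # contribution of mu^{*n}(e)
        g_ea += p[1] / 4.0          # by symmetry, each neighbor of e is equally likely
        q = [0.0] * (MAX_DIST + 2)
        q[1] += p[0]                # from e, every step increases the distance
        for k in range(1, MAX_DIST + 1):
            q[k - 1] += p[k] / 4.0  # exactly one of four generators moves toward e
            q[k + 1] += p[k] * 3.0 / 4.0
        p = q
    return g_ee, g_ea

g_ee, g_ea = green_estimates()
assert abs(g_ee - 1.5) < 1e-6 and abs(g_ea - 0.5) < 1e-6
```

The terms $\mu^{*n}(e)$ decay like $(\sqrt{3}/2)^n$ here (the spectral radius of this walk), so the truncated sum converges quickly.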
Also, given a collection $\mathcal{T}$ of trajectories for the random walk, we write $$W(\mathcal{T})=\sum_{\beta \in \mathcal{T}}W(\beta).$$ In particular, letting $\mathcal{T}_n(g,h)$ be the collection of trajectories of length $n$ from $g$ to $h$, we can rewrite the Green function from $g$ to $h$ as $$G(g,h)=\sum_{n\geq 0}W(\mathcal{T}_n(g,h)).$$ Also, for a subset $A$ of $\Gamma$, we denote by $G(g,h;A)$ the contribution to $G(g,h)$ of trajectories all of whose points lie in $A$, with the possible exception of the endpoints. In other words, $$G(g,h;A)=\delta_{g,h}+\sum_{n\geq 1}\sum_{g_1,...,g_{n-1}\in A}\mu(g^{-1}g_1)\mu(g_1^{-1}g_2)...\mu(g_{n-1}^{-1}h),$$ where $\delta_{g,h}=0$ if $g\neq h$ and $\delta_{g,h}=1$ if $g=h$. \[epsilonAnconaAcylindricallyhyperbolic\] Let $\Gamma$ be a finitely generated group acting acylindrically on a hyperbolic space $X$. For every $T,S\geq 1$ and every $\epsilon>0$, there exists $C_1\geq 0$ such that the following holds. Let $g,h\in \Gamma$ and let $\alpha$ be a geodesic connecting $g$ to $h$. Let $p$ be a $(T,S)$-linear progress point on $\alpha$. Then, $$G(g,h;B_{C_1S}(p)^c)\leq \epsilon G(g,h).$$ We will need the following geometric lemma. [@MathieuSisto Proposition 10.4]\[prop10.4MathieuSisto\] Let $\Gamma$ be a finitely generated group with a fixed finite generating set. Assume that $\Gamma$ acts acylindrically on a hyperbolic space $X$. For any $L$, there exist a constant $C$ and a diverging function $\rho:\mathbb{R}_+\to \mathbb{R}_+$ such that the following holds. Let $\alpha_1$ and $\alpha_2$ be two $L$-Lipschitz paths in the Cayley graph of $\Gamma$. Write $g_1,h_1$, respectively $g_2,h_2$ the endpoints of $\alpha_1$, respectively $\alpha_2$. Then, $$\max \{l_{\Gamma}(\alpha_1),l_{\Gamma}(\alpha_2)\}\geq \bigg (d_X(g_1,h_1)-d_X(g_1,g_2)-d_X(h_1,h_2)-C\bigg )\rho(d_{\Gamma}(\alpha_1,\alpha_2)).$$ We can now prove Proposition \[epsilonAnconaAcylindricallyhyperbolic\], which is a refinement of [@MathieuSisto Lemma 12.3].
Looking carefully, one can see that our statement is actually proven there. We still rewrite the complete proof for convenience. We first introduce some notation. Recall that the support of $\mu$ is finite, so there exists $0<q<1$ such that for any trajectory $\beta$ of the random walk of length $n$ whose increments lie in the support of $\mu$, we have $W(\beta)\geq q^n$. Also recall that the support of $\mu$ generates $\Gamma$ as a semi-group. In particular, there exists $\Lambda$ such that a trajectory for the random walk of minimal length connecting points at distance $l$ has length at most $\Lambda l$. Since $\Gamma$ is non-amenable, the spectral radius of the random walk is strictly smaller than 1, or equivalently the radius of convergence of the Green function is bigger than 1, see [@Kesten]. Hence, there exists $\theta<1$ such that for any $g\in \Gamma$, for any $n$, $\mu^{*n}(g)\leq \theta^n$. In particular, since the support of $\mu$ is finite, each increment of the random walk has length at most $\lambda$ for some fixed $\lambda\geq 1$, so that $\mu^{*n}(g^{-1}h)=0$ whenever $n<d(g,h)/\lambda$ and, given $g,h\in \Gamma$, $$G(g,h)=\sum_{n\geq d(g,h)/\lambda}\mu^{*n}(g^{-1}h)\leq K\theta^{d(g,h)/\lambda}$$ for some $K$. More generally, if $g$ and $h$ are fixed and if $\beta$ is a collection of trajectories for the random walk from $g$ to $h$ of length at least $n_0$, we have $$\label{weightofalongpath} W(\beta)\leq K\theta^{n_0}.$$ Finally, since $\Gamma$ has at most exponential growth, there exists $v$ such that for any $R$ the cardinality of a ball of radius $R$ is at most $\mathrm{e}^{vR}$. We will need to use three constants $N$, $C_0$ and $C_1$. To make the proof easier to follow, we explain now how we choose these constants. We will choose $N$ large enough, depending only on the random walk (precisely, on $\theta$ and on $\Lambda$). We will then choose $C_0$, depending on $N$ and $T$. Finally we will choose $C_1$, depending on $N$, $C_0$, $S$, $T$ and $\epsilon$. Precisely, we choose $N$ such that $\theta^N/q^\Lambda \leq 1/2$.
We then choose $C_0$ such that $\rho(t)\geq 2NT$ for every $t\geq C_0$, where the function $\rho$ is given by Lemma \[prop10.4MathieuSisto\]. We also assume that $C_0\geq 2\Lambda$. Finally, we choose $C_1$ such that the following conditions hold. First, $C_1\geq 3C_0$ and then for every $t\geq 2C_1S-4C_0S$, we have (a) $t-2C_0S \geq \frac{t}{2}+T(2C_0S+C)$, where $C$ is the constant given by Lemma \[prop10.4MathieuSisto\], (b) $t+2C_0S\leq 2t$, (c) $q^{-4\Lambda C_0S}\mathrm{e}^{2v\Lambda C_0S}2^{-t}\leq \frac{\epsilon}{K}$, where $K$ is the constant in (\[weightofalongpath\]). Let $g,h,p$ be as in the statement of the proposition. Given any trajectory $\beta$ for the random walk from $g$ to $h$ that avoids a large ball around $p$, we first construct a long sub-trajectory $\gamma$ as follows. Let $\beta=(w_0,...,w_n)$, $w_0=g$ and $w_n=h$ and assume that $\beta$ avoids the ball of radius $C_1S$ centered around $p$. Consider the last point $g'$ on the trajectory $\beta$ which is within a distance at most $C_0S$ from a point $g''$ on the geodesic $\alpha$ and such that $g$, $g''$ and $p$ are aligned in this order. Similarly, consider the first point $h'$ on $\beta$ after $g'$ which is within a distance at most $C_0S$ from a point $h''$ on $\alpha$, where $p$, $h''$ and $h$ are aligned in this order. Let $\gamma$ be the sub-trajectory of $\beta$ starting at $g'$ and ending at $h'$. Notice that $g'\neq h'$. Indeed, a point $g_0$ on $\beta$ cannot be simultaneously $C_0S$-close to points $g_1$, respectively $g_2$, on the geodesic $\alpha$ such that $g,g_1,p$, respectively $p,g_2,h$ are aligned in this order. Otherwise, one would have $$d(g_0,p)\leq d(g_0,g_1) +d(g_1,p) \leq C_0S+ d(g_1,g_2)\leq 3C_0S<C_1S,$$ which is a contradiction since $\beta$ stays outside $B_{C_1S}(p)$. Also, $\gamma$ only intersects the $C_0S$-neighborhood of the geodesic $\alpha$ at $g'$ and $h'$.
Indeed, if one point $g_0$ of $\gamma$ was $C_0S$-close to $\alpha$, then there would be a point $g_1$ on $\alpha$ with $d(g_0,g_1)\leq C_0S$. If $g,g_1,p$ were aligned in this order, this would contradict the definition of $g'$. If not, this would contradict the definition of $h'$.

[Figure: the word geodesic $\alpha$ from $g$ to $h$, the balls of radii $C_0S$ and $C_1S$ around $p$, the points $g',h'$ with their projections $g'',h''$ on $\alpha$, and the sub-trajectory $\gamma$ avoiding $B_{C_1S}(p)$.]

Denote by $l(\gamma)$ the length of $\gamma$ in $\Gamma$.
Then, Lemma \[prop10.4MathieuSisto\] shows that $$\label{lengthgamma} \mathrm{max}(l(\gamma),d(g'',h''))\geq (d_X(g'',h'')-d_X(g',g'')-d_X(h',h'')-C)\rho(C_0S).$$ Note that $d(g',h')\geq d(g'',h'')-2C_0S$ and since $g'',p,h''$ are aligned in this order, $$d(g',h')\geq d(g'',p)+d(p,h'')-2C_0S\geq d(g',p)+d(h',p)-4C_0S$$ so finally $$d(g',h')\geq 2C_1S-4C_0S.$$ In particular, according to Condition (a) above, $$\frac{d(g',h')-2C_0S}{T} \geq \frac{d(g',h')}{2T}+2C_0S+C.$$ Since $p$ is a $(T,S)$-linear progress point, $$d_X(g'',h'')\geq \frac{d(g'',h'')}{T}\geq \frac{d(g',h')-2C_0S}{T}\geq \frac{d(g',h')}{2T}+2C_0S+C.$$ Since $d_X(\cdot,\cdot)\leq d(\cdot,\cdot)$ and $d(g',g'')\leq C_0S$, $d(h',h'')\leq C_0S$, (\[lengthgamma\]) yields $$\mathrm{max}(l(\gamma),d(g'',h''))\geq\frac{d(g',h')}{2T}\rho(C_0S).$$ According to the condition defining $C_0$, we get $$\mathrm{max}(l(\gamma),d(g'',h''))\geq Nd(g',h').$$ Finally, notice that $d(g'',h'')\leq d(g',h')+2C_0S$ so Condition (b) above shows that $d(g'',h'')\leq 2d(g',h')$ and so $d(g'',h'')<Nd(g',h')$. We thus get $$\label{lengthgamma2} l(\gamma)\geq Nd(g',h')\geq N(2C_1S-4C_0S).$$ We now construct a trajectory $\hat{\beta}$ for the random walk that will replace $\beta$ as follows. Consider a trajectory $\hat{\gamma}_1$ of minimal length from $g'$ to the geodesic $\alpha$ and denote by $\hat{g}'$ the endpoint of this trajectory on $\alpha$. Similarly, consider a trajectory $\hat{\gamma}_2$ of minimal length from $\alpha$ to $h'$ with initial point $\hat{h}'$. Note that one can find a trajectory from $g'$ to $g''$ of length at most $\Lambda C_0S$ and similarly with $h'$ and $h''$, so that in particular $d(g',\hat{g}')\leq \Lambda C_0S$ and $d(h',\hat{h}')\leq \Lambda C_0S$. Also, $\hat{g}'$, $p$ and $\hat{h}'$ are aligned in this order.
Now, consider trajectories of minimal length connecting successive points on the sub-geodesic of $\alpha$ from $\hat{g}'$ to $\hat{h}'$ and denote by $\hat{\gamma}_3$ the concatenation of these trajectories. Denote by $\hat{\gamma}$ the concatenation of $\hat{\gamma}_1$, $\hat{\gamma}_3$ and $\hat{\gamma}_2$. Then, $\hat{\gamma}$ is a trajectory for the random walk starting at $g'$ that joins the geodesic $\alpha$, roughly follows it and then goes to $h'$. Notice that $\hat{\gamma}$ stays in the $\Lambda$-neighborhood of $\alpha$. Finally, let $\hat{\beta}$ be the concatenation of the sub-trajectory $\hat{\beta}_1$ from $g$ to $g'$, the trajectory $\hat{\gamma}$ and the sub-trajectory $\hat{\beta}_2$ from $h'$ to $h$.

[Figure: the modified trajectory $\hat{\beta}$, consisting of $\hat{\beta}_1$ from $g$ to $g'$, the trajectory $\hat{\gamma}$ following the geodesic $\alpha$ through $\hat{g}'$ and $\hat{h}'$, and $\hat{\beta}_2$ from $h'$ to $h$.]

This construction defines a map $\Psi:\beta\mapsto \hat{\beta}$.
To conclude, we just need to see that the total weight of the preimage of a trajectory $\hat{\beta}$ under this map is small, compared to the weight of $\hat{\beta}$. Precisely, let $\hat{\beta}$ be a trajectory for the random walk constructed as above and let $\beta$ be such that $\Psi(\beta)=\hat{\beta}$. Then, $\beta$ is the concatenation of the sub-trajectory $\hat{\beta}_1$ of $\hat{\beta}$ from $g$ to some $g'$, a trajectory that avoids $B_{C_1S}(p)$ from $g'$ to some $h'$, and the sub-trajectory $\hat{\beta}_2$ of $\hat{\beta}$ from $h'$ to $h$. Moreover, $\hat{g}'$ and $\hat{h}'$ are completely determined by $\hat{\beta}$. Indeed, since $C_0S\geq 2\Lambda$, these points coincide with the last points on $\hat{\beta}$ around $p$ that intersect $\alpha$ before the trajectory leaves the $2\Lambda$-neighborhood of $\alpha$. Also, recall that $d(g',\hat{g}')\leq \Lambda C_0S$ and $d(h',\hat{h}')\leq \Lambda C_0S$. Finally, once $g'$ and $h'$ are fixed, $\hat{\beta}_1$ and $\hat{\beta}_2$ are completely determined.
Letting $\mathcal{T}_{\hat{\beta}}$ be the collection of preimages of $\hat{\beta}$, this proves that $$W(\mathcal{T}_{\hat{\beta}})\leq \sum_{g'\in B_{\Lambda C_0S}(\hat{g}')}\sum_{h'\in B_{\Lambda C_0S}(\hat{h}')}W(\hat{\beta}_1)G(g',h';B_{C_1S}(p)^c)W(\hat{\beta}_2).$$ According to (\[lengthgamma2\]), the length of the sub-trajectory $\gamma$ of $\beta$ from $g'$ to $h'$ is at least $Nd(g',h')$, so that (\[weightofalongpath\]) shows that $$G(g',h';B_{C_1S}(p)^c)\leq K\theta^{Nd(g',h')}.$$ On the other hand, the sub-trajectory $\hat{\gamma}$ of $\hat{\beta}$ from $g'$ to $h'$ has length at most $\Lambda d(g',h')+4\Lambda C_0S$, so that $$W(\hat{\beta})\geq W(\hat{\beta}_1)W(\hat{\beta}_2)q^{\Lambda d(g',h')+4\Lambda C_0S}$$ and so $$W(\mathcal{T}_{\hat{\beta}})\leq\sum_{g'\in B_{\Lambda C_0S}(\hat{g}')}\sum_{h'\in B_{\Lambda C_0S}(\hat{h}')} W(\hat{\beta})K\theta^{Nd(g',h')} q^{-\Lambda d(g',h')-4\Lambda C_0S}.$$ According to the condition defining $N$, $$\theta^{Nd(g',h')} q^{-\Lambda d(g',h')-4\Lambda C_0S}\leq 2^{-d(g',h')}q^{-4\Lambda C_0S}$$ and since $d(g',h')\geq 2C_1S-4C_0S$, Condition (c) above shows that $$2^{-d(g',h')}\leq q^{4\Lambda C_0S}\mathrm{e}^{-2v\Lambda C_0S}\frac{\epsilon}{K}.$$ Since the balls $B_{\Lambda C_0S}(\hat{g}')$ and $B_{\Lambda C_0S}(\hat{h}')$ have cardinality at most $\mathrm{e}^{v\Lambda C_0S}$, we get $$W(\mathcal{T}_{\hat{\beta}})\leq W(\hat{\beta})\mathrm{e}^{2v\Lambda C_0S}Kq^{-4\Lambda C_0S}q^{4\Lambda C_0S}\mathrm{e}^{-2v\Lambda C_0S}\frac{\epsilon}{K}=\epsilon W(\hat{\beta}).$$ This concludes the proof. \[corollaryAnconalinearprogress\] Let $\Gamma$ be a finitely generated group acting acylindrically on a hyperbolic space $X$. For every $T,S\geq 1$, there exists $C\geq 1$ such that the following holds. Let $g,h\in \Gamma$ and let $\alpha$ be a geodesic connecting $g$ to $h$. Let $p$ be a $(T,S)$-linear progress point on $\alpha$.
Then, $$\frac{1}{C}G(g,p)G(p,h)\leq G(g,h)\leq C G(g,p)G(p,h).$$ Let $T,S\geq 1$ and let $p$ be a $(T,S)$-linear progress point on a geodesic $\alpha$ joining $g$ to $h$. Then, Proposition \[epsilonAnconaAcylindricallyhyperbolic\] shows there exists $R\geq 0$ such that $$G(g,h;B_R(p)^c)\leq \frac{1}{2} G(g,h).$$ Decomposing a trajectory for the random walk from $g$ to $h$ according to its first visit to $B_R(p)$, we get $$G(g,h)=G(g,h;B_R(p)^c)+\sum_{q\in B_R(p)}G(g,q;B_R(p)^c)G(q,h)$$ so that $$G(g,h)\leq 2 \sum_{q\in B_R(p)}G(g,q)G(q,h).$$ Since $q$ is within $R$ of $p$, there exists a constant $C$ depending only on $R$ such that $G(g,q)\leq C G(g,p)$ and $G(q,h)\leq C G(p,h)$. We thus get $$G(g,h)\leq 2 C \mathrm{Card}(B_R(e))G(g,p)G(p,h).$$ This proves one of the two inequalities. According to (\[triangleGreen’\]), the other inequality is always satisfied, whether $p$ is a linear progress point on $\alpha$ or not. This concludes the proof. Morse-Ancona inequalities in HHGs --------------------------------- We will use in this section the results of Abbott, Behrstock and Durham [@AbbottBehrstockDurham]. For any non-elementary hierarchically hyperbolic group $\Gamma$, they construct a special hierarchical structure, which has nice properties. In particular, the action of $\Gamma$ on the underlying space $C\mathbf{S}$ is a largest acylindrical action, where $\mathbf{S}$ is the maximal domain in $\mathfrak{S}$. When we consider a hierarchically hyperbolic group, we will always implicitly consider this hierarchical structure and we will always implicitly consider this acylindrical action on $C\mathbf{S}$. \[propMorselinearprogresshierarchicallyhyperbolic\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group. For every Morse gauge $N$, there exist $S,T$ such that the following holds. Let $\alpha$ be an $N$-Morse geodesic. Then any point on $\alpha$ is a $(T,S)$-linear progress point. To prove this proposition, we will use the following result.
Recall the following definition from [@AbbottBehrstockDurham]. Let $(X,\mathfrak{S})$ be a hierarchically hyperbolic space with maximal domain $\mathbf{S}\in \mathfrak{S}$. Let $Y\subset X$ and $D>0$. We say that $Y$ has $D$-bounded projections if for every $U\in \mathfrak{S}\setminus \{\mathbf{S}\}$, we have $\mathrm{diam}_U(Y)<D$. \[ABDTheoremE\][@AbbottBehrstockDurham Theorem E] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group. A geodesic $\alpha$ in $\Gamma$ is $N$-Morse if and only if it has $D$-bounded projections, where $N$ and $D$ determine each other. Actually, Abbott, Behrstock and Durham prove in [@AbbottBehrstockDurham Theorem E] that for every $D$ there exists an $N$ such that the conclusion of this proposition holds, so they only prove there that $D$ determines $N$. However, the fact that $N$ determines $D$ is contained in the discussion in [@AbbottBehrstockDurham Section 6]. Roughly speaking, it is a consequence of the fact that contracting implies stability. Also, projections of Morse geodesics on product regions are well defined, according to [@AbbottBehrstockDurham Lemma 6.5], and the same result holds for infinite Morse geodesics. We can now prove Proposition \[propMorselinearprogresshierarchicallyhyperbolic\]. Let $N$ be a Morse gauge and let $\alpha$ be an $N$-Morse geodesic. Then, $\alpha$ has $D$-bounded projections for some $D$ that only depends on $N$. Let $p$ be any point on $\alpha$ and let $q_1,q_2$ be two points on $\alpha$ such that $q_1,p,q_2$ are aligned in this order. Let $s\geq D$. Recall that for two real numbers $t,s$, $\{\{t\}\}_s=0$ if $t\leq s$ and $\{\{t\}\}_s=t$ otherwise.
The distance formula (see [@BehrstockHagenSisto2]) shows that there exist $K,C$ only depending on $D$ (thus only depending on $N$) such that $$d(q_1,q_2)\leq K \sum_{U\in \mathfrak{S}}\left \{\left \{d_U(q_1,q_2)\right \}\right \}_s +C.$$ Since $s\geq D$ and $\alpha$ has $D$-bounded projections, $$d(q_1,q_2)\leq K d_{\mathbf{S}}(q_1,q_2) +C.$$ Choose $S=C$ and $T=2K$. Assuming that $d(q_i,p)\geq S$, we have $C\leq \frac{1}{2}d(q_1,q_2)$, so that $$d(q_1,q_2)\leq Td_{\mathbf{S}}(q_1,q_2),$$ which concludes the proof. We now prove the same result for relatively hyperbolic groups. We first prove the following. Recall that a non-elementary relatively hyperbolic group acts acylindrically on the graph obtained by coning-off the parabolic subgroups, see [@Osinacylindrical Proposition 5.2]. \[relativelyhyperbolicMorseboundedprojections\] Let $\Gamma$ be a relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. A geodesic $\alpha$ in $\Gamma$ is $N$-Morse if and only if it has $D$-bounded projections on parabolic subgroups, where $D$ and $N$ determine each other. This lemma is a consequence of results in [@Tran Section 5], although it is not stated in this form there, so we give the proof for convenience. Since the parabolic subgroups have empty Morse boundary, for fixed $N$, there cannot be arbitrarily long $N$-Morse geodesics starting at the same point. Otherwise, one could extract a sub-sequence of these $N$-Morse geodesics, using the Arzelà-Ascoli theorem to construct a geodesic ray, as in the proof of [@Cordes Corollary 1.4], and according to [@Cordes Lemma 2.10], this resulting geodesic ray would be $N$-Morse. Since parabolic subgroups quasi-isometrically embed in $\Gamma$, [@Cordes Lemma 2.9] shows that an $N$-Morse geodesic in the group $\Gamma$ stays within a bounded distance of an $N'$-Morse geodesic in the parabolic group.
Thus, there cannot be $N$-Morse geodesics in $\Gamma$ starting at a fixed base-point and travelling arbitrarily far in parabolic subgroups. Conversely, let $\alpha$ be a geodesic with $D$-bounded projections on parabolic subgroups. Let $p_1,p_2$ be two points on $\alpha$ and let $\beta$ be a $(\lambda,c)$-quasi-geodesic from $p_1$ to $p_2$. We want to prove that any point of $\beta$ is within $N$ of a point of $\alpha$, where $N$ only depends on $\lambda$ and $c$. Recall that the $M$-saturation of $\alpha$ is the union of $\alpha$ and all left cosets of parabolic subgroups whose $M$-neighborhood intersects $\alpha$. First, according to [@DrutuSapir Theorem 1.12 (1)], any point $x$ on $\beta$ is within $M_1$ of a point in the $M_0$-saturation of $\alpha$, where $M_0$ and $M_1$ only depend on $\lambda$ and $c$ (actually, $M_0$ does not even depend on those parameters, but only on the group). We just need to deal with the case where $x$ is within $M_1$ of a point $y$ in some left coset $gP$, where $P$ is a parabolic subgroup and such that $\alpha$ enters the $M_0$-neighborhood of $gP$. Let $M_2=\max (M_0,M_1)$ so that both $\alpha$ and $\beta$ enter the $M_2$-neighborhood of $gP$ that we denote by $\mathcal{N}_{M_2}(gP)$. Let $\alpha_1$ and $\alpha_2$, respectively $\beta_1$ and $\beta_2$, be the first and last points in $\mathcal{N}_{M_2}(gP)$ for $\alpha$, respectively $\beta$. Then, according to [@Sistoprojections Lemma 1.13 (a)], $\alpha_1$ and $\beta_1$ are within a bounded distance, say $\Lambda$, of the projection $q_1$ of $p_1$ on $gP$. Similarly, $\alpha_2$ and $\beta_2$ are within $\Lambda$ of the projection $q_2$ of $p_2$ on $gP$. Again, $\Lambda$ only depends on $\lambda$ and $c$. Since $\alpha$ has $D$-bounded projections, $d(q_1,q_2)\leq D$ and so the distance between $\beta_1$ and $\beta_2$ is bounded. In particular, since $\beta$ is a $(\lambda,c)$-quasi-geodesic, the distance between $x$ and $\beta_1$ is bounded.
Finally, $d(\beta_1,\alpha_1)\leq 2\Lambda$, so the distance between $x$ and $\alpha_1$ is bounded and the bound only depends on $\lambda$ and $c$. This concludes the proof. We deduce the following from Lemma \[relativelyhyperbolicMorseboundedprojections\] and from the distance formula given by [@Sistoprojections Theorem 0.1], exactly as we deduced Proposition \[propMorselinearprogresshierarchicallyhyperbolic\] from Proposition \[ABDTheoremE\] and the distance formula in hierarchically hyperbolic groups. \[propMorselinearprogressrelativelyhyperbolic\] Let $\Gamma$ be a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundaries. For every Morse gauge $N$, there exist $S,T$ such that the following holds. Let $\alpha$ be an $N$-Morse geodesic. Then any point on $\alpha$ is a $(T,S)$-linear progress point. Let us briefly explain why we needed to work with hierarchically hyperbolic and relatively hyperbolic groups, and not with arbitrary acylindrically hyperbolic groups, to get these results. This will also explain why we needed to use the modified hierarchical structure from [@AbbottBehrstockDurham]. Consider the free group $\Gamma$ with two generators $a$ and $b$. Then, $\Gamma$ is hyperbolic so that every geodesic is $N$-Morse for some fixed Morse gauge $N$. Also, $\Gamma$ is hyperbolic relative to the subgroup generated by $a$ so it acts acylindrically on the graph obtained by coning-off this particular subgroup. Choose now a geodesic $\alpha$ in $\Gamma$ travelling arbitrarily far in $\langle a \rangle$ so that $e$ is not a $(T,S)$-linear progress point on $\alpha$, whereas this geodesic is $N$-Morse. One can make $e$ a $(T,S)$-linear progress point by considering the acylindrical action on the Cayley graph of $\Gamma$, so the apparent contradiction with our result comes from the fact that the first acylindrical action was not a largest one.
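The failure of linear progress in this example can be checked by a direct computation (a sketch of ours; the helper below is a crude upper bound on the coned-off distance, not a formula from the paper). In the coned-off graph, any power $a^k$ can be crossed in at most two edges through the cone point of the relevant coset of $\langle a\rangle$, while each $b$-letter still costs one edge.

```python
# Sketch (ours): cone off the cosets g<a> in F_2 = <a, b>.  A crude upper
# bound on the coned-off distance d_X(e, w) for a positive reduced word w:
# each maximal a-run costs at most min(run, 2) (via the cone point), and
# each b-letter costs 1.
def coned_off_upper_bound(word):
    """Upper bound for the coned-off distance from e to w, for w a word
    over {'a', 'b'} (an assumption of this sketch: positive letters only)."""
    total, run = 0, 0
    for c in word:
        if c == 'a':
            run += 1
        else:
            total += min(run, 2) + 1
            run = 0
    return total + min(run, 2)

# Along the geodesic [e, a^n], the word distance grows linearly while the
# coned-off distance stays bounded by 2: for any fixed T, the inequality
# d(q_1, q_2) <= T d_X(q_1, q_2) fails for large n, so e is not a
# (T, S)-linear progress point for this action.
for n in [1, 10, 100]:
    assert coned_off_upper_bound('a' * n) <= 2

# By contrast, for (ab)^n the bound grows linearly with the word length,
# consistent with linear progress along such geodesics.
assert coned_off_upper_bound('ab' * 50) == 100
```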
We deduce the following from Propositions \[propMorselinearprogresshierarchicallyhyperbolic\] and \[propMorselinearprogressrelativelyhyperbolic\] and from Proposition \[epsilonAnconaAcylindricallyhyperbolic\]. \[epsilonAnconaMorsegeodesic\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Let $\mu$ be a probability measure on $\Gamma$ whose finite support generates $\Gamma$ as a semi-group. Let $N$ be a Morse gauge. Then, for any $\epsilon>0$, there exists $R>0$ such that the following holds. Let $\alpha$ be an $N$-Morse geodesic and let $x,y,z$ be three points in this order on $\alpha$. Then, $$G(x,z;B_R(y)^c)\leq \epsilon G(x,z).$$ Finally, Theorem \[maintheoremMorseAncona\] is a consequence of Proposition \[epsilonAnconaMorsegeodesic\], in the same way that Corollary \[corollaryAnconalinearprogress\] was a consequence of Proposition \[epsilonAnconaAcylindricallyhyperbolic\]. A map from the Morse boundary to the Martin boundary {#Sectionconstructionmap} ==================================================== We consider a finitely generated group $\Gamma$ and we assume that $\Gamma$ is either non-elementary hierarchically hyperbolic or non-elementary relatively hyperbolic with parabolic subgroups having empty Morse boundaries. We also consider a probability measure $\mu$ whose finite support generates $\Gamma$ as a semi-group. We now construct a map from the Morse boundary to the Martin boundary. We follow the strategy of [@KaimanovichErgodicity] and use deviation inequalities to prove that whenever $g_n$ is a sequence on a Morse geodesic going to infinity, then $g_n$ converges to some minimal point $\xi$ in the Martin boundary.
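As a toy illustration of the objects involved (an aside of ours, in the simplest transient setting rather than the groups above), consider the nearest-neighbour random walk on $\mathbb{Z}$ stepping right with probability $p=0.55$ and left with probability $q=0.45$. The Martin kernel $K(\cdot,g_n)=G(\cdot,g_n)/G(e,g_n)$ along a sequence $g_n\to-\infty$ converges to the minimal harmonic function $x\mapsto(q/p)^x$, which one can check numerically on a truncated chain (the truncation and the window size $L$ are our own choices):

```python
import numpy as np

p, q = 0.55, 0.45            # right/left probabilities; p > q makes the walk transient
L = 60                       # truncate Z to [-L, L]; mass leaving the window is killed
states = list(range(-L, L + 1))
idx = {s: k for k, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for s in states:
    if s + 1 <= L:
        P[idx[s], idx[s + 1]] = p
    if s - 1 >= -L:
        P[idx[s], idx[s - 1]] = q
G = np.linalg.inv(np.eye(len(states)) - P)   # Green function G = sum_n P^n

target = -40                 # a point g_n far out along the geodesic toward -infinity
for x in range(-3, 4):
    K = G[idx[x], idx[target]] / G[idx[0], idx[target]]  # Martin kernel K(x, g_n)
    # converges to the minimal harmonic function (q/p)^x as target -> -infinity
    assert abs(K - (q / p) ** x) < 1e-2 * (q / p) ** x
```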
Construction of the map ----------------------- In the following, we fix $x$ in the Morse boundary $\partial_{\mathcal{M}}\Gamma$ and we fix two Morse geodesic rays $\alpha$ and $\alpha'$, starting at $e$ and such that $[\alpha]=[\alpha']=x$. \[lemma1MorseinsideMartin\] Let $g_n$ be a sequence on $\alpha$ that converges to $x$ and that converges to some point $\xi$ in the Martin boundary $\partial_{\mu}\Gamma$. Then $\xi$ is minimal. Up to taking a sub-sequence, we can assume for simplicity that $g_n$ is an increasing sequence, meaning that $d(e,g_n)> d(e,g_m)$ if $n> m$. We denote by $K_{\xi}$ the limit of $K(\cdot,g_n)$. Let $H_{\xi}$ be the set of positive harmonic functions $h$ such that $\sup_{g\in \Gamma}\frac{h(g)}{K_{\xi}(g)}=1$. The only thing to prove is that $H_{\xi}=\{K_{\xi}\}$. Recall that whenever $f$ and $g$ are two functions such that there exists $C$ such that $\frac{1}{C}f\leq g \leq C f$, we write $f\asymp g$. Since the $g_n$ all lie on a Morse geodesic, Theorem \[maintheoremMorseAncona\] shows that if $m<n$, then $G(e,g_n)\asymp G(e,g_m)G(g_m,g_n)$, so that $K(g_m,g_n)\asymp \frac{1}{G(e,g_m)}$ and thus, for all $g\in \Gamma$, $G(g,g_m)K(g_m,g_n)\asymp K(g,g_m)$. Fixing $m$ and letting $n$ tend to infinity, we thus have $$\label{equationminimalpoint1} G(g,g_m)K_{\xi}(g_m)\asymp K(g,g_m).$$ Letting $g,g'\in \Gamma$, recall that $F(g,g')$ denotes the probability of ever reaching $g'$, starting the random walk at $g$. Also recall that $F(g,g')G(e,e)=G(g,g')$ (see also [@Woess Lemma 1.13 (b)]). Then, (\[triangleGreen\]) shows that $G(g,g'')\geq F(g,g')G(g',g'')$ for any $g''$. Letting $g''$ tend to infinity, we see that for any $\zeta$ in the Martin boundary, $K_{\zeta}(g)\geq F(g,g')K_{\zeta}(g')$. 
Thus, the Martin representation Theorem shows that any positive harmonic function $h$ satisfies $$h(g)\geq F(g,g')h(g').$$ Combining this inequality with (\[equationminimalpoint1\]), we obtain that there exists $C\geq 1$ such that if $h$ is a positive harmonic function, then $$\label{equationminimalpoint2} \forall m, \forall g\in \Gamma, h(g)\geq \frac{1}{C} K(g,g_m)\frac{h(g_m)}{K_{\xi}(g_m)}.$$ We now argue by contradiction to prove that $H_{\xi}=\{K_{\xi}\}$ and consider $h\in H_{\xi}$ such that $h\neq K_{\xi}$. Then, $h\leq K_{\xi}$ so that $h'=K_{\xi}-h$ also is harmonic and non-negative. Moreover, by harmonicity, if it vanishes at some point, it vanishes everywhere, so $h'$ is in fact positive (see [@Woess (1.15)]). We can thus apply (\[equationminimalpoint2\]) to $h'$ to get $$\label{equationminimalpoint3} \forall g\in \Gamma, h'(g)\geq \frac{1}{C}K_{\xi}(g)\underset{m\to \infty}{\limsup}\frac{h'(g_m)}{K_{\xi}(g_m)}.$$ Since $h\in H_{\xi}$, by definition $\mathrm{inf}_{g\in \Gamma}\frac{h'(g)}{K_{\xi}(g)}=0$, so that (\[equationminimalpoint3\]) yields $$\underset{m\to \infty}{\lim}\frac{h'(g_m)}{K_{\xi}(g_m)}=0,$$ hence $$\underset{m\to \infty}{\lim}\frac{h(g_m)}{K_{\xi}(g_m)}=1.$$ We again use (\[equationminimalpoint2\]), but this time applied to $h$ itself and we let $m$ tend to infinity to obtain $$\label{equationminimalpoint4} \forall g\in \Gamma, h(g)\geq \frac{1}{C}K_{\xi}(g).$$ Note that we necessarily have $C\geq 1$. If $C=1$, then $h(g)\geq K_{\xi}(g)$ and since $h\in H_{\xi}$, we in fact have $h=K_{\xi}$, which is a contradiction. Otherwise $\frac{1}{C}<1$ and we define $C_n=\frac{1}{C}\sum_{k=0}^n(1-\frac{1}{C})^k=1-(1-\frac{1}{C})^{n+1}$. We prove by induction that for all $n$, $h\geq C_nK_{\xi}$. The case $n=0$ is (\[equationminimalpoint4\]). 
If the inequality is satisfied at $n$, the function $h_n=\frac{1}{1-C_n}(h-C_nK_{\xi})$ also is in $H_{\xi}$ and we can apply (\[equationminimalpoint4\]) to get that $$h_n\geq \frac{1}{C}K_{\xi},$$ so that $$h\geq C_nK_{\xi}+\frac{1}{C}(1-C_n)K_{\xi}=C_{n+1}K_{\xi}.$$ Letting $n$ tend to infinity, we thus have $h\geq K_{\xi}$, which is again a contradiction. In other words, the closure of $\alpha$ in the Martin boundary is contained in the minimal Martin boundary. We need this a priori minimality to prove the following lemma. \[lemma2MorseinsideMartin\] Let $g_n$ be a sequence on $\alpha$, converging to $x$ in the Morse boundary and converging to $\xi$ in the Martin boundary. Let $g_n'$ be a sequence on $\alpha'$, also converging to $x$ in the Morse boundary and converging to $\xi'$ in the Martin boundary. Then, $\xi=\xi'$. We fix $m$. Since $\alpha$ and $\alpha'$ are asymptotic geodesics, there exists a point $\tilde{g}_m$ on $\alpha'$ within a uniformly bounded distance from $g_m$. In particular, for any $g\in \Gamma$ $G(g,\tilde{g}_m)\asymp G(g,g_m)$ and $G(\tilde{g}_m,g)\asymp G(g_m,g)$. Since the Morse gauges of $\alpha$ and $\alpha'$ are fixed, we can use Theorem \[maintheoremMorseAncona\] and show that if $n$ is large enough, then $$G(e,g_n')\asymp G(e,\tilde{g}_m)G(\tilde{g}_m,g_n')\asymp G(e,g_m)G(g_m,g_n')$$ and $$G(e,g_n)\asymp G(e,g_m)G(g_m,g_n).$$ In particular, $K(g_m,g_n')\asymp \frac{1}{G(e,g_m)}\asymp K(g_m,g_n)$. Letting $n$ tend to infinity, we thus have $$\label{equationsamelimitMorsetoMartin} \forall m,K_{\xi}(g_m)\asymp K_{\xi'}(g_m).$$ We now argue by contradiction to prove that $\xi=\xi'$. First, Lemma \[lemma1MorseinsideMartin\] shows that both $K_{\xi}$ and $K_{\xi'}$ are minimal harmonic functions. Thus, if we assume that $\xi\neq \xi'$, [@Anconapotentiel Lemma 1.7] shows that $\frac{K_{\xi'}(g_m)}{K_{\xi}(g_m)}$ converges to 0, when $m$ tends to infinity. 
We thus get a contradiction with (\[equationsamelimitMorsetoMartin\]), so that $\xi=\xi'$. We can define a map $\Phi$ from the Morse boundary to the Martin boundary, sending a point $x$ to the unique point $\xi$ in the closure of $\alpha$ in the Martin boundary, where $\alpha$ is any Morse geodesic such that $[\alpha]=x$. Uniqueness is given by Lemma \[lemma2MorseinsideMartin\], which also shows that $\xi$ does not depend on the choice of $\alpha$. Lemma \[lemma1MorseinsideMartin\] shows that $\xi$ is minimal. We also denote by $\Phi_N$ the restriction of $\Phi$ to the $N$-Morse boundary. Injectivity ----------- We prove the following here. The map $\Phi$ is one-to-one. Let $[\alpha]$ and $[\alpha']$ be two distinct points in the Morse boundary. Up to taking the maximum of the two Morse gauges, we can assume that $\alpha$ and $\alpha'$ are two $N$-Morse geodesic rays, starting at $e$, for some fixed $N$. Let $\xi=\Phi([\alpha])$ and $\xi'=\Phi([\alpha'])$ and let $g_n$ be a sequence on $\alpha$ that converges to $\xi\in \partial_{\mu}\Gamma$ and $g'_n$ a sequence on $\alpha'$ that converges to $\xi'\in \partial_{\mu}\Gamma$. We will prove that $K_{\xi}(g_m)$ tends to infinity, whereas $K_{\xi'}(g_m)$ converges to 0, when $m$ tends to infinity. Let $m\leq n$. Theorem \[maintheoremMorseAncona\] shows that $G(e,g_n)\leq CG(e,g_m)G(g_m,g_n)$ for some fixed $C>0$. Thus, $K(g_m,g_n)\geq \frac{1}{C}\frac{1}{G(e,g_m)}$. Letting $n$ tend to infinity, we see that $K_{\xi}(g_m)\geq \frac{1}{C}\frac{1}{G(e,g_m)}$. Since $g_m$ tends to infinity, $G(e,g_m)$ converges to 0. This proves that $K_{\xi}(g_m)$ tends to infinity. Now, let $\beta_{m,n}$ be a geodesic from $g_m$ to $g'_n$. According to [@Cordes Lemma 2.3], $\beta_{m,n}$ is $N'$-Morse for some $N'$ that only depends on $N$. Since Morse triangles are thin (see [@CharneyCordesMurray Lemma 2.3]), $\beta_{m,n}$ passes within a uniformly bounded distance of $e$.
Using again Theorem \[maintheoremMorseAncona\], there exists $C'$ such that $G(g_m,g'_n)\leq C'G(g_m,e)G(e,g'_n)$. Thus, $K(g_m,g'_n)\leq C'G(g_m,e)$ and so $K_{\xi'}(g_m)\leq C'G(g_m,e)$. Again, $G(g_m,e)$ converges to 0, hence so does $K_{\xi'}(g_m)$. Consequently, we can find $m$ such that $K_{\xi}(g_m)\neq K_{\xi'}(g_m)$, so that $\xi\neq \xi'$. Continuity ---------- Recall that we can endow the $N$-Morse boundary with a topology so that convergence is defined as follows. A sequence $x_n\in \partial_{M,e}^N\Gamma$ converges to $x\in \partial_{M,e}^N\Gamma$ if there exist $N$-Morse geodesic rays $\alpha_n$ with $\alpha_n(0)=e$ and $[\alpha_n]=x_n$, such that every sub-sequence of $\alpha_n$ contains a sub-sequence that converges uniformly on compact sets to a geodesic ray $\alpha$ with $[\alpha]=x$, see [@Cordes] for more details. \[prop:topological embedding\] Let $N$ be a Morse gauge. The map $\Phi_N:\partial^N_{M,e}\Gamma\rightarrow \partial_{\mu}\Gamma$ is a topological embedding. Let $x_n$ converge to $x$ in $\partial_{M,e}^N\Gamma$ and $\xi_n=\Phi_N(x_n)$, $\xi=\Phi_N(x)$. Let $\alpha_n$ be Morse geodesics starting at $e$ as above, that is, from every subsequence of $\alpha_n$, one can extract a sub-sequence that converges to some $N$-Morse geodesic $\alpha$, uniformly on compact sets, and where $[\alpha_n]=x_n$, $[\alpha]=x$. We want to prove that $\xi_n$ converges to $\xi$. Since the Martin boundary is compact, we only have to prove that $\xi$ is the only limit point of $\xi_n$. We first take a sub-sequence $\alpha_{\sigma_1(n)}$ such that $\xi_{\sigma_1(n)}$ converges to some $\xi'$ and we now prove that $\xi'=\xi$. We already know that $\xi$ is minimal, so we only have to prove that $K_{\xi'}\leq K_{\xi}$. We then take a sub-sub-sequence $\alpha_{\sigma_2(n)}$ and a geodesic $\alpha$, respectively representing $x_{\sigma_2(n)}$ and $x$, such that $\alpha_{\sigma_2(n)}$ converges to $\alpha$, uniformly on compact sets.
For simplicity, we write $\alpha_n=\alpha_{\sigma_2(n)}$. Then, there is a sequence of points $g_n$ on $\alpha$ going to infinity such that the geodesic $\alpha_n$ fellow travels with $\alpha$ up to $g_n$. Let $g \in \Gamma$. We want to prove that $K_{\xi'}(g)\leq K_{\xi}(g)$. According to [@Cordes Lemma 2.8], one can find an $N'$-Morse geodesic $\beta$, starting at $g$, which is asymptotic to $\alpha$, with $N'$ that only depends on $N$ and $g$. Similarly, one can find $N'$-Morse geodesics $\beta_n$, starting at $g$ and asymptotic to $\alpha_n$. Recall that Morse triangles are thin, see [@CharneyCordesMurray Lemma 2.3]. Thus, $\beta$ eventually lies a bounded distance away from $\alpha$. More precisely, the geodesic $\beta$ roughly travels from $g$ to its projection $\hat{g}$ on $\alpha$ and then fellow travels with $\alpha$. Similarly, for large enough $n$, the geodesics $\beta_n$ roughly travel from $g$ to $\hat{g}$, then fellow travel with $\alpha$ up to $g_n$ and then fellow travel with $\alpha_n$. To sum up, for every large enough $n$, we can find $g_n$ on $\alpha$ within a uniformly bounded distance of a point on $\beta$, a point on $\beta_n$ and a point on $\alpha_n$. Let us call $g_n'$ the corresponding point on $\alpha_n$. Also, notice that $g_n$ goes to infinity, see the picture at the end of the proof. We fix $\epsilon>0$. Since $N'$ is fixed (although it depends on $g$), Proposition \[epsilonAnconaMorsegeodesic\] shows that there exists $R$ such that for every point $\tilde{g}_n$ on $\alpha_n$ such that $e,g_n',\tilde{g}_n$ lie in this order on $\alpha_n$, $$\label{continuity1} G(g,\tilde{g}_n;B_R(g_n)^c)\leq \epsilon G(g,\tilde{g}_n).$$ Decomposing a path from $g$ to $\tilde{g}_n$ according to its last visit to $B_R(g_n)$, we have $$\label{continuity2} \begin{split} G(g,\tilde{g}_n)&=G(g,\tilde{g}_n;B_R(g_n)^c)\\ &+\sum_{u\in B_R(g_n)}G(g,u)G(u,\tilde{g}_n;B_R(g_n)^c).
\end{split}$$ Since $g_n$ goes to infinity, it converges to $\xi$ in the Martin boundary. Let us prove that for large enough $n$, for any $u\in B_R(g_n)$, $K(g,u)\leq K_{\xi}(g)+\epsilon$. Assume by contradiction that this is not the case, so that in particular, there is a sequence $u_n\in B_R(g_n)$ such that $K(g,u_n)$ does not converge to $K_{\xi}(g)$. Up to taking a sub-sequence, we can assume that $u_n$ converges to some point $\xi''$ in the Martin boundary. Since $d(u_n,g_n)$ is uniformly bounded, for any $g'$, $K(g',u_n)\leq CK(g',g_n)$, for some constant $C$ and similarly, $K(g',g_n)\leq CK(g',u_n)$. This proves that $K_{\xi''}(g')\asymp K_{\xi}(g')$ and since $\xi$ is minimal, we have $\xi''=\xi$. In particular, $K(g,u_n)$ converges to $K_{\xi}(g)$, so we get a contradiction. Thus, for large enough $n$, we have, for any $u\in B_R(g_n)$, $$G(g,u)\leq G(e,u)(K_{\xi}(g)+\epsilon).$$ Using (\[continuity1\]) and (\[continuity2\]), we get $$(1-\epsilon)G(g,\tilde{g}_n)\leq (K_{\xi}(g)+\epsilon)\sum_{u\in B_R(g_n)}G(e,u)G(u,\tilde{g}_n;B_R(g_n)^c).$$ Decomposing now a path from $e$ to $\tilde{g}_n$ according to its last visit to $B_R(g_n)$, we see that $$\sum_{u\in B_R(g_n)}G(e,u)G(u,\tilde{g}_n;B_R(g_n)^c)\leq G(e,\tilde{g}_n),$$ hence $(1-\epsilon)G(g,\tilde{g}_n)\leq (K_{\xi}(g)+\epsilon)G(e,\tilde{g}_n)$ and so $$\label{continuity3} (1-\epsilon)K(g,\tilde{g}_n)\leq K_{\xi}(g)+\epsilon.$$ Let us now prove that we can find such a sequence $\tilde{g}_n$ on $\alpha_n$ such that $\tilde{g}_n$ converges to $\xi'$ in the Martin boundary. Indeed, for fixed $n$, any sequence $\tilde{g}_{n,m}$ on $\alpha_n$ that goes to infinity when $m$ goes to infinity converges to $\xi_n$. Now, recall that the Martin compactification is metrizable. Let us fix an arbitrary distance $d_\mu$ on it. Then, for every $n$, for every large enough $m$ (depending on $n$), $d_{\mu}(\tilde{g}_{n,m},\xi_n)\leq \frac{1}{n}$.
Taking $m$ large enough, we can assume that $e,g_n',\tilde{g}_{n,m}$ do lie in this order on $\alpha_n$. For every $n$, let us fix such an $m_n$ and write $\tilde{g}_n=\tilde{g}_{n,m_n}$. Then, $d_{\mu}(\tilde{g}_n,\xi_n)\leq \frac{1}{n}$ and since $\xi_n$ converges to $\xi'$, so does $\tilde{g}_n$. Letting $n$ tend to infinity in (\[continuity3\]), we get $(1-\epsilon)K_{\xi'}(g)\leq K_{\xi}(g)+\epsilon$ and since $\epsilon$ is arbitrary, we have $K_{\xi'}(g)\leq K_{\xi}(g)$. Finally, since $\xi$ is minimal, this proves that $\xi'=\xi$, which proves that $\Phi_N$ is continuous. Since $\partial^N_{M,e}\Gamma$ is compact and $\partial_\mu \Gamma$ is Hausdorff, it follows from the closed map lemma that $\Phi_N$ is a topological embedding. (Figure: the geodesic $\alpha$ issuing from $e$, the converging geodesics $\alpha_1,\alpha_2,\dots,\alpha_n$, the point $g$ with its projection $\hat{g}$ on $\alpha$, the point $g_n$, and the asymptotic geodesics $\beta$, $\beta_n$ starting at $g$.) This concludes the proof of Theorem \[maintheoremMorsetoMartin\], for the map $\Phi$ is continuous with respect to the direct limit topology if and only if all the maps $\Phi_N$ are continuous for fixed $N$. A small application of Proposition \[prop:topological embedding\] sheds some light on the topology of the Martin boundary of the mapping class group, $\partial_\mu \mathrm{MCG}(S)$. \[corollaryMCGs\] For any $n \geq 2$ there exists a surface of finite type $S$ such that $\partial_\mu \mathrm{MCG}(S)$ contains a topologically embedded $(n-1)$-sphere.
By [@Cordes Corollary 4.4] for any $n \geq 2$ there exists a surface of finite type $S$ such that the Morse boundary of the Teichmüller space $Teich(S)$ contains a topologically embedded $(n-1)$-sphere, and by [@Cordes Theorem 4.12] the Morse boundary of $Teich(S)$ coincides with the Morse boundary of $\mathrm{MCG}(S)$. The result now follows from Theorem \[maintheoremMorsetoMartin\]. Measure of the Morse boundary {#Sectionmeasurezero} ============================= Consider a transient random walk on a finitely generated group $\Gamma$. As explained in the introduction, the random walk almost surely converges to a point $X_{\infty}$ in the Martin boundary. Denoting the law of $X_{\infty}$ by $\nu$, $(\partial_\mu\Gamma,\nu)$ is a model for the Poisson boundary. The measure $\nu$ is called the harmonic measure. See [@Sawyer] for more details. Recall that a measure $\kappa$ on a measurable space $X$ on which $\Gamma$ acts is called $\mu$-stationary if $\mu * \kappa=\kappa$, where by definition, for every measurable set $A\subset X$, $$\mu*\kappa(A)=\sum_{g\in \Gamma}\mu(g)\kappa(g^{-1}A).$$ The following is folklore. Since the proof is very short, we write it for convenience. \[lemmaharmonicstationary\] The harmonic measure $\nu$ on the Martin boundary is $\mu$-stationary. Since $\nu$ is the exit law of the random walk, we have $$\nu(A)=P(X_\infty \in A)=\sum_{g\in \Gamma}P(X_\infty \in A |X_1=g)P(X_1=g).$$ By definition, $\mu$ is the law of $X_1$ so that $$\nu(A)=\sum_{g\in \Gamma}P(X_1^{-1}X_\infty \in g^{-1}A|X_1=g)\mu(g).$$ Notice that $X_1^{-1}X_\infty$ is the limit of $X_1^{-1}X_n$, hence it has the same law as $X_\infty$. Also, $X_1^{-1}X_\infty$ is independent of $X_1$. We thus get $$\nu(A)=\sum_{g\in \Gamma}P(X_1^{-1}X_\infty\in g^{-1}A)\mu(g)=\sum_{g\in \Gamma}\nu(g^{-1}A)\mu(g)=\mu*\nu(A).$$ This concludes the proof. 
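As a concrete (and entirely optional) sanity check of the identity $\mu*\nu=\nu$, consider the simple random walk on the free group $F_2$, with $\mu$ uniform on $\{a,a^{-1},b,b^{-1}\}$. It is classical that its harmonic measure on the boundary of infinite reduced words is the uniform Markov measure, giving a cylinder $[w]$ mass $\frac{1}{4}(\frac{1}{3})^{|w|-1}$. Under that assumption, the stationarity identity can be verified exactly on cylinders (the helper names below are our own):

```python
from fractions import Fraction
from itertools import product

GENS = ["a", "A", "b", "B"]                     # A = a^{-1}, B = b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def nu(w):
    # harmonic measure of the cylinder [w] for SRW on F_2:
    # first letter uniform over 4 choices, each next uniform over 3 non-backtracking ones
    if not w:
        return Fraction(1)
    return Fraction(1, 4) * Fraction(1, 3) ** (len(w) - 1)

def nu_translate(s, w):
    # nu(s^{-1}[w]) = nu({xi : s.xi starts with w}), for a nonempty reduced word w
    if w[0] == s:
        # s.xi starts with s iff xi does not start with s^{-1}; then match the rest of w
        return Fraction(3, 4) if len(w) == 1 else nu(w[1:])
    # otherwise xi must start with s^{-1} and continue with w (reduced since w[0] != s)
    return nu([INV[s]] + list(w))

def reduced_words(n):
    for w in product(GENS, repeat=n):
        if all(w[i + 1] != INV[w[i]] for i in range(n - 1)):
            yield list(w)

# check mu * nu = nu on every cylinder of length <= 3
for n in range(1, 4):
    for w in reduced_words(n):
        assert nu(w) == sum(Fraction(1, 4) * nu_translate(s, w) for s in GENS)
```

Working with `Fraction` makes the check exact rather than approximate, so the identity holds on the nose for every cylinder tested.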
The Morse boundary has zero harmonic measure -------------------------------------------- As we saw, if $\Gamma$ is non-elementary relatively hyperbolic with parabolic subgroups having empty Morse boundary or if $\Gamma$ is non-elementary hierarchically hyperbolic, there is a map from the Morse boundary to the Martin boundary. Our goal is to prove Theorem \[maintheoremmeasureMorse\], that is, unless $\Gamma$ is hyperbolic, its Morse boundary has measure 0 with respect to the harmonic measure. The following elementary result will be useful. \[countableunion\] There exists a countable collection $\mathcal{F}$ of Morse gauges such that for any $\Gamma$, $\partial_{M}\Gamma=\cup_{N\in \mathcal{F}}\partial^{N}_{M}\Gamma$. Moreover, $\partial_{M}\Gamma \times \partial_{M}\Gamma \setminus \mathrm{Diag}=\cup_{N\in \mathcal{F}}\Omega_N$, where $\Omega_N$ is the set of pairs $(x_1,x_2)\in \partial_M\Gamma\times \partial_M\Gamma$ such that there exists a bi-infinite $N$-Morse geodesic from $x_1$ to $x_2$. The Morse boundary is defined as the union of all $N$-Morse boundaries. According to Proposition \[ABDTheoremE\] and Lemma \[relativelyhyperbolicMorseboundedprojections\], for every integer $D>0$ there is a Morse gauge $N(D)$ such that any geodesic with $D$-bounded projections is $N(D)$-Morse. Furthermore, any Morse geodesic has $D$-bounded projections for some integer $D$, proving the first claim. For the second claim, according to [@Cordes Proposition 3.11], $\partial_M\Gamma\times \partial_M\Gamma\setminus \mathrm{Diag}$ is the union of $\Omega_N$ over all Morse gauges, and we can conclude similarly. Now, we prove the following result, which gives meaning to $\nu(\partial_M\Gamma)$. \[LemmaMorseborelian\] Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Then, the Morse boundary $\partial_{M}\Gamma\subset \partial_{\mu}\Gamma$ is a Borel subset of the Martin boundary.
By Lemma \[countableunion\], the Morse boundary can be obtained as a countable union of spaces $\partial_M^N\Gamma$. We just need to prove that for fixed $N$, the image of $\partial_M^N\Gamma$ is a Borel subset of the Martin boundary. This follows immediately from the fact that the map $\Phi$ from the Morse boundary to the Martin boundary is continuous and the fact that for fixed $N$, $\partial_M^N\Gamma$ is compact, according to [@Cordes Proposition 3.12]. In fact, we have another realization of the Morse boundary as a subset of a topological model for the Poisson boundary. Recall from the introduction that a hierarchically hyperbolic group acts acylindrically on a hyperbolic space $C\mathbf{S}$ and that a relatively hyperbolic group acts acylindrically on the coned-off graph with respect to the relatively hyperbolic structure. For any such group $\Gamma$, we will denote by $X$ the corresponding hyperbolic space on which $\Gamma$ acts acylindrically. We endow $\Gamma\cup \partial_M\Gamma$ with the direct limit topology and we endow $\Gamma\cup \partial X$ with the usual topology coming from the Gromov product on $X$. \[inclusionMorseGromovboundary\][@AbbottBehrstockDurham Corollary 6.1, Lemma 6.5][@CashenMacKay Theorem 7.6] Let $\Gamma$ be either a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundaries. Then, there is an embedding $\Psi: \partial_M\Gamma \to \partial X$. Moreover, a sequence $g_n$ in $\Gamma$ converges to $x\in \partial_M\Gamma$ if and only if $g_n$ converges to $\Psi(x)\in \partial X$. According to [@MaherTiozzo Theorem 1.1, Theorem 1.5], the random walk almost surely converges to a point in $\partial X$ and $\partial X$ endowed with the exit measure is a realization of the Poisson boundary.
In the following, we will denote by $\nu$ the harmonic measure on the Martin boundary $\partial_\mu\Gamma$ and by $\nu_X$ the harmonic measure on $\partial X$ to avoid confusion. We now prove that the Morse boundary has full measure with respect to one harmonic measure if and only if it has full measure with respect to the other. \[propequivalencePoissonboundaries\] Let $\Gamma$ be either a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundaries. Then, $\nu(\Phi(\partial_M\Gamma))=1$ if and only if $\nu_X(\Psi(\partial_M\Gamma))=1$. Assume first that $\nu_X(\Psi(\partial_M\Gamma))=1$. Then, almost surely, the random walk $X_n$ converges to a point $\Psi(x_\infty)$. According to Lemma \[inclusionMorseGromovboundary\], $X_n$ almost surely converges to $x_\infty$ with respect to the topology on $\Gamma \cup \partial_M\Gamma$. By construction of the map $\Phi$, this implies that $X_n$ almost surely converges to $\Phi(x_\infty)$ in the Martin boundary. In particular, $\nu(\Phi(\partial_M\Gamma))=1$. Proving the converse is more difficult, because we do not know if converging to $\Phi(x_\infty)$ in the Martin boundary implies converging to $x_\infty$ in the Morse boundary, so we have to find another strategy. Since the map $\Phi$ is one-to-one, we can define the inverse map $\Phi^{-1}:\Phi(\partial_M\Gamma)\to \partial_M\Gamma$. It is not clear if this inverse map is continuous, but it is measurable, as we now prove. \[Phi-1measurable\] The map $\Phi^{-1}:\Phi(\partial_M\Gamma)\to \partial_M\Gamma$ is measurable. By Lemma \[countableunion\], the Morse boundary is a countable union of $N$-Morse boundaries $\partial_M^N\Gamma$. Let $A$ be a Borel subset of $\partial_M\Gamma$. Since $\partial_M^N\Gamma$ is closed in $\partial_M\Gamma$, $\partial_M^N\Gamma\cap A$ is also measurable.
Moreover, when restricted to $\partial_M^N\Gamma$, which is compact, the map $\Phi$ is an embedding, so that $\Phi(\partial_M^N\Gamma \cap A)$ is a Borel subset of the Martin boundary. Thus, $$(\Phi^{-1})^{-1}(A)=\bigcup_N\Phi(\partial_M^N\Gamma\cap A)$$ is measurable, which proves the lemma. Since $\nu(\Phi(\partial_M\Gamma))=1$, we can see $\nu$ as a measure on $\Phi(\partial_M\Gamma)$. Consider the pushforward measure $\tilde{\nu}_X=(\Psi \circ \Phi^{-1})_*\nu$. By definition, this is a probability measure on $\partial X$ such that $\tilde{\nu}_X(\Psi (\partial_M\Gamma))=1$. Recall that $\nu$ is stationary on $\partial_\mu\Gamma$. Notice that both maps $\Phi$ and $\Psi$ are $\Gamma$-equivariant, hence $\tilde{\nu}_X$ is stationary on $\partial X$. According to [@MaherTiozzo Theorem 1.1], $\nu_X$ is the only stationary probability measure on $\partial X$, so $\tilde{\nu}_X=\nu_X$. This proves that $\nu_X(\Psi(\partial_M\Gamma))=1$, which concludes the proof. Double ergodicity of the harmonic measure on the Martin boundary ---------------------------------------------------------------- We prove here Theorem \[maintheoremmeasureMorse\] in the particular case where the measure $\mu$ is symmetric. In general, letting $\mu$ be a probability measure on $\Gamma$, denote by $\check{\mu}$ the reflected measure, that is, $\check{\mu}(\gamma)=\mu(\gamma^{-1})$. This reflected measure also gives rise to a random walk, hence to a Martin boundary $\partial_{\check{\mu}}\Gamma$ and a harmonic measure $\check{\nu}$. We call $\check{\nu}$ the reflected harmonic measure. We have the following. \[ergodicityPoissonboundary\][@KaimanovichPoissonhyperbolic Theorem 6.3] The product measure $\nu\otimes \check{\nu}$ is ergodic for the diagonal action of $\Gamma$ on $\partial_{\mu}\Gamma\times \partial_{\check{\mu}}\Gamma$. Let $A$ be a $\Gamma$-invariant Borel subset of the Martin boundary $\partial_\mu\Gamma$.
Then, $A\times \partial_{\check{\mu}}\Gamma$ is also invariant and so $\nu\otimes \check{\nu}(A\times \partial_{\check{\mu}}\Gamma)$ is 0 or 1. In particular, the harmonic measure $\nu$ is ergodic for the group action on $\partial_{\mu}\Gamma$. Similarly, $\check{\nu}$ is ergodic. The group $\Gamma$ acts on its Morse boundary, sending a Morse geodesic ray to another Morse geodesic ray. By construction, the map $\Phi:\partial_M\Gamma \rightarrow\partial_{\mu}\Gamma$ is equivariant, so that the image of the Morse boundary is $\Gamma$-invariant inside the Martin boundary. By ergodicity, we get $\nu(\partial_{M}\Gamma) = 0 \text{ or } 1$ and $\nu\otimes \check{\nu}(\partial_{M}\Gamma\times \partial_{M}\Gamma) = 0 \text{ or } 1$. Recall that we want to prove that $\nu(\partial_M\Gamma)=0$ unless the group is hyperbolic. Let $\Gamma$ be a non-elementary hierarchically hyperbolic group or a non-elementary relatively hyperbolic group whose parabolic subgroups have empty Morse boundary. Let $\mu$ be a probability measure on $\Gamma$ whose finite support generates $\Gamma$ as a semi-group. Let $\nu$ be the corresponding harmonic measure on the Martin boundary and $\check{\nu}$ be the reflected harmonic measure. If $\Gamma$ is not hyperbolic, then $\nu\otimes \check{\nu}(\partial_M\Gamma\times \partial_M\Gamma)=0$. We argue by contradiction, so we assume in the following that 1. $\Gamma$ is non-elementary relatively hyperbolic with parabolic subgroups having empty Morse boundary or non-elementary hierarchically hyperbolic, 2. $\Gamma$ is not hyperbolic, 3. $\nu\otimes \check{\nu}(\partial_M\Gamma\times \partial_M\Gamma)\neq 0$. For a fixed Morse gauge $N$, let $\Omega_N$ be the set of pairs $(x_1,x_2)\in \partial_M\Gamma\times \partial_M\Gamma$ such that there exists a bi-infinite $N$-Morse geodesic from $x_1$ to $x_2$.
By Lemma \[countableunion\], $$\partial_M\Gamma\times \partial_M\Gamma \setminus \mathrm{Diag} =\bigcup_{N}\Omega_N.$$ Moreover, one can choose a countable number of Morse gauges in this union, as in the proof of Lemma \[LemmaMorseborelian\]. Thus, to find a contradiction, it is sufficient to prove that for every $N$, $\nu\otimes \check{\nu}(\Omega_N)=0$. Since we assume that $\nu\otimes \check{\nu}(\partial_M\Gamma\times \partial_M\Gamma)\neq 0$, in particular, $\nu(\partial_M\Gamma)\neq 0$ and $\check{\nu}(\partial_M\Gamma)\neq 0$. Define $Y\subset \Gamma$ to be either an infinite parabolic subgroup if $\Gamma$ is assumed to be non-elementary relatively hyperbolic or one of the hyperbolic spaces $CU$, $U\in \mathfrak{S}\setminus \{\mathbf{S}\}$, if $\Gamma$ is assumed to be non-elementary hierarchically hyperbolic. For every $g\in Y$, let $\partial_M^{g}\Gamma$ be the set of points $x\in \partial_M\Gamma$ whose Morse geodesic ray representatives project within distance $D_0$ of $g$, where $D_0$ is a fixed constant. The projection of a Morse geodesic ray on $Y$ is well defined up to a bounded distance, so if $D_0$ is chosen large enough, then $\partial_M^{g}\Gamma$ is well defined. Then, $$\partial_M\Gamma=\bigcup_{g\in Y}\partial_M^{g}\Gamma,$$ so that there exists $g_0$ such that $\nu(\partial_M^{g_0}\Gamma)\neq 0$. Fix $g\in Y$. For any Morse geodesic ray $\alpha$ representing some $x\in \partial_M^{g_0}\Gamma$, the translated geodesic $g\alpha$ represents $g\cdot x\in \partial_M\Gamma$. Moreover, $g\cdot x$ projects on $Y$ within a uniformly bounded distance of $gg_0$, so up to enlarging $D_0$ (not depending on $g$), $g\cdot x\in \partial_M^{gg_0}\Gamma$. Thus, $g\cdot \partial_M^{g_0}\Gamma\subset \partial_M^{gg_0}\Gamma$. Recall that the random walk almost surely converges to a point $X_{\infty}$ in the Martin boundary and that $\nu$ is the law of $X_{\infty}$, so that $P(X_\infty\in \partial_M^{g_0}\Gamma)>0$.
Translating everything by $g$, we get $P_g(X_{\infty}\in \partial_M^{gg_0}\Gamma)>0$, where $P_g$ denotes the probability measure on the set of sample paths for the random walk starting at $g$. Since the support of $\mu$ generates $\Gamma$ as a semi-group, there exists $n_g$ such that $P(X_{n_g}=g)>0$. Concatenating a trajectory of the random walk from $e$ to $g$ and an infinite trajectory starting at $g$ yields an infinite trajectory starting at $e$. We thus get $P(X_{\infty}\in \partial_M^{gg_0}\Gamma)>0$ and this holds for every $g$. In particular, applying this to $gg_0^{-1}$, we get $\nu(\partial_M^{g}\Gamma)>0$. Similarly, for every $g$, $\check{\nu}(\partial_M^g\Gamma)>0$, so that for every $g_1,g_2$, $\nu\otimes \check{\nu}(\partial_M^{g_1}\Gamma\times \partial_M^{g_2}\Gamma)>0$. Recall that we want to prove that for every $N$, $\nu\otimes \check{\nu}(\Omega_N)=0$. Fix a Morse gauge $N$. Then there exists $D$ such that every $N$-Morse geodesic has $D$-bounded projections. Choose two points $g_1,g_2$ on $Y$ such that $d(g_1,g_2)$ is bigger than $D+2D_0$. For every $(x_1,x_2)\in \partial_M^{g_1}\Gamma\times \partial_M^{g_2}\Gamma$, any Morse geodesic from $x_1$ to $x_2$ cannot have $D$-bounded projections, since its projection has diameter at least $d(g_1,g_2)-2D_0>D$, so $(x_1,x_2)\notin \Omega_N$. This proves that $\partial_M^{g_1}\Gamma\times \partial_M^{g_2}\Gamma\subset \Omega_N^c$. In particular, $\nu\otimes \check{\nu}(\Omega_N)<1$. Finally, notice that $\Omega_N$ is $\Gamma$-invariant. Since $\nu\otimes \check{\nu}$ is ergodic, we necessarily have $\nu\otimes \check{\nu}(\Omega_N)=0$. This concludes the proof. If $\mu$ is symmetric, then $\mu=\check{\mu}$, so that $\nu=\check{\nu}$. This proves Theorem \[maintheoremmeasureMorse\] for symmetric measures $\mu$. Another proof of Theorem \[maintheoremmeasureMorse\] ---------------------------------------------------- We no longer assume that $\mu$ is symmetric. We again argue by contradiction and consider a group $\Gamma$ such that 1.
$\Gamma$ is non-elementary relatively hyperbolic with parabolic subgroups having empty Morse boundary or non-elementary hierarchically hyperbolic, 2. $\Gamma$ is not hyperbolic, 3. $\nu(\partial_M\Gamma)\neq 0$. By ergodicity, $\nu(\partial_M\Gamma)=1$ and so, according to Proposition \[propequivalencePoissonboundaries\], we also have $\nu_X(\partial_M\Gamma)>0$ (notice that we did not need to use this proposition in our first proof). Since $\partial_M\Gamma$ can be described as a countable union of $N$-Morse boundaries, there exist $\eta>0$ and a Morse gauge $N$ such that $\nu_X(\partial_M^N\Gamma)\geq \eta$. In particular, the random walk $X_n$ converges to an $N$-Morse point $X_\infty$ in $\partial X$ with probability at least $\eta$. Recall the following terminology from [@MathieuSisto]. Let $\Gamma$ be a finitely generated group acting acylindrically on a hyperbolic space $X$. Let $(g_0,g_1,...,g_n,...)$ be a sequence (finite or infinite) of points of $\Gamma$. We say that the sequence is tight around $g_k$ at scale $l$ (with respect to a constant $C$) if for every $k_1\leq k\leq k_2$ with $k_2-k_1\geq l$, we have 1. $d_X(g_{k_2},g_{k_1})\geq (k_2-k_1)/C$, 2. the length of the path $(g_{k_1},...,g_{k_2})$ in $\Gamma$ is at most $C(k_2-k_1)$, 3. $d_{\Gamma}(g_{k'},g_{k'+1})\leq \max (l,|k-k'|/C)$ for every $k'$. Clearly, for an infinite sequence $(g_0,...,g_n,...)$ and for every $k\leq n$, if $(g_0,...,g_n,...)$ is tight around $g_k$ at scale $l$, then the same holds for the finite sequence $(g_0,...,g_n)$. \[tightaroundrandomwalk\][@MathieuSisto Lemma 10.11] There exists $C$ such that for every $k$, for every $l$, the probability that the infinite sequence $(e,X_1,...,X_n,...)$ is tight around $X_k$ at scale $l$ is at least $1-C\mathrm{e}^{-l/C}$.
Actually, [@MathieuSisto Lemma 10.11] is only about the finite sequence $(e,X_1,...,X_n)$ for $k\leq n$, but it is clear from the proof that the result holds for the infinite sequence $(e,X_1,...,X_n,...)$. We finish the proof of Theorem \[maintheoremmeasureMorse\]. We choose $l$ so that $C\mathrm{e}^{-l/C}\leq \eta/2$. Then, the probability that $X_n$ converges to a $N$-Morse point $X_\infty$ and that the infinite path $(e,X_1,...,X_n,...)$ is tight around $X_k$ at scale $l$ is at least $\eta/2$. According to [@MathieuSisto Lemma 10.12], if $(e,X_1,...,X_n)$ is tight around $X_k$ at scale $l$, then the distance in $\Gamma$ between $X_k$ and a geodesic from $e$ to $X_n$ is bounded linearly in $l$. Assuming that $X_n$ converges to a $N$-Morse point $X_\infty$ and letting $n$ tend to infinity, we get that the distance between $X_k$ and an infinite geodesic from $e$ to $X_\infty$ is bounded. To sum up, for every $k$, with probability at least $\eta/2$, $X_n$ converges to a $N$-Morse point $X_\infty$ and $X_k$ is at distance at most $D$ from a geodesic from $e$ to $X_\infty$, for some constant $D$. To conclude, we use the results in [@SistoTaylor] which state that the random walk spends essentially a logarithmic time in non-maximal domains (if $\Gamma$ is assumed to be hierarchically hyperbolic) or in parabolic subgroups (if $\Gamma$ is assumed to be relatively hyperbolic). Precisely, [@SistoTaylor Theorem 2.3] shows that for all large enough $n$, there exists a subspace $U\subset \Gamma$ such that the probability that $d_U(\pi_U(e),\pi_U(X_n)) \geq C^{-1} \log n$ is at least $1-\eta/4$, where $U\in \mathfrak{S}\setminus \{\mathbf{S}\}$ if $\Gamma$ is hierarchically hyperbolic, or $U$ is a parabolic subgroup if $\Gamma$ is relatively hyperbolic. On the other hand, recall that Proposition \[ABDTheoremE\] and Lemma \[relativelyhyperbolicMorseboundedprojections\] show that a $N$-Morse geodesic has $D'$-bounded projections on such a $U$, where $D'$ only depends on $N$.
In particular, if the distance from a point $g$ to a $N$-Morse geodesic $\alpha$ is bounded, then $d_U(\pi_U(e),\pi_U(g))$ is also bounded. This proves that for every $k$, with probability at least $\eta/2$, $d_U(\pi_U(e),\pi_U(X_k))$ is uniformly bounded. Finally, for large enough $n$, with probability at least $\eta/4$, for every $U$ as above, $d_U(\pi_U(e),\pi_U(X_n))$ is uniformly bounded, while $d_U(\pi_U(e),\pi_U(X_n)) \geq C^{-1} \log n$ for some $U$. This is a contradiction. We thus proved Theorem \[maintheoremmeasureMorse\].
--- abstract: | We report single crystal growth of the series of [Ce*T*Al$_{3}$]{} compounds with [[*T*=Cu, Ag, Au, Pd and Pt]{}]{} by means of optical float zoning. High crystalline quality was confirmed in a thorough characterization process. With the exception of [CeAgAl$_{3}$]{}, all compounds crystallize in the non-centrosymmetric tetragonal BaNiSn$_{3}$ structure (space group: *I4mm*, No. 107), whereas [CeAgAl$_{3}$]{} adopts the related orthorhombic PbSbO$_2$Cl structure (Cmcm, No. 63). An attempt to grow [CeNiAl$_3$]{} resulted in the composition [CeNi$_2$Al$_5$]{}. Low temperature resistivity measurements down to $\sim$0.1K did not reveal evidence suggestive of magnetic order in [CePtAl$_{3}$]{} and [[CePdAl$_{3}$]{}]{}. In contrast, [[CeAuAl$_{3}$]{}]{}, [[CeCuAl$_{3}$]{}]{} and [[CeAgAl$_{3}$]{}]{} display signatures of magnetic transitions at 1.3K, 2.1K and 3.2K, respectively. This is consistent with previous reports of antiferromagnetic order in [[CeAuAl$_{3}$]{}]{}, and [[CeCuAl$_{3}$]{}]{} as well as ferromagnetism in [[CeAgAl$_{3}$]{}]{}, respectively.\ address: - 'Physik Department, Technische Universität München, D-85747 Garching, Germany' - 'Heinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München, D-85748 Garching, Germany' - 'Max-Planck Institut für die chemische Physik fester Stoffe (MPI CPFS), D-01187 Dresden, Germany' - 'Fakultät für Chemie, Technische Universität München, D-85747 Garching, Germany' author: - 'C. Franz' - 'A. Senyshyn' - 'A. Regnat' - 'C. Duvinage' - 'R. Schönmann' - 'A. Bauer' - 'Y. Prots' - 'L. Akselrud' - 'V. Hlukhyy' - 'V. Baran' - 'C. 
Pfleiderer' bibliography: - 'citations.bib' title: 'Single crystal growth of [Ce*T*Al$_{3}$]{} (*T*=Cu, Ag, Au, Pd and Pt)' --- Float zone technique, Single crystal growth, Rare earth compounds, Magnetic materials, Crystal structure determination, Electric resistivity Introduction ============ Cerium-based intermetallic compounds represent an ideal testing ground for the study of novel electronic ground states and unusual low-lying excitations, where valence fluctuations, heavy-fermion behaviour, unconventional superconductivity and exotic forms of spin and charge order are ubiquitous. Despite many decades of intense research, the understanding of the nature of strong electronic correlations in Ce-based systems may, at best, be described as qualitative. A scenario that is widely alluded to when addressing correlations in f-electron systems considers the competition of Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions, supporting magnetic order, with the single-impurity Kondo effect quenching magnetic moments. While there have been various attempts to advance the understanding, a simultaneous treatment of multiple, nearly equivalent energy scales such as exchange and dipolar interactions, spin-orbit coupling, crystal electric fields and strong magneto-elastic coupling has not been attempted. Given this general context, the class of Ce*T*X$_3$ compounds, where T is a transition metal element and X a simple metal, offers important new insights. For instance, selected members of this series have been discovered to exhibit a coexistence of magnetic order and superconductivity under pressure. Another line of research pursues the strong coupling of phonons with relatively low-lying crystal electric field (CEF) levels. An important example has been reported in [[CeCuAl$_{3}$]{}]{} [@Adroja2012], which was interpreted in terms of a quasi-bound vibron state first observed in CeAl$_2$ [@loew; @thal].
However, recent studies in CePd$_2$Al$_2$, as well as preliminary work in other members of the CeTX$_3$ series, suggest that strong interactions of the crystal fields with the phonon spectrum, somewhat akin to the vibron claim in [[CeCuAl$_{3}$]{}]{}, are more generic than assumed so far. In this paper we report single crystal growth of the series [[Ce*T*Al$_{3}$]{}]{} with [[*T*=Cu, Ag, Au, Pd and Pt]{}]{}. For our study we have used optical float zoning, to the best of our knowledge, for the first time. Following a thorough characterisation, we determine the crystal structure of the systems studied. The high sample quality achieved in our study is corroborated by measurements of the electrical resistivity, which were performed at temperatures down to 0.1K. All crystal structures reported in the literature for the series of [[Ce*T*Al$_{3}$]{}]{} compounds are derived from the BaAl$_4$ structure, depicted in fig. \[pic:strukturen\](a). The BaAl$_4$ structure crystallizes in a body-centred tetragonal lattice with eight Ce-atoms at the corners and one in the center. The aluminum atoms are arranged in six planes parallel to *ab* in the unit cell. The following three important types of structures may be derived from the BaAl$_4$ structure in which the Al and *T* atoms are ordered. \(i) The ThCr$_2$Si$_2$ (*I4/mmm*) structure, shown in Fig. \[pic:strukturen\](b), is currently the most frequently reported for intermetallic compounds. The sequence of layers along the *c*-direction is given as A-\[X-T-X\]-A with A a rare-earth metal, T the transition metal element and X either Si, Ge or Al. Typical examples include CeT$_2$X$_2$ compounds (CeCu$_2$Si$_2$ [@Edwards1987], CeCu$_2$Ge$_2$ [@deBoer1987], CePd$_2$Si$_2$, CeRh$_2$Si$_2$ [@Ballestracci1978]) as well as URu$_2$Si$_2$ [@Cordier1985] and BaFe$_2$As$_2$ [@rotter2008]. \(ii) The CaBe$_2$Ge$_2$ (*P4/nmm*) variant of the BaAl$_4$ structure, shown in Fig.
\[pic:strukturen\](c), is also characterised by full inversion symmetry and a layered structure along the $c$-axis following the sequence A-\[X-T-X\]-A-\[T-X-T\]-A. The CaBe$_2$Ge$_2$ structure is less frequently found because different atom sizes cannot be matched very well, in contrast to ThCr$_2$Si$_2$ [@Frik2006]. An important example of immediate relevance to the Al-based compounds addressed in our study is CePd$_2$Al$_2$ [@pv]. \(iii) The BaNiSn$_3$ (*I4mm*) structure is the only subtype lacking an inversion center. In recent years Ce-systems with a BaNiSn$_3$-type structure have generated great interest since the discovery of superconductivity in heavy fermion systems such as CeIrSi$_3$, CeRhSi$_3$ [@xian1985], and CeCoGe$_3$ [@Das2006]. In these systems the superconducting pairing symmetry may be outside traditional classification schemes. Shown in Fig. \[pic:strukturen\](d) is the characteristic unit cell, where the stacking sequence of layers A-T-X(1)-X(2)-A-T-X(1)-X(2)-A may be readily seen. The point group of these systems is C$_{4v}$, lacking a mirror plane.
[lp[0.8cm]{}p[0.95cm]{}p[2.5cm]{}p[1.3cm]{}p[1.2cm]{}p[1.7cm]{}p[1.2cm]{}p[1.7cm]{}]{} & Mag & crystal & space group & T$_{\mathrm{C/N}}$ & T$_{\mathrm{K}}$ & $\gamma$ & $\mu_\mathrm{eff}$ & $\mu_\mathrm{CW}$\ & & & & (K) & (K) & (mJ/molK$^2$) & ($\mu_\mathrm{B}$) & ($\mu_\mathrm{B}$)\ [CeCuAl$_{3}$]{}  & AFM & pc, sc & *I4mm* [@Moze1996; @Mock1999; @Klicpera2014cs] & 2.1 [@Bauer1987; @Kontani1994] & 8 [@Bauer1987] & 140 [@Bauer1987] & 1.8 [@Mock1999] & 2.61 [@Bauer1987]\ [CeAuAl$_{3}$]{}  & AFM & pc & *I4mm* [@Hulliger1993; @Paschen1998; @Sugawara1999] & 1.32 [@Paschen1998] & 4.5 [@Paschen1998] & 227 [@Paschen1998] & 2.1 [@Mock1999] & 2.6-2.8 [@Sugawara1999]\ [CeAgAl$_{3}$]{}  & FM & pc & *I4/mmm* or *I4mm* [@Muranaka2007] & 3.2 [@Muranaka2007] & - & - & - & 2.23 [@Muranaka2007]\ [CePdAl$_{3}$]{}  & AFM & pc & *Fmm2* [@Schank1994] & 6 [@Schank1994] & - & - & - & -\ [CePtAl$_{3}$]{}  & SG & pc & *I4/mmm* [@Mock1999; @Goerlach2006] & 0.8 [@Goerlach2006; @Schank1994] & - & - & 1.8 [@Mock1999] & 2.08 [@Goerlach2006]\ In the following, a brief overview of the literature on [Ce*T*Al$_{3}$]{} compounds is given. Key properties reported are summarised in table \[tab:literature\]. The crystal structure has been determined as *I4mm* in [CeAuAl$_{3}$]{} as well as [CeCuAl$_{3}$]{} in both poly- and single crystals. For [CeAgAl$_{3}$]{} no distinction has been possible between *I4mm* and *I4/mmm*, whereas [CePtAl$_{3}$]{} was reported to adopt a centrosymmetric variant *I4/mmm*. Interestingly, [CePdAl$_{3}$]{} was reported to crystallize in the orthorhombic *Fmm2* structure. Compared to the textbook examples of 3d magnets, crystal electric fields in 4f compounds are weak. Nevertheless, the crystal fields influence the magnetic properties of 4f systems rather strongly. By Hund’s rule, Ce$^{3+}$ has a six-fold degenerate ground state, split into three Kramers doublets by the tetragonal crystal field.
In [CeCuAl$_{3}$]{} it has been reported that the excited CEF levels are low-lying, with $\Delta_1$=15K and $\Delta_2$=238K [@Bauer1987; @Mentink1993]. Further, for [CeAuAl$_{3}$]{} excited levels of $\Delta_1$=57K and $\Delta_2$=265K [@Paschen1998] as well as $\Delta_1$=60K and $\Delta_2$=240K [@Sugawara1999] have been reported. Magnetic ground states in CeTX$_3$ compounds are usually antiferromagnetic, as in [CeCuAl$_{3}$]{} with T$_N$=2.1K and [CeAuAl$_{3}$]{} with T$_N$=1.32K. However, the exact magnetic ground states are more complicated, with a propagation vector **k**=(0.4, 0.6, 0) in the case of [CeCuAl$_{3}$]{} [@Klicpera2015mag] and **k**=(0, 0, 0.52) for [CeAuAl$_{3}$]{} [@Adroja2015]. [CeAgAl$_{3}$]{} is a rare example of a ferromagnetic Ce compound, with an ordering temperature of T=3.2K. [CePdAl$_{3}$]{} is reported to order antiferromagnetically below 6K, whereas [CePtAl$_{3}$]{} exhibits spin-glass behaviour below 0.8K. All magnetic compounds possess an easy *ab*-plane and a hard *c*-axis. Kondo temperatures have been determined in [CeCuAl$_{3}$]{} at 8K and [CeAuAl$_{3}$]{} at 4.5K, where a weak screening of less than 25% has been found in an $^{27}$Al-NMR study [@Vonlanthen1999]. Both compounds show a heavy fermion ground state, demonstrated by an electronic contribution to the specific heat of $\gamma$=227mJ/molK$^2$ as well as a large prefactor of the low-temperature electric resistivity of *A*=5.0$\mu\Omega$cm/K$^2$ in [CeAuAl$_{3}$]{} [@Paschen1998] and $\gamma$=140mJ/molK$^2$ in [CeCuAl$_{3}$]{}. Under pressure, T$_N$ of [CeCuAl$_{3}$]{} rises up to 60kbar and suddenly vanishes at 80kbar [@Kawamura2010; @Nishioka2007], pointing towards the existence of a quantum critical point (QCP). Recently, a phonon - crystal field quasibound state has been found in [CeCuAl$_{3}$]{} [@Adroja2012], which is not present in [CeAuAl$_{3}$]{} [@Adroja2015].
Furthermore, a low energy anomalous excitation [@Aoki2000a] and phonon scattering by Ce magnetic moment fluctuations are reported in [CeAuAl$_{3}$]{} [@Aoki2000], as well as a possible second phase transition at 0.18K [@Adroja2015]. Experimental Methods ==================== The physical properties of rare earth containing compounds tend to be very sensitive to defects and impurities. Accordingly, we have made great efforts to reduce such defects and impurities to the lowest possible level. Notably, the procedure of the single-crystal preparation is based on the use of high purity starting elements and a bespoke work-flow that is optimised to minimise contamination by oxygen. As the perhaps most important aspect, we have used optical float-zoning (OFZ) representing a crucible-free technique. ![Photographs of all crystals grown by optical float zoning for this study. Scale in cm. (a) [CeAuAl$_{3}$]{} (b) [CeAgAl$_{3}$]{} (c) [CeCuAl$_{3}$]{} (d) [CePtAl$_{3}$]{} (e) [CePdAl$_{3}$]{}[]{data-label="pic:photo"}](figure3.jpg){width="0.8\linewidth"} Metal lumps of Ce (Ames, 99.9995%), shots of Au and Ag (AlfaAesar Premion, 99.995% and 99.9999%, respectively), lumps of copper (MaTecK, 99.9999%), powder of Pd and Pt (AlfaAesar Premion, both 99.995%) and lumps of Al (MaTecK, 99.9999%) were used for the preparation of the feed and seed rods. Ce and Al were weighed in an argon glove box system. First, [CeAl$_2$]{} was prepared in an induction heated horizontal copper cold boat system that can be loaded by means of a bespoke load-lock from a glove box [@Bauer2016]. This reduces contamination of the Ce with oxygen, as the educt [CeAl$_2$]{} is stable in air. In a second step, the [CeAl$_2$]{} was then reacted with the transition metals *T*=Au,Ag,Cu,Pt or Pd and additional Al in a water cooled Hukin-type radio frequency heated copper crucible. The sample was remelted and flipped over several times to ensure good homogeneity.
The resulting pill was subsequently cast into cylindrical feed and seed rods with a diameter of 6mm and a length of 40 to 60mm. Furthermore, synthesis of a polycrystalline sample of [CeNiAl$_3$]{} was attempted by means of our rod casting furnace. For [CeAuAl$_{3}$]{}, the ternary compound was prepared directly by means of the glove box, the horizontal cold boat system and the bespoke load-lock, without the intermediate step of first synthesising [[CeAl$_2$]{}]{}. The actual single crystal growth was performed in a CSI 4-mirror (model FZ-T 10000-H-III-VPS) optical float zone furnace. The furnace was redesigned to be all-metal sealed. Prior to growth the furnace was baked with bespoke heating jackets [@Neubauer2011]. 500W halogen lamps were used for the growth process under a static argon atmosphere at a pressure of 2.5bar. The float zoning was performed with a growth rate of 5mm/h and a counter-rotation of the feed and seed rods at 6rpm. The characterization of the crystals proceeded as follows. First, all crystals were examined under an optical light microscope. Laue X-ray (Multiwire Laboratories MWL 110) images were taken on different spots covering the entire surface of the ingot to identify the single crystalline part and possible grain boundaries. For [CeAuAl$_{3}$]{}, cross sections of the upper and lower part of the crystal were additionally analysed, as well as a cut through the quenched final part of the molten zone. Figure \[pic:zone\] (a) shows from right to left the polycrystalline feed rod, the quenched molten zone with a pronounced stripe structure and the beginning of the single crystal at the lower end. All crystals were oriented by Laue X-ray diffraction. Fig. \[pic:zone\](b) and (c) show typical Laue pictures of [CeAuAl$_{3}$]{} for the *a*- and *c*-axes, respectively. The composition was confirmed by single crystal and powder X-ray diffraction.
For [CeAuAl$_{3}$]{} additional scanning electron microscope images and energy dispersive X-ray spectra were recorded. Small single crystals with dimensions less than 100$\mu$m were mechanically extracted from [Ce*T*Al$_{3}$]{} ([*T*=Cu, Ag, Au, Pd and Pt]{}) ingots as grown. Generally, all tested crystals were of high quality with sharp diffraction peaks. Intensity data were collected with graphite-monochromatized Mo K$\alpha$ X-ray radiation. Three-dimensional data were indexed, integrated and corrected for Lorentz-, polarization, absorption and background effects using diffractometer-specific software, namely CrystalClear for a Rigaku Saturn724+ and X-Area for a STOE IPDS II. Initial structure analysis/solution (using direct methods) for [Ce*T*Al$_{3}$]{} (*T*=Ag, Au, Cu, Pt) was done with SHELX-2012 as implemented in the program suite WinGX 2014.1 [@Farrugia2012]. The crystal structure of modulated [CePdAl$_{3}$]{} was solved using WinCSD [@Akselrud2014]. The experimental data and results of the structure refinement for selected samples are reported in table \[tab:PowderDiff\], while the fractional atomic coordinates, occupation numbers and equivalent isotropic and anisotropic atomic displacement parameters are listed in table \[tab:S1\] and \[tab:S2\]. Samples for resistivity measurements were cut from the single-crystalline sections of the ingots with a diamond wire saw. Typical dimensions of the samples were (4 to 6) $\times$ 1 $\times$ 0.2mm$^3$, oriented such that the electrical current could be applied along the longest direction, which corresponds to the *c*-axis. The resistivity was measured in a standard four-probe ac-configuration using a digital lock-in amplifier and an excitation current of 5mA at an excitation frequency of 22.08Hz. Room temperature transformers were used for impedance matching. Data were recorded between 2K and 300K in a standard $^4$He cryostat.
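As a numerical aside, the four-probe geometry described above converts the measured voltage drop into a resistivity via $\rho=(V/I)\cdot A/L$, with $A$ the cross-sectional area and $L$ the separation of the voltage contacts. The sketch below is illustrative only: the voltage drop and the contact separation are assumed values, not measured data from this study.

```python
# Four-probe resistivity of a bar-shaped sample: rho = (V / I) * A / L.
# The voltage and contact separation below are illustrative placeholders.
def resistivity_ohm_cm(v_volts, i_amps, width_cm, thickness_cm, contact_sep_cm):
    """Return the resistivity in Ohm*cm from a four-probe measurement."""
    area = width_cm * thickness_cm          # cross-sectional area (cm^2)
    return (v_volts / i_amps) * area / contact_sep_cm

# Example: 1 mm x 0.2 mm cross section, 3 mm between voltage contacts,
# 5 mA excitation (as above), hypothetical 18.9 uV voltage drop.
rho = resistivity_ohm_cm(18.9e-6, 5e-3, 0.1, 0.02, 0.3)
print(rho * 1e6)  # resistivity in micro-Ohm cm
```

For the sample cross sections quoted above, a voltage drop of a few tens of $\mu$V at 5mA already corresponds to resistivities in the 10$\mu\Omega$cm range, which is why lock-in detection and impedance matching are employed.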
Data down to much lower temperatures were measured for [Ce*T*Al$_{3}$]{} with *T*=Cu, Ag, Au in an adiabatic demagnetisation refrigerator (ADR) using the same detection technique at a lower excitation current of 1mA down to temperatures below $\sim$300mK. For samples with *T*=Pt, Pd a $^3$He/$^4$He dilution refrigerator was used down to temperatures well below 100mK. Experimental Results ==================== Crystal Structure ----------------- The actual growth process resulted in the ingots shown in Fig.\[pic:photo\]. We obtained large single crystals of [CeAuAl$_{3}$]{}, [CeCuAl$_{3}$]{} and [CePtAl$_{3}$]{} shown in Figs.\[pic:photo\](a), (c) and (d), respectively. While the [CeAuAl$_{3}$]{} and [CePtAl$_{3}$]{} ingots remained mechanically intact after growth, the [CeCuAl$_{3}$]{} ingot (shown in Fig.\[pic:photo\](c)) broke spontaneously in one location during cool-down after growth. For [CeAgAl$_{3}$]{} single crystalline grains were prepared from the ingot. The melt in the growth process of [CePdAl$_{3}$]{} was very unstable and we could only prepare small single crystalline samples with a typical size of up to 3mm. A second attempt to grow [[CePdAl$_{3}$]{}]{} with a reduced growth rate of 1mm/h allowed us to obtain the large single crystal depicted in Fig.\[pic:photo\](e). Note that all measurements on [[CePdAl$_{3}$]{}]{} reported in this paper were carried out on samples cut from the first crystal. The analysis of the diffraction data revealed that the majority of the [Ce*T*Al$_{3}$]{} compounds (*T*=Au, Cu, Pt) studied display reflections consistent with a tetragonal lattice and in line with one of the distorted [BaAl$_{4}$]{} structure types. The T/Al antisite disorder essentially corresponds to the local differences between ThCr$_2$Si$_2$, CaBe$_2$Ge$_2$, HoCuAl$_3$ or BaNiSn$_3$ structures.
Extinction of characteristic reflections revealed a body centred tetragonal lattice and the structure solution corresponded to a BaNiSn$_3$ type structure for [CeAuAl$_{3}$]{}, [CeCuAl$_{3}$]{} and [CePtAl$_{3}$]{}. Evidence for putative Au/Al antisite disorder in [CeAuAl$_{3}$]{} was below the detection limit, whereas for [CeCuAl$_{3}$]{} a small Cu/Al mixing on the copper site was observed consistent with a $\sim$5% site occupation factor (sof) deficiency on the 4b Al site. A higher degree of antisite disorder was found in our [CePtAl$_{3}$]{} sample. Assuming a stoichiometric composition and fully occupied atomic sites of [CePtAl$_{3}$]{}, the antisite Pt/Al disorder appears to be as high as 18% sof at both T and Al 2a sites. The obtained unit cell volumes were found to decrease in the series Au – Pt – Cu in accordance with the metallic radii (r$_{Au}$=1.44Å, r$_{Pt}$=1.39Å, r$_{Cu}$=1.28Å). For [CeAgAl$_{3}$]{} a small orthorhombic distortion of the BaAl$_4$ type lattice was detected of order $\sim$0.1Å. The character of the reflection splitting and the extinction revealed a C-base centred orthorhombic lattice with a$_0\approx$a$_T\sqrt{2}$, b$_0$$\approx$c$_T$ and c$_0$$\approx$a$_T\sqrt{2}$, where “O” and “T” correspond to orthorhombic and tetragonal lattice dimensions. The structure solution corresponds to the *Cmcm* space group and a model consistent with the structure type of PbSbO$_2$Cl. The orthorhombic superstructure observed occurs in oxyhalides and is found rarely in intermetallics, e.g., in LaZn$_4$ [@Oshchapovsky2012] and SrPdGa$_3$ [@Seidel2014]. Similar to [CeCuAl$_{3}$]{} a small Ag/Al mixing occurs on the Ag site along with a weak (ca. 4% sof) deficiency on the 8e Al site. A comparison of the Ag and Au based [Ce*T*Al$_{3}$]{} structures is plotted in Fig.\[pic:strukturen\](e) and (f). 
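The lattice relations quoted above (a$_0\approx$a$_T\sqrt{2}$, b$_0\approx$c$_T$, c$_0\approx$a$_T\sqrt{2}$) can be checked numerically. In the following sketch the orthorhombic parameters are illustrative numbers of the right magnitude, including a splitting of order $\sim$0.1Å as stated above; they are not the refined values.

```python
import math

# Illustrative orthorhombic lattice parameters (angstrom); placeholder
# values of the right magnitude, NOT the refined experimental data.
a_o, c_o = 6.112, 6.212   # both close to a_T * sqrt(2), split by ~0.1 A
b_o = 10.837              # b_O plays the role of c_T

# Tetragonal-equivalent cell dimensions of the distorted lattice
a_t = (a_o + c_o) / (2 * math.sqrt(2))
c_t = b_o
distortion = c_o - a_o
print(round(a_t, 4), c_t, round(distortion, 3))
```

Averaging a$_0$ and c$_0$ before dividing by $\sqrt{2}$ removes the orthorhombic splitting and yields the tetragonal-equivalent *a* parameter used for comparison between the Ag and Au compounds.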
We note that Ag and Au nominally possess the same metallic radii (r$_{Ag}$=1.44Å), which may correspond to a similar magnitude of chemical pressure in both [Ce*T*Al$_{3}$]{} (*T*=Au, Ag). Indeed, the normalised lattice dimensions of [CeAgAl$_{3}$]{} (a$_T$$\approx$(a$_0$+c$_0$)/(2$\sqrt{2}$)$\approx$4.3566(8)Å, c$_T$$\approx$10.837(2)Å) are very similar to those of [CeAuAl$_{3}$]{}. Further, the relative cerium and aluminium positions are reproduced well, whereas major differences were noticed for the distribution of the gold and silver sites. By their arrangement the structure may be viewed as an intermediate step between BaAl$_4$ (or HoCuAl$_3$) and the BaNiSn$_3$ structure type. The antisite disorder observed smears out the layered structure along the c-axis in [CeAgAl$_{3}$]{}, which initially can be described as A - \[T[X]{}-X-T[X]{}\] – A, where T[X]{} indicates mixed T and X layer occupation. ![Refined structural modifications of [CePdAl$_{3}$]{}: plane atoms - BaNiSn$_3$ type of structure, structured atoms - klassengleiche subgroup of BaNiSn$_3$ type of structure with 3*a* multiplied axis.[]{data-label="pic:anat_2"}](figure4.jpg){width="0.9\linewidth"} ![Lattice parameter, cell volume and c/a ratio for different [Ce*T*Al$_{3}$]{} (*T*=Ag, Au, Cu, Pd, Pt) plotted as a function of the T metallic radius (black squares). Data shown by red circles correspond to the CeAu$_{1-x}$Cu$_x$Al$_3$ solid solution taken from [@Klicpera2014]. Two data points for [CePdAl$_{3}$]{} correspond to BaNiSn$_3$ and “3a” structures. The lines are shown as guides to the eye.[]{data-label="pic:latticeParams"}](figure5.jpg){width="\linewidth"} The set of Bragg reflections collected for our single crystal of [CePdAl$_{3}$]{} could be described neither with a conventional tetragonal nor with an orthorhombic lattice.
Careful indexing reveals a tetragonal cell with a *c*-parameter comparable to [Ce*T*Al$_{3}$]{} (*T*=Au, Ag, Cu, Pt), however, with an *a* lattice parameter multiplied by a factor of $\sim$3. Analysis of peak systematics resulted in a solution corresponding to a body centred tetragonal lattice. This is in agreement with group theory, as the 3*a*, *c* axis multiplication in the *I4mm* space group is allowed in the frame of klassengleiche subgroups IIc for *I4mm* (“3a” structure). Multiplication of the axis results in a splitting of the Ce and Pd positions into three independent sites, whilst the two initial aluminium sites are split into seven positions in the larger cell. Similar to [CeCuAl$_{3}$]{}, [CePtAl$_{3}$]{} and [CeAgAl$_{3}$]{}, the Rietveld refinement indicated Pd/Al antisite disorder, where, however, only one of the three Pd sites as well as two of the seven Al sites were affected. Examination of a crushed and pulverized ingot using lab X-ray powder diffraction did not reveal any hints for a 3*a* structure, but a conventional BaNiSn$_3$ modification for [CePdAl$_{3}$]{} (see Table \[tab:S2\]). The crystal structures of both the BaNiSn$_3$-type and the 3*a*, *c* axis multiplied modifications of the [CePdAl$_{3}$]{} structures are shown in Fig.\[pic:anat\_2\]. The Ce(0,0,0) atomic position is here used as a reference. In comparison to the BaNiSn$_3$-type modification of [CePdAl$_{3}$]{}, the 3*a* structure contains a certain degree of Pd/Al disorder and by its layer structure A - \[T[X]{}-X-T[X]{}\] - A it becomes very similar to [CeAgAl$_{3}$]{}. An attempt to refine the structure modulation in [CePdAl$_{3}$]{} was performed assuming both commensurate and incommensurate modulation vectors of the BaNiSn$_3$ parent structure. The modulated atomic displacements were comparable to the standard uncertainties of the atom localization, and the improvement of the fit residuals (when compared to the structure model with 3*a*, *c* axis multiplication) was found to be marginal.
The lattice parameters of the [Ce*T*Al$_{3}$]{} (*T*=Ag, Au, Cu, Pd, Pt) samples studied were found to be in good agreement with experimental data reported in Refs.[@Klicpera2014; @Moze1996; @Zare1965; @Mentink1993; @Kontani1994]. Cell volumes in [Ce*T*Al$_{3}$]{} (*T*=Ag, Au, Cu, Pd, Pt) follow a linear dependence as a function of the T metallic radii (see Fig. \[pic:latticeParams\]) in line with values reported for the [CeAuAl$_{3}$]{} to [CeCuAl$_{3}$]{} pseudobinary system [@Klicpera2014], which displays a behaviour characteristic of Vegard's law. The observed linear dependence may be viewed in terms of structural changes in [Ce*T*Al$_{3}$]{} (*T*=Ag, Au, Cu, Pd, Pt) driven by “chemical pressure”. However, all [Ce*T*Al$_{3}$]{} studied are characterised by a large anisotropy. Notably, for [CeCuAl$_{3}$]{}, [CeAuAl$_{3}$]{} and corresponding solid solutions [@Klicpera2014] the c/a ratio is nearly constant with c/a$\sim$2.5. This compares with [CePdAl$_{3}$]{}, [CePtAl$_{3}$]{} and [[CeAgAl$_{3}$]{}]{}, which display lower c/a values. A separate look at the experimentally determined lattice parameters indicates that both the *a* and the *c* lattice parameters are systematically smaller than those observed in the [CeAuAl$_{3}$]{} – [CeCuAl$_{3}$]{} pseudobinary system. Taking into account nominally equal metallic radii for Au and Ag, the differences observed for the *a* lattice dimensions and cell volume of [CeAuAl$_{3}$]{} and [CeAgAl$_{3}$]{} as well as their different structure, significantly reduces the relevance of an approximation in the spirit of “chemical pressure” and shows that the chemical nature of T in [Ce*T*Al$_{3}$]{} plays a dominant role for the structure and its distortion.
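The linear volume-radius trend described above can be quantified by a simple least-squares fit. In the sketch below the radii are those quoted in the text (the Pd value is a standard literature metallic radius added for illustration), while the cell volumes are placeholder values; the refined volumes from table \[tab:PowderDiff\] should be substituted in practice.

```python
import numpy as np

# Metallic radii (angstrom); Cu, Pd, Pt, Ag, Au. The Pd value is a
# standard literature number; the others are quoted in the text.
radii = np.array([1.28, 1.37, 1.39, 1.44, 1.44])
# Placeholder unit-cell volumes (angstrom^3) -- illustrative only,
# NOT the refined experimental values.
volumes = np.array([178.0, 196.0, 200.0, 210.0, 211.0])

# Linear fit V = m*r + b and its residuals as a Vegard-law check.
m, b = np.polyfit(radii, volumes, 1)
residuals = volumes - (m * radii + b)
print(m, b, np.max(np.abs(residuals)))
```

A small, radius-independent residual quantifies how well a Vegard-like “chemical pressure” picture holds; the equal radii of Ag and Au combined with different volumes illustrate its limits.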
Electric Resistivity -------------------- ![image](figure6.jpg){width="\linewidth"} $\rho_0$ ($\mu\Omega$cm) RRR $\rho_{300\,K}$ ($\mu\Omega$cm) ------------------- -------------------------- ------ --------------------------------- [CeAuAl$_{3}$]{}  15.18 2.50 37.8 [CeCuAl$_{3}$]{}  12.9 2.84 36.6 [CeAgAl$_{3}$]{}  7.9 3.81 30.1 [CePdAl$_{3}$]{}  25.1 1.39 35.0 [CePtAl$_{3}$]{}  67.4 1.25 84.1 : Parameters derived from resistivity measurements on [Ce*T*Al$_{3}$]{} with *T*=Au,Cu,Ag,Pd and Pt. $\rho_0$ is the residual resistivity extrapolated to T=0, RRR the residual resistivity ratio $\rho$(300K)/$\rho_0$, where $\rho$(300K) denotes the resistivity at room temperature.[]{data-label="tab:res"} Shown in Fig.\[pic:resistivity\](a1) to (a5) is the electric resistivity for [Ce*T*Al$_{3}$]{} with [*T*=Cu, Ag, Au, Pd and Pt]{} covering three decades of temperature from $\sim$0.1K to 300K. For these data the electrical currents were applied parallel to the crystallographic *c*-axis. Further, Fig. \[pic:resistivity\] (b1) - (b5) shows the same data on a temperature scale up to 20K for better visibility of the low-temperature features. All samples show metallic behaviour over the complete temperature range. The residual resistivity, resistivity at 300K and associated residual resistivity ratios (RRR) of all compounds are summarized in table \[tab:res\]. The residual resistivity ratios are below 5 for all compounds and in good agreement with the literature. The observed values are lower than in Si- or Ge-based compounds, probably due to the remaining antisite disorder in our samples. We wish to note that preliminary work suggests that careful annealing studies and tiny compositional adjustments to the starting compositions as well as variations of the parameters used for single crystal growth promise considerable improvements to these values. The key features observed in the temperature dependences may be summarised as follows.
For [CeAuAl$_{3}$]{} the resistivity decreases from 300K to 15K in a quasi-linear manner, probably due to crystal electric field levels as discussed in ref. [@Paschen1998]. With decreasing temperature the decrease of the resistivity is followed by a plateau around 8K and another strong decrease below the onset of magnetic order at 1.32K. However, the onset of the decrease at 4.5K is considerably higher than $T_N$ and may be attributed to the coherence of a Kondo lattice. At $\sim$1K, an additional maximum due to the opening of a super-zone gap is observed, reminiscent of behaviour observed in CeGe [@Das2012] and Cr [@Stebler1970]. A linear decrease is observed in [CeAgAl$_{3}$]{} down to approximately 17K, followed by a steeper linear decrease. The change of slope is of unknown origin, but may be due to the changes of the coupling with the lattice degrees of freedom. A clear kink at 3.2K, corresponding to the magnetic transition temperature of ferromagnetic order reported in the literature, is followed by a quadratic temperature dependence down to the lowest temperatures. The specific temperature dependence may be characteristic of a weakly spin-polarised Fermi liquid, where electron-magnon scattering is weak. In [[CeCuAl$_{3}$]{}]{} the linear decrease at high temperatures is followed by a plateau between 30K and 10K, whereas a clear maximum as reported in Refs. [@Kontani1994; @Klicpera2015mag] is not observed. The strong decrease below 10K traditionally would be attributed to the coherence of a Kondo lattice, where a clear signature of the magnetic ordering transition at 2.1K is not observed and may be hidden by this Kondo coherence. However, on a speculative note an alternative scenario may be related to the strong electron-phonon interactions associated with the quasi-bound vibron. Below 0.5K the decrease flattens towards a residual resistivity of 12.9$\mu\Omega$cm, which is among the lowest reported in the literature for this compound.
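The RRR values in table \[tab:res\] follow directly from the quoted resistivities via RRR $=\rho(300\,\mathrm{K})/\rho_0$; a quick consistency check of the tabulated values (small deviations reflect rounding of the quoted resistivities):

```python
# rho_0 (micro-Ohm cm), quoted RRR, rho(300 K) as listed in the table.
data = {
    "CeAuAl3": (15.18, 2.50, 37.8),
    "CeCuAl3": (12.9, 2.84, 36.6),
    "CeAgAl3": (7.9, 3.81, 30.1),
    "CePdAl3": (25.1, 1.39, 35.0),
    "CePtAl3": (67.4, 1.25, 84.1),
}

for name, (rho0, rrr_quoted, rho300) in data.items():
    rrr = rho300 / rho0   # RRR = rho(300 K) / rho_0
    # Agreement to within rounding of the tabulated resistivities.
    print(f"{name}: RRR = {rrr:.2f} (quoted {rrr_quoted})")
```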
The linear decrease of the resistivity in [CePtAl$_{3}$]{} down to $\sim$50K is followed by a stronger decrease to a minimum at 15.9K. The origin of this kink may again be attributed to a Kondo coherence effect or, as speculated above, be related to changes of the coupling to the crystal lattice. A logarithmic increase to a maximum at 4.3K is most probably due to the onset of Kondo scattering, followed by a strong decrease suggesting again Kondo coherence. Towards lowest temperatures, the decrease flattens and is extrapolated to a residual resistivity of 25.1$\mu\Omega$cm. Overall, the resistivity is about a factor of three higher than for other compounds studied, which may be due to the higher degree of antisite disorder. There are no signs suggesting magnetic order; however, below $\sim$0.8K the decrease of the resistivity flattens out. Last but not least, [CePdAl$_{3}$]{} shows a sublinear decrease over the entire temperature range, probably due to crystal field effects. The overall behaviour with a faint maximum around 5.7K and strong decrease down to the lowest temperatures very much resembles the behaviour of [CePtAl$_{3}$]{}. However, both the maximum and the decrease to lowest temperatures are less pronounced. Towards the lowest temperatures there is a linear decrease of the resistivity instead of a flattening out. As for [CePtAl$_{3}$]{} we do not observe any features suggesting magnetic order. Conclusions =========== Single crystals of the non-centrosymmetric intermetallic compounds [Ce*T*Al$_{3}$]{} ([*T*=Cu, Ag, Au, Pd and Pt]{}) were grown. All samples were grown by means of optical float-zoning, to the best of our knowledge for the first time. Large single crystals were obtained for all compounds except [CeAgAl$_{3}$]{}. For [CePdAl$_{3}$]{} the growth rate had to be reduced from 5mm/h to 1mm/h to stabilise the molten zone. An attempt to grow a poly-crystal of [CeNiAl$_3$]{} resulted in the compound [CeNi$_2$Al$_5$]{}.
The crystal structure was determined by single-crystal and powder X-ray diffraction. The crystal structure is *I4mm* in all cases except for [CeAgAl$_{3}$]{}, which shows a small distortion corresponding to the orthorhombic *Cmcm* structure. Further, [CePdAl$_{3}$]{} has a threefold multiplied lattice parameter $a$, which is allowed by group theory. Although the unit cell volume follows the metallic radii of the transition metal element rather well, the c/a ratio deviates strongly. Site-antisite disorder of the T and X sites appears to be nominally absent in [[CeAuAl$_{3}$]{}]{}, but reaches a value as high as 18% in [CePtAl$_{3}$]{}, accounting for the larger residual resistivities as compared to the Si and Ge based compounds in this series. While the quality of our samples as grown is already high, we expect that careful studies addressing the precise growth conditions and the role of post-growth treatment, such as annealing and/or electro-transport, will yield significant improvements. Further, resistivity measurements, described in the traditional language used for Ce-based compounds, are consistent with Kondo behaviour for all samples except [CeAgAl$_{3}$]{}. The occurrence of the Kondo maximum thereby depends on the transition metal element and is most pronounced for *T*=Pt. It is interesting to speculate whether what seems to be Kondo-like behaviour eventually turns out to be due to strong electron-phonon coupling related to the notion of the quasi-bound vibron state reported in [[CeCuAl$_{3}$]{}]{}. As a final point, features in the resistivity of [[CeAuAl$_{3}$]{}]{}, [[CeCuAl$_{3}$]{}]{} and [[CeAgAl$_{3}$]{}]{} at low temperatures are consistent with published reports of antiferromagnetic order in the former two compounds and ferromagnetism in the latter system, respectively. 
This contrasts with the literature on [CePtAl$_{3}$]{} and [[CePdAl$_{3}$]{}]{}, where we do not find signs of magnetic order down to 0.1K in the resistivity, whereas spin glass order has been reported for [CePtAl$_{3}$]{} and antiferromagnetic order for [CePdAl$_{3}$]{}. Taken together, all single crystals are of high quality, consistent with very low residual resistivities as compared to the literature. This demonstrates that optical float-zoning as employed under pure conditions provides an excellent method for the preparation of Ce-based intermetallic compounds. Acknowledgements ================ We wish to thank Matthias Ruderer for access to the glove box of E13, Rainer Jungwirth (FRMII) for EDX measurements on [CeAuAl$_{3}$]{}, and the TUM crystal lab for crystal orientation with Laue X-ray and preparation of the samples. Financial support of the Deutsche Forschungsgemeinschaft and DFG TRR80 is gratefully acknowledged. References {#references .unnumbered} ========== [p[2.5cm]{}lllll]{} & [CeAgAl$_{3}$]{}  & [CeAuAl$_{3}$]{}  & [CeCuAl$_{3}$]{}  & [CePdAl$_{3}$]{}  & [CePtAl$_{3}$]{} \ Diffractometer & Stoe IPDS-II & Rigaku Saturn724+ & Stoe IPDS-II & Stoe IPDS-II & Rigaku Saturn724+\ Radiation & MoK$\alpha$ & MoK$\alpha$ & MoK$\alpha$ & MoK$\alpha$ & MoK$\alpha$\ Wavelength (Å) & 0.71073 & 0.71073 & 0.71073 & 0.71073 & 0.71073\ Abs. correction type & empirical & empirical & empirical & empirical & empirical\ Abs. coeff. 
$\mu$ & 15.737 & 237.31 & 16.792 & 161.11 & 232.91\ Crystal system & orthorhombic & tetragonal & tetragonal & tetragonal & tetragonal\ space group & Cmcm & I4mm & I4mm & I4mm & I4mm\ a (Å) & 6.2050(12) & 4.3364(4) & 4.2508(4) & 12.988(1) & 4.3239(4)\ b (Å) & 10.837(2) & 4.3364(4) & 4.2508(4) & 12.988(1) & 4.3239(4)\ c (Å) & 6.1176(12) & 10.8501(15) & 10.6436(13) & 10.589(1) & 10.6670(15)\ V (Å$^3$) & 411.38(14) & 204.03(4) & 192.32 (3) & 1786.2(5) & 199.43(4)\ Z & 4 & 2 & 2 & & 2\ $\rho_\mathrm{calc}$ (g/cm$^3$) & 5.206 & 6.805 & 4.758 & 5.421 & 6.930\ index range h & -9...9 & -5...6 & -5...5 & -17...17 & -6...3\ & -16...16 & -3...6 & -5...5 & -17...17 & -6...5\ & -9...8 & -16...14 & -14...14 & -14...14 & -16...16\ reflections collected & 14323 & 1821 & 2475 & 6685 & 879\ independent reflections & 417 & 226 & 100 & 703 & 141\ R$_\mathrm{int}$ (%) & 9.52 & 3.52 & 10.49 & 2.00 & 3.45\ RF2 (%) & 6.61 & 5.25 & 2.18 & 8.21 & 7.60\ RF2w (%) & 5.45 & 9.36 & 5.79 & 6.29 & 8.50\ RF (%) & 3.30 & 3.78 & 2.00 & 5.84 & 4.81\ $\chi^2$ & 0.369 & 1.98 & 1.373 & 5.84 & 2.24\ no. 
of free parameters & 19 & 14 & 14 & 26 & 14\ [lXXXXXXXXXXXX]{} Atom & Wyckoff & x/a & y/b & z/c & sof & u$_{eq}$ & u$_{11}$ & u$_{22}$ & u$_{33}$ & u$_{12}$ & u$_{13}$ & u$_{23}$\ \ Ce & 4c & 0 & 0.24552(6) & 1/4 & 1.0 & 0.0105(4) & 0.0097(3) & 0.0102(3) & 0.0117(5) & 0 & 0 & 0\ Ag1 & 4c & 1/2 & 0.38336(9) & 1/4 & 0.918(8) & 0.0133(5) & 0.0125(4) & 0.0123(5) & 0.0150(6) & 0 & 0 & 0\ Al1 & 4c & 1/2 & 0.38336(9) & 1/4 & 0.082(8) & 0.0133(5) & 0.0125(4) & 0.0123(5) & 0.0150(6) & 0 & 0 & 0\ Al2 & 4c & 1/2 & 0.1512(4) & 1/4 & 1.0 & 0.0132(17) & 0.0132(17) & 0.0154(16) & 0.0111(19) & 0 & 0 & 0\ Al3 & 8e & 0.2309(4) & 1/2 & 0 & 0.96(1) & 0.0140(12) & 0.0141(10) & 0.0132(10) & 0.0148(16) & 0 & 0 & 0.0002(10)\ \ Ce1 & 2a & 0 & 0 & 0 & 1.0 & 0.0078(7) & 0.0068(6) & 0.0068(6) & 0.0099(7) & 0 & 0 & 0\ Au1 & 2a & 0 & 0 & 0.63564(17) & 1.0 & 0.0124(6) & 0.0137(6) & 0.0137(6) & 0.0099(5) & 0 & 0 & 0\ Al1 & 2a & 0 & 0 & 0.4064(12) & 1.0 & 0.008(3) & 0.003(2) & 0.003(2) & 0.017(4) & 0 & 0 & 0\ Al2 & 4b & 0 & 1/2 & 0.2608(7) & 1.0 & 0.009(3) & 0.005(3) & 0.011(3) & 0.012(3) & 0 & 0 & 0\ \ Ce1 & 2a & 0 & 0 & 0 & 1.0 & 0.0063(4) & 0.0058(4) & 0.0058(4) & 0.0071(5) & 0.000 & 0.000 & 0.000\ Cu1 & 2a & 0 & 0 & 0.6314(3) & 0.84(4) & 0.0084(12) & 0.0086(13) & 0.0086(13) & 0.0080(19) & 0.000 & 0.000 & 0.000\ Al1 & 2a & 0 & 0 & 0.6314(3) & 0.16(4) & 0.0084(12) & 0.0086(13) & 0.0086(13) & 0.0080(19) & 0.000 & 0.000 & 0.000\ Al2 & 2a & 0 & 0 & 0.4040(8) & 1.0 & 0.0092(15) & 0.0062(17) & 0.0062(17) & 0.015(3) & 0.000 & 0.000 & 0.000\ Al3 & 4b & 0 & 1/2 & 0.2489(6) & 0.94(4) & 0.0081(13) & 0.008(3) & 0.009(3) & 0.0079(14) & 0.000 & 0.000 & 0.000\ \ Ce1 & 2a & 0 & 0 & 0 & 1.0 & 0.0086(14) & 0.0084(12) & 0.0084(12) & 0.0090(19) & 0.00000 & 0.00000 & 0.00000\ Pt1 & 2a & 0 & 0 & 0.6364(3) & 0.816(10) & 0.0107(11) & 0.0127(10) & 0.0127(10) & 0.0067(13) & 0.00000 & 0.00000 & 0.00000\ Al1 & 2a & 0 & 0 & 0.6364(3) & 0.184(10) & 0.0107(11) & 0.0127(10) & 0.0127(10) & 0.0067(13) & 0.00000 & 0.00000 & 
0.00000\ Pt2 & 2a & 0 & 0 & 0.3858(12) & 0.816(10) & 0.034(5) & 0.021(3) & 0.021(3) & 0.060(8) & 0.00000 & 0.00000 & 0.00000\ Al2 & 2a & 0 & 0 & 0.3858(12) & 0.184(10) & 0.034(5) & 0.021(3) & 0.021(3) & 0.060(8) & 0.00000 & 0.00000 & 0.00000\ Al3 & 4b & 0 & 1/2 & 0.2566(11) & 1.0 & 0.011(5) & 0.010(5) & 0.013(5) & 0.011(4) & 0.00000 & 0.00000 & 0.00000\ [lXXXXXX]{} Atom & Wyckoff & x/a & y/b & z/c & sof & u$_{eq}$\ \ Ce1 & 2a & 0 & 0 & 0 & 1.0 & 0.01266(30)\ Ce2 & 8d & 0.3333(4) & 0 & 0.0189(9) & 1.0 & 0.0094(15)\ Ce3 & 8c & 0.3291(4) & 0.32918 & 0.012(2) & 1.0 & 0.0129(10)\ Pd1 & 2a & 0 & 0 & 0.649(2) & 1.0 & 0.0086(17)\ Pd2 & 8d & 0.5 & 0.1653(5) & 0.875(2) & 1.0 & 0.0139(25)\ Pd3 & 8c & 0.1696(5) & 0.16963 & 0.155(2) & 0.65(2) & 0.0103(19)\ Al1 & 8c & 0.1696(5) & 0.16963 & 0.155(2) & 0.34(2) & 0.0103(19)\ Al2 & 2a & 0 & 0 & 0.422(3) & 0.78(7) & 0.0113\ Pd4 & 2a & 0 & 0 & 0.422(3) & 0.21(7) & 0.0113\ Al3 & 4b & 0.5 & 0 & 0.262(6) & 1.0 & 0.0076(88)\ Al4 & 8d & 0.5 & 0.1687(9) & 0.102(2) & 0.80(3) & 0.0190(57)\ Pd5 & 8d & 0.5 & 0.1687(9) & 0.102(2) & 0.19(3) & 0.0190(57)\ Al5 & 8d & 0.5 & 0.339(3) & 0.257(5) & 1.0 & 0.0089(76)\ Al6 & 8d & 0.159(3) & 0 & 0.271(5) & 1.0 & 0.0152(89)\ Al7 & 8c & 0.154(2) & 0.15476 & 0.882(4) & 1.0 & 0.0241(63)\ Al8 & 16e & 0.347(2) & 0.168(2) & 0.260(3) & 1.0 & 0.0076(51)\ \ Ce1 & 2a & 0 & 0 & 0 & 1.0 & 0.0316(8)\ Pd1 & 2a & 0 & 0 & 0.6247(4) & 1.0 & 0.0410(14)\ Al1 & 2a & 0 & 0 & 0.3488(12) & 1.0 & 0.0114(11)\ Al2 & 4b & 0 & 1/2 & 0.2579(8) & 1.0 & 0.0114(11)\
--- abstract: 'Recently it has been found that models with at least two lifetimes have to be considered when analyzing the angle resolved photoemission data in the nodal region of the cuprates \[T. Kondo et al., Nat. Commun. 6, 7699 (2015)\]. In this paper we compare two such models. First we show that the phenomenological model used by Kondo et al. violates the sum rule for the occupation number. Next we consider the recently proposed model of the so-called Dynes superconductors, wherein the two lifetimes measure the strengths of pair-conserving and pair-breaking processes. We demonstrate that the model of the Dynes superconductors is fully consistent with known exact results and we study in detail the resulting spectral functions. Finally, we show that the spectral functions in the nodal region of the cuprates can be fitted well by the model of the Dynes superconductors.' author: - František Herman and Richard Hlubina title: 'Consistent two-lifetime model for spectral functions of superconductors ' --- Introduction ============ Recent experimental progress in angle resolved photoemission spectroscopy (ARPES)[@Hashimoto14] makes it possible not only to determine the position of features in the electron spectral function, but also to study more subtle issues such as the spectral lineshapes. At least for the conventional low-temperature superconductors, there does exist a theoretical technique which can address such issues - namely the Eliashberg theory,[@Marsiglio08] which allows for strong coupling between the electrons and bosonic collective modes. However, it is not obvious whether this type of theory is applicable to the cuprates. Moreover, even conventional superconductors may possess complicated phonon spectra and/or they can exhibit substantial elastic scattering,[@Szabo16] both of which complicate the Eliashberg analysis. 
For all these reasons, it is desirable to have a simple and generic theory which does take finite quasiparticle lifetimes in superconductors into account. Let us start by noting that spectral functions of a BCS superconductor in the presence of elastic pair-conserving scattering can be found even in textbooks.[@Zhu04; @Marsiglio08] However, it is well known that the density of states (or the so-called tomographic density of states in the case of anisotropic superconductors[@Reber12]) implied by such spectral functions exhibits a full spectral gap consistent with the Anderson theorem. On the other hand, experimentally, the gap is quite often only partial[@Szabo16; @Reber12] and the tunneling density of states is better described by the phenomenological Dynes formula.[@Dynes78; @White86] This means that, in order to take the non-trivial density of states into account, a second type of scattering processes - which are not subject to the Anderson theorem - also has to be considered. Two-lifetime phenomenology of precisely this type has in fact been applied quite recently,[@Kondo15] with the aim of parameterizing the high-resolution ARPES data in the nodal region of the cuprates. The goals of this paper are twofold. First, in Section 2 we demonstrate that the model used in Ref.  can be cast into a form consistent with the generalized Eliashberg theory, and that it exhibits several attractive features. However, we also show that the resulting $2\times 2$ Nambu-Gor’kov propagator violates the sum rule for the occupation number and therefore the model used in Ref.  should be discarded. Our second goal is to demonstrate that, nevertheless, a fully consistent two-lifetime phenomenology for superconductors does exist. To this end we consider the recently proposed model of the so-called Dynes superconductors,[@Herman16] wherein the two-lifetime phenomenology results as a consequence of taking into account both the pair-conserving and the pair-breaking scattering processes. 
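The announced sum-rule violation can be previewed numerically. In the normal state the model of Ref.  reduces (see Section 2) to Lorentzian spectral functions of different widths $\Gamma_1$ and $\Gamma_0$ for electrons and holes, and the $T=0$ occupation sum rule $\int_0^\infty d\omega\,[A_{11}+A_{22}]=1$ then fails. A minimal numerical sketch; the parameter values are illustrative only:

```python
import numpy as np

def trapezoid(y, x):
    # numpy-version-agnostic trapezoidal rule
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2.0)

# Normal-state spectral functions of the phenomenological model (Section 2):
# A11 and A22 are Lorentzians of different widths Gamma_1 and Gamma_0.
eps, G1, G0 = 1.0, 0.2, 2.0              # illustrative values only

w = np.linspace(0.0, 500.0, 1_000_001)   # T = 0: only omega > 0 contributes
A11 = (G1/np.pi)/((w - eps)**2 + G1**2)
A22 = (G0/np.pi)/((w + eps)**2 + G0**2)
total = trapezoid(A11 + A22, w)

# closed forms quoted in Section 2
closed = 1.0 + (np.arctan(eps/G1) - np.arctan(eps/G0))/np.pi
assert abs(total - closed) < 5e-3        # numerics agrees with the arctan formulas
assert abs(closed - 1.0) > 0.25          # sum rule violated whenever Gamma_0 != Gamma_1
```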
In Section 3 we present detailed predictions for the spectral functions of the Dynes superconductors and we explicitly demonstrate the applicability of this approach to the low-temperature ARPES data in the nodal region of the cuprates. Furthermore, in the Appendices we show that the model of the Dynes superconductors is fully consistent with known exact results, and we present explicit formulas for the momentum distribution functions[@Campuzano04] within the Eliashberg theory. Finally, in Section 4 we present our conclusions. The model used by Kondo et al. ============================== Following previous theoretical suggestions,[@Norman98; @Chubukov07] the authors of Ref.  fit their high resolution ARPES data in the nodal region of optimally doped and overdoped Bi2212 samples to spectral functions derived from the phenomenological self energy $$\Sigma(\mathbf{k},\omega) = -i \Gamma_1 + \frac{\overline{\Delta}^2} {\omega + \varepsilon_{\mathbf{k}}+i\Gamma_0} \label{eq:SelfEnergy}$$ and they interpret the scattering rates $\Gamma_1$ and $\Gamma_0$ as the single-particle and pair scattering rates, respectively. We shall comment on these identifications later. In order to demonstrate the physical meaning of the phenomenological self energy Eq. , let us first note that it implies that the electron Green function in the superconducting state can be written in the form $$G(\mathbf{k},\omega) = \frac{(\omega + i\gamma) + (\varepsilon_{\mathbf{k}} + i\gamma')} {(\omega + i\gamma)^2 - (\varepsilon_{\mathbf{k}} + i\gamma')^2 - \overline{\Delta}^2}, \label{eq:kondo_green}$$ where we have introduced $\gamma = (\Gamma_0 + \Gamma_1)/2$ and $\gamma' = (\Gamma_0 - \Gamma_1)/2$. According to Ref. , throughout the superconducting phase $\gamma'<0$. With increasing temperature $|\gamma'|$ decreases and it vanishes at the critical temperature. The main observation of this Section is that Eq.  
should form the upper left component of the general $2\times 2$ Nambu-Gor’kov Green function for a superconductor: $$\label{eq:G_g} {\hat G}(\mathbf{k},\omega) = \frac{\omega Z(\mathbf{k},\omega)\tau_0 + \left[\varepsilon_{\mathbf{k}} + \chi(\mathbf{k},\omega)\right]\tau_3 + \phi(\mathbf{k},\omega)\tau_1}{\left[\omega Z(\mathbf{k},\omega)\right]^2 - \left[\varepsilon_{\mathbf{k}} + \chi(\mathbf{k},\omega)\right]^2 - \phi(\mathbf{k},\omega)^2},$$ where $\tau_i$ are the Pauli matrices, $Z(\mathbf{k},\omega)$ is the wave-function renormalization, and $\phi(\mathbf{k},\omega)$ is the anomalous self-energy. The function $\chi(\mathbf{k},\omega)$ describes the renormalization of the single-particle spectrum and it vanishes in a particle-hole symmetric theory; that is why it is usually neglected. For the sake of completeness, let us mention that the matrix $\tau_2$ does not enter Eq. , because we work in a gauge with a real order parameter. Since in the high-frequency limit the functions $\chi$, $\phi$, and $\omega(Z-1)$ should stay at most constant, Eq.  in this limit can be uniquely interpreted in terms of the general expression Eq.  with $$Z(\omega)=1+i\gamma/\omega, \quad \chi(\omega) = i\gamma', \quad \phi(\omega) = \overline{\Delta}, \label{eq:Z}$$ where we have chosen a real anomalous self-energy $\phi$. One checks readily that Eq.  reproduces Eq.  for all frequencies. The finite value of $\chi$ is unusual but seems to be attractive, since the cuprates, being doped Mott insulators, might be expected to break the particle-hole symmetry. Note also that all three functions $Z$, $\chi$ and $\phi$ do not depend on ${\bf k}$, which is the standard behavior. It is well known that the $2\times 2$ formalism leads to a redundant description, and therefore the Green function has to satisfy additional constraints. These constraints are most clearly visible in the Matsubara formalism, therefore let us reformulate Eq.  
on the imaginary axis, allowing explicitly only for frequency-dependent functions $Z_n=Z(i\omega_n)$, $\chi_n=\chi(i\omega_n)$, and $\phi_n=\phi(i\omega_n)$: $${\hat G}(\mathbf{k},\omega_n) = -\frac{i\omega_n Z_n\tau_0 + \big(\varepsilon_{\mathbf{k}} + \chi_n\big)\tau_3 + \phi_n\tau_1}{\big(\omega_n Z_n\big)^2 + \big(\varepsilon_{\mathbf{k}} + \chi_n\big)^2 + \phi_n^2}. \label{eq:matsubara_green}$$ Due to the redundancy of the $2\times 2$ formalism, singlet superconductors have to exhibit the following symmetry: $$G_{22}(\mathbf{k},\omega_n)=-G_{11}(\mathbf{k},-\omega_n). \label{eq:symmetry}$$ Note that the functions Eq.  read as $Z_n=1+i\gamma/|\omega_n|$, $\chi_n = i\gamma'$, and $\phi_n = \overline{\Delta}$ on the imaginary axis. It is easy to see that when these expressions are plugged into Eq. , the Green function does satisfy Eq. . An additional attractive feature of the phenomenology Eq.  is that it leads (also for finite values of $\gamma^\prime$) to the Dynes formula for the tunneling density of states, $$N(\omega)=N_0{\rm Re}\left[\frac{\omega+i\gamma} {\sqrt{(\omega+i\gamma)^2-{\bar \Delta}^2}}\right], \label{eq:dynes}$$ in agreement with the experimental findings of Ref. . The square root has to be taken so that its imaginary part is positive and we keep this convention throughout this paper. In Eq. (\[eq:dynes\]) $N_0$ denotes the normal-state density of states. Note that, although the particle-hole symmetry is broken due to $\chi\neq 0$, $N(\omega)$ is an even function of $\omega$. The broken particle-hole symmetry is clearly visible already in the normal state with $\overline{\Delta}=0$, in which case $$G_{11}(\mathbf{k},\omega)= \frac{1}{\omega - \varepsilon_{\mathbf{k}} + i\Gamma_1}, \quad G_{22}(\mathbf{k},\omega)= \frac{1}{\omega + \varepsilon_{\mathbf{k}} + i\Gamma_0}. \label{eq:normal}$$ These results show that $\Gamma_0$ and $\Gamma_1$ should not be interpreted as pair and single-particle scattering rates as has been done in Ref. 
, but rather as the scattering rates for the holes and for the electrons, respectively. ![Spectral function $A_{11}(\mathbf{k},\omega)$ of an electron inside the Fermi sea according to the model . The widths of the electron-like branch at $\omega\approx -E_{\bf k}$ and of the hole-like branch at $\omega\approx E_{\bf k}$, where $E_{\bf k}$ is the quasiparticle energy Eq. , are essentially determined by $\Gamma_1$ and $\Gamma_0$, respectively.[]{data-label="fig:kondo"}](fig1.pdf){width="7.0cm"} Unfortunately, Eqs.  turn out to be mutually inconsistent, but in a quite subtle way. In order to show this, let us introduce the spectral functions $A_{ii}(\mathbf{k},\omega)$ with $i=1,2$, corresponding to the Green functions $G_{ii}(\mathbf{k},\omega)$. Applying standard procedures, the following exact sum rules can be established, $$\begin{aligned} \int_{-\infty}^{\infty}\frac{d\omega}{1+e^{-\omega/T}} A_{11}(\mathbf{k},\omega) &=& \langle c^{}_{{\bf k}\uparrow}c^\dagger_{{\bf k}\uparrow}\rangle =1-n^{}_{{\bf k}\uparrow}, \\ \int_{-\infty}^{\infty}\frac{d\omega}{1+e^{-\omega/T}} A_{22}(\mathbf{k},\omega) &=& \langle c^\dagger_{-{\bf k}\downarrow}c^{}_{-{\bf k}\downarrow}\rangle =n^{}_{-{\bf k}\downarrow}.\end{aligned}$$ But in a singlet superconductor we have $n^{}_{{\bf k}\uparrow}=n^{}_{-{\bf k}\downarrow}$, and therefore the following exact relation should hold: $$\int_{-\infty}^{\infty}\frac{d\omega}{1+e^{-\omega/T}} \left[A_{11}(\mathbf{k},\omega)+A_{22}(\mathbf{k},\omega)\right]=1. \label{eq:sumrule}$$ Making use of Eqs.  
at $T=0$, the integrals on the left-hand side can be taken easily and the results are $$\begin{aligned} \int_{0}^{\infty}d\omega A_{11}(\mathbf{k},\omega) &=& \frac{1}{2} +\frac{1}{\pi}\arctan\left(\frac{\varepsilon_{\mathbf{k}}}{\Gamma_1}\right), \\ \int_{0}^{\infty}d\omega A_{22}(\mathbf{k},\omega) &=& \frac{1}{2} -\frac{1}{\pi}\arctan\left(\frac{\varepsilon_{\mathbf{k}}}{\Gamma_0}\right).\end{aligned}$$ It can be seen readily that, if $\Gamma_0\neq \Gamma_1$, the sum rule  is violated by these results. One might have the impression that the particle-hole asymmetry which causes the sum rule violation is an artifact of our generalization of the Green function  to the matrix form . That this is not the case can be seen by plotting the spectral function directly for Eq. , see Fig. \[fig:kondo\], which clearly shows that the electron- and hole-like branches exhibit different scattering rates. We conclude that the phenomenology  is internally consistent only if $\Gamma_0=\Gamma_1=\gamma$, in which case $\gamma^\prime=0$. But then the Green function  has a simple two-pole structure with poles at $\omega=\pm E_{\bf k}-i\gamma$ where $$E_{\bf k}=\sqrt{\varepsilon_{\bf k}^2+{\overline{\Delta}}^2}, \label{eq:dispersion}$$ implying that the spectral function is a sum of two Lorentzians. However, the authors of Ref.  stress that the experimentally observed lineshapes are asymmetric. This means then that the phenomenology  is not applicable to the nodal spectral functions of the cuprates. Dynes superconductors ===================== Very recently, a consistent two-lifetime phenomenology for superconductors has been derived within the coherent potential approximation, assuming a Lorentzian distribution of pair-breaking fields and an arbitrary distribution of pair-conserving disorder.[@Herman16] If we denote the pair-breaking and pair-conserving scattering rates as $\Gamma$ and $\Gamma_s$, respectively, then the result of Ref.  
for the Nambu-Gor’kov Green function of the disordered superconductor can be written as $$\hat{G}({\bf k},\omega)= \frac{(1+i\Gamma_s/\Omega)\left[(\omega+i\Gamma)\tau_0 +{\bar \Delta}\tau_1\right] +\varepsilon_{\bf k}\tau_3} {(\Omega+i\Gamma_s)^2-\varepsilon_{\bf k}^2}, \label{eq:dynes_green}$$ where $$\Omega(\omega)=\sqrt{(\omega+i\Gamma)^2-{\bar \Delta}^2}. \label{eq:dynes_omega}$$ Some useful properties of the function $\Omega(\omega)$ are described in Appendix A. The tunneling density of states implied by the Green function Eq.  is described by the Dynes formula Eq.  with $\gamma=\Gamma$, and that is why superconductors described by Eq.  have been called Dynes superconductors in Ref. . In this work we shall keep this term. It is worth pointing out that in the absence of pair-breaking processes, Eq.  reproduces the textbook results for pair-conserving scattering, see e.g. Refs. . On the other hand, in the opposite limit $\Gamma_s=0$ when only pair-breaking processes are present, Eq.  coincides with the phenomenology  in the physically consistent case with $\Gamma_0=\Gamma_1=\Gamma$. Moreover, in the normal state with $\overline{\Delta}=0$ the Green function Eq.  becomes diagonal and its matrix elements are $$G_{11}(\mathbf{k},\omega)= \frac{1}{\omega - \varepsilon_{\mathbf{k}} + i\Gamma_n}, \quad G_{22}(\mathbf{k},\omega)= \frac{1}{\omega + \varepsilon_{\mathbf{k}} + i\Gamma_n}, \label{eq:dynes_normal}$$ where $\Gamma_n=\Gamma+\Gamma_s$ is the total scattering rate, which involves both pair-breaking and pair-conserving scattering processes. Note that Eq.  does not exhibit the pathologies implied by Eq. . One checks readily that the Green function Eq.  is analytic in the upper half-plane of complex frequencies, as required by causality, and that $\hat{G}({\bf k},\omega)\propto \tau_0/|\omega|$ for $|\omega|\rightarrow \infty$. In Appendix B, we present an explicit proof that Eq. 
satisfies the well-known sum rules for the zero-order moments of the electron spectral function, in particular also the sum rule Eq. . Moreover, in Appendix D we prove that the electron and hole spectral functions are positive-definite, as required by general considerations. In view of these observations, we believe that Eq.  represents the simplest internally consistent Green function for a superconductor with simultaneously present pair-breaking and pair-conserving scattering processes. This generic BCS-like Green function is parameterized by three energy scales: the scattering rates $\Gamma$ and $\Gamma_s$, and the gap parameter $\overline{\Delta}$. In what follows we present a detailed analysis of its spectral properties. Spectral functions of the Dynes superconductor for an electron with momentum ${\bf k}$ fixed to lie inside the Fermi sea are shown in Fig. \[fig:arpes1\]. The BCS quasiparticle peaks at $\omega\approx\pm E_{\bf k}$ are seen to be broadened by the total scattering rate $\Gamma_n$, irrespective of the ratio between pair-breaking and pair-conserving scattering processes. The relative weight of the two types of processes matters only in the vicinity of the chemical potential. For $\Gamma=0$ a full spectral gap appears for $|\omega|<\overline{\Delta}$, in agreement with the Anderson theorem, and additional peaks appear in the spectral function at $\omega=\pm \overline{\Delta}$. After switching on a finite pair-breaking rate $\Gamma\neq 0$, the spectral gap starts to fill in, and at the same time the peaks at $\omega=\pm \overline{\Delta}$ get smeared away. Finally, when $\Gamma=\Gamma_n$ and the pair-conserving processes disappear completely, the spectral function is given by a sum of two Lorentzians centered at $\omega=\pm E_{\bf k}$. Spectral functions for an electron directly at the Fermi surface, $\varepsilon_{\bf k}=0$, are somewhat different and they are shown in Fig. \[fig:arpes2\]. 
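The limiting behaviour quoted above is straightforward to check numerically. A sketch of the spectral function $A_{11}=-\pi^{-1}{\rm Im}\,G_{11}$ implied by the Dynes Green function, assuming the branch convention ${\rm Im}\,\Omega>0$; in the limit $\Gamma_s=0$ it reduces to two Lorentzians of width $\Gamma$ at $\omega=\pm E_{\bf k}$, with the standard BCS weights $u^2,v^2=(E_{\bf k}\pm\varepsilon_{\bf k})/(2E_{\bf k})$ (the weights are not spelled out in the text and are added here for completeness):

```python
import numpy as np

def dynes_A11(w, eps, Dbar, Gam, Gams):
    """Spectral function -Im(G11)/pi of the Dynes Green function."""
    Om = np.sqrt((w + 1j*Gam)**2 - Dbar**2 + 0j)
    Om = np.where(Om.imag < 0, -Om, Om)          # branch with Im(Omega) > 0
    G11 = ((1 + 1j*Gams/Om)*(w + 1j*Gam) + eps)/((Om + 1j*Gams)**2 - eps**2)
    return -G11.imag/np.pi

w = np.linspace(-10.0, 10.0, 40_001)
eps, Dbar, Gam = 0.6, 0.8, 0.1                   # illustrative values
E = np.hypot(eps, Dbar)                          # quasiparticle energy E_k
u2, v2 = (E + eps)/(2*E), (E - eps)/(2*E)        # BCS coherence factors
lor = lambda x, G: (G/np.pi)/(x**2 + G**2)

# Gamma_s = 0: exactly a weighted sum of two Lorentzians of width Gamma
A = dynes_A11(w, eps, Dbar, Gam, 0.0)
assert np.allclose(A, u2*lor(w - E, Gam) + v2*lor(w + E, Gam))
```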
The difference is caused by the fact that the quasiparticle energies $\pm E_{\bf k}$ in this case coincide with $\pm \overline{\Delta}$. Therefore only two peaks are present in the spectral function, in contrast to the general case with four peaks. However, the rest of the phenomenology can be simply related to the case $\varepsilon_{\bf k}\neq 0$: the high-energy form of the spectral functions is controlled exclusively by the total scattering rate $\Gamma_n$, whereas finite pair-breaking fills in the spectral gap and smears the peaks at $\omega=\pm \overline{\Delta}$. In Appendix C we complement the discussion of electron spectral functions by studying the so-called momentum distribution functions.[@Campuzano04] In addition to presenting explicit formulas valid for any Eliashberg-type superconductor with only frequency-dependent functions $Z(\omega)$ and $\phi(\omega)$, we also show that, making use of the momentum distribution functions, one can determine the total scattering rate $\Gamma_n$ of a Dynes superconductor in an alternative way. In Fig. \[fig:experiment\] we demonstrate that Eq.  can fit the experimentally observed symmetrized spectral functions in the nodal region of the cuprates with a quality at least comparable to that of Eq. . The number of fitting parameters is the same for both fits: two scattering rates, the gap $\overline{\Delta}$, and the energy scale $\Lambda$ which determines the phenomenological background $|\omega|/\Lambda^2$. This type of background description has been used in all fits presented in Ref. . We have determined the fitting parameters by the standard least-squares technique in the interval from -100 meV to 100 meV; their values are $\overline{\Delta}=25$ meV, $\Gamma=3.7$ meV, $\Gamma_s=16$ meV, and $\Lambda=103$ meV for the fit using Eq. . On the other hand, we have found $\overline{\Delta}=27$ meV, $\Gamma_0=0$ meV, $\Gamma_1=12$ meV, and $\Lambda=87$ meV for the fit using Eq. . 
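For orientation, the first of these model curves is easy to reconstruct from the quoted parameters. A sketch assuming $\varepsilon_{\bf k}=0$ (symmetrized spectra are taken at the Fermi momentum) and the additive background $|\omega|/\Lambda^2$ described above; the grid and the consistency checks are ours, not part of the original fit:

```python
import numpy as np

# Quoted fit parameters (in meV) for the Dynes-superconductor fit
Dbar, Gam, Gams, Lam = 25.0, 3.7, 16.0, 103.0

def model(w):
    """Symmetrized EDC model at eps_k = 0: A11 plus the |w|/Lambda^2 background."""
    Om = np.sqrt((w + 1j*Gam)**2 - Dbar**2 + 0j)
    Om = np.where(Om.imag < 0, -Om, Om)          # Im(Omega) > 0 convention
    A11 = -((w + 1j*Gam)/(Om*(Om + 1j*Gams))).imag/np.pi
    return A11 + np.abs(w)/Lam**2

w = np.linspace(-100.0, 100.0, 2001)             # fitting window of the text
y = model(w)
assert np.allclose(y, y[::-1])                   # even in omega, as for symmetrized data
i0, ig = np.argmin(np.abs(w)), np.argmin(np.abs(w - Dbar))
assert y[ig] > 3*y[i0]                           # gap only partially filled (small Gamma)
```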
Both fits find roughly the same value of the gap $\overline{\Delta}$ and of the background parameter $\Lambda$, but the scattering rates turn out to be quite different. In view of the latter observation it seems worthwhile to repeat the analysis of Ref. , but with the ansatz Eq.  for the electron Green function. It remains to be seen whether this type of analysis can also be applied at temperatures above $T_c$, and what is the resulting temperature dependence of the scattering rates $\Gamma$ and $\Gamma_s$. The small value of the pair-breaking rate $\Gamma$ with respect to the large pair-conserving rate $\Gamma_s$ implied by Fig. \[fig:experiment\] is consistent with the observation that the concept of the tomographic density of states is useful in the analysis of the ARPES data.[@Reber12; @Reber15] Since in an anisotropic superconductor large-angle scattering is pair-breaking, the smallness of $\Gamma$ implies that the dominant scattering processes (at least in the nodal region and at low temperatures) have to be of the forward-scattering type. The importance of forward-scattering processes in the nodal region has been confirmed recently also by an analysis of the momentum distribution curves.[@Hong14] As for the microscopic origin of the forward scattering, it has been argued that it can be caused by elastic scattering on disorder located outside the CuO$_2$ planes.[@Abrahams00] Other explanations include scattering on (quasi) static long-range fluctuations, perhaps due to competing order, or scattering on low-energy long-wavelength emergent gauge fields.[@Lee06] These different scenarios can be distinguished by their different dependences on temperature and/or Fermi-surface location, and further experimental work is needed to discriminate between them. Conclusions =========== To summarize, we have shown that the phenomenological self-energy Eq. , which has been proposed theoretically in Refs.  and applied recently in Ref. 
, is internally consistent only in the case when $\Gamma_0=\Gamma_1$; in this case the electron spectral function in the superconducting state is a sum of two Lorentzians. The simplest consistent genuine two-lifetime Green function of a superconductor is given by Eq. . This model depends on two scattering rates: the pair-breaking scattering rate $\Gamma$ and the pair-conserving scattering rate $\Gamma_s$. The Green function Eq.  implies that the density of states is described by the Dynes formula Eq.  with $\gamma=\Gamma$ and that the electron spectral functions exhibit more structure than might be expected naively, see Figs. \[fig:arpes1\],\[fig:arpes2\]. The Green function Eq.  is analytic in the upper half-plane, it has the correct large-frequency asymptotics, its diagonal spectral functions are positive-definite, and it satisfies the exact sum rules Eq.  and Eq. . Moreover, in the three limiting cases of either $\Gamma=0$, or $\Gamma_s=0$, or $\overline{\Delta}=0$, it reduces to the well-known results. Therefore, although Eq.  was originally derived only for a special distribution of pair-breaking fields within the coherent potential approximation, we believe that it represents a [*generic*]{} two-lifetime Green function of a superconductor. Our results provide an (in principle) straightforward recipe for extracting the scattering rates $\Gamma$ and $\Gamma_n=\Gamma+\Gamma_s$ from experimental data: the pair-breaking scattering rate $\Gamma$ is best determined from the tunneling (or, in anisotropic superconductors, tomographic[@Reber12]) density of states, whereas the total scattering rate $\Gamma_n$ may be extracted from the widths of the quasiparticle peaks in spectral functions, see Fig. \[fig:arpes1\]. 
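For the first step of this recipe, a single number suffices: at $\omega=0$ the Dynes formula gives $N(0)/N_0=\Gamma/\sqrt{\Gamma^2+\overline{\Delta}^2}$, so the pair-breaking rate can be read off from the zero-bias density of states once $\overline{\Delta}$ is known. A minimal sketch with illustrative numbers:

```python
import numpy as np

def dynes_dos(w, Dbar, Gam):
    """Dynes density of states N(omega)/N0, branch with Im(sqrt) > 0."""
    s = np.sqrt((w + 1j*Gam)**2 - Dbar**2 + 0j)
    s = np.where(s.imag < 0, -s, s)
    return ((w + 1j*Gam)/s).real

Dbar, Gam = 1.0, 0.15                     # illustrative values
r = float(dynes_dos(0.0, Dbar, Gam))      # zero-bias ratio N(0)/N0
assert np.isclose(r, Gam/np.hypot(Gam, Dbar))

# invert r = Gamma/sqrt(Gamma^2 + Dbar^2) for Gamma
Gam_est = Dbar*r/np.sqrt(1.0 - r**2)
assert np.isclose(Gam_est, Gam)
```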
Alternatively, as shown in Appendix C, the scattering rate $\Gamma_n$ can be determined from the width of the momentum distribution functions, and it enters also the analysis of optical conductivity.[@Herman17] Obviously, description of superconductors making use of Eq.  can be quantitatively correct only at energies smaller than the typical boson energies of the studied system. At higher energies, application of a full-fledged Eliashberg-type theory[@Marsiglio08] - but extended so as to allow for processes leading to Eq.  at low energies - is unavoidable. For completeness, in Appendix C we have described a procedure which, starting from the assumption of only frequency-dependent Eliashberg functions $Z(\omega)$ and $\Delta(\omega)$, allows for their complete determination from ARPES data by combining two approaches: the momentum distribution technique and the tomographic density of states. Finally, in Fig. \[fig:experiment\] we have demonstrated that the low-temperature ARPES data in the nodal region of the cuprates can be fitted well using Eq. . Our results confirm previous claims about the importance of forward-scattering processes in this region, but identification of their physical origin will require further detailed angle- and temperature-dependent studies. Properties of the function $\Omega(\omega)$ =========================================== Let us decompose the function $\Omega(\omega)$ defined by Eq.  into its real and imaginary parts, $\Omega=\Omega_1+i\Omega_2$. One finds readily that $\Omega_{1,2}$ should satisfy the relations $$\Omega_1\Omega_2=\omega \Gamma, \qquad \Omega_1^2-\Omega_2^2=\nu^2, \label{eq:omega12}$$ where $\nu^2=\omega^2-\overline{\Delta}^2-\Gamma^2$. 
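These relations can be verified directly; a minimal numerical sketch, with the square root taken so that ${\rm Im}\,\Omega>0$ and illustrative parameter values:

```python
import numpy as np

Dbar, Gam = 1.0, 0.3                       # illustrative values
w = np.linspace(-5.0, 5.0, 2001)
nu2 = w**2 - Dbar**2 - Gam**2

Om = np.sqrt((w + 1j*Gam)**2 - Dbar**2 + 0j)
Om = np.where(Om.imag < 0, -Om, Om)        # branch with Im(Omega) > 0
Om1, Om2 = Om.real, Om.imag

assert np.allclose(Om1*Om2, w*Gam)         # Omega_1*Omega_2 = omega*Gamma
assert np.allclose(Om1**2 - Om2**2, nu2)   # Omega_1^2 - Omega_2^2 = nu^2
assert np.all(Om2 > 0)                     # chosen branch
```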
Our sign convention leads then to the following explicit expressions for $\Omega_{1,2}$: $$\begin{aligned} \Omega_1(\omega)&=&{\rm sgn}(\omega) \sqrt{\left[\sqrt{\nu^4+4\omega^2\Gamma^2}+\nu^2\right]/2}, \\ \Omega_2(\omega)&=& \sqrt{\left[\sqrt{\nu^4+4\omega^2\Gamma^2}-\nu^2\right]/2}.\end{aligned}$$ Note that $\Omega_1(\omega)$ is an odd function of $\omega$, while $\Omega_2(\omega)$ is positive definite and even. A straightforward calculation shows that for $\omega>0$ the following inequalities are valid: $$\Omega_1\leq \omega, \qquad \Omega_2\geq \Gamma. \label{eq:inequalities_2}$$ These inequalities will be used in Appendix D. It is worth pointing out that the function $\Omega_1(\omega)$ characterizing the Dynes superconductor is in principle directly measurable in low-temperature tunneling experiments. In fact, it is well known that in such experiments the derivative of the current-voltage characteristics, $dI/dV$, is proportional to the tunneling density of states $N(\omega)$ with $\omega=eV$. But, since $N(\omega)\propto d\Omega_1/d\omega$, the function $\Omega_1(\omega)$ is proportional to the measured function $I=I(V)$. Sum rules for the Dynes superconductors ======================================= In this Appendix we prove that Eq.  satisfies the sum rules for the zero-order moments of the electron spectral function. To this end, let us introduce an auxiliary complex function $F(\omega)$ of the real frequency $\omega$, $$F(\omega)=\varepsilon_{\bf k}^2- \left[\Omega(\omega)+i\Gamma_s\right]^2.$$ Note that the function $F(\omega)$ depends also on the momentum ${\bf k}$, but for the sake of simplicity this dependence will not be displayed explicitly. 
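The explicit expressions, the defining relations, and the inequalities above can all be verified numerically. A minimal sketch (in Python, symbols as in the text):

```python
import numpy as np

def omega12(omega, delta, gamma):
    """Explicit expressions for Omega_1 and Omega_2 with the sign
    convention of the text; nu^2 = omega^2 - delta^2 - gamma^2."""
    nu2 = omega ** 2 - delta ** 2 - gamma ** 2
    root = np.sqrt(nu2 ** 2 + 4.0 * omega ** 2 * gamma ** 2)
    o1 = np.sign(omega) * np.sqrt(0.5 * (root + nu2))
    o2 = np.sqrt(0.5 * (root - nu2))
    return o1, o2
```

One checks readily that `o1 * o2` equals $\omega\Gamma$ and `o1**2 - o2**2` equals $\nu^2$ to machine precision, and that for $\omega>0$ indeed `o1 <= omega` and `o2 >= gamma`.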
Let us furthermore define the function $$H(\omega)=\ln F(\omega)=\ln |F(\omega)|+i\varphi(\omega).$$ In the second equality we have represented the complex function $F(\omega)=|F(\omega)|\exp\{i\varphi(\omega)\}$ in terms of its amplitude $|F(\omega)|$ and phase $\varphi(\omega)$ constrained to the interval $(-\pi,\pi)$. A plot of the real and imaginary parts of the function $H(\omega)$ is shown in Fig. \[fig:function\_f\]. Note that the phase $\varphi(\omega)$ is an odd function of frequency and its asymptotic values are $\varphi(\pm\infty)=\mp \pi$. Making use of the function $H(\omega)$, the Nambu-Gor’kov Green function  can be written in the following elegant form: $$\hat{G}({\bf k},\omega)= \frac{1}{2}\left[\frac{\partial H}{\partial\omega}\tau_0 -\frac{\partial H}{\partial\overline{\Delta}}\tau_1 -\frac{\partial H}{\partial\varepsilon_{\bf k}^{}}\tau_3\right]. \label{eq:dynes_elegant}$$ From here follows the explicit expression for the Nambu-Gor’kov spectral function, defined as usual by $\hat{A}({\bf k},\omega)= -\pi^{-1}{\rm Im}\hat{G}({\bf k},\omega)$: $$\hat{A}({\bf k},\omega)= \frac{1}{2\pi}\left[-\frac{\partial \varphi}{\partial\omega}\tau_0 +\frac{\partial \varphi}{\partial\overline{\Delta}}\tau_1 +\frac{\partial \varphi}{\partial\varepsilon_{\bf k}^{}}\tau_3\right]. \label{eq:dynes_a_elegant}$$ Equation  forms the starting point of our discussion of the sum rules. Using the oddness of the function $\varphi(\omega)$ and its asymptotic values, one finds readily that Eq.  implies the matrix equation $$\int_{-\infty}^\infty d\omega \hat{A}({\bf k},\omega)=\tau_0, \label{eq:sum_rule_1}$$ in perfect agreement with the well-known exact sum rule for the zero-order moment of the spectral function. Next we prove that Eq.  satisfies the exact sum rule Eq. .
Since ${\rm Tr}\hat{A}({\bf k},\omega)= -\pi^{-1}\partial \varphi/\partial\omega$, we have to prove that $$\int_{-\infty}^{\infty}\frac{d\omega}{1+e^{-\omega/T}} \frac{\partial \varphi}{\partial\omega}=-\pi.$$ By calculating the integral on the left-hand side by parts, our task reduces to proving the equality $$\int_{-\infty}^{\infty}\frac{d\omega}{4T \cosh^2(\omega/2T)} \varphi(\omega)=0.$$ But, since $\varphi(\omega)$ is odd, this last equality is trivially satisfied. Thus we have proven that the Dynes superconductors satisfy Eq. . For the sake of completeness, let us note that the full matrix form of the sum rule Eq.  reads $$\int_{-\infty}^\infty \frac{d\omega}{1+e^{-\omega/T}} \hat{A}({\bf k},\omega)= \left(\begin{array}{cc} 1-n_{\bf k} & b_{\bf k} \\ b_{\bf k} & n_{\bf k} \end{array} \right), \label{eq:sum_rule_2}$$ where $n_{\bf k}=n_{{\bf k}\uparrow}=n_{-{\bf k}\downarrow}$, $b_{\bf k}=\langle c^{}_{{\bf k}\uparrow}c^{}_{-{\bf k}\downarrow}\rangle =\langle c^\dagger_{-{\bf k}\downarrow}c^\dagger_{{\bf k}\uparrow}\rangle$, and the thermodynamic expectation values $n_{\bf k}$ and $b_{\bf k}$ are given by $$\begin{aligned} n_{\bf k}&=&\frac{1}{2}-\int_0^\infty \frac{d\omega}{2\pi} \frac{\partial \varphi}{\partial\varepsilon^{}_{\bf k}} \tanh\frac{\omega}{2T}, \\ b_{\bf k}&=&\int_0^\infty \frac{d\omega}{2\pi} \frac{\partial \varphi}{\partial\overline{\Delta}} \tanh\frac{\omega}{2T}.\end{aligned}$$ It should be pointed out that sum rules for higher-order moments of the spectral function which generalize Eqs. (\[eq:sum\_rule\_1\],\[eq:sum\_rule\_2\]) can also be derived, but their right-hand sides depend on the Hamiltonian of the problem. Such sum rules therefore do not provide useful checks of the phenomenological Green function Eq. .
Momentum distribution functions in the Eliashberg theory ======================================================== We have already noted that, within the Eliashberg theory, the functions $Z(\omega)$ and $\phi(\omega)$ usually depend only on frequency $\omega$ and are independent of the momentum ${\bf k}$. Quite some time ago, it was pointed out that in such cases it is useful to study the spectral function $A_{11}({\bf k},\omega)$ for fixed frequency $\omega$ as a function of the bare electron energy $\varepsilon_{\bf k}$, the so-called momentum distribution function.[@Campuzano04] To simplify the formulas, in this Appendix we will replace $A_{11}({\bf k},\omega)$ by $A(\varepsilon,\omega)$. Instead of the two complex functions $Z(\omega)$ and $\phi(\omega)$, let us introduce the following four real functions of frequency: $\widetilde{\omega}(\omega)$, $\widetilde{\gamma}(\omega)$, $\widetilde{\Omega}(\omega)$, and $\widetilde{\Gamma}(\omega)$: $$\begin{aligned} \omega Z&=&\widetilde{\omega}+i\widetilde{\gamma}, \nonumber \\ \sqrt{(\omega Z)^2-\phi^2}&=&\widetilde{\Omega}+i\widetilde{\Gamma}. \label{eq:tildes}\end{aligned}$$ To illustrate their symmetries and typical form, in Fig. \[fig:tildes\] we plot the functions $\widetilde{\omega}(\omega)$, $\widetilde{\gamma}(\omega)$, $\widetilde{\Omega}(\omega)$, and $\widetilde{\Gamma}(\omega)$ for a Dynes superconductor.
After a tedious but straightforward calculation the spectral function of a general Eliashberg superconductor can be written as $$\begin{aligned} A(\varepsilon,\omega)&=& \frac{1}{2} \left[\frac{\widetilde{\gamma}}{\widetilde{\Gamma}}+1\right] \delta_{\widetilde{\Gamma}}(\varepsilon-\widetilde{\Omega}) +\frac{1}{2} \left[\frac{\widetilde{\gamma}}{\widetilde{\Gamma}}-1\right] \delta_{\widetilde{\Gamma}}(\varepsilon+\widetilde{\Omega}) \nonumber \\ &+& \frac{1}{2} \left[\frac{\widetilde{\omega}}{\widetilde{\Omega}} -\frac{\widetilde{\gamma}}{\widetilde{\Gamma}}\right] \frac{4\pi\widetilde{\Omega}^2}{\widetilde{\Gamma}} \delta_{\widetilde{\Gamma}}(\varepsilon-\widetilde{\Omega}) \delta_{\widetilde{\Gamma}}(\varepsilon+\widetilde{\Omega}), \label{eq:momentum}\end{aligned}$$ where we have introduced the notation $$\delta_{\widetilde{\Gamma}}(x)=\frac{1}{\pi} \frac{\widetilde{\Gamma}}{x^2+\widetilde{\Gamma}^2}$$ for a Lorentzian with width $\widetilde{\Gamma}$. According to Eq. , the spectral function $A(\varepsilon,\omega)$, when viewed as a function of energy $\varepsilon$ at fixed frequency $\omega$, consists of three terms. The first two terms are Lorentzians, whereas the third term is a product of two Lorentzians. When the measured momentum distribution functions are fitted by Eq. , $\widetilde{\Omega}$ can be determined from the positions of the Lorentzians and $\widetilde{\Gamma}$ is given by their widths. Finally, from the relative weights of the three terms in Eq.  one can determine the ratios $\widetilde{\gamma}/\widetilde{\Gamma}$ and $\widetilde{\omega}/\widetilde{\Omega}$. With all four functions $\widetilde{\omega}(\omega)$, $\widetilde{\gamma}(\omega)$, $\widetilde{\Omega}(\omega)$, and $\widetilde{\Gamma}(\omega)$ known, one obtains full information about the superconducting state. This idea has been used in an impressive set of recent papers, see Ref. and references therein. 
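The three-term structure of $A(\varepsilon,\omega)$ at fixed $\omega$ is straightforward to evaluate numerically. A sketch (in Python; `w_t`, `g_t`, `O_t`, `G_t` stand for the tilded $\omega$, $\gamma$, $\Omega$, $\Gamma$, and the hypothetical parameter values in the test are chosen to satisfy the positivity condition of Appendix D):

```python
import numpy as np

def lorentzian(x, width):
    """Normalized Lorentzian delta_Gamma(x) of the text."""
    return width / (np.pi * (x ** 2 + width ** 2))

def momentum_distribution(eps, w_t, g_t, O_t, G_t):
    """A(eps, omega) at fixed omega: two Lorentzians centered at
    eps = +-Omega_tilde of width Gamma_tilde, plus the product term."""
    lp = lorentzian(eps - O_t, G_t)
    lm = lorentzian(eps + O_t, G_t)
    return (0.5 * (g_t / G_t + 1.0) * lp
            + 0.5 * (g_t / G_t - 1.0) * lm
            + 0.5 * (w_t / O_t - g_t / G_t)
            * (4.0 * np.pi * O_t ** 2 / G_t) * lp * lm)
```

With parameters obeying $\widetilde{\omega}/\widetilde{\Omega}\geq \frac{1}{2}(\widetilde{\gamma}/\widetilde{\Gamma} +\widetilde{\Gamma}/\widetilde{\gamma})$ the resulting function is non-negative and peaks near $\varepsilon=\widetilde{\Omega}$.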
One should note, however, that in order to determine all four parameters $\widetilde{\omega}$, $\widetilde{\gamma}$, $\widetilde{\Omega}$, and $\widetilde{\Gamma}$, it is necessary to resolve all three terms in Eq. , together with their relative weights. But at sufficiently large frequencies we should expect that $\widetilde{\Gamma}\ll |\widetilde{\Omega}|$, see Fig. \[fig:tildes\]. In this case the following approximate equality is valid: $$\frac{4\pi\widetilde{\Omega}^2}{\widetilde{\Gamma}} \delta_{\widetilde{\Gamma}}(\varepsilon-\widetilde{\Omega}) \delta_{\widetilde{\Gamma}}(\varepsilon+\widetilde{\Omega}) \approx \delta_{\widetilde{\Gamma}}(\varepsilon-\widetilde{\Omega}) +\delta_{\widetilde{\Gamma}}(\varepsilon+\widetilde{\Omega}),$$ which means that the product of two Lorentzians cannot be distinguished from their sum. Inserting this equality into Eq. , one finds readily that the spectral function $A(\varepsilon,\omega)$ is given by a sum of only two Lorentzians, $$\begin{aligned} A(\varepsilon,\omega)\approx \frac{1}{2} \left[\frac{\widetilde{\omega}}{\widetilde{\Omega}}+1\right] \delta_{\widetilde{\Gamma}}(\varepsilon-\widetilde{\Omega}) +\frac{1}{2} \left[\frac{\widetilde{\omega}}{\widetilde{\Omega}}-1\right] \delta_{\widetilde{\Gamma}}(\varepsilon+\widetilde{\Omega}). %\label{eq:momentum_approx}\end{aligned}$$ But if this is the case, then from fits to the momentum distribution function one can determine only $\widetilde{\Omega}$, $\widetilde{\Gamma}$, and $\widetilde{\omega}$, but not $\widetilde{\gamma}$. In other words, we do not have access to the pairing function $\phi^2(\omega)=(\widetilde{\omega}+i\widetilde{\gamma})^2 -(\widetilde{\Omega}+i\widetilde{\Gamma})^2$ in this frequency limit.
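The approximate equality can be checked numerically. In a sketch with hypothetical values $\widetilde{\Omega}=10$ and $\widetilde{\Gamma}=0.1$ (so that $\widetilde{\Gamma}\ll|\widetilde{\Omega}|$), the rescaled product of the two displaced Lorentzians agrees with their sum to much better than a percent:

```python
import numpy as np

def lorentzian(x, width):
    return width / (np.pi * (x ** 2 + width ** 2))

O_t, G_t = 10.0, 0.1  # hypothetical Omega_tilde >> Gamma_tilde
eps = np.linspace(-15.0, 15.0, 6001)
lp, lm = lorentzian(eps - O_t, G_t), lorentzian(eps + O_t, G_t)
product = (4.0 * np.pi * O_t ** 2 / G_t) * lp * lm
total = lp + lm
# Maximal deviation relative to the peak height of the sum.
max_rel_dev = np.max(np.abs(product - total)) / np.max(total)
```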
There is yet another reason why fits to the momentum distribution function can provide reliable estimates of the Eliashberg parameters only for $|\omega|\lesssim \overline{\Delta}$: namely, this technique requires that both ratios, $\widetilde{\gamma}/\widetilde{\Gamma}$ and $\widetilde{\omega}/\widetilde{\Omega}$, are sufficiently different from 1, so that the weights of the second and third terms in Eq.  can be determined precisely. But Fig. \[fig:tildes\] clearly shows that this criterion is satisfied only for $|\omega|\lesssim \overline{\Delta}$. Does this mean that the Eliashberg problem of finding the functions $Z(\omega)$ and $\Delta(\omega)$ cannot be solved in the frequency range $\overline{\Delta}\lesssim |\omega|$? The answer is no: it has been pointed out recently[@Bzdusek15; @Bok16] that, by applying the powerful inversion technique developed in Ref. , it is possible to extract the complex gap function $\Delta(\omega)=\phi(\omega)/Z(\omega)$ from the measured tomographic density of states. When this knowledge is combined with the momentum distribution technique - which allows for a relatively straightforward determination of $\widetilde{\Omega}$ and $\widetilde{\Gamma}$ in the limit $\overline{\Delta}\lesssim |\omega|$ with one Lorentzian only - making use of the expression $$Z(\omega)=\frac{\widetilde{\Omega}+i\widetilde{\Gamma}} {\sqrt{\omega^2-\Delta^2(\omega)}}$$ one can determine also the second Eliashberg function $Z(\omega)$, thereby solving the Eliashberg problem.[@Bok16] Finally, let us note that the momentum distribution functions can also be useful in the special case of the Dynes superconductors described by Eq. . In fact, since in the frequency range $\overline{\Delta}\lesssim |\omega|$ the width of the observable Lorentzian in the momentum distribution function of a Dynes superconductor is $\widetilde{\Gamma}\approx\Gamma_n$, this gives us an independent procedure for measuring the total scattering rate $\Gamma_n=\Gamma+\Gamma_s$.
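The last step can be illustrated with a round-trip consistency check. In the sketch below (a toy example, not real data), the complex gap $\Delta(\omega)$ plays the role of the tomographic-DOS inversion output and $(\widetilde{\Omega},\widetilde{\Gamma})$ the role of the momentum-distribution fit; assumed input values of $Z$ and $\Delta$ are then recovered exactly:

```python
import cmath

def recover_Z(omega, Delta, Omega_t, Gamma_t):
    """Second Eliashberg function from Delta(omega) and the fitted
    (Omega_tilde, Gamma_tilde): Z = (Omega_t + i*Gamma_t)/sqrt(omega^2 - Delta^2)."""
    return (Omega_t + 1j * Gamma_t) / cmath.sqrt(omega ** 2 - Delta ** 2)

# Round trip: build Omega_tilde + i*Gamma_tilde from assumed Z and Delta,
# then recover Z from the same data.
omega, Z0, Delta0 = 2.0, 1.8 + 0.4j, 0.9 + 0.2j
w = Z0 * cmath.sqrt(omega ** 2 - Delta0 ** 2)
Z_rec = recover_Z(omega, Delta0, w.real, w.imag)
```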
Proof of the inequalities $A_{ii}({\bf k},\omega)\geq 0$ for the Dynes superconductors ====================================================================================== In this Appendix we will prove that the diagonal spectral functions $A_{ii}({\bf k},\omega)$ of the Dynes superconductors are positive-definite, as required by general considerations. To this end, let us first note that the diagonal components of the Nambu-Gor’kov Green function within the Eliashberg theory read as $$G_{ii}({\bf k},\omega)= \frac{\widetilde{\omega}+i\widetilde{\gamma}\pm\varepsilon_{\bf k}} {(\widetilde{\Omega}+i\widetilde{\Gamma})^2-\varepsilon_{\bf k}^2}.$$ Since $A_{ii}({\bf k},\omega)=-\pi^{-1}{\rm Im}G_{ii}({\bf k},\omega)$, from here it follows that the requirement $A_{ii}({\bf k},\omega)\geq 0$ is equivalent to $$2\widetilde{\Omega}\widetilde{\Gamma}\widetilde{\omega} -\widetilde{\gamma}(\widetilde{\Omega}^2-\widetilde{\Gamma}^2) \geq -\widetilde{\gamma}\varepsilon_{\bf k}^2 \mp 2\widetilde{\Omega}\widetilde{\Gamma}\varepsilon_{\bf k},$$ which has to hold for all $\varepsilon_{\bf k}$ and $\omega$. Maximizing the expression on the right-hand side with respect to $\varepsilon_{\bf k}$, this requirement can be rewritten as $$\frac{\widetilde{\gamma}^2\widetilde{\Gamma}^2} {\widetilde{\gamma}^2+\widetilde{\Gamma}^2} + \widetilde{\omega}\widetilde{\Omega} \frac{2\widetilde{\gamma}\widetilde{\Gamma}} {\widetilde{\gamma}^2+\widetilde{\Gamma}^2} \geq \widetilde{\Omega}^2,$$ which has to be valid for all frequencies $\omega$. 
Since the first term on the left-hand side is obviously positive, it follows that it is sufficient to show that $$\frac{\widetilde{\omega}}{\widetilde{\Omega}} \geq \frac{1}{2}\left( \frac{\widetilde{\gamma}}{\widetilde{\Gamma}} +\frac{\widetilde{\Gamma}}{\widetilde{\gamma}} \right).$$ In order to prove this latter inequality, we will prove the following two simpler inequalities: $$\frac{\widetilde{\omega}}{\widetilde{\Omega}} \geq \frac{\widetilde{\gamma}}{\widetilde{\Gamma}}, \qquad \frac{\widetilde{\omega}}{\widetilde{\Omega}} \geq \frac{\widetilde{\Gamma}}{\widetilde{\gamma}}. \label{eq:inequalities}$$ In view of the symmetries illustrated by Fig. \[fig:tildes\], one checks easily that it is sufficient to prove that these inequalities hold for $\omega>0$. So far, our discussion was valid for any Eliashberg superconductor. Now we specialize to the Dynes superconductors. Making use of Eqs. , one finds easily that in this case the quantities $\widetilde{\omega}$, $\widetilde{\Omega}$, $\widetilde{\Gamma}$, and $\widetilde{\gamma}$ can be written in terms of the functions $\Omega_1$ and $\Omega_2$ introduced in Appendix A as $$\begin{aligned} \widetilde{\Omega}&=&\Omega_1, \\ \widetilde{\Gamma}&=&\Omega_2+\Gamma_s, \\ \widetilde{\omega}&=&\omega+ \Gamma_s\frac{\omega\Omega_2-\Gamma\Omega_1} {\Omega_1^2+\Omega_2^2}, \\ \widetilde{\gamma}&=&\Gamma+ \Gamma_s\frac{\omega\Omega_1+\Gamma\Omega_2} {\Omega_1^2+\Omega_2^2}.\end{aligned}$$ Let us note in passing that these expressions justify the results plotted in Fig. \[fig:tildes\]. Next we plug the expressions for $\widetilde{\omega}$, $\widetilde{\Omega}$, $\widetilde{\Gamma}$, and $\widetilde{\gamma}$ into Eqs. . If one makes use of the equalities Eqs.  and of the inequalities Eqs. , after some straightforward algebra one can check that the inequalities Eqs.  are satisfied. This completes the proof that, for the Dynes superconductors, the inequalities $A_{ii}({\bf k},\omega)\geq 0$ are valid. 
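The expressions for the tilded quantities, and the inequalities they obey, can be confirmed numerically; a minimal sketch for one representative parameter set (symbols as in the text, $\omega>0$ assumed):

```python
import math

def tilde_quantities(omega, delta, Gamma, Gamma_s):
    """Tilded quantities of a Dynes superconductor expressed through
    Omega_1, Omega_2 of Appendix A (valid for omega > 0)."""
    nu2 = omega ** 2 - delta ** 2 - Gamma ** 2
    root = math.sqrt(nu2 ** 2 + 4.0 * omega ** 2 * Gamma ** 2)
    o1 = math.sqrt(0.5 * (root + nu2))
    o2 = math.sqrt(0.5 * (root - nu2))
    denom = o1 ** 2 + o2 ** 2
    O_t = o1
    G_t = o2 + Gamma_s
    w_t = omega + Gamma_s * (omega * o2 - Gamma * o1) / denom
    g_t = Gamma + Gamma_s * (omega * o1 + Gamma * o2) / denom
    return w_t, g_t, O_t, G_t
```

For generic positive parameters one can check that $\widetilde{\omega}/\widetilde{\Omega}\geq\widetilde{\gamma}/\widetilde{\Gamma}$ and $\widetilde{\omega}/\widetilde{\Omega}\geq\widetilde{\Gamma}/\widetilde{\gamma}$, as required by the positivity proof.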
This work was supported by the Slovak Research and Development Agency under contracts No. APVV-0605-14 and No. APVV-15-0496, and by the Agency VEGA under contract No. 1/0904/15. [99]{} M. Hashimoto et al., Nat. Phys. [**10**]{}, 483 (2014). F. Marsiglio and J. P. Carbotte, in [*Superconductivity*]{}, K. H. Bennemann and J. B. Ketterson, Eds., Vol. I, Springer, Berlin, 2008, p. 73. P. Szabó et al., Phys. Rev. B [**93**]{}, 014505 (2016). L. Zhu, P. J. Hirschfeld, and D. J. Scalapino, Phys. Rev. B [**70**]{}, 214503 (2004). T. J. Reber et al., Nat. Phys. [**8**]{}, 606 (2012). R. C. Dynes, V. Narayanamurti, and J. P. Garno, Phys. Rev. Lett. [**41**]{}, 1509 (1978). A. E. White, R. C. Dynes, and J. P. Garno, Phys. Rev. B [**33**]{}, 3549(R) (1986). T. Kondo et al., Nat. Commun. [**6**]{}, 7699 (2015). F. Herman and R. Hlubina, Phys. Rev. B [**94**]{}, 144508 (2016). J.C. Campuzano, M.R. Norman and M. Randeria, [*Photoemission in the High Tc Superconductors*]{}, in [*The Physics of Superconductors*]{}, K.H. Bennemann and J.B. Ketterson, eds., Vol. II, Springer, New York, 2004, pp. 167-273. M. R. Norman et al., Phys. Rev. B [**57**]{}, R11093 (1998). A. V. Chubukov et al., Phys. Rev. B [**76**]{}, 180501 (2007). T. J. Reber et al., preprint arXiv:1508.06252. S. H. Hong et al., Phys. Rev. Lett. [**113**]{}, 057001 (2014). E. Abrahams and C. Varma, Proc. Natl. Acad. Sci. U.S.A. [**97**]{}, 5714 (2000). P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. [**78**]{}, 17 (2006). F. Herman and R. Hlubina, in preparation. J. M. Bok et al., Sci. Adv. [**2**]{}, e1501329 (2016). T. Bzdušek and R. Hlubina, Philos. Mag. [**95**]{}, 609 (2015). A.A. Galkin, A.I. D’yachenko and V.M. Svistunov, Sov. Phys. JETP [**39**]{}, 1115 (1974).
--- abstract: 'Small (sub)-micron dust is present over the entire lifetime of protoplanetary disks. As aggregation readily depletes small particles, one explanation might be that dust is continuously generated by larger bodies in the midplane and transported to the surface of the disks. In general, in a first step of this scenario, the larger bodies have to be destroyed again and different mechanisms exist with the potential to accomplish this. Possible destructive mechanisms are fragmentation in collisions, erosion by gas drag, or light-induced erosion. In laboratory experiments we find that the latter, light-induced erosion by Knudsen compression and photophoresis, can provide small particles. It might be a preferred candidate as the dust is released into a low particle density region. The working principle of this mechanism prevents or decreases the likelihood for instant re-accretion or re-growth of large dense aggregates. Provided that there is a particle lift, e.g. turbulence, these particles might readily reach the surface of the disk.' author: - 'T. Kelling and G. Wurm' title: | \ A Mechanism to Produce the Small Dust Observed in Protoplanetary Disks --- Introduction ============ It has become common knowledge that protoplanetary disks exist for a time span of up to 10 million years. The best evidence for this comes from observations of an infrared excess, of which the probability of detection varies with the age of star formation regions ([@haisch2001]). These observations are sensitive to small particles. A more detailed modeling of the spectral energy distribution of such disks and emission features at 10 $\mu$m shows that, at least in part, this is due to the existence of small particles of the order of 1 $\mu$m ([@olofsson2010]). The existence of even smaller 100 nm particles typical of interstellar conditions has also been reported ([@meeus2001], [@acke2004]).
The existence of small grains at the surface of protoplanetary disks for millions of years is not trivial to explain. Aggregation by relative motion, collisions, and sticking rapidly depletes the small grains and leads to the formation of larger aggregates ([@dullemond2005]). Eventually these aggregates get compacted and rain out to the midplane. To solve the missing small particle problem two alternatives are possible for preventing the agglomeration and sedimentation at a certain stage. Charging of dust particles might be one possible explanation as suggested by [@okuzumi2009]. The second alternative might be the destruction of larger bodies and the feedback of the smaller particles from the midplane to the surface. Along this line of reasoning a few candidates exist that could destroy the large bodies. The first mechanism is fragmentation by collisions. At higher collision velocities, collisions can result in small dust particles ([@wurm2005], [@teiser2009], [@schraepler2011]) and turbulence might transport these dust particles back to the surface. There are two caveats with this mechanism. First, collisions will produce the dust in a cascade that usually results in a power-law size distribution ([@dohnany1969], [@mathis1977]). This way, producing small dust particles implies that mass is also present in the larger mass ranges. Such a continuous size distribution might not be consistent with the total mass available in the disk; i.e., T-Tauri disks have a large mass fraction in millimeter-size particles and adding additional mass bins of similar mass might amount to more mass than is available in the system. Second, the dust will be embedded in a dense environment and will readily collide with other particles again. On the way to the surface, collisions will lead to re-growth and eventually compact aggregates will again rain out.
Whether a fraction of the dust could survive its way back to the disk surface without forming aggregates is subject to future simulations and is not yet settled. The second mechanism is erosion by gas drag in the simple sense as deserts are eroded by wind on Earth. This effect has been studied in laboratory experiments by [@paraskov2006] who showed that indeed erosion by gas drag is possible in dense disks for slightly eccentric dusty bodies. This certainly will occur but to what degree is uncertain. One advantage is that the size distribution is not continuous but the larger body only produces small aggregates. This might be more consistent with the total mass budget of a disk. However, it shares the problem of re-accretion or re-growth of particles on the way back to the surface. It is also unclear if the mechanism can provide micron-size grains. Evaporation of inward drifting material and re-condensation is also a way to provide small particles but this would imply that the dust will primarily consist of high-temperature components and be crystalline, which is not what is observed ([@olofsson2010]). Instead of these well-known mechanisms, we consider light-induced erosion as a possible candidate for dust generation. In [@wurm2006], [@wurm2007], [@wurm2008], [@kocifaj2010] and [@kelling2011] we showed that the illumination of a dusty body in a low ambient pressure environment leads to eruptions of dust as outlined below. Especially at the inner edge of a disk this might disassemble dusty bodies ([@wurm2007]). This mechanism has a few features that make it a prime candidate for explaining dust at the surface of protoplanetary disks. It provides dust in a low-density environment, and large grains instantly get separated from the small grains. Therefore, small grains can remain small over a long time period and small particles can be transported to the disk surface.
The mechanism does not change the size distribution throughout the disk as it only erodes the small fraction of larger bodies at the inner edge; therefore, it will not interfere with the overall mass budget of the disk. Also, as there is evidence that high-temperature minerals that formed close to the star are transported to the outer regions of protoplanetary disks as, e.g., found by the *Stardust* mission ([@zolensky2006]), the inner disk is already known as a reservoir of dust particles. In this paper, we describe the basics of the light-induced erosion mechanisms. We report on laboratory experiments, which show that indeed small grains are generated as part of the erosion process and show that typically smaller grains are separated from larger grains in a way that keeps the small particles at low particle density. Photophoretic eruptions, Knudsen compression, and photophoretic particle sorting ================================================================================ In laboratory experiments [@wurm2006] found that a light-absorbing dust bed, which is illuminated with optical radiation at a flux larger than a few kW m$^{-2}$ eruptively loses particles. [@kelling2011] recently showed that a dust bed also loses, for a certain time span, even more particles if the light source is turned off. In protoplanetary disks this might be the case close to the terminator or for moving shadows from surface unevenness of an illuminated rotating body. Both erosion processes are related to temperature gradients, which are established in the top layers of the illuminated dusty body. This has been modeled in detail by [@kocifaj2010; @kocifaj2011]. In the illuminated case, radiation can penetrate to a depth of some $\mu$m below the surface but thermal radiation cools the surface right at the top. This could be described as a solid-state greenhouse effect ([@niederdoerfer1933], [@kaufmann2007]).
As a consequence, a temperature gradient evolves, where the maximum temperature is below the surface and not at the top. In the case where the illumination is turned off, the surface cools by radiation. This also leads to a temperature gradient where the surface is cooler than the dust below the surface. The temperature gradient (cool at the surface, warm below) in this case is established over a much larger depth (millimeters). Forces on the particles, which lead to dust eruptions, are generated if the dust bed is situated in a gaseous environment. It has been known for more than a century that temperature gradients lead to gas flows along particle surfaces called thermal creep ([@knudsen09]) and that interaction with gas molecules at different sides of a particle with different temperatures directly leads to a momentum transfer to the particle called photophoresis ([@rohatschek1995]). The effects of thermal creep ([@hettner1924]) and photophoresis ([@beresnev1993]) are nontrivial. For special cases (large Knudsen numbers, i.e., the mean free path of the gas molecules is much larger than the relevant length scale), a rather intuitive description can be given. We first note that thermal creep refers to the motion of the gas while photophoresis refers to the motion of a solid. Both occur in gas/solid systems where temperature gradients exist. Both are related to each other by momentum conservation. Nevertheless, both might best be visualized by different models. *Thermal creep*. Consider two gas reservoirs 1 and 2, with different temperatures $T_1$ and $T_2$ and different pressures $p_1$ and $p_2$ that are connected by a tube of diameter $s$, which is much smaller than the mean free path of the gas molecules $\lambda$. In equilibrium, the gas flow rates from both sides through the tube must be equal, $n_1v_1=n_2v_2$. As $n\propto p/T$ and $v\propto \sqrt{T}$, it follows that the pressures in the chambers are related by $p_2/p_1 = \sqrt{T_2/T_1}$ ([@knudsen09]).
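The flux-balance argument can be written down in a few lines (a sketch; using the ideal-gas relation $n=p/(k_B T)$ and $v\propto\sqrt{T}$):

```python
import math

def knudsen_pressure_ratio(T1, T2):
    """Equilibrium pressure ratio p2/p1 between two chambers at T1 and T2
    connected by a channel narrower than the mean free path:
    balancing the fluxes n1*v1 = n2*v2 with n ~ p/T and v ~ sqrt(T)
    gives p2/p1 = sqrt(T2/T1)."""
    return math.sqrt(T2 / T1)
```

A chamber twice as warm thus sustains a pressure larger by a factor $\sqrt{2}$; this overpressure on the warm side is what the Knudsen compressor exploits.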
Hence, in the warmer chamber an overpressure $\Delta p$ is established. For $Kn \simeq 1$ the overpressure is directly proportional to the temperature difference $\Delta p \propto \Delta T$ ([@muntz2002]). Simply speaking ([@hettner1924]), a surface element of the connection between the chambers is impacted by gas molecules with a greater momentum from the hotter side of the tube. Hence, the tube is affected by a tangential force in the direction from warm to cold. Due to momentum conservation, the tube itself applies a tangential force to the gas in the opposite direction and hence the gas moves from cold to warm. *Photophoresis*: The motion of a solid particle is caused by an interaction between the surface of a suspended particle and the surrounding gas molecules (Figure \[fig:photopho\_principle\]). In principle, gas molecules with thermal momentum accommodate to the particle’s surface, take over the local surface temperature, and leave the surface again with a momentum according to their new temperature ([@rohatschek1995]). If a particle has a temperature gradient over its surface, the local surface temperature varies. Gas molecules accommodating to the warmer side leave the surface therefore with a larger momentum than the gas molecules accommodating to the cooler parts of the surface. As a result, a force acts on the suspended particle accelerating it in the direction from warm to cold. The temperature gradient over the particle might be established by illumination, hence the name photophoresis. For perfectly absorbing, spherical particles at large Knudsen numbers, the photophoretic force is given as ([@rohatschek1995]) $$F_{ph_{\Delta T}} = \frac{\pi \alpha p}{T} \frac{\Delta T}{6 }a^2\label{eq:photopho1},$$ where $\alpha$ is the accommodation coefficient of the particle’s surface, $p$ is the ambient pressure, $a$ is the particle radius, $T$ is the mean temperature, and $\Delta T$ is the temperature difference over the particle surface.
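This expression for the photophoretic force translates directly into code; the sketch below uses hypothetical laboratory-like values ($p=1$ Pa, $T=300$ K, $\Delta T=10$ K, $a=1$ $\mu$m, $\alpha=1$), which are illustrative only:

```python
import math

def photophoretic_force(p, T, dT, a, alpha=1.0):
    """Photophoretic force on a perfectly absorbing sphere at large
    Knudsen number: F = pi * alpha * p * dT * a^2 / (6 * T)."""
    return math.pi * alpha * p * dT * a ** 2 / (6.0 * T)
```

The quadratic dependence on the radius $a$ is worth noting: doubling the particle size quadruples the force at fixed $\Delta T$.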
Photophoresis and the effects of thermal creep can lead to particle ejection. Depending on the temperature difference, which depends on the illuminating flux and dust bed parameters, dust particles at the surface can directly be ejected by photophoresis (see also Figure \[fig:photopho\]). If the temperature gradient extends deeper into the dust bed (covers more layers of dust, e.g., after the light source is switched off), thermal creep can lead to a gas pressure increase below the surface analogous to the compressor built by [@knudsen09]. If the pressure difference is large enough, dust is ejected from the dust bed (Figure \[fig:knudsen\]). Then the ejection is not a direct response to thermal creep like photophoresis but a slower build-up of pressure by means of thermal creep. [@kelling2009] demonstrated that single dust agglomerates can be levitated over a hot surface by such a Knudsen compressor effect. Once particles have been released from a dusty body in a protoplanetary disk, an already existing slow gas flow will transport the particles away from their parent body. Therefore, a cloud of small particles then moves within the protoplanetary disk which is still illuminated by the central star. Photophoresis acts on the particles but this time it is not generated by a temperature gradient at the top of a dust bed but by direct illumination with stellar light. For a particle with constant thermal conductivity $\kappa_p$ Equation (\[eq:photopho1\]) can be written as ([@rohatschek1995]) $$F_{ph_I} = \frac{\pi \alpha p}{T}\frac{J_1 I}{3\kappa_p}a^3.\label{eq:photopho2}$$ Here, $J_1$ denotes the so-called asymmetry factor (heat source distribution function within the illuminated particle) which is set to $\left| J_1\right| = 1/2$ for an opaque sphere, $I$ is the incident light flux and $\kappa_p$ is the particle’s thermal conductivity. The photophoretic force thus scales as $F_{ph_I}\propto a^3$.
Photophoresis will accelerate the particles to a velocity where the gas drag equals the photophoretic force. Gas drag at large Knudsen numbers can be written as $$F_{gas} = \frac{mv}{\tau}\label{eq:gasdrag},$$ where $m$ is the particle mass, $v$ is the particle’s velocity and $\tau$ is the gas-grain coupling time which can be expressed as ([@blum1996]) $$\tau = \epsilon \frac{m}{\sigma}\frac{1}{\rho_g v_m}\label{eq:tau},$$ with an empirical factor $\epsilon$, $m$ as the particle’s mass, $\sigma$ as the geometrical cross section of the particle, $\rho_g$ as the gas density, and $v_m = \sqrt{(8k_B T)/(\pi m_g)}$ the mean thermal velocity of the gas molecules with $k_B = 1.38\times 10^{-23}$ J K$^{-1}$ as the Boltzmann constant and $m_g = 3.9 \times 10^{-27}$ kg as the molecular mass of the gas molecules. Hence, Equations (\[eq:gasdrag\]) and (\[eq:tau\]) show that the gas drag is $F_{gas} \propto a^2$ (with $\sigma \propto a^2$). In equilibrium the drift velocity away from the star therefore depends linearly on the particle size ($v\propto a$) with $$v = \frac{\alpha \epsilon J_1 p I}{3 \kappa_p \rho_g T v_m}a\label{eq:vdrift1}.$$ With $p=\rho_g T R/M$, where $M = 2.34\times 10^{-3}$ kg mol$^{-1}$ is the molar mass of the gas and $R = 8.3$ J mol$^{-1}$ K$^{-1}$ is the universal gas constant, one finds for the drift velocity $$v = \frac{\alpha \epsilon J_1 R I}{3\kappa_p M}\frac{1}{\sqrt{(8k_B T)/(\pi m_g)}}a\label{eq:vdrift2}.$$ Typical values $\alpha = 1$, $\epsilon = 0.7$, $J_1=0.5$, $I = 10$ kW m$^{-2}$ at $0.1$ AU, $\kappa_p = 0.01$ W m$^{-1}$K$^{-1}$ and $T= 10^3$ K ([@wood2000]) yield $v\simeq 10^{-1}$ m s$^{-1}$ for a 1 $\mu$m particle and $v\simeq 10$ m s$^{-1}$ for a 100 $\mu$m particle. Larger grains are therefore rapidly pushed outward where – eventually – they are pushed into an optically dense region and likely are reprocessed in one way or the other, e.g., taking part in reaggregation.
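Plugging the quoted typical values into the drift-velocity expression reproduces the stated orders of magnitude. A sketch (all numbers as given in the text; the empirical drag factor $\epsilon$ is called `eps_c` here to avoid clashing with the energy variable):

```python
import math

# Constants and typical inner-disk parameters quoted in the text (0.1 AU).
k_B, m_g = 1.38e-23, 3.9e-27           # J/K, kg (Boltzmann const., gas molecule mass)
alpha, eps_c, J1 = 1.0, 0.7, 0.5       # accommodation, drag factor, asymmetry factor
R, M = 8.3, 2.34e-3                    # J/(mol K), kg/mol
I, kappa_p, T = 1.0e4, 1.0e-2, 1.0e3   # W/m^2, W/(m K), K

def drift_velocity(a):
    """Photophoretic terminal drift speed; linear in the particle radius a."""
    v_m = math.sqrt(8.0 * k_B * T / (math.pi * m_g))
    return alpha * eps_c * J1 * R * I / (3.0 * kappa_p * M) * a / v_m
```

`drift_velocity(1e-6)` comes out near $10^{-1}$ m s$^{-1}$ and `drift_velocity(1e-4)` near $10$ m s$^{-1}$, matching the estimates above.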
However, the small (sub)micron fraction stays in the optically thin region in a diluted cloud, where particles can stay isolated for a prolonged time. If particles grow, they will initially grow to fractal-like structures, which changes neither their aerodynamic properties nor their optical properties much ([@wurm1998], [@martin1986]). In this way, the particles are the perfect sample to be transported to the surface of the protoplanetary disk, e.g., by turbulence. The overall rate of particle production is an interplay between the attenuation of the incident light flux by the cloud of ejected particles and the transport of the particles away from the host body by gas drag. A prerequisite for this scenario, though, is that light-induced eruptions actually produce a fraction of small grains. To show this, we carried out laboratory experiments with a basalt dust sample which contains small grains. Laboratory experiments on small particle generation =================================================== We carried out photophoretic experiments according to Figures \[fig:photopho\] and \[fig:setup\], where we placed a particle collector around the illuminated spot. As a light source, we used a red laser diode of about 40 mW focused on a spot of $\sim$1 mm$^2$ from above the dust sample. This way the incident light flux is on the order of some $10^4$ W m$^{-2}$, which we regard as appropriate for dust eruptions in the inner region of protoplanetary disks. We used basalt powder as a dust sample. The size distribution of this sample was determined by microscopy and is shown in Figure \[fig:histo\_original\]. The size distribution of the sampled particles after the light-induced eruptions was also analyzed by optical microscopy and the resultant size distribution is plotted in Figure \[fig:histo\]. The resolution limit is $\sim$2 $\mu$m. Nevertheless, the plot shows that the light-induced erosion process produces a large fraction of small particles.
Conclusion ========== In this paper, we outlined a possible scenario from which small particles present in protoplanetary disks over millions of years might originate. Our laboratory experiments prove that light-induced eruptions can disassemble a dusty body to its constituents, at least down to $\sim 2$ $\mu$m in size. As outlined above, photophoretic sorting will preferentially select the small (sub)micron grains to stay in the optically thin region, from where they can be transported to the surface without re-growing to large compact aggregates too rapidly. We conclude that light-induced eruptions should be considered as one preferred candidate for the continuous supply of small dust seen at the surface of protoplanetary disks. This work was supported by the DFG. Acke, B., & van den Ancker, M.E. 2004, Astronomy and Astrophysics, 426, 151 Blum, J., Wurm, G., Kempf, S., & Henning, T. 1996, Icarus, 124, 441 Beresnev, S., Chernyak, V., & Fomyagin, G. 1993, Physics of Fluids, 5, 2043 Dohnanyi, J. S. 1969, Journal of Geophysical Research, 74, 2531 Dullemond, C. P., & Dominik, C. 2005, Astronomy and Astrophysics, 434, 971 Haisch, Jr., K. E., Lada, E. A., & Lada, C. J. 2001, The Astrophysical Journal, 553, L153 Hettner, G. 1924, Zeitschrift für Physik, 27, 12 Kaufmann, E., Kömle, N. I., & Kargl, G. 2007, Advances in Space Research, 39, 370 Kelling, T., & Wurm, G. 2009, Physical Review Letters, 103, 215502 Kelling, T., Wurm, G., Kocifaj, M., Klačka, J., & Reiss, D. 2011, Icarus, 212, 935 Knudsen, M. 1909, Annalen der Physik, 336, 633 Kocifaj, M., Klačka, J., Kelling, T., & Wurm, G. 2011, Icarus, 211, 832 Kocifaj, M., Klačka, J., Wurm, G., Kelling, T., & Kohút, I. 2010, Monthly Notices of the Royal Astronomical Society, 404, 1512 Martin, J. E., Schaefer, D. W., & Hurd, A. J. 1986, Physical Review A, 33, 3540 Mathis, J. S., Rumpl, W., & Nordsieck, K. H.
1977, The Astrophysical Journal, 217, 425 Meeus, G., Waters, L.B.F.M., Bouwman, J., van den Ancker, M.E., Waelkens, C., & Malfait, K. 2004, Astronomy and Astrophysics, 365, 476 Muntz, E.P., Sone, Y., Aoki, K., Vargo, S., & Young, M. 2002, Journal of Vacuum Science Technology, 20, 214 Niederdorfer, E. 1933, Meteorologische Zeitschrift, 50, 201 Okuzumi, S. 2009, The Astrophysical Journal, 698, 1122 Olofsson, J., Augereau, J., van Dishoeck, E. F., Merín, B., Grosso, N., Ménard, F., Blake, G. A., & Monin, J. 2010, Astronomy and Astrophysics, 520, A39+ Paraskov, G. B., Wurm, G., & Krauss, O. 2006, The Astrophysical Journal, 648, 1219 Rohatschek, H. 1995, Journal of Aerosol Science, 26, 717 Schräpler, R., & Blum, J. 2011, The Astrophysical Journal, submitted Teiser, J., & Wurm, G. 2009, Astronomy and Astrophysics, 505, 351 Wood, J. A. 2000, Space Science Reviews, 92, 87 Wurm, G. 2007, Monthly Notices of the Royal Astronomical Society, 380, 683 Wurm, G., & Blum, J. 1998, Icarus, 132, 125 Wurm, G., & Krauss, O. 2006, Physical Review Letters, 96, 134301 Wurm, G., Paraskov, G., & Krauss, O. 2005, Icarus, 178, 253 Wurm, G., Teiser, J., & Reiss, D. 2008, Geophysical Research Letters, 1, 1 Zolensky, M. E., et al. 2006, Science, 314, 1735
--- abstract: 'The loop representation formulation of non-relativistic particles coupled with abelian gauge fields is studied. Both Maxwell and Chern-Simons interactions are separately considered. It is found that the loop-space formulations of these models share significant similarities, although in the Chern-Simons case there exists a unitary transformation that allows one to remove the degrees of freedom associated with the paths. The existence of this transformation, which allows one to make contact with the anyonic interpretation of the model, is subject to the requirement that the charge of the particles be quantized. On the other hand, in the Maxwell case, we find that charge quantization is necessary in order for the geometric representation to be consistent.' address: | $^a$Grupo de Campos y Partículas, Departamento de Física, Facultad de Ciencias, Universidad Central de Venezuela, AP 47270, Caracas 1041-A, Venezuela\ $^b$ Departamento de Matemática, Universidad Metropolitana, Caracas, Venezuela author: - 'Ernesto Fuenmayor$^a$[^1], Lorenzo Leal$^a$[^2] and Ryan Revoredo$^{a,b}$[^3]' title: 'Loop Representation of charged particles interacting with Maxwell and Chern-Simons fields' --- 0.3cm 2 Introduction ============ The loop representation (L.R.) constitutes a useful tool in present-day investigations of gauge theories [@O-1; @O-2]. There are several approaches to the L.R. [@1a; @1b; @1c; @1d], all of them sharing the recognition of string-like structures as the basic objects needed to build a geometric representation for gauge field quantization. In this paper we study the L.R. formulation of point particles interacting with abelian gauge fields. The coupling of point particles to fields presents certain subtleties that make the canonical quantization far from straightforward. In turn, the corresponding L.R. shows its own particularities, which had not yet been reported.
This study is carried out first for non-relativistic dynamical point particles in electromagnetic interaction. We shall not worry about the lack of Lorentz covariance, nor shall we discuss regularization issues. As we shall see, for the L.R. formulation of this model to be consistent, charge must be quantized. This result should be compared with a similar one obtained several years ago for the Maxwell theory, within the Spin Networks version of the L.R. [@3; @3a]. As a second model we consider the topological interaction between non-relativistic dynamical charged particles caused by a Chern-Simons term [@4; @5]. Although both theories share the same geometrical framework when quantized in the L.R., in the Chern-Simons case the loop dependence may be eliminated by means of a unitary transformation, which yields a quantum mechanics of many particles subjected to a long-range interaction. As we shall discuss, this unitary transformation holds provided charge is quantized. This paper is organized as follows. In section II we study the L.R. formulation of non-relativistic point particles in electromagnetic interaction. Section III is devoted to the L.R. quantization of point particles with Chern-Simons interaction. Some final remarks are left for the last section. Electromagnetic interaction of non-relativistic point particles =============================================================== The action for $N$ electromagnetically interacting non-relativistic charged particles may be written as $$\begin{aligned} S&=&\int dt\sum_{p=1}^{N}\left(\frac12\,m_{(p)}|\dot{\vec{r}}_{(p)}|^2-e\,q_{(p)}\dot{r}^{i}A_{i}(\vec{r}_{(p)},t)\right.\nonumber\\ &&\left.-e\,q_{(p)}A_{o}(\vec{r}_{(p)},t)\right)-\frac{1}{4}\int d^{4}x \,F^{\mu\nu}(\vec{x},t)F_{\mu\nu}(\vec{x},t)\;, \label{ec1}\end{aligned}$$ where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ and $\vec{r}_{(p)}$, $q_{(p)}$ denote the position and charge of the $p$-th particle respectively.
After Dirac quantization in the $A_{o}=0$ gauge, one obtains the first class Hamiltonian $$\begin{aligned} H&=&\sum_{p=1}^{N}\frac{1}{2m_{(p)}}\left(p_{(p)i}+eq_{(p)}A_{i}(\vec{r}_{(p)},t)\right)^{2}\;\nonumber\\ &&+ \int d^{3}x \;\frac{1}{2}\left(|\vec{E}|^{2}+|\vec{B}|^{2}\right)\;, \label{ec2}\end{aligned}$$ together with the Gauss (first class) constraint: $$\varphi\equiv\partial_{i}E^{i}-\sum_{p}e\,q_{(p)}\delta^{3}(\vec{r}_{(p)}-\vec{x})\approx0\;. \label{ec3}$$ In these equations $e$ is the electromagnetic coupling constant (which in $3+1$ space is dimensionless), while $E^{i}\equiv F^{io}$ and $B^{i}=-\frac{1}{2}{\epsilon}^{ijk}F_{jk}$ denote the electric and magnetic fields. The operators $\vec{r}_{(p)}$ and $\vec{p}_{(p)}$ are canonical conjugates, likewise $\vec{A}$ and $\vec{E}$: $$\begin{aligned} \left[r^{i}_{(p)},p_{(q)j}\right]&=&i\delta^{i}_{j}\delta_{pq}\;, \label{dificil}\end{aligned}$$ $$\begin{aligned} \left[A_{i}(\vec{x}),E^{j}(\vec{y})\right]&=&i\delta^{j}_{i}\delta^{3}(\vec{x}-\vec{y})\;. \label{ec4}\end{aligned}$$ The expression $A_{i}(\vec{r}_{(p)},t)$ is a shorthand for $$A_{i}(\vec{r}_{(p)},t)\equiv \int d^{3}\vec{x}\;\delta^{3}(\vec{x}-\vec{r}_{(p)})A_{i}(\vec{x},t)\;, \label{ec5}$$ where $\delta^{3}(\vec{x}-\vec{r}_{(p)})$ is an operator-valued distribution acting on the Hilbert space of the $p$-th particle: $$\delta^{3}(\vec{x}-\hat{\vec{r}}_{(p)})\,|\vec{r}_{(p)}\rangle = \delta^{3}(\vec{x}-\vec{r}_{(p)})\,|\vec{r}_{(p)}\rangle\;, \label{ec6}$$ where the hat distinguishes the position operator from its eigenvalue. The full Hilbert space of the theory may be spanned by the basis $\prod |\vec{r}_{(p)}\rangle \otimes |\vec{A}\rangle$, constructed by taking the tensor product of the “position” eigenstates $|\vec{r}_{(p)}\rangle$ and $|\vec{A}\rangle$ associated to the particles and the field respectively. The Hilbert space must be restricted to the physical space, in the Dirac sense, defined by $\varphi\, |\psi_{Physical}\rangle =0$.
Also, we must identify which operators are first class, i.e., gauge invariant \[remember that the Gauss constraint (\[ec3\]) generates spatial gauge transformations, both on particle and field operators\]. It is immediate to check that the electric and magnetic fields $\vec{E}$, $\vec{B}$, together with the particle position operator $\vec{r}_{(p)}$ and the gauge covariant momentum $\vec{p}_{(p)}+e\,q_{(p)}\vec{A}(\vec{r}_{(p)},t)$, commute with the Gauss constraint, unlike the gauge dependent operators $\vec{p}_{(p)}$ and $\vec{A}$. It is worth mentioning that every physical observable may be constructed in terms of the first class operators mentioned above \[see, for instance, expression (\[ec2\]) for the energy of the field-particles system\]. Next, let us consider the L.R. appropriate to the theory we are dealing with. A brief review of how it works in the sourceless case will help. In the pure Maxwell theory [@1a; @1b; @1c; @1d], the gauge invariant operators $\vec{E}$ and $\vec{B}$ may be realized on loop dependent wave functionals $\Psi (C)$ as: $$E^{i}\,\Psi (C)=e\,T^{i}(\vec{x},C)\,\Psi (C)\;, \label{ec7}$$ $$F_{ij}\,\Psi (C)=\frac{i}{e}\,\Delta_{ij}(\vec{x})\,\Psi (C)\;, \label{ec8}$$ where the form factor $$T^{i}(\vec{x},C)\equiv \oint dy^{i} \, \delta^{3}(\vec{x}-\vec{y})\;, \label{ec9}$$ is a distributional vector density that encodes the information about the shape of the spatial loop $C$. The loop derivative of Gambini-Trías $\Delta_{ij}(\vec{x})$ [@1a; @1b; @1c; @1d] is defined as: $$\Psi (\sigma \cdot C)= \left(1+ \sigma^{ij}\Delta_{ij}(\vec{x})\right)\, \Psi (C)\;, \label{ec10}$$ with $\sigma^{ij}$ being the area of an infinitesimal plaquette attached at the spatial point $\vec{x}$. Thus $\Delta_{ij}(\vec{x})$ measures how the loop dependent function $\Psi (C)$ changes under a small deformation of its argument $C$.
In the loop representation, the source-free Gauss law constraint $(\partial_{i}E^{i}=0)$ is automatically satisfied, since $T^{i}(\vec{x},C)$ has vanishing divergence. One can thus interpret $C$ as a closed Faraday’s line of electric flux. In the case of particles interacting with fields, one needs to enlarge the space of states. To simplify the discussion, let us begin by considering the one particle case. The interpretation of loops as Faraday’s lines of electric flux leads naturally to the following picture: consider an open path $\gamma_{\vec{r}}$ starting at the particle’s position $\vec{r}$ and ending at spatial infinity \[to take into account the source-free sector, this open path might be accompanied by closed contours too\]. Then, consider path-dependent wave functionals $\Psi (\gamma_{\vec{r}})$, and define the action of the electric field operator as in equation (\[ec7\]) $$E^{i}(\vec{x})\,\Psi (\gamma_{\vec{r}})=e\,T^{i}(\vec{x},\gamma_{\vec{r}})\,\Psi (\gamma_{\vec{r}})\;. \label{ec12}$$ Then, the Gauss constraint (\[ec3\]) states that: $$\begin{aligned} \big(&e&\partial_{i}T^{i}(\vec{x},\gamma_{\vec{r}})-e\,q\,\delta^{3}(\vec{r}-\vec{x})\;\big)\,\Psi(\gamma_{\vec{r}})\nonumber\\ &=&e\left(\delta^{3}(\vec{r}-\vec{x})-q\,\delta^{3}(\vec{r}-\vec{x})\right)\,\Psi(\gamma_{\vec{r}})\nonumber\\ &=&0, \label{ec13}\end{aligned}$$ where we have dropped the $\delta^{3}(\infty)$ contribution arising from the end of the path. Equation (\[ec13\]) implies that $q=1$. This result provides the key to complete the picture of the kinds of paths allowed. Had we taken an incoming path instead of the outgoing one, the Gauss law would have been satisfied only for $q=-1$. On the other hand, if we take a “multiple” open path, i.e., $n$ strands outgoing (incoming) from (towards) $\vec{r}$, the allowed value for $q$ would be $n$ $(-n)$.
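Since $T^{i}(\vec{x},\gamma)=\int_{\gamma} dy^{i}\,\delta^{3}(\vec{x}-\vec{y})$, integrating by parts along the path gives $\partial_{i}T^{i}=\delta^{3}(\vec{x}-\vec{a})-\delta^{3}(\vec{x}-\vec{b})$ for a path running from $\vec{a}$ to $\vec{b}$: the divergence vanishes for closed loops and reduces to the point source used above for an outgoing path. The bookkeeping can be illustrated on a lattice, where the form factor becomes a directed link field. This sketch (ours, not from the paper) checks that the discrete divergence is $+1$ at the start of an open path, $-1$ at its end, and zero everywhere on a closed loop:

```python
from collections import defaultdict

def path_divergence(start, steps):
    """Build the lattice form factor T^i of a path (a directed link field),
    then return its site-wise divergence: outgoing minus incoming flux.
    `steps` is a list of (axis, +1 or -1) unit moves starting from `start`."""
    T = defaultdict(int)   # (site, axis) -> flux on the link site -> site + e_axis
    site = start
    for axis, sign in steps:
        nxt = list(site)
        nxt[axis] += sign
        nxt = tuple(nxt)
        if sign > 0:
            T[(site, axis)] += 1   # traversed along +e_axis
        else:
            T[(nxt, axis)] -= 1    # traversed against +e_axis
        site = nxt
    div = defaultdict(int)
    for (s, axis), flux in T.items():
        div[s] += flux             # flux leaves the link's base site ...
        nb = list(s)
        nb[axis] += 1
        div[tuple(nb)] -= flux     # ... and enters its neighbour
    return {k: v for k, v in div.items() if v != 0}

# Open path (0,0) -> (2,1): a unit source at the start, a unit sink at the end.
print(path_divergence((0, 0), [(0, 1), (1, 1), (0, 1)]))             # {(0, 0): 1, (2, 1): -1}
# Closed plaquette loop: divergence-free, like the closed loops of the sourceless theory.
print(path_divergence((0, 0), [(0, 1), (1, 1), (0, -1), (1, -1)]))   # {}
```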
Finally, it is easy to see that for $N$ charges, one must take $N$ “bundles” of open paths, one for each charge $q_{(p)}$, having as many strands as the value of the charge, and oriented according to its sign. Hence, within this formalism there is no room for fractional charges: a Faraday line carries one unit of electric flux $e$, which must be emitted from or absorbed by an integer charge $q_{(p)}$. Then one has: $$\sum_{p=1}^{N}q_{(p)}\delta^{3}(\vec{x}-\vec{r}_{(p)})= \sum_{s}\left(\delta^{3}(\vec{x}-\vec{a}_{s})-\delta^{3}(\vec{x}-\vec{b}_{s})\right)\;, \label{ec14}$$ with $\vec{a}_{s}$ and $\vec{b}_{s}$ labeling the starting and ending points of the $s$-th “strand”, and the Gauss constraint (\[ec3\]) becomes an identity on the physical states. It remains to study whether or not the algebra of observables admits a realization in terms of operators acting on these path-dependent (Faraday’s lines dependent) functionals $\Psi (\gamma_{\vec{r}})$. Besides the electric and magnetic fields, which are realized as in equations (\[ec7\]) and (\[ec8\]) \[remember that the paths may also comprise closed loops, hence the loop derivative makes sense in this context too\], we prescribe: $$\begin{aligned} &&p_{(p)i}+e\,q_{(p)}\,A_{i}(\vec{r}_{(p)},t)\rightarrow\nonumber\\ &&-iD_{i}(\vec{r}_{(p)})\equiv -i\left(\frac{\partial}{\partial r_{(p)}^{i}}-q_{(p)}\,\delta_{i}(\vec{r}_{(p)})\right)\;, \label{ec15}\end{aligned}$$ where $\delta_{i}(\vec{x})$ is the “path derivative”, which acts on path-dependent functions $\Psi(\gamma_{\vec{r}_{(p)}})$ by measuring their change when an infinitesimal open path starting at $\vec{x}$ and ending at $\vec{x}+\vec{h}$ $(\vec{h}\rightarrow 0)$ is appended to the list of paths contained in $\gamma_{\vec{r}_{(p)}}$ [@8]: $$\Psi(h\cdot \gamma_{\vec{r}_{(p)}})=\left(1+h^{i}\,\delta_{i}(\vec{r}_{(p)})\right)\,\Psi(\gamma_{\vec{r}_{(p)}})\;.
\label{ec16}$$ The $\delta_{i}(\vec{x})$ derivative is related to the loop derivative (\[ec10\]) through: $$\Delta_{ij}(\vec{x})= \frac{\partial}{\partial x^{i}}\delta_{j}(\vec{x})-\frac{\partial}{\partial x^{j}}\delta_{i}(\vec{x})\;. \label{ec17}$$ The gauge invariant combination $D_{i}(\vec{r}_{(p)})$ coincides with the derivative introduced by Mandelstam several years ago [@9]. It comprises the ordinary derivative, representing the momentum operator of the particle, plus $q_{(p)}$ times the “path derivative” $\delta_{i}(\vec{r}_{(p)})$. The “Mandelstam operator” $D_{i}(\vec{r}_{(p)})$ has a nice geometric interpretation within the present formulation, as we shall see. In this representation, both particles and fields are described by geometric means: particles are labelled by points $\vec{r}_{(p)}$ (as usual), and fields by open paths. Gauge invariance restricts paths to be closed, or to start (or end) at the points where particles “live”. Gauge invariant operators, on the other hand, respect the geometrical properties dictated by gauge invariance: the “position” operators $\vec{r}_{(p)}$ and $\vec{E}$ are diagonal in this representation, and act by displaying the localization and shape of the geometric configurations. In turn, the magnetic field operator computes the change in the wave functional when a small “plaquette” is added, while the covariant momentum $-iD_{i}(\vec{r}_{(p)})$ measures the change when both the particle and its attached “bundle” of paths are infinitesimally displaced. In both cases, the involved derivative operation fulfills the geometrical requirements imposed by gauge invariance. At this point, it should be observed that a more appropriate notation for the path dependent functionals would be $\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})$, since it displays both the path and point dependence, which are affected by the path and ordinary derivatives respectively.
Finally, it can be shown that the path-space operators obey the algebra arising from the canonical commutators, i.e., they constitute a representation of the quantum theory under study. For instance, one has $$\begin{aligned} &&\left[-iD_{i}(\vec{r}_{(p)}),-iD_{j}(\vec{r}_{(p)})\right]\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})\nonumber\\ &&=q_{(p)}\,\Delta_{ij}(\vec{r}_{(p)})\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})\;, \label{ec18}\end{aligned}$$ which corresponds to the relation $$\begin{aligned} &&\left[p_{(p)i}+e\,q_{(p)}\,A_{i}(\vec{r}_{(p)}), \,p_{(p)j}+e\,q_{(p)}\,A_{j}(\vec{r}_{(p)})\right]\nonumber\\ &&=-ie\,q_{(p)}\,F_{ij}(\vec{r}_{(p)}) \;. \label{ec19}\end{aligned}$$ Summarizing, we saw that the L.R. of the Maxwell theory coupled with charged point particles is a “Faraday’s lines representation” that may be set up only if electric charges are quantized, the fundamental unit of charge being the electromagnetic coupling constant $e$, which in this framework is the unit of electric flux carried by each Faraday line. Non-relativistic point particles interacting through Chern-Simons field ======================================================================= We now turn our attention to the model described by the action $$\begin{aligned} S&=&\int dt \sum_{p=1}^{N} \Big[\frac{1}{2}m_{(p)}|\dot{\vec{r}}_{(p)}|^{2}\nonumber\\ &-&e\,q_{(p)}\dot{r}_{(p)}^{i}A_{i}(\vec{r}_{(p)},t)-e\,q_{(p)}A_{o}(\vec{r}_{(p)},t)\Big]\nonumber\\ &+&\frac{\kappa}{2}\int d^{3}x\;\varepsilon^{\mu\nu\lambda}\partial_{\nu}A_{\lambda}(x)A_{\mu}(x)\;. \label{ec19.a}\end{aligned}$$ This theory has been studied thoroughly [@4], mainly due to its relationship with anyonic statistics. Our main concern will be to discuss its L.R. formulation. To this end we need the results of the Dirac quantization of this model, which may be summarized as follows [@4].
The first class Hamiltonian is given by: $$H=\sum_{p=1}^{N}\frac{1}{2m_{(p)}}\left(\vec{p}_{(p)}-e\,q_{(p)}\vec{A}(\vec{r}_{(p)},t)\right)^{2}\;. \label{ec20}$$ It should be recalled that the Chern-Simons term, due to its topological character, does not contribute to the energy-momentum tensor. That is why the Hamiltonian in the present case looks like that of a collection of particles in an external field. Another difference with the previous case is the commutator $$\left[A_{i}(\vec{x}), A_{j}(\vec{y})\right]=\frac{i}{\kappa} \varepsilon^{ij}\delta^{2}(\vec{x}-\vec{y})\;, \label{ec21}$$ which, together with the commutators of the canonical operators for free particles \[i.e., equation (\[dificil\])\], completes the non-trivial part of the algebra of the quantum theory \[the remaining commutators vanish identically\]. The first class constraint that replaces the Gauss law of Maxwell theory, and generates time-independent gauge transformations, is given by $$\kappa B(\vec{x})+\sum_{(p)}e\,q_{(p)}\delta^{2}(\vec{x}-\vec{r}_{(p)})\approx 0\;, \label{ec23}$$ where $B(\vec{x})\equiv -\frac{1}{2} \varepsilon^{ij}F_{ij}$ is the “magnetic field”. This constraint states that on the physical sector of the Hilbert space, every particle carries an amount of “magnetic” flux proportional to its electric charge, and confined to the point where the particle is. It can be verified that the position and velocity operators, $\vec{r}_{(p)}$ and $m_{(p)}\vec{v}_{(p)}\equiv \vec{p}_{(p)}-e\,q_{(p)}\vec{A}(\vec{r}_{(p)},t)$, are gauge invariant. Moreover, it can be seen that on the physical sector of the Hilbert space every observable of the theory may be expressed in terms of them [@4]. Then, our next task is to find a suitable realization of these operators in a geometric representation. As in the theory of the previous section, we consider the space of path dependent functionals $\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})$.
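Integrating the constraint (\[ec23\]) over the plane makes the flux attachment quantitative; this step is implicit in the text, and spelling it out gives $$\int d^{2}x\, B(\vec{x}) = -\frac{e}{\kappa}\sum_{p=1}^{N} q_{(p)}\;,$$ so a particle of charge $e\,q_{(p)}$ binds a point-like magnetic flux $-e\,q_{(p)}/\kappa$, which is the standard flux-attachment picture behind anyonic statistics.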
The action of the path and loop derivatives $\delta_{i}(\vec{x})$, $\Delta_{ij}(\vec{x})$, the Mandelstam derivative $D_{i}(\vec{r}_{(p)})$, and the form factor $T^{i}(\vec{x}, \gamma)$ is defined as in the former case. Then it is easy to see that the prescription: $$A_{i}(\vec{x})\;\rightarrow \;\frac{i}{e}\delta_{i}(\vec{x})-\frac{e}{2\kappa}\varepsilon_{ij}T^{j}(\vec{x}, \gamma) \label{ec24}$$ realizes the commutator (\[ec21\]). From this result we can obtain the velocity operator as $$\begin{aligned} m_{(p)}v_{(p)i}=&-&i\left(\frac{\partial}{\partial r_{(p)}^{i}}+q_{(p)}\delta_{i}(\vec{r}_{(p)})\right)\nonumber\\ &+&\;\frac{e^{2}}{2\kappa}q_{(p)}\varepsilon_{ij}T^{j}(\vec{r}_{(p)}, \gamma)\nonumber\\ =&-&iD_{i}(\vec{r}_{(p)})+\frac{e^{2}}{2\kappa}\,q_{(p)}\,\varepsilon_{ij}T^{j}(\vec{r}_{(p)}, \gamma)\;, \label{ec25}\end{aligned}$$ when acting on “Faraday’s lines” dependent functionals $\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})$. After some calculations one can compute the following commutators in the path representation $$\begin{aligned} \left[m_{(p)}v_{(p)}^{i}, m_{(q)}v_{(q)}^{j}\right]&=&i\varepsilon^{ij}\Big(\delta_{pq}e\,q_{(q)}\,B(\vec{r}_{(q)})\nonumber\\ &&+\frac{e^{2}}{\kappa}q_{(p)}\,q_{(q)}\delta^{2}(\vec{r}_{p}-\vec{r}_{q})\Big)\;,\\ \left[r_{(p)}^{i}, m_{(q)}v_{(q)}^{j}\right]&=&i\,\delta^{ij}\delta_{pq}\;,\\ \left[r_{(p)}^{i}, r_{(q)}^{j}\right]&=&0\;,\end{aligned}$$ and check that they agree with what is obtained when the same commutators are calculated directly from the canonical ones, i.e., from equations (\[ec21\]) and (\[dificil\]) [@4]. Our next step consists in studying the gauge constraint (\[ec23\]).
Substituting equation (\[ec24\]) into equation (\[ec23\]), we find $$\begin{aligned} \left\{\frac{i}{2e}\varepsilon^{ij}\Delta_{ij}(\vec{x})+\frac{e}{2\kappa}\sum_{s}\left(\delta^{2}(\vec{x}-\vec{a}_{s})-\delta^{2}(\vec{x}-\vec{b}_{s})\right)\right.\nonumber\\ \left.\qquad\qquad-\frac{e}{\kappa}\sum_{p=1}^{N}q_{(p)}\delta^{2}(\vec{x}-\vec{r}_{(p)})\right\}\;\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})=0\, .\nonumber\\ \label{ec29}\end{aligned}$$ The first two terms of this expression come from the realization of the magnetic field that arises from equation (\[ec24\]). There is a special situation in which one knows the solution of the path-dependent differential equation (\[ec29\]), namely, the case when the charge is proportional to the number of strands $$q_{(p)}=\alpha\; n_{(p)}\, . \label{ec30}$$ In this case, equation (\[ec29\]) can be cast in the form $$\begin{aligned} &&\left\{\frac{e}{2\kappa}(2\alpha -1)\sum_{s}\left(\delta^{2}(\vec{x}-\vec{b}_{s})-\delta^{2}(\vec{x}-\vec{a}_{s})\right)\right.\nonumber\\ &&\left.\qquad\qquad\qquad +\; \frac{i}{2e}\varepsilon^{ij}\Delta_{ij}(\vec{x})\right\}\,\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})=0\, ,\nonumber\\ \label{ec30a}\end{aligned}$$ which we recognize as the first class constraint of the abelian Maxwell-Chern-Simons theory in an open-path representation [@10]. There is a subtlety which does not spoil the similarity between the constraints of both theories: in the present study the points $\vec{r}_{(p)}$ are “occupied” by two entities, the charged particles that may be displaced by means of $\partial/\partial r^{i}_{(p)}$, and the boundaries of the paths that respond to the action of the path derivative $\delta_{i}(\vec{r}_{(p)})$. In the Maxwell-Chern-Simons case, on the other hand, there only exist objects of the second type.
The solution of (\[ec30a\]) is given by [@10] $$\Psi(\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})=exp\left(i\frac{e^2 (2\alpha-1) }{4\pi\kappa}\Delta\Theta(\gamma)\right)\Phi(\partial\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)}), \label{otra30}$$ where $\Phi(\partial\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})$ is a function that depends on the path $\gamma_{\vec{r}_{(p)}}$ only through its boundary $\partial\gamma_{\vec{r}_{(p)}}$, and $\Delta\Theta(\gamma)$ is the sum of the angles subtended by the pieces of the path $\gamma$ from their final points $\vec{b}_{s}$, minus the sum of the angles subtended by these pieces measured from their starting points $\vec{a}_{s}$: $$\begin{aligned} \Delta\Theta(\gamma)\equiv\sum_{s}\int_{\gamma}dx^{k}\,\varepsilon^{lk}\left[\frac{(x-b_{s})^{l}}{|\vec{x}-\vec{b}_{s}|^{2}}-\frac{(x-a_{s})^{l}}{|\vec{x}-\vec{a}_{s}|^{2}}\right]\;. \label{ec31}\end{aligned}$$ At this point one should verify whether the gauge invariant operators of the theory preserve the form of the physical states given by equation (\[otra30\]). It is found that this is so, provided that $\alpha=1$. 
For instance, one has for the velocity operator $$\begin{aligned} &&m_{(p)}v_{(p)i}\left[exp\left(i\frac{e^2}{4\pi\kappa}\Delta\Theta(\gamma)\right)\Phi(\partial\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})\right]\nonumber\\ &&=exp\left(i\frac{e^2}{4\pi\kappa}\Delta\Theta(\gamma)\right)\times\nonumber\\ &&\quad\Bigg\{\frac{q_{(p)}e^{2}}{2\pi\kappa}\sum_{s}\left[\frac{(r_{(p)}-b_{s})^{i}}{|\vec{r}_{(p)}-\vec{b}_{s}|^{2}}-\frac{(r_{(p)}-a_{s})^{i}}{|\vec{r}_{(p)}-\vec{a}_{s}|^{2}}\right]\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\quad\quad-\;i\varepsilon^{ij}D_{j}(\vec{r}_{(p)})\Bigg\}\nonumber\\ &&\quad\times\;\Phi(\partial\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})\nonumber\\ &&=exp\left(i\frac{e^2}{4\pi\kappa}\Delta\Theta(\gamma)\right)\Phi'(\partial\gamma_{\vec{r}_{(p)}}, \vec{r}_{(p)})\,, \label{ec32}\end{aligned}$$ where $\Phi'$, in the last line, is a boundary-dependent functional, like $\Phi$. Hence, we find that a consistent solution of the gauge constraint is given by equation (\[otra30\]), in the case where the charges of the particles coincide with their number of attached strands. As in the Maxwell-Chern-Simons case [@10] there is a unitary transformation that allows us to eliminate the path dependent phase $\chi(\gamma)\equiv i\frac{e^2}{4\pi\kappa}\Delta\Theta(\gamma)$. It is given by: $$\begin{aligned} \Psi(\gamma, \vec{r})&\rightarrow&\widetilde{\Psi}(\partial\gamma, \vec{r})=exp\left[-\chi(\gamma)\right]\Psi(\gamma, \vec{r})\,,\\A\,&\rightarrow&\,\widetilde{A}=exp\left[-\chi(\gamma)\right]\,A\; exp\left[\chi(\gamma)\right]\,, \label{ec33}\end{aligned}$$ with $A$ being any gauge invariant operator of the theory. Once this transformation is performed, the path dependence of the wave functional $\widetilde{\Psi} $ is reduced to the boundary $\partial\gamma_{\vec{r}_{(p)}}$ of the path, which is just the set $\{\vec{r}_{(p)}\}$ of points occupied by the particles.
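The phase $\chi(\gamma)$ is built from the swept-angle sum $\Delta\Theta(\gamma)$ of equation (\[ec31\]), whose elementary ingredient is the planar angle integral $\int_{\gamma}dx^{k}\,\varepsilon^{lk}(x-c)^{l}/|\vec{x}-\vec{c}|^{2}$. A quick numerical check of this integral (our illustration, not from the paper, with the convention $\varepsilon^{12}=1$): on a discretized closed loop it returns the total winding angle, $2\pi$ for an enclosed point and $0$ otherwise:

```python
import math

def swept_angle(path, c):
    """Midpoint-rule approximation of the angle integral
    int_path dx^k eps^{lk} (x - c)^l / |x - c|^2   (with eps^{12} = 1)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        mx, my = 0.5 * (x0 + x1) - c[0], 0.5 * (y0 + y1) - c[1]
        dx, dy = x1 - x0, y1 - y0
        total += (mx * dy - my * dx) / (mx * mx + my * my)
    return total

n = 2000
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n + 1)]
print(swept_angle(circle, (0.0, 0.0)))  # ~ 2*pi: the loop winds once around the origin
print(swept_angle(circle, (3.0, 0.0)))  # ~ 0: the point lies outside the loop
```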
A moment's thought leads one to realize that, at this point, the boundary dependence of the wave functional becomes redundant, and it suffices to employ ordinary wave functions $\Psi(\vec{r}_{(p)})$, instead of the “boundary dependent” functionals $\Psi(\partial\gamma_{\vec{r}_{(p)}} , \vec{r}_{(p)})$. At the same time, we should replace the Mandelstam derivative $D_{i}(\vec{r}_{(p)})=\frac{\partial}{\partial r_{(p)}^{i}}+q_{(p)}\,\delta_{i}(\vec{r}_{(p)})$ by the ordinary “point” derivative $\frac{\partial}{\partial r_{(p)}^{i}}$. The Schrödinger equation of the model may then be written down as $$\begin{aligned} i\,\partial_{t}\psi(\vec{r}_{(p)}, t)=\left[\frac{1}{2}\sum_{p=1}^{N}m_{(p)}{v}_{(p)}^{2}\right]\;\psi(\vec{r}_{(p)}, t) \label{ec34}\end{aligned}$$ with $m_{(p)}v_{(p)}^{i}$ given by $$m_{(p)}v_{(p)}^{i}=p_{(p)}^{i}-eq_{(p)}\frac{1}{2\pi\kappa}\varepsilon^{ij}\sum_{q\neq p}eq_{(q)}\frac{(r_{(p)}^{j}-r_{(q)}^{j})}{|\vec{r}_{(p)}-\vec{r}_{(q)}|^{2}}\;, \label{ec35}$$ and then we recover the well-known description of the quantum mechanics of non-relativistic particles interacting through a quantized Chern-Simons field [@4] that gives rise to a model of anyons. This fact should be seen as the basic justification for choosing the charge quantization scheme that we adopted in the Chern-Simons case. Conclusion ========== We have studied the L.R. quantization of point particles interacting by means of Maxwell and Chern-Simons fields. In both cases we found that the appropriate Hilbert space is made of wave functionals whose arguments are Faraday’s lines emanating from or ending at the particles' positions. In the Maxwell case, since the lines of force carry an amount of electric flux that must be a multiple of the coupling constant $e$, we find that electric charge must be quantized in order to have a consistent formulation.
In the Chern-Simons case, on the other hand, the quantization of the electric charge allows one to relate, in a simple way, the geometric representation of the model to the quantum mechanics of anyons as discussed in references [@4; @5]. We think that this feature justifies the choice of the charge quantization prescription to solve the gauge constraint (\[ec30a\]). Hence, in the Chern-Simons case, we obtain the following picture: the paths may be “erased” by means of a unitary transformation if we prescribe that the charge is quantized. We want to underline how gauge invariance is maintained within the geometrical framework we have presented. For instance, the covariant momentum is a generalized derivative that translates both the charges and their associated bundles of force lines. In a similar manner, every gauge invariant operator respects the geometrical setting where the theory is represented. It seems possible to develop a similar formulation for models of extended objects interacting through abelian $p$-forms. It would also be interesting to explore whether or not charge quantization is necessary for the consistency of the L.R. of the model of charged fields (instead of particles) in electromagnetic interaction. [14]{} R. Gambini and J. Pullin, “Loops, knots, gauge theories and quantum gravity,” [*Cambridge, UK: Univ. Pr. (1996)*]{}. C. Rovelli and P. Upadhya, gr-qc/9806079. C. di Bartolo, F. Nori, R. Gambini and A. Trias, Lett. Nuovo Cim. [**38**]{}, 497 (1983). R. Gambini and A. Trias, Phys. Rev. D [**27**]{}, 2935 (1983). C. Rovelli and L. Smolin, Nucl. Phys. B [**331**]{}, 80 (1990). C. Rovelli and L. Smolin, Phys. Rev. D [**52**]{}, 5743 (1995). A. Corichi and K. V. Krasnov, hep-th/9703177. A. Corichi and K. V. Krasnov, Mod. Phys. Lett. A [**13**]{}, 1339 (1998). R. Jackiw, Annals Phys. [**201**]{}, 83 (1990). R. Banerjee and B. Chakraborty, Phys. Rev. D [**49**]{}, 5431 (1994). J. Camacaro, R. Gaitan and L. Leal, Mod. Phys.
Lett. A [**12**]{} (1997). S. Mandelstam, Annals Phys. [**19**]{}, 25 (1962). L. Leal and O. Zapata, Phys. Rev. D [**63**]{}, 065010 (2001). L. Leal, Mod. Phys. Lett. A [**7**]{}, 3013 (1992). [^1]: efuenma@fisica.ciens.ucv.ve [^2]: lleal@fisica.ciens.ucv.ve [^3]: revoredoryan@yahoo.com
--- abstract: 'We provide a detailed formulation of the recently proposed variational approach \[Y. Ashida et al., Phys. Rev. Lett. 121, 026805 (2018)\] to study ground-state properties and out-of-equilibrium dynamics for generic quantum spin-impurity systems. Motivated by the original ideas by Tomonaga, Lee, Low, and Pines, we construct a canonical transformation that completely decouples the impurity from the bath degrees of freedom. By combining this transformation with a Gaussian ansatz for the fermionic bath, we obtain a family of variational many-body states that can efficiently encode the strong entanglement between the impurity and fermions of the bath. We give a detailed derivation of the equations of motion in the imaginary- and real-time evolutions on the variational manifold. We benchmark our approach by applying it to investigate ground-state and dynamical properties of the anisotropic Kondo model and compare results with those obtained using a matrix-product state (MPS) ansatz. We show that our approach can achieve an accuracy comparable to MPS-based methods with several orders of magnitude fewer variational parameters than the corresponding MPS ansatz. Comparisons to the Yosida ansatz and the exact solution from the Bethe ansatz are also discussed. We use our approach to investigate the two-lead Kondo model and analyze its long-time spatiotemporal behavior and the conductance behavior at finite bias and magnetic fields. The obtained results are consistent with the previous findings in the Anderson model and the exact solutions at the Toulouse point.' author: - Yuto Ashida - Tao Shi - Mari Carmen Bañuls - 'J. 
Ignacio Cirac' - Eugene Demler bibliography: - 'reference.bib' title: 'Variational principle for quantum impurity systems in and out of equilibrium: application to Kondo problems' --- Introduction ============ Out-of-equilibrium phenomena in quantum many-body systems are an active area of research in both ultracold gases [@EM11; @CM12; @FuT15; @KAM16; @RL17] and traditional solid-state physics [@DFS02; @THE11; @LC11; @IZ15; @MMD17]. A broad class of problems that correspond to a quantum impurity coupled to a many-body environment is particularly important and has been at the forefront of condensed matter physics starting with the pioneering paper by Kondo [@KJ64]. Such problems have proven crucial to the understanding of thermodynamic properties in strongly correlated materials [@AK75; @HAC97; @LH07; @GP08; @SQ10], transport phenomena [@GL88; @NTK88; @MY93; @LW02; @YLH04; @GGD98; @CSM98; @SF99; @WWG00; @RMP07; @KAV11; @KAV12] and decoherence [@LAJ87; @LD98; @WZ07] in nanodevices, and lie at the heart of formulating dynamical mean-field theory [@GA96] (DMFT). ![\[fig\_schem1\] Schematic illustration of the canonical transformation introduced in our variational approach. (a) In the original frame, the localized impurity spin can interact with mobile bath particles in an arbitrary manner, generating strong entanglement between the impurity spin and bath. (b) After employing the canonical transformation, we can move to the “corotating” frame of the impurity in which the impurity dynamics is frozen at the expense of introducing an interaction among bath particles. ](fig_schem1.pdf){width="50mm"} A variational approach is one of the most successful and powerful approaches for solving many-body problems. Its guiding principle is to design a family of variational states that can efficiently capture the essential physics of the quantum many-body system while avoiding the exponential complexity of the exact wavefunction. 
In quantum impurity problems, such variational studies date back to Tomonaga’s treatment [@TS47] of the coupling between mesons and a single nucleon. Soon after that, Lee, Low and Pines [@LLP53] (LLP) applied a similar approach to the problem of a polaron, a mobile spinless impurity interacting with phonons. Their key idea is to take advantage of the total-momentum conservation by transforming to the comoving frame of the impurity via the unitary transformation [ $$\begin{aligned} \label{LLP} \hat{U}_{\rm LLP}=e^{-i\hat{\bf x}\cdot\hat{\bf P}_{\rm b}},\end{aligned}$$ ]{} where $\hat{{\bf x}}$ is the position operator of the impurity and $\hat{\bf P}_{\rm b}$ is the total momentum operator of bath phonons. After employing the transformation, the conserved quantity becomes the momentum operator $\hat{\bf p}$ of the impurity as inferred from the relation: [ $$\begin{aligned} \label{LLPconservation} \hat{U}_{\rm LLP}^{\dagger}(\hat{\bf p}+\hat{{\bf P}}_{\rm b})\hat{U}_{\rm LLP}=\hat{\bf p}.\end{aligned}$$ ]{} In the transformed frame, $\hat{{\bf p}}$ can be taken as a classical variable and thus the impurity is decoupled from the bath degrees of freedom, or said differently, its dynamics is completely frozen. This observation naturally motivates the following family of variational states: [ $$\begin{aligned} \label{LLPvar} |\Psi_{\rm tot}(\xi)\rangle=\hat{U}_{\rm LLP}|{\bf p}\rangle|\Psi_{\rm b}(\xi)\rangle,\end{aligned}$$ ]{} where $|{\bf p}\rangle$ is a single-particle state of the impurity having eigenmomentum ${\bf p}$, and $|\Psi_{\rm b}(\xi)\rangle$ is a bath wavefunction characterized by a set of variational parameters $\xi$. For example, when the bath is given by an ensemble of bosonic excitations, $|\Psi_{\rm b}\rangle$ can be chosen as a factorizable coherent state, as has been suggested in the original papers by Tomonaga [@TS47] and Lee, Low and Pines [@LLP53]. 
A variational approach combining the transformation (\[LLP\]) with a variety of efficiently parametrizable wavefunctions has been very successful in solving both in- and out-of-equilibrium problems and has thus laid the cornerstone for subsequent studies in polaron physics [@DJT09; @DJT10; @AAS_book; @FRP55; @BJ67; @NP90; @ZWM90; @AT93; @TJ09; @NA10; @CW11; @CW112; @RS13; @VJ13; @VJ14; @LW14; @FG15; @SYE16; @SYE162; @YA17]. Our aim is to generalize this variational approach to yet another important paradigm in many-body physics: localized spin-impurity models (SIM) (Fig. \[fig\_schem1\]). The most fundamental problem in SIM is the Kondo model [@KJ64], a localized spin-1/2 impurity interacting with a fermionic bath. Its equilibrium properties are now theoretically well understood based on the results of the perturbative renormalization group (RG) [@APW70], Wilson’s numerical renormalization group (NRG) [@WKG75; @BR03; @BL07; @BRC08; @SH08; @BL09; @BCA10] and the exact solution via the Bethe ansatz [@PBW80; @AND80; @ANL81; @NK81; @AN83; @SP89]. Yet, its out-of-equilibrium dynamics is still an area of active experimental [@RL17; @DFS02; @THE11; @LC11; @IZ15; @MMD17] and theoretical research [@STL08; @WP09; @SM09; @WP10; @CG13; @NP99; @KA00; @AR03; @HA08; @KM01; @PM10; @HA09L; @HA09B; @CT11; @BS14; @FS15; @BCZ17; @WSR04; @SP04; @AHKA06; @DSLG08; @WA09; @HMF09; @HMF10; @NHTM17; @AFB05; @AFB06; @AFB07; @AFB08; @RD08; @JE10; @LB14; @NM15; @DB17; @LF96; @LF98; @SA98; @LD05; @VR13; @SG14; @MM13; @BCJ16] with many open questions. 
Earlier works include real-time Monte Carlo [@STL08; @WP09; @SM09; @WP10; @CG13], perturbative RG [@NP99; @KA00; @AR03; @HA08; @KM01; @PM10] and the Hamiltonian RG method [@HA09L; @HA09B; @CT11], coherent-state expansion [@BS14; @FS15; @BCZ17], density-matrix renormalization group (DMRG) [@WSR04; @SP04; @AHKA06; @DSLG08; @WA09; @HMF09; @HMF10; @NHTM17], time-dependent NRG (TD-NRG) [@AFB05; @AFB06; @AFB07; @AFB08; @RD08; @JE10; @LB14], time-evolving block decimation (TEBD) [@NM15; @DB17], and analytical solutions [@LF96; @LF98; @SA98; @LD05; @VR13; @SG14; @MM13; @BCJ16]. In spite of the rich theoretical toolbox for studying quantum impurity systems, analysis of the long-time dynamics remains very challenging. Most theoretical approaches suffer from the same fundamental limitation: they become increasingly costly in terms of computational resources at long times. For example, it has been pointed out in TD-NRG that the logarithmic discretization may cause artifacts in predicting long-time dynamics [@RA12], and calculations based on matrix-product states (MPS) are known to be extremely challenging in the long-time regime because the large amount of entanglement forces one to use an exponentially large bond dimension [@SU11]. Another difficulty intrinsic to some of the methods is understanding the spatiotemporal dynamics of the bath degrees of freedom, since the latter are often integrated out or represented as a simplified effective bath in which details of the microscopic eigenstates are omitted. Furthermore, the currently available methods have been constructed for a specific class of Kondo impurity models, in which electrons move ballistically and the interaction between the impurity spin and fermions is local. Extending these techniques to the case of electrons strongly scattered by disorder and nonlocal interactions with the impurity spin is not obvious. These challenges motivate us to develop a new theoretical approach to quantum impurity systems. 
In the accompanying paper [@YA18L], we introduce a new canonical transformation that is the core of the proposed variational approach and provide strong evidence for the validity of our approach to correctly describe in- and out-of-equilibrium properties of SIM. In this paper, we present comprehensive details of the proposed variational approach. For the sake of completeness, we first provide the construction of the new transformation that generates the entanglement between the impurity and the bath. In contrast to the LLP transformation (\[LLP\]) of going into the comoving frame of the mobile spinless impurity, such a construction of the canonical transformation in SIM is not obvious due to the SU(2)-commutation relation of the impurity spin operators. We discuss a general approach to constructing such transformations in this paper. We then combine the transformation with Gaussian states to obtain a family of variational states that can efficiently capture the impurity-bath entanglement. We provide a set of nonlinear equations of motion for the covariance matrix to study ground-state and out-of-equilibrium properties of generic SIM. We benchmark our approach by applying it to the anisotropic Kondo model and compare results with those obtained using an MPS ansatz. We also compare our results to the ones obtained from the Yosida ansatz and the exact solution via the Bethe ansatz. We analyze out-of-equilibrium dynamics and transport properties in the two-lead Kondo model and demonstrate that our approach can be used to compute the long-time spatiotemporal dynamics and the conductance at finite bias and magnetic field. The obtained results are consistent with the previous studies in the Anderson model [@KRM02; @AHKA06; @JE10] and the exact solutions at the Toulouse point [@BCJ16]. This paper is organized as follows. In Sec. \[secformalism\], we present a general concept of our variational approach to SIM. 
In particular, we discuss the canonical transformation introduced in Ref. [@YA18L] that decouples the impurity from the bath degrees of freedom. We then derive the equations of motion for the covariance matrix of fermionic Gaussian states that describe the ground-state properties and real-time evolutions of SIM. In Sec. \[secKondo\], we apply our theory to the anisotropic Kondo model and benchmark it with MPS-based calculations and the exact solution via the Bethe ansatz. In Sec. \[secKondotwo\], we analyze transport dynamics in the two-lead Kondo model in further detail and reveal long-time spatiotemporal dynamics and the conductance behavior at finite bias and magnetic fields. Finally, we summarize the results in Sec. \[secConclusion\]. General Formalism {#secformalism} ================= Canonical transformation ------------------------ We first formulate our variational approach to SIM in a general way. The key idea is to introduce a new canonical transformation that completely decouples the impurity and the bath degrees of freedom for a generic spin-1/2 impurity Hamiltonian $\hat{H}$ by employing its parity symmetry $\hat{\mathbb P}$, i.e., $[\hat{H},\hat{\mathbb P}]=0$ with $\hat{\mathbb P}^{2}=1$. Specifically, we aim to find a unitary transformation $\hat{U}$ that maps this parity operator $\hat{\mathbb P}$ to an impurity operator as (cf. Eq. (\[LLPconservation\])) [ $$\begin{aligned} \label{Conserve} \hat{U}^{\dagger}\hat{\mathbb P}\hat{U}=\mathbf{n}\cdot\boldsymbol{\hat{\sigma}}_{\rm imp},\end{aligned}$$ ]{} where $\hat{\boldsymbol \sigma}_{\rm imp}=(\hat{\sigma}_{\rm imp}^{x},\hat{\sigma}_{\rm imp}^{y},\hat{\sigma}_{\rm imp}^{z})^{\rm T}$ is a vector of the impurity spin-1/2 operator and $\bf n$ is some specific direction defined by a three-dimensional real vector. 
It then follows that in the transformed frame the Hamiltonian commutes with the impurity operator $[\hat{U}^{\dagger}\hat{H}\hat{U},\mathbf{n}\cdot\boldsymbol{\hat{\sigma}}_{\rm imp}]=0$ such that the impurity dynamics is frozen, i.e., the transformed Hamiltonian conditioned on a classical variable $\mathbf{n}\cdot\boldsymbol{\hat{\sigma}}_{\rm imp}=\pm1$ only contains the bath degrees of freedom. The price that one pays for decoupling the impurity spin is the appearance of nonlocal multiparticle interactions between the bath degrees of freedom. In spite of the seeming complexity of the fermionic interactions in the decoupled frame, we show that their dynamics can be efficiently described using Gaussian variational wavefunctions, which require a number of parameters that grows at most polynomially with the system size. In this paper, we apply this approach to a spin-1/2 impurity interacting with a fermionic bath: [ $$\begin{aligned} \label{Ham} \hat{H}&=&\sum_{lm\alpha}h_{lm}\hat{\Psi}_{l\alpha}^{\dagger}\hat{\Psi}_{m\alpha}-h_z\hat{s}_{\rm imp}^{z}+\hat{\bf s}_{\rm imp}\cdot\hat{\bf \Sigma},\end{aligned}$$ ]{} where $\hat{\Psi}_{l\alpha}^{\dagger}$ ($\hat{\Psi}_{l\alpha}$) is a fermionic creation (annihilation) operator corresponding to a bath mode $l=1,2,\ldots,N_{f}$ and spin-$z$ component $\alpha=\uparrow,\downarrow$, $h_{lm}$ is an arbitrary $N_f\times N_f$ Hermitian matrix describing a single-particle Hamiltonian of a bath, and $h_z$ is a magnetic field acting on the impurity. We define the impurity spin operator $\hat{\bf s}_{\rm imp}=\hat{\boldsymbol \sigma}_{\rm imp}/2$ and introduce the bath-spin density operator including couplings as [ $$\begin{aligned} \label{impbath} \hat{\Sigma}^{\gamma}=\frac{1}{2}\sum_{lm\alpha\beta}g_{lm}^{\gamma}\hat{\Psi}_{l\alpha}^{\dagger}\sigma_{\alpha\beta}^{\gamma}\hat{\Psi}_{m\beta},\end{aligned}$$ ]{} where $\gamma=x,y,z$. The first term in Eq. 
(\[Ham\]) describes a noninteracting bath, the second term describes a local magnetic effect acting on the impurity, and the third term characterizes the interaction between the impurity and the bath. The impurity-bath couplings in Eq. (\[impbath\]) are determined by $N_f\times N_f$ Hermitian matrices $g_{lm}^{\gamma}$ labeled by $\gamma=x,y,z$ and can in general be anisotropic and long-range. While the interaction leads to strong impurity-bath entanglement, we note that it can also generate entanglement between bath modes since the ones that interact with the impurity are not diagonal with respect to the eigenbasis of the bath Hamiltonian $h_{lm}$ in general. A magnetic-field term acting on the bath can also be included; we omit it for simplicity. Our canonical transformation relies on the parity symmetry hidden in the Hamiltonian (\[Ham\]). To unveil this symmetry, we introduce the operator $\hat{\mathbb{P}}=\hat{\sigma}_{\rm imp}^{z}\hat{\mathbb P}_{\rm bath}$ with a bath parity operator [ $$\begin{aligned} \label{pbath} \hat{\mathbb P}_{\rm bath}=e^{(i\pi/2)(\sum_{l}\hat{\sigma}^{z}_{l}+\hat{N})}=e^{i\pi\hat{N}_{\uparrow}},\end{aligned}$$ ]{} where $\hat{N}$ is the total particle number in the bath, $\hat{\sigma}_l^{\gamma}=\sum_{\alpha\beta}\hat{\Psi}^{\dagger}_{l\alpha}\sigma^{\gamma}_{\alpha\beta}\hat{\Psi}_{l\beta}$ ($\gamma=x,y,z$) is a spin-density operator for bath mode $l$, and $\hat{N}_{\uparrow}$ is the number of spin-up fermions. These operators satisfy $\hat{\mathbb P}_{\rm bath}^{2}=\hat{\mathbb P}^{2}=1$. We observe that the Hamiltonian (\[Ham\]) conserves $\hat{\mathbb P}$, i.e., $[\hat{H},\hat{\mathbb P}]=0$. This corresponds to the symmetry under the rotation of the entire system around the $z$ axis by $\pi$, which maps both impurity and bath spins as $\hat{\mathbb P}^{-1}\hat{\sigma}^{x,y}\hat{\mathbb P}= -\hat{\sigma}^{x,y}$ while it keeps $\hat{\mathbb P}^{-1}\hat{\sigma}^{z}\hat{\mathbb P}= \hat{\sigma}^{z}$. 
We employ this parity conservation to construct the disentangling transformation $\hat{U}$ satisfying [ $$\begin{aligned} \hat{U}^{\dagger}\hat{\mathbb P}\hat{U}=\hat{\sigma}_{\rm imp}^{x}\end{aligned}$$ ]{} such that the impurity spin turns out to be a conserved quantity in the transformed frame. Here we choose ${\bf n}=(1,0,0)^{\rm T}$ in Eq. (\[Conserve\]); other choices will lead to the same class of variational states. We define a unitary transformation $\hat{U}$ as [ $$\begin{aligned} \hat{U}=\exp\left[\frac{i\pi}{4}\hat{\sigma}_{\rm imp}^{y}\hat{\mathbb P}_{\rm bath}\right]=\frac{1}{\sqrt{2}}\left(1+i\hat{\sigma}_{\rm imp}^{y}\hat{\mathbb P}_{\rm bath}\right).\end{aligned}$$ ]{} Employing this transformation, we arrive at the following transformed Hamiltonian $\hat{\tilde{H}}\! =\!\hat{U}^{\dagger}\hat{H}\hat{U}$: [ $$\begin{aligned} \hat{\tilde{H}} \! &=&\!\sum_{lm\alpha}h_{lm}\hat{\Psi}_{l\alpha}^{\dagger}\hat{\Psi}_{m\alpha}-h_z\hat{s}_{\rm imp}^{x}\hat{{\mathbb P}}_{\rm bath}\nonumber\\ &&+\hat{s}_{\rm imp}^{x}\hat{\Sigma}^{x} +\hat{{\mathbb P}}_{\rm bath}\left(-\frac{i\hat{\Sigma}^{y}}{2}+\hat{s}_{\rm imp}^{x}\hat{\Sigma}^{z}\right). \label{transformedHam}\end{aligned}$$ ]{} As expected from the construction, the impurity spin commutes with the Hamiltonian $[\hat{\tilde{H}},\hat{s}_{\rm imp}^x]=0$, and thus we can take the impurity operator as a conserved number $\sigma_{\rm imp}^{x}=2s_{\rm imp}^{x}=\pm1$ in the transformed frame. Decoupling of the impurity spin came at the cost of introducing interactions among the bath particles. Note that these interactions are multiparticle and nonlocal, as can be seen from the form of the operator $\hat{\mathbb P}_{\rm bath}$. The appearance of the bath interaction can be interpreted as spin exchange between fermions via the impurity spin. 
This is analogous to the case of the mobile spinless impurity [@LLP53], where a nonlocal phonon-phonon interaction is introduced after transforming to the comoving frame of the impurity via the LLP transformation. We emphasize that the elimination of the impurity relies only on the elementary parity symmetry in the original Hamiltonian and thus should have a wide applicability. The construction of $\hat{U}$ holds true for arbitrary conserved parity, and can also be applied to two-impurity systems [@SP18]. It follows from Eq. (\[transformedHam\]) that for an even bath-particle number $N$ and zero magnetic field $h_z=0$, the Hamiltonian has two degenerate equivalent energy sectors corresponding to the conserved quantity $\sigma_{\rm imp}^{x}=\pm1$ in the transformed frame. This is because the two sectors in Eq. (\[transformedHam\]) can be exactly related via the additional unitary transformation $\hat{U}^{y}_{\rm bath}=e^{(i\pi/2)\sum_{l}\hat{\sigma}_{l}^{y}}$, which maps the bath spins as $\hat{\sigma}^{x,z}_{l}\to -\hat{\sigma}^{x,z}_{l}$. To our knowledge, such an exact spectral degeneracy hidden in generic SIM has not been pointed out except for some specific cases that are exactly solvable via the Bethe ansatz [@PBW80; @AND80; @ANL81; @NK81; @AN83; @SP89] or at the Toulouse point [@ZG00]. The explicit form of variational states is (cf. Eq. (\[LLPvar\])) [ $$\begin{aligned} \label{varoriginal} |\Psi\rangle&=&\hat{U}|\pm_{x}\rangle_{\rm imp}|\Psi_{\rm b}\rangle\nonumber\\ &=&|\!\uparrow~\!\!\rangle_{\rm imp}\hat{\mathbb P}_{\pm}|\Psi_{\rm b}\rangle\pm|\!\downarrow~\!\!\rangle_{\rm imp}\hat{\mathbb P}_{\mp}|\Psi_{\rm b}\rangle,\end{aligned}$$ ]{} where $|\pm_{x}\rangle_{\rm imp}$ represents the eigenstate of $\hat{\sigma}_{\rm imp}^{x}$ with eigenvalue $\pm 1$, $|\Psi_{\rm b}\rangle$ represents a bath wavefunction, and $\hat{{\mathbb P}}_{\pm}=(1\pm\hat{{\mathbb P}}_{\rm bath})/2$ is the projection onto the subspace with even or odd number of spin-up fermions. 
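The two defining properties of the transformation, the parity mapping $\hat{U}^{\dagger}\hat{\mathbb P}\hat{U}=\hat{\sigma}_{\rm imp}^{x}$ and the two-component decomposition in Eq. (\[varoriginal\]), can be checked numerically on a toy Hilbert space. In the minimal sketch below (Python with NumPy/SciPy), the bath parity $\hat{\mathbb P}_{\rm bath}$ is modeled abstractly as an arbitrary diagonal $\pm1$ operator; the bath dimension and random seed are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Impurity Pauli matrices and spin states
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
plus_x = (up + dn) / np.sqrt(2)          # |+x>, eigenstate of sigma^x

# Model P_bath abstractly: any Hermitian square root of the identity works.
rng = np.random.default_rng(0)
d_bath = 8
Pb = np.diag(rng.choice([1.0, -1.0], size=d_bath)).astype(complex)
Ib = np.eye(d_bath, dtype=complex)

# Total parity P = sigma^z_imp x P_bath and U = exp(i pi/4 sigma^y_imp P_bath)
P = np.kron(sz, Pb)
U = expm(1j * np.pi / 4 * np.kron(sy, Pb))

# (i) In the transformed frame the parity becomes the pure impurity operator
assert np.allclose(U.conj().T @ P @ U, np.kron(sx, Ib))

# (ii) U|+x>|b> = |up> P_+ |b> + |down> P_- |b> with the parity projectors
b = rng.normal(size=d_bath) + 1j * rng.normal(size=d_bath)
b /= np.linalg.norm(b)
Pp, Pm = (Ib + Pb) / 2, (Ib - Pb) / 2
lhs = U @ np.kron(plus_x, b)
rhs = np.kron(up, Pp @ b) + np.kron(dn, Pm @ b)
assert np.allclose(lhs, rhs)
print("parity mapping and state decomposition verified")
```

The check uses only $\hat{\mathbb P}_{\rm bath}^{2}=1$, consistent with the claim that the construction holds for an arbitrary conserved parity.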
As we will see later, this form of variational wavefunction can naturally capture the strong impurity-bath entanglement, which is an essential feature of, for example, the formation of the Kondo singlet. We stress that our variational approach is unbiased in the sense that the variational states (\[varoriginal\]) are constructed without any particular [*a priori*]{} knowledge about the underlying impurity physics such as the Kondo physics. The canonical transformation $\hat{U}$ includes neither parameters in the Hamiltonian (e.g., Kondo coupling) nor any variational parameters. While we will choose $|\Psi_{\rm b}\rangle$ as the Gaussian states in the next subsection, the variational states (\[varoriginal\]) are still unbiased because the Gaussian states take into account all the two-particle excitations in an unbiased manner. Variational time-evolution equations ------------------------------------ ### Fermionic Gaussian states To solve SIM efficiently, we have to introduce a family of variational bath wavefunctions that can approximate the ground state and real-time evolutions governed by the Hamiltonian (\[transformedHam\]) while they are simple enough so that calculations can be done in a tractable manner. In the decoupled frame, we choose variational many-body states for the bath as fermionic Gaussian states [@WC12; @MJ13; @CVK10]. It is convenient to introduce the Majorana operators $\hat{\psi}_{1,l\alpha}=\hat{\Psi}^{\dagger}_{l\alpha}+\hat{\Psi}_{l\alpha}$ and $\hat{\psi}_{2,l\alpha}=i(\hat{\Psi}^{\dagger}_{l\alpha}-\hat{\Psi}_{l\alpha})$ satisfying the anticommutation relation $\{ \hat{\psi}_{\xi,l\alpha},\hat{\psi}_{\eta,m\beta}\} =2\delta_{\xi\eta}\delta_{lm}\delta_{\alpha\beta}$ with $\xi,\eta=1,2$. 
We describe the bath by a pure fermionic Gaussian state $|\Psi_{\rm G}\rangle$ that is fully characterized by its $4N_f\!\times \!4N_f$ covariance matrix $\Gamma$ [@WC12; @MJ13; @CVK10]: [ $$\begin{aligned} \Gamma=\frac{i}{2}\left\langle[\hat{\boldsymbol \psi},\hat{\boldsymbol \psi}^{\rm T}]\right\rangle_{\rm G}, \end{aligned}$$ ]{} where $(\Gamma)_{ij}=-(\Gamma)_{ji}\in\mathbb{R}$ is real antisymmetric and $\Gamma^{2}=-{\rm I}_{4N_{f}}$ for pure states with ${\rm I}_{d}$ being the $d\times d$ unit matrix. Here, $\langle\cdots\rangle_{\rm G}$ denotes an expectation value with respect to the Gaussian state $|\Psi_{\rm G}\rangle$, and we introduced the Majorana operators $\hat{\boldsymbol{\psi}}=(\hat{\boldsymbol{\psi}}_{1},\hat{\boldsymbol{\psi}}_{2})^{\rm T},$ where we choose the ordering of a row vector $\hat{\boldsymbol \psi}_{\xi}$ with $\xi=1,2$ as [ $$\begin{aligned} \hat{\boldsymbol{\psi}}_{\xi}&=&(\hat{\psi}_{\xi,1\uparrow},\ldots,\hat{\psi}_{\xi,N_f\uparrow},\hat{\psi}_{\xi,1\downarrow},\ldots,\hat{\psi}_{\xi,N_f\downarrow}).\end{aligned}$$ ]{} We can explicitly write the bath state as $$|\Psi_{\rm G}\rangle=e^{\frac{1}{4}\hat{\boldsymbol{\psi}}^{\rm T}X\hat{\boldsymbol{\psi}}}|0\rangle\equiv\hat{U}_{{\rm G}}|0\rangle,$$ where we define the Gaussian unitary operator by $\hat{U}_{\rm G}$ and introduce a real-antisymmetric matrix $(X)_{ij}=-(X)_{ji}\in\mathbb{R}$. The latter can be related to $\Gamma$ via $$\label{GmXi} \Gamma=-\Xi\,\sigma\left(\Xi\right)^{\rm T},$$ where $\Xi=e^{X}$ and $\sigma=i\sigma^{y}\otimes{\rm I}_{2N_f}$. 
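As a consistency check of the parametrization (\[GmXi\]), the short sketch below builds $\Gamma=-\Xi\,\sigma\,\Xi^{\rm T}$ from a random real antisymmetric generator $X$ and verifies that the result is real antisymmetric with $\Gamma^{2}=-{\rm I}_{4N_f}$, as required for a pure state; the number of modes and the seed are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

Nf = 3                      # bath modes; the Majorana dimension is 4*Nf
d = 4 * Nf

# sigma = (i sigma^y) (x) I_{2Nf}, the matrix entering Gamma = -Xi sigma Xi^T
sigma = np.kron(np.array([[0.0, 1.0], [-1.0, 0.0]]), np.eye(2 * Nf))

# Random real antisymmetric generator X; Xi = e^X is then orthogonal
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d))
X = A - A.T
Xi = expm(X)

# Covariance matrix of the pure Gaussian state |Psi_G> = U_G|0>
Gamma = -Xi @ sigma @ Xi.T

assert np.allclose(Gamma, -Gamma.T)              # real antisymmetric
assert np.allclose(Gamma @ Gamma, -np.eye(d))    # purity: Gamma^2 = -I
print("Gamma is a valid pure-state covariance matrix")
```

Both properties follow from $\Xi^{\rm T}\Xi={\rm I}$ and $\sigma^{2}=-{\rm I}$, so they hold for any antisymmetric $X$.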
It will also be useful to define the $2N_f\!\times \!2N_f$ correlation matrix: [ $$\begin{aligned} \Gamma_{f}=\langle\hat{\boldsymbol \Psi}^{\dagger}\hat{\boldsymbol \Psi}\rangle_{{\rm G}}\end{aligned}$$ ]{} in terms of Dirac fermions [ $$\begin{aligned} \hat{\boldsymbol \Psi}=(\hat{\Psi}_{1\uparrow},\ldots,\hat{\Psi}_{N_f\uparrow},\hat{\Psi}_{1\downarrow},\ldots,\hat{\Psi}_{N_f\downarrow}).\end{aligned}$$ ]{} ### Imaginary- and real-time evolutions We approximate the exact time evolution of a bath wavefunction $|\Psi_{\rm b}\rangle$ by projecting it onto the manifold spanned by the family of variational states. This can be done by employing the time-dependent variational principle [@JR79; @KP08; @CVK10; @ST17], which allows us to study ground-state properties via the imaginary-time evolution and also out-of-equilibrium dynamics via the real-time evolution. Let us first formulate the former. The imaginary-time evolution [ $$\begin{aligned} \label{imagint} |\Psi_{\rm b}(\tau)\rangle=\frac{e^{-\hat{\tilde{H}}\tau}|\Psi_{\rm b}(0)\rangle}{\left\Vert e^{-\hat{\tilde{H}}\tau}|\Psi_{\rm b}(0)\rangle\right\Vert}\end{aligned}$$ ]{} gives the ground state in the asymptotic limit $\tau\to\infty$ if the initial seed state $|\Psi_{\rm b}(0)\rangle$ has a non-zero overlap with the ground state. Differentiating Eq. (\[imagint\]), we obtain the equation of motion $$\frac{d}{d\tau}|\Psi_{\rm b}(\tau)\rangle=-(\hat{\tilde{H}}-E)|\Psi_{\rm b}(\tau)\rangle,$$ where $E=\langle\Psi_{\rm b}(\tau)|\hat{\tilde{H}}|\Psi_{\rm b}(\tau)\rangle$ represents the mean energy. 
If we take the entries of the covariance matrix $\Gamma$ of the Gaussian state $|\Psi_{\rm G}\rangle$ as the time-dependent variational parameters, the imaginary-time evolution equation for $\Gamma$ can be obtained by minimizing the deviation $\epsilon$ of the variational evolution from the exact imaginary-time evolution: [ $$\begin{aligned} \label{error} \epsilon=\left\Vert \frac{d}{d\tau}|\Psi_{\rm G}(\tau)\rangle+(\hat{\tilde{H}}-E_{\rm var})|\Psi_{\rm G}(\tau)\rangle\right\Vert ^{2},\nonumber\\\end{aligned}$$ ]{} where $E_{\rm var}=\langle\Psi_{\rm G}(\tau)|\hat{\tilde{H}}|\Psi_{\rm G}(\tau)\rangle$ is the variational energy. This is formally equivalent to solving the projected differential equation: $$\label{vareqformal} \frac{d}{d\tau}|\Psi_{\rm G}(\tau)\rangle=-\hat{{\rm P}}_{\partial\Gamma}(\hat{\tilde{H}}-E_{\rm var})|\Psi_{\rm G}(\tau)\rangle,$$ where $\hat{\rm P}_{\partial\Gamma}$ is the projector onto the subspace spanned by tangent vectors of the variational manifold. On one hand, the left-hand side of Eq. (\[vareqformal\]) gives [ $$\begin{aligned} \frac{d}{d\tau}|\Psi_{\rm G}(\tau)\rangle \!=\!\hat{U}_{{\rm G}}\left(\!\frac{1}{4}\!:\hat{\boldsymbol{\psi}}^{\rm T}\Xi^{\rm T}\frac{d\Xi}{d\tau}\hat{\boldsymbol{\psi}}:\!+\frac{i}{4}{\rm Tr}\left[\Xi^{\rm T}\frac{d\Xi}{d\tau}\Gamma\right]\right)|0\rangle,\nonumber\\\label{leftvar}\end{aligned}$$ ]{} where $:\;:$ represents taking the normal order of the Dirac operators $\hat{\Psi}$ and $\hat{\Psi}^{\dagger}$. On the other hand, we can write the right-hand side of Eq. 
(\[vareqformal\]) as: [ $$\begin{aligned} -(\hat{\tilde{H}}-E_{\rm var})|\Psi_{\rm G}(\tau)\rangle=-\left(\frac{i}{4}:\hat{\boldsymbol{\psi}}^{\rm T}\Xi^{\rm T}{\cal H}\Xi\hat{\boldsymbol{\psi}}:+\delta\hat{O}\right)|0\rangle,\nonumber\\ \label{rightvar}\end{aligned}$$ ]{} where ${\cal H}=4\delta E_{\rm var}/\delta\Gamma$ is the functional derivative of the variational energy [@ST17] and $\delta \hat{O}$ denotes the cubic and higher order contributions of $\hat{\boldsymbol \psi}$ that are orthogonal to the tangential space and thus will be projected out by $\hat{\rm P}_{\partial \Gamma}$ in Eq. (\[vareqformal\]). Comparing Eqs. (\[leftvar\]) and (\[rightvar\]), and using Eq. (\[GmXi\]), we can uniquely determine the imaginary time-evolution equation of the covariance matrix $\Gamma$ as [@CVK10; @ST17] [ $$\begin{aligned} \label{imagvar} \frac{d\Gamma}{d\tau}&=&-{\cal H}-\Gamma {\cal H}\Gamma,\end{aligned}$$ ]{} which guarantees that the variational energy $E_{\rm var}$ monotonically decreases and the variational ground state is achieved in the limit $\tau\to\infty$. In this limit, the error $\epsilon$ in Eq. (\[error\]) is equivalent to the variance of the energy in the reached ground state and can be used as an indicator to check the accuracy of the variational state. 
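The flow (\[imagvar\]) can be tested in the purely quadratic (noninteracting) case, where ${\cal H}$ is a fixed antisymmetric matrix and $E_{\rm var}=\frac{1}{4}\sum_{ij}{\cal H}_{ij}\Gamma_{ij}$, so that the exact ground-state energy is known from the spectrum of $i{\cal H}$. The sketch below is a minimal forward-Euler integration with a polar re-purification step to control discretization drift; the dimension, step size, iteration count, and seed are arbitrary illustrative choices, and in the full method ${\cal H}$ would be recomputed from $\Gamma$ at every step.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                    # Majorana dimension (4 fermion modes)

# Fixed real antisymmetric H playing the role of the functional derivative
A = rng.normal(size=(d, d))
H = A - A.T

def energy(G):
    # variational energy of the quadratic Hamiltonian (i/4) psi^T H psi
    return 0.25 * np.sum(H * G)

def purify(G):
    # polar projection back onto pure states (orthogonal antisymmetric Gamma)
    U, _, Vt = np.linalg.svd(G)
    Q = U @ Vt
    return 0.5 * (Q - Q.T)

# Random pure initial covariance matrix
B = rng.normal(size=(d, d))
G = purify(B - B.T)

dt = 1e-2
energies = [energy(G)]
for _ in range(20000):
    G = purify(G + dt * (-H - G @ H @ G))   # dGamma/dtau = -H - Gamma H Gamma
    energies.append(energy(G))

# Exact quadratic ground-state energy: -(1/4) * sum |eigenvalues of iH|
E_exact = -0.25 * np.sum(np.abs(np.linalg.eigvalsh(1j * H)))
assert energies[-1] <= energies[0]           # energy decreases along the flow
assert abs(energies[-1] - E_exact) < 1e-3    # converges to the ground state
assert np.allclose(G @ G, -np.eye(d))        # purity is preserved
print("converged variational energy:", energies[-1])
```

The flow preserves $\Gamma^{2}=-{\rm I}$ exactly in continuous time; the `purify` step only removes the $O(dt^{2})$ error of the Euler discretization.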
In a similar way, we can derive the equation of motion for $\Gamma$ in the real-time evolution [ $$\begin{aligned} |\Psi_{\rm b}(t)\rangle=e^{-i\hat{\tilde{H}}t}|\Psi_{\rm b}(0)\rangle.\end{aligned}$$ ]{} The projection $$\frac{d}{dt}|\Psi_{\rm G}(t)\rangle=-i\hat{{\rm P}}_{\partial\Gamma}\hat{\tilde{H}}|\Psi_{\rm G}(t)\rangle$$ of the Schr[ö]{}dinger equation on the variational manifold leads to the following real-time evolution equation of the covariance matrix $\Gamma$ [@CVK10; @ST17]: [ $$\begin{aligned} \label{realvar} \frac{d\Gamma}{dt}&=&{\cal H}\Gamma-\Gamma {\cal H}.\end{aligned}$$ ]{} Functional derivative of variational energy ------------------------------------------- To analytically establish the variational time-evolution equations (\[imagvar\]) and (\[realvar\]) of the covariance matrix $\Gamma$, we have to obtain the functional derivative ${\cal H}=4\delta E_{\rm var}/\delta\Gamma$ of the variational energy. In this subsection, we give its explicit analytical expression. First of all, we write the expectation value $E_{\rm var} =\langle\hat{\tilde{H}}\rangle_{{\rm G}}$ of the Hamiltonian (\[transformedHam\]) with respect to the fermionic Gaussian state as [ $$\begin{aligned} E_{\rm var} \!& =&\!\sum_{lm\alpha}h_{lm}(\Gamma_{f})_{l\alpha,m\alpha} -\!\frac{h_z}{2}\sigma_{\rm imp}^{x}\langle\hat{\mathbb{P}}_{{\rm bath}}\rangle_{{\rm G}}\nonumber\\ & +&\frac{1}{4}\sum_{lm\alpha\beta}g_{lm}^{x}\sigma_{{\rm imp}}^{x}\sigma_{\alpha\beta}^{x}(\Gamma_{f})_{l\alpha,m\beta}\nonumber\\ \!&+&\!\frac{1}{4}\!\sum_{lm\alpha\beta}\!\!\left(-ig_{lm}^{y}\sigma^{y}\!+g_{lm}^{z}\sigma_{{\rm imp}}^{x}\sigma^{z}\right)_{\alpha\beta}\!\!(\Gamma_{f}^{\rm P})_{l\alpha,m\beta},\! \label{meanE}\end{aligned}$$ ]{} where we condition the impurity operator on a classical number $\sigma_{\rm imp}^{x}=\pm1$ and introduce the $2N_f\!\times\! 
2N_f$ matrix containing the parity operator as $\Gamma_{f}^{\rm P}=\langle\hat{\mathbb{P}}_{{\rm bath}}\hat{\boldsymbol\Psi}^{\dagger}\hat{\boldsymbol\Psi}\rangle_{{\rm G}}.$ The values of $\langle\hat{\mathbb P}_{\rm bath}\rangle_{\rm G}$ and $\Gamma_{f}^{\rm P}$ can be obtained as [ $$\begin{aligned} \label{parityG} \langle\hat{\mathbb{P}}_{{\rm bath}}\rangle_{\rm G}&=&(-1)^{N_{f}}{\rm Pf}\left[\frac{\Gamma_{F}}{2}\right],\end{aligned}$$ ]{} and [ $$\begin{aligned} \Gamma _{f}^{\mathrm{P}}&&=\frac{1}{4}\langle \hat{ \mathbb{P}}_{\mathrm{bath}}\rangle _{\mathrm{G}}\nonumber\\ &&\times\Sigma _{z}(\mathrm{I} _{2N_{f}},-i\mathrm{I}_{2N_{f}})\Upsilon ^{-1}\! \left( \Gamma \sigma -\mathrm{I}_{4N_{f}}\right) \!\!\left(\! \begin{array}{c} \mathrm{I}_{2N_{f}} \\ i\mathrm{I}_{2N_{f}} \end{array} \!\right),\end{aligned}$$ ]{} where $\rm Pf$ denotes the Pfaffian and the matrices [ $$\begin{aligned} \Gamma_{F}&=&\sqrt{{\rm I}_{4N_f}+\Lambda}\,\Gamma\sqrt{{\rm I}_{4N_f}+\Lambda}-({\rm I}_{4N_f}-\Lambda)\sigma,\\ \Upsilon &=&\mathrm{I}_{4N_{f}}+\frac{1}{2}\left( \Gamma \sigma -\mathrm{I}_{4N_{f}}\right) \left( \mathrm{I}_{4N_{f}}+\Lambda \right),\end{aligned}$$ ]{} are defined by $\Lambda={\rm I}_2\otimes\Sigma_z$ and $\Sigma_z=\sigma^z\otimes{\rm I}_{N_f}$. We recall that the matrix $\sigma$ is defined below Eq. (\[GmXi\]). Since the first and third terms in the Hamiltonian (\[transformedHam\]) are quadratic, we can write them exactly in the Majorana basis as $i\hat{\boldsymbol{\psi }}^{\rm T}\mathcal{H}_{0}\hat{\boldsymbol{\psi }}/4$ with [ $$\begin{aligned} \mathcal{H}_{0}=i\sigma ^{y}\otimes \lbrack \mathrm{I}_{2}\otimes h_{lm}+(\sigma _{\mathrm{imp}}^{x}/4)\,\sigma ^{x}\otimes g_{lm}^{x}],\end{aligned}$$ ]{} where $h_{lm}$ and $g_{lm}^{x}$ are understood to be $N_f\!\times\! N_f$ real matrices. 
Thus, the functional derivative ${\cal H}=4\delta E_{\rm var}/\delta\Gamma$ of the mean energy (\[meanE\]) is given by [ $$\begin{aligned} \label{interH} \mathcal{H}\!=\!\mathcal{H}_{0}+\frac{\delta}{\delta\Gamma}\left[-2h_{z}\sigma_{\mathrm{imp}}^{x}\langle \hat{{\mathbb{P}}}_{\mathrm{bath}}\rangle _{\mathrm{G}}+{\rm Tr}({\cal S}^{\rm T}\Gamma_{f}^{\rm P})\right],\end{aligned}$$ ]{} where we introduce the matrix [ $$\begin{aligned} \mathcal{S}=-i\sigma ^{y}\otimes g_{lm}^{y}+\sigma_{\mathrm{imp}}^{x}\sigma ^{z}\otimes g_{lm}^{z}.\end{aligned}$$ ]{} Taking the derivatives of $\langle\hat{\mathbb P}_{\rm bath}\rangle_{\rm G}$ and $\Gamma_{f}^{\rm P}$ with respect to the covariance matrix $\Gamma$ in Eq. (\[interH\]), we finally obtain the analytical expression of the functional derivative $\cal H$: [ $$\begin{aligned} \label{hm} \mathcal{H} \!&=&\!\mathcal{H}_{0}\!+\!\left[h_z\sigma _{\mathrm{imp}}^{x}\langle \hat{{\mathbb{P}}}_{\mathrm{bath}}\rangle_{\mathrm{G}}\!-\! \frac{1}{2}{\rm Tr}(\mathcal{S}^{\rm T}\Gamma _{f}^{\mathrm{P}})\right]\!\mathcal{P}\nonumber\\ &&-\frac{i}{4}\langle \hat{{\mathbb{P}}}_{\mathrm{bath}}\rangle _{\mathrm{G}}\;\mathcal{A}\left[ {\cal V}\mathcal{S}^{\rm T}\Sigma_{z}{\cal V}^{\dagger}\right], \end{aligned}$$ ]{} where ${\cal A}[M]\!=\!(M\!-\!M^{\rm T})/2$ denotes the matrix antisymmetrization and we introduce the matrices [ $$\begin{aligned} \mathcal{P}&=&\sqrt{\mathrm{I}_{4N_{f}}+\Lambda }\Gamma _{F}^{-1}\sqrt{\mathrm{I}_{4N_{f}}+\Lambda},\\ {\cal V}&=&(\Upsilon^{\rm T})^{-1}\left(\begin{array}{c} {\rm I}_{2N_f}\\ i{\rm I}_{2N_f} \end{array}\right).\end{aligned}$$ ]{} Integrating Eqs. (\[imagvar\]) and (\[realvar\]) with the general form (\[hm\]) of the functional derivative, one can study ground-state properties and out-of-equilibrium dynamics of SIM on demand. 
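Schematically, the real-time equation (\[realvar\]) can be integrated with a standard fourth-order Runge-Kutta step. The sketch below assumes, purely for testability, a $\Gamma$-independent generator $\mathcal{H}$ (in the actual method $\mathcal{H}$ is recomputed from Eq. (\[hm\]) at every step), in which case the flow has the closed-form solution $\Gamma(t)=e^{\mathcal{H}t}\,\Gamma(0)\,e^{-\mathcal{H}t}$:

```python
import numpy as np
from scipy.linalg import expm

def rk4_step(gamma, h_eff, dt):
    """One classical Runge-Kutta step of dGamma/dt = H Gamma - Gamma H."""
    f = lambda g: h_eff @ g - g @ h_eff
    k1 = f(gamma)
    k2 = f(gamma + 0.5 * dt * k1)
    k3 = f(gamma + 0.5 * dt * k2)
    k4 = f(gamma + dt * k3)
    return gamma + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(42)
B = rng.standard_normal((8, 8))
h_eff = B - B.T                 # toy antisymmetric, Gamma-independent generator
C = rng.standard_normal((8, 8))
gamma = C - C.T                 # toy antisymmetric covariance matrix

t, nsteps = 1.0, 400
g = gamma.copy()
for _ in range(nsteps):
    g = rk4_step(g, h_eff, t / nsteps)

# for a constant generator the flow is Gamma(t) = exp(Ht) Gamma(0) exp(-Ht)
exact = expm(h_eff * t) @ gamma @ expm(-h_eff * t)
assert np.allclose(g, exact, atol=1e-6)
```

Note that the commutator structure preserves the antisymmetry of $\Gamma$ at every step, which is why no additional projection is needed in this toy setting.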
Application to the anisotropic Kondo model {#secKondo} ========================================== Model ----- We benchmark our general variational approach by applying it to the anisotropic Kondo model and comparing the results with those obtained using matrix-product states (MPS) [@FV08]. We also compare our results to the Yosida ansatz [@KY66] and the exact solution via the Bethe ansatz [@PBW80; @AND80; @ANL81; @NK81; @AN83; @SP89]. The one-dimensional Kondo Hamiltonian is given by [ $$\begin{aligned} \label{anisotropicK} \hat{H}_{\rm K}\!=\!-t_{\rm h}\!\!\sum_{l=-L}^{L-1}\left(\hat{c}^{\dagger}_{l\alpha}\hat{c}_{l+1\alpha}\!+\!{\rm h.c.}\right)\!+\!\frac{1}{4}\!\sum_{\gamma}\!J_{\gamma}\hat{\sigma}^{\gamma}_{\rm imp}\hat{c}_{0\alpha}^{\dagger}\sigma^{\gamma}_{\alpha\beta}\hat{c}_{0\beta},\nonumber\\\end{aligned}$$ ]{} where $\hat{c}^{\dagger}_{l\alpha}$ ($\hat{c}_{l\alpha}$) creates (annihilates) a fermion with position $l$ and spin $\alpha$. The spin-1/2 impurity $\hat{\sigma}_{\rm imp}^{\gamma}$ locally interacts with particles at the impurity site $l=0$ via the anisotropic couplings $J_{x,y}=J_\perp$ and $J_z=J_{\parallel}$. We set $t_{\rm h}=1$ as the unit of energy, and summations over the repeated spin indices $\alpha,\beta$ are understood hereafter. This model shows a quantum phase transition [@LAJ87] between an antiferromagnetic (AFM) phase and a ferromagnetic (FM) phase. The former leads to the formation of a singlet state between the impurity and bath spins and hence a vanishing impurity magnetization $\langle\hat{\sigma}_{\rm imp}^{z}\rangle=0$. The latter exhibits triplet formation, and the impurity magnetization generally takes a nonzero value. We remark that, employing the infinite-bandwidth approximation and the bosonization, the Kondo model can be mapped to the spin-boson model [@LAJ87].
Within this treatment, the impurity magnetization and the spatiotemporal dynamics have been previously studied by the TD-NRG method [@AFB06; @LB14] and also by bosonic Gaussian states combined with a unitary transformation [@ST17]. In the latter, the unitary transformation was specifically designed for the spin-boson model, and one had to choose a specific symmetry sector to obtain meaningful results. In contrast, we here construct a completely different, general family of variational states by introducing the new decoupling transformation $\hat{U}$. The transformation $\hat{U}$ imposes no specific conditions on a physical system as long as it possesses a parity symmetry $\hat{\mathbb{P}}$ and a spin-1/2 operator. In view of this strong versatility, our approach can be applied to a much wider class of problems than the previous work [@ST17]. Moreover, we apply this general approach to analyze in- and out-of-equilibrium problems of the fermionic Kondo models on the lattice, which are more challenging than the bosonized version due to their intrinsically finite bandwidth. To apply our general formalism in Sec. \[secformalism\], we note that in the Hamiltonian (\[anisotropicK\]) only the symmetric bath modes couple to the impurity spin. We thus identify the fermionic bath modes $\hat{\Psi}$ as [ $$\begin{aligned} \label{compbasis} \hat{\Psi}_{0\alpha}=\hat{c}_{0\alpha},\;\;\;\hat{\Psi}_{l\alpha}=\frac{1}{\sqrt{2}}\left(\hat{c}_{l\alpha}+\hat{c}_{-l\alpha}\right)\end{aligned}$$ ]{} with $l=1,2,\ldots,L$.
The corresponding bath Hamiltonian $h_{lm}$ is given by the following $(L\!+\!1)\!\times\!(L\!+\!1)$ hopping matrix $h_1$ of the single lead: [ $$\begin{aligned} \label{hopping1} h_1=(-t_{\rm h})\left( \begin{array}{ccccc} 0 & \sqrt{2} & 0 & \cdots & 0 \\ \sqrt{2} & 0 & 1 & 0 & \vdots \\ 0 & 1 & 0 & 1 & \vdots \\ \vdots & 0 & 1 & \ddots & 1 \\ 0 & \cdots & \cdots & 1 & 0\end{array}\right).\end{aligned}$$ ]{} The couplings $g_{lm}^{\gamma}$ are identified as the local Kondo interaction $J_{\gamma}\delta_{l0}\delta_{m0}$. In this section, we set the magnetic field to zero, $h_z=0$, for simplicity. Solving Eqs. (\[imagvar\]) and (\[realvar\]) with the functional derivative (\[hm\]), we obtain the covariance matrix $\Gamma$ corresponding to the ground state and the real-time dynamics. The associated observables can be efficiently calculated in terms of the covariance matrix. For example, the impurity-bath spin correlations for each direction are obtained from $$\chi^{x}_{l}=\frac{1}{4}\langle \hat{\sigma}_{{\rm imp}}^{x}\hat{\sigma}_{l}^{x}\rangle =\frac{1}{4}\sigma_{{\rm imp}}^{x}\sigma^x_{\alpha\beta}(\Gamma_{f})_{l\alpha,l\beta},$$ $$\chi^{y}_{l}=\frac{1}{4}\langle \hat{\sigma}_{{\rm imp}}^{y}\hat{\sigma}_{l}^{y}\rangle =\frac{1}{4}(-i\sigma^{y}_{\alpha\beta})(\Gamma_{f}^{\rm P})_{l\alpha,l\beta},$$ and $$\chi^{z}_{l}=\frac{1}{4}\langle \hat{\sigma}_{{\rm imp}}^{z}\hat{\sigma}_{l}^{z}\rangle =\frac{1}{4}\sigma_{{\rm imp}}^{x}\sigma^z_{\alpha\beta}(\Gamma_{f}^{\rm P})_{l\alpha,l\beta},$$ where $\langle\cdots\rangle$ denotes an expectation value with respect to the wavefunction in the [*original*]{} frame. The impurity magnetization can be obtained from $$\langle\hat{\sigma}^{z}_{\rm imp}\rangle=\sigma_{\rm imp}^{x}\langle \hat{\mathbb P}_{\rm bath}\rangle_{{\rm G}}=\sigma_{\rm imp}^{x}(-1)^{N_{f}}{\rm Pf}\left[\frac{\Gamma_{F}}{2}\right],$$ where we use Eq. (\[parityG\]) in the second equality.
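The folding of the chain into symmetric modes, Eqs. (\[compbasis\]) and (\[hopping1\]), is easy to verify numerically: projecting the full $(2L\!+\!1)$-site hopping matrix onto the symmetric combinations reproduces $h_1$, including the $\sqrt{2}$ anomaly on the first bond (a small toy size $L=5$ is assumed here):

```python
import numpy as np

L, t_h = 5, 1.0
n = 2 * L + 1                       # sites l = -L, ..., L
idx = {l: l + L for l in range(-L, L + 1)}

# full-chain nearest-neighbor hopping matrix
T = np.zeros((n, n))
for l in range(-L, L):
    T[idx[l], idx[l + 1]] = T[idx[l + 1], idx[l]] = -t_h

# symmetric-mode basis (columns): Psi_0 = c_0, Psi_l = (c_l + c_{-l})/sqrt(2)
S = np.zeros((n, L + 1))
S[idx[0], 0] = 1.0
for l in range(1, L + 1):
    S[idx[l], l] = S[idx[-l], l] = 1.0 / np.sqrt(2)

h1 = S.T @ T @ S                    # projected bath Hamiltonian

# expected: tridiagonal with a sqrt(2) hopping on the first bond only
expected = np.zeros((L + 1, L + 1))
for l in range(L):
    hop = -t_h * (np.sqrt(2) if l == 0 else 1.0)
    expected[l, l + 1] = expected[l + 1, l] = hop
assert np.allclose(h1, expected)
```

The antisymmetric combinations $(\hat{c}_l-\hat{c}_{-l})/\sqrt{2}$ decouple from this sector, which is why they can be dropped from the impurity problem.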
Structure of variational ground state ------------------------------------- In this subsection, we discuss how the entanglement in the Kondo-singlet state is naturally encoded in our variational ground state. It is useful to choose the correct sector of the variational manifold by specifying an appropriate conserved quantum number. In particular, if the initial seed state is an eigenstate of the total spin-$z$ component $\hat{\sigma}^{z}_{\rm tot}=\hat{\sigma}_{\mathrm{imp}}^{z}+\hat{\sigma}_{\rm bath}^{z}$ with $\hat{\sigma}_{\rm bath}^{z}=\sum_{l=0}^{L}\hat{\sigma}_{l}^z$, its value $\sigma^{z}_{\rm tot}$ is conserved through the imaginary-time evolution due to $[\hat{\sigma}^{z}_{\rm tot},\hat{H}]=0$. To show the singlet behavior explicitly, we investigate the variational ground state by choosing the initial seed state in the sector $\sigma^{z}_{\rm tot}=0$. As inferred from the definition of the bath parity (\[pbath\]) and the variational ansatz (\[varoriginal\]) conditioned on the sector $\sigma_{\rm imp}^{x}=1$, in the [*original*]{} frame the spin-up impurity $|\!\!\uparrow\rangle$ is coupled to the bath with an even number of spin-up fermions $N_{\uparrow}$ (correspondingly, an odd number of spin-down fermions $N_{\downarrow}=N_{\uparrow }+1$), while the spin-down impurity $|\!\!\downarrow\rangle$ is coupled to the bath having an odd (even) number of spin-up (-down) fermions $N_{\uparrow}$ ($N_{\downarrow}=N_{\uparrow }-1$). It then follows that the projected states $|\Psi_{\rm \pm}\rangle=\hat{\mathbb P}_{\pm}|\Psi_{\rm b}\rangle$, which couple with the impurity spin-up and -down states, respectively (c.f. Eq.
(\[varoriginal\])), are eigenstates of $\hat{\sigma}_{\rm bath}^{z}$ as follows: [ $$\begin{aligned} \label{vargs1} \hat{\sigma}_{\rm bath}^{z}|\Psi_{\pm}\rangle=\mp|\Psi_{\pm}\rangle.\end{aligned}$$ ]{} In the deep AFM phase with $\sigma_{\rm tot}^{z}=0$, one can numerically verify that the projected states $|\Psi_{\rm \pm}\rangle$ satisfy the following relations: [ $$\begin{aligned} \label{vargs2} \left\Vert \left\vert \Psi _{\pm}\right\rangle \right\Vert^{2} =1/2,\;\;\left\langle \Psi_{-}\right\vert \hat{\sigma}_{\mathrm{bath}}^{+}\left\vert \Psi_{+}\right\rangle=-1/2.\end{aligned}$$ ]{} This automatically results in the vanishing of the impurity magnetization $\langle\hat{\sigma}_{\rm imp}^{z}\rangle=0$ and underscores that the variational ground state is indeed the singlet state between the impurity and the bath spins: [ $$\begin{aligned} \label{AFMvar} |\Psi_{\rm AFM}\rangle=\frac{1}{\sqrt{2}}\left(|\!\!\uparrow\rangle_{\rm imp}|\Psi_{\downarrow}\rangle-|\!\!\downarrow\rangle_{\rm imp}|\Psi_{\uparrow}\rangle\right),\end{aligned}$$ ]{} where we use Eq. (\[varoriginal\]) and denote the normalized spin-1/2 bath states as $|\Psi_{\downarrow}\rangle=\sqrt{2}|\Psi_{+}\rangle$ and $|\Psi_{\uparrow}\rangle=-\sqrt{2}|\Psi_{-}\rangle$. It is worthwhile to mention that, if we choose $|\Psi_{\rm b}\rangle$ in Eq. (\[varoriginal\]) as a single-particle excitation on top of the Fermi sea, Eq. (\[AFMvar\]) reproduces the variational ansatz originally suggested by Yosida [@KY66] and later generalized to the Anderson model [@VCM76] and the resonant-state approach [@BG07]. This variational state has been recently revisited [@YC17] and shown to contain the majority of the entanglement in the ground state of the Kondo model, indicating the ability of our variational states to encode the most significant part of the impurity-bath entanglement.
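The singlet structure expressed by Eqs. (\[vargs1\])-(\[AFMvar\]) can be illustrated on a minimal toy consisting of the impurity and a single bath spin-1/2 (a drastic simplification of the actual Gaussian-state bath, used here only to check the algebra): the projected components have squared norm $1/2$, the cross matrix element is $-1/2$, the impurity magnetization vanishes, and the spin sum rule gives $-3/4$ already for this single site:

```python
import numpy as np

# Pauli matrices and a two-spin (impurity x bath) singlet
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)  # |Psi_AFM>

# components coupled to the impurity up/down states
psi_p = psi.reshape(2, 2)[0]   # <up_imp|Psi>  =  |dn>_bath / sqrt(2)
psi_m = psi.reshape(2, 2)[1]   # <dn_imp|Psi>  = -|up>_bath / sqrt(2)

assert np.isclose(np.vdot(psi_p, psi_p).real, 0.5)   # ||Psi_pm||^2 = 1/2
sp = (sx + 1j * sy) / 2                               # sigma^+_bath
assert np.isclose(np.vdot(psi_m, sp @ psi_p), -0.5)   # <Psi_-|s^+|Psi_+> = -1/2

# impurity magnetization vanishes and the spin sum rule holds
sz_imp = np.kron(sz, np.eye(2))
assert abs(np.vdot(psi, sz_imp @ psi)) < 1e-12
corr = sum(np.vdot(psi, np.kron(s, s) @ psi).real for s in (sx, sy, sz)) / 4
assert np.isclose(corr, -0.75)                        # sum rule: -3/4
```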
Yet, we emphasize that our variational ansatz goes beyond such a simple ansatz since we consider the general Gaussian state that takes into account all the correlations between two fermionic operators. Such flexibility beyond single-particle excitations becomes crucially important when we perform quantitative analyses of the Kondo physics, as we will demonstrate later. ![image](fig_mps1_arxiv.pdf){width="170mm"} In practice, we also monitor the Kondo-singlet formation by testing the sum rule [@BL07] of the impurity-bath spin correlation $\chi_l=\langle\hat{\boldsymbol \sigma}_{\rm imp}\cdot\hat{\boldsymbol \sigma}_{l}\rangle/4$: [ $$\begin{aligned} \sum_{l=0}^{L}\chi_{l}=\frac{1}{8}\langle\hat{\boldsymbol\sigma}_{\rm tot}^{2}-\hat{\boldsymbol\sigma}_{\rm imp}^{2}-\hat{\boldsymbol\sigma}_{\rm bath}^{2}\rangle=-\frac{3}{4}.\end{aligned}$$ ]{} In the discussions below, we have checked that this sum rule is satisfied with an error below $0.5\%$ in the AFM regime with $J_\parallel>0$. These observations clearly show that our variational states successfully capture the most important feature of Kondo physics in an efficient manner, i.e., with a number of variational parameters growing only quadratically with the system size $L$. Comparisons with matrix-product states and Yosida ansatz -------------------------------------------------------- ### Matrix-product states To further test the accuracy of our approach, we compare our variational results with those obtained using a MPS ansatz. For the sake of completeness, here we describe the MPS-based method applied to the Kondo model.
The MPS ansatz for a generic $N$-body quantum system takes the form [ $$\begin{aligned} |\Psi_{D}\rangle\!=\!\sum_{\{i_k\}} \!\mathrm{Tr}\!\left(A[0]^{i_0} \!A[1]^{i_1}\!\cdots \!A[N-1]^{i_{N\!-\!1}}\right)\!|i_0 i_1\cdots i_{N-1}\rangle,\nonumber\\\end{aligned}$$ ]{} where for each site $k$, $\{|i_k\rangle\}$ represents a finite basis of the corresponding Hilbert space, and $A[k]^{i_k}$ is a $D\times D$ matrix labelled by an index $i_k$. We note that, with open boundary conditions, the first and last matrices reduce to $D$-dimensional vectors. The parameter $D$ is known as the bond dimension and determines the number of variational parameters in the ansatz. The MPS ansatz can be used to approximate ground states and low-lying excitations of strongly correlated quantum many-body systems. It can also be employed to simulate real-time dynamics and lies at the basis of the DMRG method [@white92dmrg; @verstraete04dmrg; @vidal2003mps; @SU11; @Vidal2004; @daley04adapt; @FV08]. In order to find a MPS approximation to the ground state of Hamiltonian (\[anisotropicK\]), we express the Hamiltonian as a matrix product operator (MPO) [@pirvu10mpo]. For the sake of convenience, we actually work in terms of the bath modes in Eq. (\[compbasis\]) and map them to spins using a Jordan-Wigner transformation such that $\hat{\Psi}_{l \uparrow}=\Pi_{k<l}( \hat{\sigma}_{2k}^z\hat{\sigma}_{2k+1}^z) \hat{\sigma}_{2l}^-$ and $\hat{\Psi}_{l \downarrow}=\Pi_{k<l}( \hat{\sigma}_{2k}^z\hat{\sigma}_{2k+1}^z) \hat{\sigma}_{2l}^z \hat{\sigma}_{2l+1}^-$. A MPS approximation to the ground state is found by variational minimization of the energy over the family of MPS with fixed bond dimension $D$, $$|\Psi_0\rangle=\mathrm{argmin} \frac{\langle\Psi_D| H |\Psi_D\rangle}{\langle\Psi_D| \Psi_D\rangle}.$$ To solve the minimization, an alternating least squares strategy is applied, where all tensors but one are fixed, and the problem is reduced to an optimization over a single local tensor.
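The Jordan-Wigner mapping used above can be checked explicitly on a handful of modes: with the string operators attached, the spin operators reproduce the canonical fermionic anticommutation relations. A minimal sketch with dense matrices and four modes (the basis ordering is an implementation choice, not prescribed by the text):

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

n = 4  # fermionic modes, e.g. (0,up), (0,dn), (1,up), (1,dn)
I = np.eye(2)
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^- on a single spin

def jw(m):
    """Jordan-Wigner annihilation operator: string of sigma^z, then sigma^-."""
    return kron_all([sz] * m + [sm] + [I] * (n - m - 1))

c = [jw(m) for m in range(n)]
for i in range(n):
    for j in range(n):
        anti = c[i] @ c[j].conj().T + c[j].conj().T @ c[i]
        assert np.allclose(anti, np.eye(2 ** n) if i == j else 0.0)
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0.0)
```

The string of $\sigma^z$ operators is what turns on-site spin algebra into global fermionic anticommutation; dropping it would leave operators on different sites commuting instead of anticommuting.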
Once the local problem is solved, the optimization is repeated with respect to the next site in the chain and so on until the end of the chain is reached. The sweeping is iterated back and forth over the chain until the energy of the ground state is converged to the desired precision level (see e.g., Refs. [@SU11; @FV08] for technical details). The algorithm produces a MPS candidate for the ground state, and local expectation values and correlations can be efficiently evaluated. To improve the precision, the procedure can be repeated with increasing bond dimension using the previously found state as the initial guess. Also the convergence criterion can be refined for more accurate results. For our comparisons we set a convergence cutoff of $10^{-8}$-$10^{-6}$. We can also use the MPS ansatz for a time-evolved state after the quench protocol considered in this paper (see explanations in the next subsection). Although the initial state, i.e., the Fermi sea of the bath (c.f. Eq. (\[initialKondo\]) below), is not an exact MPS, we can find a good approximation to it by running the ground state algorithm described above in the absence of the impurity. Then, the full initial state obtained by a tensor product with the polarized impurity is evolved with the full Hamiltonian. To this end, the evolution operator is approximated by a Suzuki-Trotter decomposition [@trotter59; @Suzuki1985] as a product of small discrete time steps, each of which can be written as an MPO [@Vidal2004; @Verstraete2004a; @pirvu10mpo]. The action of the latter on the state is then approximated by a new MPS that represents the evolved state. This is achieved again by an alternating least squares algorithm, minimizing the distance between the MPS and the result of each evolution step (c.f. Refs. [@SU11; @FV08]).
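The Suzuki-Trotter step underlying this time evolution can be illustrated on a toy pair of non-commuting Hermitian matrices standing in for, e.g., the even- and odd-bond terms of a Hamiltonian (matrix sizes and couplings here are arbitrary assumptions): the global error of the first-order splitting decreases linearly with the step size,

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = A + A.T   # two non-commuting
B = rng.standard_normal((4, 4)); B = B + B.T   # Hermitian toy terms

t = 1.0
exact = expm(-1j * (A + B) * t)

def trotter(nsteps):
    """First-order Suzuki-Trotter approximation of exp(-i(A+B)t)."""
    dt = t / nsteps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)
    return np.linalg.matrix_power(step, nsteps)

errs = [np.linalg.norm(trotter(n) - exact) for n in (50, 100, 200)]
# global error ~ 1/nsteps: halving the step size roughly halves the error
assert errs[0] > errs[1] > errs[2]
assert 1.6 < errs[0] / errs[1] < 2.4
```

In the MPS simulation the same idea is applied with each exponential factor written as an MPO; a second-order (symmetric) splitting reduces the error to quadratic in the step size at little extra cost.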
In general, real-time evolutions may quickly increase the entanglement in the state, and to maintain an accurate MPS approximation the bond dimension of the ansatz will need to grow with time. In order to keep track of the accuracy of the simulations, we check convergence of the results at different times when we repeat the simulations with increasing bond dimension. ![\[fig\_mps2\] Comparison of (a) the ground-state energy and (b) impurity magnetization across the phase boundary of the anisotropic Kondo model calculated from the Yosida ansatz (green dotted line), the matrix-product state (MPS, red dashed line with crosses) and our non-Gaussian variational state (NGS, blue solid line with circles). In (a), we plot the variational energies $E_{\rm var}-E_{f}$ relative to a constant ground-state energy $E_{f}$ of free fermions on the lattice. The left and right insets magnify the ground-state energies in the FM phase and close to the phase boundary, respectively. We choose the dimensionless Kondo coupling $j_{\perp}=0.5$ and vary $j_{\parallel}$ from $-1$ to $1$ as schematically illustrated in the inset of (b). The calculations are done in the sector $\sigma_{\rm tot}^{z}=0$. System size is $L=200$ and the bond dimension of MPS is $D=280$. ](fig_mps2.pdf){width="80mm"} ### Yosida ansatz To highlight the importance of taking into account contributions beyond a single-particle excitation, we also compare our results to the much simpler ansatz originally suggested by Kei Yosida [@KY66]. This ansatz assumes a single-particle excitation on top of the half-filled Fermi sea $|{\rm FS}\rangle$. 
To deal with the anisotropic Kondo model, we consider the following ansatz: [ $$\begin{aligned} \label{yosidaansatz} |\Psi_{\rm Yosida}\rangle=\begin{cases} \frac{1}{\sqrt{2}}\sum_{n>n_{\rm F}}d_{n}\left(|\!\uparrow\rangle_{\rm imp}\hat{c}^{\dagger}_{n\downarrow}-|\!\downarrow\rangle_{\rm imp}\hat{c}_{n\uparrow}^{\dagger}\right)|{\rm FS}\rangle & (-J_{\parallel}\leq J_{\perp}),\\ \sum_{n>n_{\rm F}}d_{n}|\!\uparrow\rangle_{\rm imp}\hat{c}^{\dagger}_{n\uparrow}|{\rm FS}\rangle & (-J_{\parallel}> J_{\perp}), \end{cases}\end{aligned}$$ ]{} where $n$ denotes the bath mode in the energy basis and the summation over $n>n_{\rm F}$ indicates the contributions from the bath modes above the Fermi surface. The former (latter) ansatz in Eq. (\[yosidaansatz\]) approximates the singlet (triplet) state in the AFM (FM) regime. We minimize the mean energy $\langle\hat{H}\rangle_{\rm Yosida}$ with respect to the amplitudes $\{d_{n}\}$ and obtain the following variational equation: [ $$\begin{aligned} \label{varyosida} \epsilon_{n}d_{n}-\frac{1}{4}J_{\rm eff}\psi_{0n}^{*}\sum_{m>n_{\rm F}}\psi_{0m}d_{m}=E_{\rm GS}d_{n},\end{aligned}$$ ]{} where $\epsilon_{n}$ denotes a bath energy, $\psi_{ln}$'s are the expansion coefficients in terms of the lattice basis, $\hat{c}_{l}=\sum_{n}\psi_{ln}\hat{c}_{n}$, and $E_{\rm GS}$ is the variational ground-state energy. We define the effective Kondo coupling $J_{\rm eff}$ as follows: [ $$\begin{aligned} J_{\rm eff}= \begin{cases} 2J_{\perp}+J_{\parallel}&(-J_{\parallel}\leq J_{\perp}),\\ -J_{\parallel}&(-J_{\parallel}> J_{\perp}). \end{cases}\end{aligned}$$ ]{} We solve the eigenvalue equation (\[varyosida\]) and determine $E_{\rm GS}$ and the wavefunction $d_{n}$, from which the impurity-bath correlation can be calculated. It is worthwhile to mention that the Yosida ansatz is naturally included in our family of variational states.
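Equation (\[varyosida\]) is a rank-one eigenvalue problem that is straightforward to solve numerically. The sketch below diagonalizes the single-lead matrix $h_1$, restricts to modes above the Fermi energy (taken at zero for the half-filled band, a simplifying assumption), and checks that an antiferromagnetic $J_{\rm eff}>0$ binds the singlet below the lowest free mode (the toy values $L=39$ and $J_{\rm eff}=2t_{\rm h}$ are assumptions):

```python
import numpy as np

L, t_h, J_eff = 39, 1.0, 2.0   # toy size and coupling (assumptions)

# single-lead hopping matrix h1 with the sqrt(2) first bond, Eq. (hopping1)
h1 = np.zeros((L + 1, L + 1))
for l in range(L):
    h1[l, l + 1] = h1[l + 1, l] = -t_h * (np.sqrt(2) if l == 0 else 1.0)

eps, psi = np.linalg.eigh(h1)        # eps[n] and psi[l, n] = psi_{ln}
above = eps > 0.0                    # modes above the half-filled Fermi sea
e, p0 = eps[above], psi[0, above]    # mode energies and site-0 amplitudes

# eps_n d_n - (J_eff/4) psi_{0n}^* sum_m psi_{0m} d_m = E_GS d_n
M = np.diag(e) - 0.25 * J_eff * np.outer(p0.conj(), p0)
E_gs = np.linalg.eigvalsh(M)[0]
d = np.linalg.eigh(M)[1][:, 0]       # the variational amplitudes d_n

# an antiferromagnetic J_eff > 0 binds the singlet below the free minimum
assert E_gs < e.min() - 1e-6
```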
For instance, in the AFM regime the singlet entanglement can be decoupled in the transformed frame as [ $$\begin{aligned} \hat{U}^{-1}|\Psi_{\rm Yosida}\rangle=|+_{x}\rangle\sum_{n>n_{\rm F}}\frac{d_{n}}{\sqrt{2}}\left(\hat{c}_{n\downarrow}^{\dagger}-\hat{c}_{n\uparrow}^{\dagger}\right)|{\rm FS}\rangle.\end{aligned}$$ ]{} Here, the bath wavefunction in the transformed frame just corresponds to a single-particle excitation on top of the Fermi sea, which is a very special subclass of the whole fermionic Gaussian states. ### Benchmark results We plot in Fig. \[fig\_mps1\] the ground-state impurity-bath spin correlations $\chi^{z}_{l}=\langle\hat{\sigma}^{z}_{\rm imp}\hat{\sigma}_{l}^{z}\rangle/4$ of the anisotropic Kondo model in four different regimes of the phase diagram (see the left inset of Fig. \[fig\_mps1\](a)). We use the dimensionless Kondo couplings $j_{\parallel,\perp}=\rho_{F}J_{\parallel,\perp}$ with $\rho_{F}=1/(2\pi t_{\rm h})$ being the density of states at the Fermi energy. Our variational results not only correctly reproduce the formation of the singlet (triplet) pair between the impurity and bath spins in the AFM (FM) phase, but they also show quantitative agreement with the MPS results, with a deviation of at most a few percent of the value at the impurity site. We find that the agreement is particularly good in the deep FM and AFM regimes (see Figs. \[fig\_mps1\](c) and (d)), where the difference is below $1$%. We note that while the Yosida ansatz qualitatively captures the correct AFM and FM correlations in each regime, it fails to agree quantitatively with our variational results and the MPS ones (Figs. \[fig\_mps1\](e) and (f)). We also compare the ground-state energy $E_{\rm var}$ (Fig. \[fig\_mps2\](a)) and the corresponding magnetization $\langle\hat{\sigma}_{\rm imp}^{z}\rangle$ (Fig. \[fig\_mps2\](b)) across the phase transition line (see the inset of Fig. \[fig\_mps2\](b)).
In the FM phase, we observe that our variational results agree very well with the MPS results, with an error below $0.5$%. Remarkably, our ansatz achieves slightly lower energies deep in the phase (the left inset of Fig. \[fig\_mps2\](a)). Close to the phase boundary, we observe the largest deviation with respect to the MPS ansatz both in energy (the right inset of Fig. \[fig\_mps2\](a)) and magnetization (Fig. \[fig\_mps2\](b)), with our ansatz showing a residual magnetization close to the transition. Finally, in the deep AFM phase ($j_\parallel>0$), the ground-state energies calculated from both methods again agree well with an error typically below $0.5$%. In this regime, the corresponding residual magnetization is $\langle\hat{\sigma}_{\rm imp}^{z}\rangle\simeq O(10^{-4})$ in MPS and $O(10^{-5})$ in our variational method. We note that the Yosida ansatz gives significantly higher ground-state energies than our results and MPS ones over the whole phase diagram. We attribute the small discrepancies between our results and the MPS ones to the finite values of the system size $L$ and the bond dimension $D$. A relatively large difference in the AFM phase close to the phase boundary should be attributed to the large entanglement present in this regime. This fact can be inferred from the nonmonotonic RG flows, which show that the Kondo couplings can take very small values corresponding to the generation of the large Kondo cloud during the flows. As our calculations are carried out in the real-space basis, this fact implies that our variational states must encode such a large entanglement, which can be beyond the amount that is generated by the unitary transformation $\hat{U}$. The comparisons to the MPS results show the great efficiency of our variational approach. The number of variational parameters in MPS is approximately $4LD^2$ with $D$ being the bond dimension, while that of our variational ansatz is $4L^2$. 
Since the bond dimension $D$ in the calculations is typically taken to be 200-300, our variational approach achieves accuracy comparable to that of MPS with two to three orders of magnitude fewer variational parameters than the corresponding MPS ansatz, and accordingly shorter CPU time. This indicates that our variational ansatz successfully represents the ground-state wavefunction of SIM in a very compact way. Meanwhile, the comparison to the Yosida ansatz clearly demonstrates the importance of considering contributions beyond single-particle excitations. Equivalently, it indicates that one can dramatically improve the variational calculations by just taking into account up to two-fermion excitations. This significant simplification of the original many-body problems is made possible through the decoupling transformation that can efficiently encode the impurity-bath entanglement. ![\[fig\_mps\_RT\] Comparisons of dynamics of the impurity magnetization calculated from the matrix-product state (MPS, red dash-dotted curve) and our non-Gaussian variational state (NGS, blue solid curve) at (a) SU(2)-symmetric points and (b) strongly anisotropic regimes. The dimensionless Kondo couplings are (a) $j_{\parallel}=j_{\perp}=\pm0.35$ in the SU(2)-symmetric antiferromagnetic (AFM) and ferromagnetic (FM) phases, respectively, and (b) $(j_{\parallel},j_\perp)=(0.1,0.4)$ and $(-0.4,0.1)$ in the anisotropic AFM and FM phases, respectively. The inset in (b) magnifies the dynamics in the FM phase. System size is $L=100$ and the bond dimension of the MPS for the different cases varies in $D\in[220,\,260]$, for which we checked that the results are converged within the time window shown in the plots. ](fig_mps_RT.pdf){width="86mm"} Finally, to complete the benchmark test in the anisotropic Kondo model, we compare the real-time dynamics of the impurity magnetization calculated with a MPS algorithm and our variational method.
We consider the initial state [ $$\begin{aligned} \label{initialKondo} |\Psi(0)\rangle=|\!\uparrow\rangle_{\rm imp}|{\rm FS}\rangle,\end{aligned}$$ ]{} where $|{\rm FS}\rangle$ is the half-filled Fermi sea of the lead. At time $t=0$, we then drive the system by suddenly coupling the impurity to the lead. First, we show the results at the SU(2)-symmetric points of the AFM and FM phases (Fig. \[fig\_mps\_RT\](a)), which are usually of interest in condensed matter systems. The magnetization eventually relaxes to zero in the AFM phase as a result of the Kondo-singlet formation, while it remains nonvanishing and exhibits oscillations in the FM phase. The long-lasting fast oscillation found in the FM phase comes from a high-energy excitation of a fermion from the bottom of the band, and its period $2\pi\hbar/{\cal D}$ is characterized by the bandwidth ${\cal D}=4t_{\rm h}$ (note that we choose the units $\hbar=1$ and $t_{\rm h}=1$ in the plots). Since the impurity spin decouples from the bath degrees of freedom in the low-energy limit in the FM phase (see RG flows in Fig. \[fig\_mps2\](b) inset), the oscillation can survive even in the long-time limit. The MPS and our variational results show quantitative agreement with an error that is at most $O(10^{-2})$. Second, we compare the results in the strongly anisotropic AFM and FM regimes (Fig. \[fig\_mps\_RT\](b)). In the short and intermediate time regimes, the results obtained using MPS and our variational states exhibit good agreement. In the long-time regime ($t\gtrsim10$), the two methods show a small discrepancy of about $O(6\times 10^{-2})$ in the AFM phase and $O(1\times 10^{-2})$ in the FM phase. Yet, they still share qualitatively the same features, such as the exponential relaxation in the AFM regime and the long-lasting oscillations characterized by the bandwidth in the FM regime.
Comparison with the Bethe ansatz solution ----------------------------------------- As a further test of the validity of the present approach, we compare our results to the exact solution obtained via the Bethe ansatz with the infinite-bandwidth assumption [@PBW80; @AND80; @ANL81; @NK81; @AN83; @SP89]. When all the physical parameters are smaller than the bandwidth ${\cal D}=4t_{\rm h}$, our approach should reproduce the universal prediction from the Bethe ansatz. To be specific, we calculate the zero-temperature magnetization $m=\langle\hat{\sigma}_{\rm imp}^{z}\rangle/2$ under a finite magnetic field $h_z$ and compare the results to the one predicted from the Bethe ansatz (BA) solution [@ANL81]: [ $$\begin{aligned} m_{\rm BA}=\begin{cases} \frac{1}{4\sqrt{\pi^3}}\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\left(\frac{\pi}{2}\right)^{2k+1}\left(\frac{k+1/2}{2\pi}\right)^{k-\frac{1}{2}}\left(\frac{h_z}{T_{\rm K}}\right)^{2k+1}& (h_z/T_{\rm K}\leq\sqrt{8/(\pi e)}),\\ \frac{1}{2}\Biggl\{1-\frac{1}{\pi^{3/2}}\int_{0}^{\infty}dt\frac{\sin(\pi t)}{t}\left[\frac{8}{\pi e}\left(\frac{T_{\rm K}}{h_z}\right)^2\right]^{t}e^{-t(\ln t-1)}\Gamma\left(t+\frac{1}{2}\right)\Biggr\}& (h_z/T_{\rm K}>\sqrt{8/(\pi e)}). \end{cases}\end{aligned}$$ ]{} Here, we note that the Kondo temperature $T_{\rm K}$ is determined from the magnetic susceptibility $\chi\equiv\partial m/\partial h_{z}=1/(4T_{\rm K})$ at zero field $h_z=0$. Figure \[fig\_Bethe\] shows the comparison between the magnetization obtained from our variational method for different Kondo couplings $j$ and the Bethe ansatz solution $m_{\rm BA}$ (dashed black line). The numerical results agree with $m_{\rm BA}$ with at most a few percent deviation at small magnetic fields $h_z/T_{\rm K}\lesssim 1$. As we increase the magnetic field $h_z/T_{\rm K}$, the deviation from $m_{\rm BA}$ becomes more significant. In particular, the deviation starts at smaller $h_z/T_{\rm K}$ for a larger Kondo coupling $j$.
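As a consistency check on the weak-field branch of $m_{\rm BA}$ (and on the normalization $\chi=1/(4T_{\rm K})$ quoted above), the series can be summed numerically; its small-field slope must reproduce the susceptibility, $m\simeq h_z/(4T_{\rm K})$. A minimal sketch, assuming the series form as quoted:

```python
import numpy as np
from math import factorial, sqrt, pi, e

def m_bethe_weak(x, kmax=60):
    """Weak-field branch of m_BA, valid for x = h_z/T_K <= sqrt(8/(pi e))."""
    total = 0.0
    for k in range(kmax + 1):
        total += ((-1) ** k / factorial(k)
                  * (pi / 2) ** (2 * k + 1)
                  * ((k + 0.5) / (2 * pi)) ** (k - 0.5)
                  * x ** (2 * k + 1))
    return total / (4 * sqrt(pi ** 3))

# the small-field slope reproduces the susceptibility chi = 1/(4 T_K)
assert abs(m_bethe_weak(1e-4) / 1e-4 - 0.25) < 1e-6

# the magnetization rises monotonically towards the matching point
xs = np.linspace(0.05, 0.9 * sqrt(8 / (pi * e)), 20)
ms = np.array([m_bethe_weak(xi) for xi in xs])
assert np.all(np.diff(ms) > 0)
```

The alternating series converges throughout the weak-field branch, so a modest truncation (here $k_{\max}=60$) suffices to well below the few-percent level relevant for the comparison in Fig. \[fig\_Bethe\].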
These features should be attributed to a finite-bandwidth effect intrinsic to the lattice model. Indeed, at the largest value of $h_z/T_{\rm K}\simeq 3$ the magnetic field $h_z$ itself can be of the order of the bandwidth ${\cal D}=4t_{\rm h}$. The larger Kondo coupling $j$ corresponds to the larger Kondo temperature $T_{\rm K}$, thus leading to the smaller threshold value of $h_z/T_{\rm K}$ at which the finite-bandwidth effect begins to take place. ![\[fig\_Bethe\] Comparison of the zero-temperature magnetization $m=\langle\hat{\sigma}_{\rm imp}^{z}\rangle/2$ calculated from our approach for different Kondo couplings $j$ with the Bethe ansatz solution (dashed black line), which is exact under the infinite-bandwidth assumption. The deviation from the Bethe ansatz solution at large $h_z/T_{\rm K}$ originates from the finite bandwidth. The deviations begin at smaller $h_z/T_{\rm K}$ for a larger coupling $j$ since the latter leads to a larger Kondo temperature $T_{\rm K}$. The Kondo temperature is extracted from the magnetic susceptibility at zero field, $\chi=1/(4T_{\rm K})$. ](fig_Bethe.pdf){width="70mm"} Application to the two-lead Kondo model {#secKondotwo} ======================================= ![\[fig\_trans\_s\] Schematic figure of the two-lead Kondo model. The localized spin-1/2 impurity is coupled to the centers of the left (L) and right (R) leads via an isotropic Kondo coupling $J$. The bath fermions can move within each lead with a hopping $t_{\rm h}$. The bias potentials $V_{\rm L,R}$ are applied to the left and right leads. The time evolutions of the current $I(t)$ between the two leads, the impurity magnetization $\langle\hat{\sigma}^{z}_{\rm imp}(t)\rangle$, and spatially resolved bath properties can be efficiently calculated by our variational method.
](fig_trans_s.pdf){width="56mm"} ![image](fig_trans1_arxiv.pdf){width="170mm"} Model ----- We next apply our approach to investigate out-of-equilibrium dynamics and transport properties in the two-lead Kondo model [@KA00] (Fig. \[fig\_trans\_s\]), where the localized spin impurity is coupled to the centers of the left and right leads via the isotropic Kondo coupling. The Hamiltonian is [ $$\begin{aligned} \label{two-leads} \hat{H}_{\rm two}&=&\sum_{l\eta}\biggl[-t_{\rm h}\bigl(\hat{c}^{\dagger}_{l\eta\alpha}\hat{c}_{l+1\eta\alpha}\!+\!{\rm h.c.}\bigr)+eV_{\eta}\,\hat{c}^{\dagger}_{l\eta\alpha}\hat{c}_{l\eta\alpha}\biggr]\nonumber\\ &&+\frac{J}{4}\sum_{\eta\eta'}\hat{\boldsymbol{\sigma}}_{\rm imp}\cdot\hat{c}_{0\eta\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}\hat{c}_{0\eta'\beta}-\frac{h_z}{2}\hat{\sigma}_{\rm imp}^{z},\end{aligned}$$ ]{} where $\hat{c}_{l\eta\alpha}^{\dagger}$ ($\hat{c}_{l\eta\alpha}$) creates (annihilates) a fermion with position $l$ and spin $\alpha$ on the left ($\eta={\rm L}$) or right ($\eta={\rm R}$) lead, the hopping $t_{\rm h}=1$ sets the energy unit, $J$ is an isotropic Kondo coupling strength, and $eV_{\eta}$ are chemical potentials for each lead. The spin indices $\alpha,\beta$ are understood to be contracted as usual. In the same manner as in the single-lead case in the previous section, only the following symmetric modes are coupled to the impurity: [ $$\begin{aligned} \label{compbasis2} \hat{\Psi}_{0\eta,\alpha}=\hat{c}_{0\eta\alpha},\;\;\; \hat{\Psi}_{l\eta,\alpha}=\frac{1}{\sqrt{2}}\left(\hat{c}_{l\eta\alpha}+\hat{c}_{-l\eta\alpha}\right)\end{aligned}$$ ]{} with $l=1,2,\ldots,L$. Our formalism in Sec.
\[secformalism\] can be applied by identifying the bath Hamiltonian $h_{lm}$ as the following two-lead matrix $h_{2}$: [ $$\begin{aligned} \label{two-lead1} h_2 &=&\left( \begin{array}{cc} h_1+eV_{\mathrm{L}}\mathrm{I}_{L+1} & 0 \\ 0 & h_1+eV_{\mathrm{R}}\mathrm{I}_{L+1}\end{array}\right),\end{aligned}$$ ]{} where we recall that $h_1$ is the single-lead hopping matrix given in Eq. (\[hopping1\]). The coupling matrix $g_{lm}^{\gamma}$ is given by the local two-lead Kondo coupling: [ $$\begin{aligned} \label{two-lead2} g^{\gamma } &=&J\left( \begin{array}{cc} 1 & 1 \\ 1 & 1\end{array}\right) \otimes \mathrm{diag}_{L+1}(1,0,...,0),\end{aligned}$$ ]{} where ${\rm diag}_{d}(v)$ denotes a $d\!\times\! d$ diagonal matrix with diagonal elements $v$. Using Eqs. (\[two-lead1\]) and (\[two-lead2\]), together with the general expression of the functional derivative (\[hm\]), we solve the variational real-time evolution (\[realvar\]) to study the out-of-equilibrium dynamics and transport properties. Quench dynamics --------------- To analyze transport phenomena, we prepare the initial state [ $$\begin{aligned} |\Psi(0)\rangle=|\uparrow\rangle_{\rm imp}|{\rm FS}\rangle_{\rm L}|{\rm FS}\rangle_{\rm R},\end{aligned}$$ ]{} where $|{\rm FS}\rangle_{\rm L,R}$ are the half-filled Fermi seas of the two leads and, at time $t=0$, we drive the system out of equilibrium by suddenly coupling the impurity and applying bias potentials $V_{\rm L}=V/2$ and $V_{\rm R}=-V/2$. We then calculate the real-time evolution by integrating Eq. (\[realvar\]). Without loss of generality, we choose a bias $V>0$ so that the current initially takes a positive value (particles flow from the left to the right lead). ![\[fig\_trans\_p\] Time evolutions of the current $I(t)$ after the quench for various bias potentials $V$. After the transient dynamics, all currents reach their steady values, which are shown as the black dashed lines. The current is plotted in units of $et_{\rm h}/h$.
The dimensionless Kondo coupling is $j=0.4$ and we set a bias potential $V=V_0+\Delta V/2$ with $\Delta V=0.01t_{\rm h}$ and vary $V_0=0,0.1,\cdots,1.1t_{\rm h}$ from bottom to top. System size is $L=100$ for each lead. ](fig_trans_p.pdf){width="71mm"} The black solid curve in Fig. \[fig\_trans1\](a) shows the time evolution of the current $I(t)$ flowing between the two leads: [ $$\begin{aligned} I(t)&=&i\frac{eJ}{4\hbar}\left[\langle \hat{\boldsymbol{\sigma}}_{{\rm imp}}\cdot\hat{c}_{0{\rm L}\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}\hat{c}_{0{\rm R}\beta}\rangle -{\rm h.c.}\right]\nonumber\\ &=&-\frac{eJ}{2\hbar}{\rm Im}\biggl[\sigma_{{\rm imp}}^{x}\sigma_{\alpha\beta}^{x}(\Gamma_{f})_{0{\rm L}\alpha,0{\rm R}\beta}\nonumber\\ && \;\;\;\;\;\; \;\;\;\;\;\; \;\;\;\;+(-i\sigma^{y}+\sigma_{{\rm imp}}^{x}\sigma^{z})_{\alpha\beta}(\Gamma_{f}^{{\rm P}})_{0{\rm L}\alpha,0{\rm R}\beta}\biggr],\end{aligned}$$ ]{} where we recall that $\langle\cdots\rangle$ is an expectation value with respect to the wavefunction in the original frame, and we restore $\hbar$ when the current and conductance are under consideration. After a short transient dynamics, the current quickly reaches a steady value and forms a plateau in the time evolution. The relaxation time here is characterized by the time scale of the Kondo-singlet formation, which is roughly equal to the decay time of the magnetization (blue dash-dotted curve in Fig. \[fig\_trans1\](a)). Figures \[fig\_trans\_p\] and \[fig\_trans\_2\](b) demonstrate that these time scales are characterized by the inverse of the Kondo temperature $1/T_{\rm K}$, with $T_{\rm K}$ extracted from the magnetic susceptibility at zero field as discussed before. The stability of the reached steady state can be checked by adding a small perturbation to the dynamics and observing the response of physical observables; if they return to the original steady values, we can conclude that the reached state is actually stable.
Figure \[fig\_trans1\](b) shows the spatiotemporal development of the impurity-bath spin correlation $\chi^{z}_{l}(t)$, which clearly indicates the formation of the Kondo cloud. Figure \[fig\_trans1\](c) shows the corresponding spatiotemporal dynamics of the changes in density relative to the initial value. After the quench, density waves propagate ballistically and eventually reach the ends of the leads. Then, the reflected waves come back to the impurity site at the middle of the time evolution. This coincides with the sudden flip of the current and the recurrence of the magnetization (Fig. \[fig\_trans1\](a)). Also, the spin correlation transiently becomes ferromagnetic at this moment (Fig. \[fig\_trans1\](b)). After the second reflections at the ends of the leads, the density in both leads returns to the initial value as shown in Fig. \[fig\_trans1\](c). These results demonstrate the ability of our variational method to accurately calculate long-time spatiotemporal dynamics, which is challenging to obtain with previous approaches. As we consider a global quench protocol, the system acquires an extensive amount of energy relative to the ground state and thus the amount of entanglement can grow significantly in time. This severely limits the applicability of, e.g., DMRG calculations in the long-time regime [@HMF09]. The predicted spatiotemporal dynamics can be readily tested with site-resolved measurements by quantum gas microscopy [@EM11; @CM12; @FuT15; @KAM16; @MM15; @YA15; @RY16]. We remark that all the results are converged with respect to the system size $L$ up to the time $L/t_{\rm h}$ at which the density waves come back to the impurity site and finite-size effects set in. To avoid a finite-size effect, we also recall that the Kondo coupling $J$ should be large enough such that the Kondo length satisfies $\xi_{\rm K}\ll L$ (see also discussions in Ref. [@YA18L]).
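For concreteness, the block structure of the bath matrices in Eqs. (\[two-lead1\]) and (\[two-lead2\]) can be assembled in a few lines. The following is a minimal numpy sketch under our own naming, not the implementation used for the results above; the single-lead hopping matrix $h_1$ is taken as a given input:

```python
import numpy as np

def two_lead_matrices(h1, J, eVL, eVR):
    """Assemble the two-lead bath Hamiltonian h2 and coupling matrix g
    from a given single-lead hopping matrix h1 of size (L+1) x (L+1):
      h2 = block_diag(h1 + eVL*I, h1 + eVR*I)
      g  = J * [[1, 1], [1, 1]] (x) diag(1, 0, ..., 0)
    """
    n = h1.shape[0]                      # n = L + 1 folded sites per lead
    I = np.eye(n)
    Z = np.zeros((n, n))
    h2 = np.block([[h1 + eVL * I, Z],
                   [Z, h1 + eVR * I]])
    # Only the l = 0 site of each lead couples to the impurity.
    P0 = np.zeros((n, n))
    P0[0, 0] = 1.0
    g = J * np.kron(np.array([[1.0, 1.0], [1.0, 1.0]]), P0)
    return h2, g
```

The Kronecker product mirrors the $2\times2$ lead-space structure of Eq. (\[two-lead2\]): the coupling is nonzero only on the $l=0$ entries, and it mixes the two leads with equal weight.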
![\[fig\_trans\_2\] (a) The differential conductance $G$ (in units of $e^2/h$) is plotted against the magnetic field $h_z$ for different bias potentials $V$. (b) Time evolution of the impurity magnetization $\langle\hat{\sigma}_{\rm imp}^{z}(t)\rangle$ after the quench at finite bias $V$ and magnetic field $h_z$. System size is $L=100$ for each lead and we use $j=0.35$, $V=0.8t_{\rm h}$ in (b). ](fig_trans2.pdf){width="86mm"} ![\[fig\_trans\_3\] (Top panel) The current-bias characteristics for different strengths of the magnetic field $h_z$. The dashed black line indicates the perfect conductance with $I/V=2e^2/h$. The current is plotted in units of $et_{\rm h}/h$. (Bottom panel) The corresponding plot of the nonlinear differential conductance $G=dI/dV$ against the bias potential $V$ for different magnetic fields $h_z$. System size is $L=100$ for each lead and we use $j=0.35$, for which the Kondo temperature is obtained as $T_{\rm K}=0.4414t_{\rm h}$ from the magnetic susceptibility $\chi=1/(4T_{\rm K})$. ](fig_trans3.pdf){width="76mm"} Conductance at finite bias and magnetic field --------------------------------------------- It is known that the conductance exhibits universal scaling behavior as a function of external parameters such as the magnetic field, bias potential, and temperature [@AR01; @ACH05; @GL05; @SE09; @MC09; @KAV11; @MC15; @FM17; @OA18; @OA182]. In the accompanying paper [@YA18L], we have checked that our approach reproduces the correct low- and high-magnetic-field dependence at zero bias. To further check that our approach captures the essential features of the transport phenomena in the two-lead Kondo model, we study the conductance behavior at finite bias and magnetic field. For a given bias $V_{0}$, we calculate the current $I(t)$ and its time-averaged steady value $\bar{I}$.
We then determine the differential conductance $G$ by calculating $\overline{I}$ for slightly modulated biases $V=V_{0}\pm\Delta V$ as follows: $$G(V_{0})\simeq\frac{\bar{I}(V_{0}+\Delta V)-\bar{I}(V_{0}-\Delta V)}{2\Delta V}.$$ We plot typical time evolutions of the currents relaxing to their steady values for various bias potentials $V$ in Fig. \[fig\_trans\_p\]. Figure \[fig\_trans\_2\](a) shows the conductance behavior against the magnetic field $h_z$ for zero and finite bias potentials. In the absence of bias, applying a magnetic field $h_z$ larger than the Kondo temperature $T_{\rm K}$ eventually destroys the Kondo singlet and monotonically diminishes the conductance (black circles). In contrast, with a finite bias potential $V$, the conductance exhibits a peak around $h_z=V$, at which the Fermi surfaces of the two leads match. Importantly, the peak values of the conductance are less than the unitarity limit $2e^2/h$ since the magnetic field partially destroys the Kondo singlet, as inferred from the nonzero impurity magnetizations shown in Fig. \[fig\_trans\_2\](b). We also plot the current-bias characteristics for different strengths of the applied magnetic field in the top panel of Fig. \[fig\_trans\_3\]. As we increase the magnetic field, the current-bias curve deviates from the perfect linear characteristics (black dashed line) due to the partial breaking of the Kondo singlet. At fixed $h_z$, the slope of the current-bias curve becomes increasingly steep as we increase the bias $V$, as long as the bias remains below the resonance, $V<h_z$. Equivalently, this behavior manifests itself as the peak in the differential conductance around $h_z\sim V$ (cf. the red curve in the bottom panel of Fig. \[fig\_trans\_3\]). These characteristic features of the conductance under finite bias and magnetic field are consistent with previous findings in the Anderson model [@KRM02; @AHKA06; @JE10] and with analytical results at the Toulouse point [@BCJ16].
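The central-difference estimate of $G$ above, together with the time averaging of $I(t)$ over the steady-state plateau, can be sketched as follows. This is an illustrative sketch with hypothetical function names; `steady_current` stands in for a full variational simulation returning $\bar{I}(V)$:

```python
import numpy as np

def time_averaged_current(I_t, t, t_min):
    """Average the current I(t) over the steady-state window t >= t_min,
    i.e., over the plateau after the transient dynamics."""
    mask = t >= t_min
    return I_t[mask].mean()

def differential_conductance(steady_current, V0, dV=0.01):
    """Central-difference estimate of G = dI/dV at bias V0:
    G(V0) ~ [I_bar(V0 + dV) - I_bar(V0 - dV)] / (2 dV)."""
    return (steady_current(V0 + dV) - steady_current(V0 - dV)) / (2.0 * dV)
```

For a perfectly linear characteristic $\bar{I}=G_0 V$ the estimator returns $G_0$ exactly; in practice, residual current fluctuations on the plateau limit the accuracy at very small $\Delta V$, which is the numerical-error issue discussed below for the perturbative regime.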
Several remarks are in order. Firstly, in real-space calculations [@AHKA06; @JE10] such as the present implementation, it is known that the bias potential $V$ should be kept small because of the finite bandwidth intrinsic to the lattice model. If one chooses a very large $V$ comparable to the bandwidth, the bias becomes of the order of the Fermi energy and the calculations of the current and conductance can no longer be faithful. For the parameters used in Fig. \[fig\_trans\_3\], we find that the results faithfully converge to steady-state values as long as the bias is small enough to satisfy $V\lesssim t_{\rm h}$. Beyond that value, the current often does not reach a steady value on the available time scale. Secondly, we remark that, in the perturbative regime $V\lesssim 0.1t_{\rm h}$ in Fig. \[fig\_trans\_3\], very small deviations of the conductance from the perfect value $G_0=2$ can easily be masked by numerical errors caused by time-dependent current fluctuations in the steady-state regime (see e.g., Fig. \[fig\_trans\_p\]). This makes it somewhat difficult to test the perturbative scaling of $G$ in the limit $V\to 0$ [@AR01; @ACH05; @SE09; @MC09; @KAV11; @MC15; @FM17; @OA18; @OA182]. Thirdly, as mentioned above, we found that several important features of our results are consistent with the analytical results at the Toulouse point. These include the appearance of the peak in the differential conductance around $h_z\sim V$ and the asymmetry between the conductance behavior at $V=0$ with varying $h_z$ and that at $h_z=0$ with varying $V$. In particular, it has been argued that this asymmetry is an important feature which is absent in the “conventional" bosonization treatment [@BCJ16]. Yet, there seem to be differences between the asymmetric behavior found in our results and that found in the above reference. For instance, while our results (cf. the $h_z=0$ curve in Fig.
\[fig\_trans\_3\]) indicate a negative curvature $d^2G/dV^2<0$ in the intermediate regime $V\sim T_{\rm K}\simeq 0.44t_{\rm h}$, Ref. [@BCJ16] has predicted the opposite sign of the curvature with a strong linear $V$ dependence in a rather broad regime of small $V$ (cf. the blue dashed curve in Fig. 6 of that reference). We leave the observed discrepancy between the lattice Kondo results presented here and the analytical results at the Toulouse point as an interesting open question. Conclusions and Outlook {#secConclusion} ======================= Motivated by the original ideas of Tomonaga [@TS47] and Lee, Low and Pines [@LLP53], we have developed an efficient and versatile theoretical approach for analyzing the ground-state properties and out-of-equilibrium dynamics of quantum spin-impurity systems. A key idea of this approach is to introduce a canonical transformation that decouples the impurity from the bath degrees of freedom, such that the impurity dynamics is completely frozen in the transformed frame. We obtain this transformation from the conserved parity operator corresponding to the discrete symmetry of the spin-impurity Hamiltonians. By combining the canonical transformation with Gaussian wavefunctions for the fermionic bath, we obtain a family of variational states that can represent nontrivial correlations between the impurity spin and the bath. In the accompanying paper [@YA18L], we have presented the unitary transformation decoupling the spin-1/2 operator and the bath, and provided strong evidence that our approach correctly captures the nontrivial ground-state and out-of-equilibrium properties of spin-impurity problems. In this paper, we have presented the complete details of our variational method and benchmarked its accuracy by comparing to results obtained with the MPS approach and the exact solution via the Bethe ansatz.
Furthermore, we have analyzed the out-of-equilibrium dynamics of the two-lead Kondo model in further detail by calculating its long-time spatiotemporal dynamics and studying the conductance behavior at finite bias and magnetic fields. Let us summarize the key advantages of the proposed theoretical approach. Firstly, the approach is versatile and can be applied to generic spin-impurity models, including systems with long-range spin-bath interactions and disorder. The canonical transformation that we used relies only on the existence of the elemental parity symmetry, and Gaussian states can be used to describe both fermionic and bosonic baths. The simplicity of our variational approach can provide new physical insight into fundamental properties of challenging impurity problems. Secondly, our method can be used to predict the spatiotemporal dynamics of the total system, including both impurity and bath, in long-time regimes, which were challenging (if not impossible) to explore with previous approaches [@SU11; @RA12]. This capability allows us to reveal new types of out-of-equilibrium phenomena in spin-impurity models, e.g., nontrivial crossovers in the long-time dynamics which originate from the nonmonotonic RG flows of equilibrium systems, as found in the accompanying paper [@YA18L]. Thirdly, we note the remarkable efficiency of our approach, as we achieve accuracy comparable to the MPS-based method using several orders of magnitude fewer variational parameters. This suggests that our variational states can represent nontrivial impurity-bath correlations in a very compact way. Finally, in contrast to several previous methods, our approach can be used without relying on bosonization, which requires introducing a cut-off energy and using a strictly linear dispersion.
This advantage is particularly important in view of recent developments in simulating quantum dynamics, such as in ultracold atoms [@RL17; @EM11; @CM12; @FuT15; @KAM16; @MM15; @YA15; @RY16; @AR05; @PD11; @BJ13; @NY13; @NY16; @NM152; @ZR15; @ZR16; @KM17] and quantum dots [@DFS02; @THE11; @LC11; @IZ15; @MMD17], which allow quantitative comparisons between theory and experiments on both short and long time scales. Our variational approach can be generalized in several ways. Owing to its versatility, the proposed approach can be straightforwardly generalized to multi-channel systems [@RMP07; @IZ15; @BZ17; @MAK12; @MAK16; @MAK17], disordered systems [@EM96], interacting baths such as the Kondo-Hubbard models [@TH97], and long-range interacting systems [@KKS17; @CF17] as typified by the central spin problem [@WZ07], which is relevant to nano-electronic devices such as quantum dots [@LD98]. Other promising directions are generalizations to bosonic systems [@FGM04; @FS06; @FFM11; @FT15], the Anderson model, and two-impurity systems [@SP18], which will be published elsewhere. Extending the present approach, it is also possible to calculate frequency-domain quantities such as the spectral function [@AR03; @WA09]. Our approach can also be extended to study driven systems and quantum pumping. Most of the previous works in this direction were restricted to noninteracting electrons [@FR09; @CR11; @DM15; @PY16]. Our approach will also allow for studying the full distribution function of charge transport; previous studies have been mainly limited to either noninteracting models [@LSL96; @NYV02] or one-dimensional systems that allow bosonization [@GDB10; @GDB10L]. In this paper, we focused on pure Gaussian states to represent the coherent dynamics of an isolated system at zero temperature.
Generalizing our method to Gaussian density matrices will make it possible to explore finite temperature systems [@CTA00; @BR01; @CPH16] and, together with the variational principle for master equations [@CJ15; @WH15], to study Markovian open quantum systems subject to dissipation [@DAJ14; @BG13; @LR16; @TT17] or continuous measurements [@HC93; @LTE14; @YA17nc; @PYS15; @YA18fcs]. Extending our approach to multiple impurities [@JC81; @JBA87; @GA99; @ZZ06; @SP18] would allow for studying the most challenging problems in many-body physics like competing orders in strongly correlated fermions [@VM00] and confinement in lattice gauge theories [@WKG74; @KJ75]. We hope that our work stimulates further studies in these directions. We acknowledge Carlos Bolech Gret, Adrian E. Feiguin, Shunsuke Furukawa, Leonid Glazman, Vladimir Gritsev, Masaya Nakagawa and Achim Rosch for fruitful discussions. Y.A. acknowledges support from the Japan Society for the Promotion of Science through Program for Leading Graduate Schools (ALPS) and Grant No. JP16J03613, and Harvard University for hospitality. T.S. acknowledges the Thousand-Youth-Talent Program of China. J.I.C. is supported by the ERC QENOCOBA under the EU Horizon 2020 program (grant agreement 742102). E.D. acknowledges support from Harvard-MIT CUA, NSF Grant No. DMR-1308435, AFOSR Quantum Simulation MURI, AFOSR grant number FA9550-16-1-0323, the Humboldt Foundation, and the Max Planck Institute for Quantum Optics.
--- abstract: 'Nowadays, it is still difficult to deploy Convolutional Neural Network (CNN) based models on embedded devices. The heavy computation and large memory footprint of CNN models become the main burden in real applications. In this paper, we propose a “Sparse Shrink” algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algorithm is able to prune redundant feature maps accordingly. The resulting pruned model thus directly saves computational resources. We have evaluated our algorithm on CIFAR-100. As shown in our experiments, we can reduce the number of parameters by $56.77\%$ and the number of multiplications by $73.84\%$ in total, with only a minor decrease in accuracy. These results demonstrate the effectiveness of our “Sparse Shrink” algorithm.' author: - | Xin Li, Changsong Liu\ State Key Laboratory of Intelligent Technology and Systems,\ Tsinghua National Laboratory for Information Science and Technology,\ Department of Electronic Engineering, Tsinghua University, Beijing 100084, China\ Email: {lixin08, lcs}@ocrserv.ee.tsinghua.edu.cn bibliography: - 'sparse\_shrink.bib' title: Prune the Convolutional Neural Networks with Sparse Shrink --- Introduction {#sec:intro} ============ In recent years, great progress has been achieved in computer vision, which is arguably attributable to greater computation resources and the application of deep learning algorithms [@szegedy2015going; @simonyan2014very; @long2015fully; @ren2015faster]. The convolutional neural network (CNN) is a popular example of deep learning algorithms. It adopts a deep architecture consisting of many stacked convolutional and fully-connected layers, which is specifically designed for solving computer vision related problems. Although CNNs have brought breakthroughs to computer vision, it is still not possible to determine the optimal network architecture, *e.g.* the number of channels in a convolutional layer, for a specific task.
Nowadays, people tend to design large networks with large numbers of channels to build a high-capacity model. However, this brings a large demand on computation and memory capacity, which are especially limited on embedded devices. The heavy computation and large memory footprint of CNN models become the major burden in real applications. On the other hand, it is observed that there is redundancy in large networks [@han2015learning; @wen2016learning]. Convolutional layers occupy the main calculation in a CNN, and the responses of their resulting feature maps are sometimes largely correlated with each other. Therefore, it is intuitive to prune a large pre-trained model by removing redundant connections. This results in a lightweight network with a comparable level of performance and less demand on both memory and computational complexity. Motivated by this, we propose a novel “Sparse Shrink” algorithm to prune a CNN model: we evaluate the importance of each channel of the feature maps, and prune less important channels to get a slimmer network. The pruned model achieves performance similar to the original model, with a thinner structure and lower computational complexity. ![By evaluating the importance of each channel, “Sparse Shrink” prunes less important channels and builds a slimmer model. Weights in the upper and lower layers are modified accordingly.[]{data-label="fig:demo"}](Figures/demo.png){width="0.95\columnwidth"} Related Work ============ Extensive work has been done to accelerate the testing of CNN models or to lower their memory cost. Some works [@jaderberg2014speeding; @zhang2015efficient] speed up testing by exploiting the sparsity in CNN models with low-rank decomposition. Vasilache [@vasilache2014fast] speeds up the convolution operation with a Fast Fourier Transform implementation. However, these algorithms focus on either accelerating test speed or lowering the memory footprint of CNN models without changing their model structures.
Network pruning has been studied by several researchers [@lecun1989optimal; @hassibi1993second; @stepniewski1997pruning; @rastegari2016xnor]. Lecun *et al.* [@lecun1989optimal] and Hassibi *et al.* [@hassibi1993second] show that a portion of the weights can be set to zero by analyzing their values and the Hessian matrix. Han *et al.* [@han2015learning; @han2015deep] gradually prune small weights in a network, and further reduce the storage requirement by compressing the weights of fully connected layers with matrix factorization and vector quantization. Rastegari *et al.* [@rastegari2016xnor] binarize both the weights and layer inputs, such that the resulting network mainly uses XNOR operations. Stepniewski *et al.* [@stepniewski1997pruning] prune networks with a genetic algorithm and simulated annealing. However, these algorithms only make use of intra-kernel sparsity, without channel-wise pruning. This limits the ability of GPUs to exploit the computational savings. Different from existing algorithms, our “Sparse Shrink” algorithm directly prunes the network structure in convolutional layers by channel-wise pruning. The most related work on channel-wise pruning is “Structured pruning” [@anwar2015structured]. It naively removes the incoming and outgoing weights of a pruned channel. In contrast, we modify the convolutional kernel of the upper layer by reconstructing the original feature maps, in order to reduce the decrease in accuracy. ![image](Figures/pipeline.png){width="0.95\linewidth"} Sparse Shrink ============= In this section, we elaborate how our “Sparse Shrink” algorithm prunes an existing network by channel-level pruning in convolutional layers. The basic idea of “Sparse Shrink” is intuitive: there exists redundancy in convolutional layers, and we can remove redundant channels to produce a pruned model with minimum loss in accuracy. There are three major steps in our algorithm. Firstly, we evaluate the importance of each channel with a sparse reconstruction algorithm.
Secondly, those redundant, *i.e.* less important, channels are removed, and the related convolutional kernels are modified, as shown in Figure \[fig:pipeline\]. This results in a pruned model with a minor decrease in accuracy. Finally, the pruned model is re-trained to achieve its best performance. Importance Evaluation --------------------- Sparse reconstruction [@elhamifar2012see; @mairal2008discriminative; @ramirez2010classification] is a well-studied problem which focuses on finding representative data points, such that each data point in the dataset can be described as a linear combination of a set of representative points. Formally, with a data matrix $D\in \mathbb{R}^{m\times N}$, *i.e.* $N$ data points of a dataset in $\mathbb{R}^m$, the standard $\ell_{1}$ relaxation of the optimization problem can be written as $$\label{eqn:sparse_reconstruction} \min_{U}\left \| D-DU \right \|^{2}_{F}, \;\; {\rm s.t.}\; \left \| U \right \|_{1,q}\leq \tau, \; \mathbf{1}^{\top }\mathbf{U}=\mathbf{1}^{\top }$$ where $U\in \mathbb{R}^{N\times N}$ is the corresponding reconstruction coefficient matrix and $\left \| U \right \|_{1,q} \triangleq \sum_{i=1}^{N} \left \| u^i \right \|_{q}$ is the sum of the $\ell_q$ norms of the rows of $U$. We choose $q=2$ so that the optimization program is convex, and $\tau>0$ is an appropriately chosen parameter. $\mathbf{1}^{\top }\mathbf{U}=\mathbf{1}^{\top }$ is an affine constraint that makes the representatives invariant with respect to a global translation of the data. Now we elaborate how to make use of sparse reconstruction to evaluate the importance of each channel in a convolutional layer. Throughout this paper, we use the following notations for simplicity of explanation. Let $f^\ell$ denote the output feature maps of the $\ell$-th layer and $f^\ell_i$ denote the value of the $i$-th channel.
The feature maps have a dimension of $C_\ell \times H\times W$, where $C_\ell$ is the number of channels in layer $\ell$, and $H \times W$ is the corresponding spatial size. To evaluate the importance of each channel of the feature maps $f^\ell$, we randomly select $N$ input images, and get a data matrix $D^{N\times C_\ell \times H\times W}$. In contrast to the standard sparse reconstruction algorithm of Equation , which focuses on finding representative data points among $N$ total data points, our algorithm aims at finding representative channels among the $C_\ell$ channels. Therefore we reshape the data matrix into $D^{ \left ( N\times H\times W \right ) \times C_\ell}$, and regard each channel $c_i$ as a “data point” in $\mathbb{R}^{N\times H\times W}$. With this representation, we are able to find the most representative channels by reconstructing the data matrix $D$. More specifically, we use the entire data matrix as the dictionary and reconstruct the data matrix with reconstruction coefficients $U \in \mathbb{R}^{C_\ell \times C_\ell}$. $$\begin{aligned} \begin{bmatrix} d_1 & d_2 & ... & d_{C_\ell} \end{bmatrix} \approx \begin{bmatrix} d_1 & d_2 & ... & d_{C_\ell} \end{bmatrix}\begin{bmatrix} u^1\\ u^2\\ ...\\ u^{C_\ell} \end{bmatrix} \nonumber\end{aligned}$$ Then we solve the optimization problem in Equation to get the reconstruction coefficients $U$. The regularization term $\left \| U \right \|_{1,2} \triangleq \sum_{i=1}^{C_\ell} \left \| u^i \right \|_{2}$ in Equation provides information about the relative importance of the channels. A more representative channel takes a larger part in the reconstruction, and thus its reconstruction coefficients have more non-zero elements with larger values. Hence, the resulting coefficients can be intuitively utilized to rank the importance of each channel, and to evaluate the redundancy of the feature maps.
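A minimal sketch of this importance evaluation is given below. It reshapes a batch of activations into the $(N\times H\times W)\times C_\ell$ data matrix and then solves a *penalized* variant of the reconstruction problem by proximal gradient descent. Note two simplifications that are ours, not the paper's: we use the Lagrangian form with a group-sparsity penalty $\lambda\sum_i\|u^i\|_2$ instead of the constraint $\|U\|_{1,2}\le\tau$, and we omit the affine constraint $\mathbf{1}^{\top}\mathbf{U}=\mathbf{1}^{\top}$:

```python
import numpy as np

def build_data_matrix(acts):
    """Reshape feature maps (N, C, H, W) into the (N*H*W) x C data
    matrix whose columns are channels, as described in the text."""
    N, C, H, W = acts.shape
    return acts.transpose(0, 2, 3, 1).reshape(N * H * W, C)

def channel_importance(D, lam=0.1, n_iter=500):
    """Importance factors ||u^i||_2 from a row-sparse reconstruction
        min_U  0.5 * ||D - D U||_F^2 + lam * sum_i ||u^i||_2,
    solved by proximal gradient with a row-wise group soft-threshold."""
    C = D.shape[1]
    G = D.T @ D                                   # Gram matrix, C x C
    step = 1.0 / (np.linalg.norm(G, 2) + 1e-12)   # 1 / Lipschitz constant
    U = np.zeros((C, C))
    for _ in range(n_iter):
        U -= step * (G @ U - G)                   # gradient of the smooth part
        norms = np.linalg.norm(U, axis=1, keepdims=True)
        U *= np.maximum(1.0 - step * lam / (norms + 1e-12), 0.0)  # group prox
    return np.linalg.norm(U, axis=1)              # importance per channel
```

A channel that contributes nothing to the reconstruction keeps a zero row in $U$ and receives zero importance, while informative channels retain nonzero rows.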
More precisely, we rank a channel $i$ by its importance factor $\left \| u^i \right \|_{2}$, where $u^i \in \mathbb{R}^{1\times C_\ell}$ denotes the $i$-th row of the reconstruction matrix $U$. The lower the importance factor, the more redundant the channel. Therefore, we prune these bottom-ranking channels to get a slimmer network. Network Pruning --------------- Once we have ranked the importance factors, we can prune the network in layer $\ell$ by removing the least important $K$ channels. This involves two specific modifications of the network weights: removing channels in layer $\ell$ and reconstructing feature maps in layer $\ell+1$. As illustrated in Figure \[fig:pipeline\], the feature maps $f^{\ell}$ are obtained by convolving $f^{\ell-1}$ with the kernel $W^\ell \in \mathbb{R}^{C_\ell \times C_{\ell-1} \times k\times k}$, where $k$ is the spatial size of the convolutional kernel. To remove a channel $c_i$ of $f^{\ell}$, we only need to remove the corresponding “slice” of $W^\ell$, *i.e.* $W^{\ell}_{c_i} \in \mathbb{R}^{C_{\ell-1} \times k\times k}$. Having pruned the $K$ least important feature maps, the new pruned convolutional kernel $\overline{W}^{\ell} \in \mathbb{R}^{\left ( C_{\ell}-K \right ) \times C_{\ell-1}\times k\times k}$ has channel number $C_{\ell}-K$, and the new feature maps $\overline{f^{\ell}} \in \mathbb{R}^{\left ( C_{\ell}-K \right ) \times H\times W}$ are obtained by convolving $\overline{W}^{\ell}$ with $f^{\ell-1}$. Pruning layer $\ell$ obviously affects layer $\ell+1$. Instead of naively removing the corresponding channels of $W^{\ell+1}$, we obtain a new convolutional kernel by reconstructing the original feature maps $f^{\ell}$, in order to minimize the decrease in accuracy after pruning.
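Removing the $K$ bottom-ranking slices of $W^{\ell}$ amounts to a simple indexing operation; a sketch (the helper name is ours, not part of any released code):

```python
import numpy as np

def prune_layer_kernel(W_l, importance, K):
    """Remove the K least-important output channels (slices) of the
    layer-l kernel W_l with shape (C_l, C_{l-1}, k, k).

    Returns the pruned kernel of shape (C_l - K, C_{l-1}, k, k) and the
    surviving channel indices (needed to modify layer l+1 consistently).
    """
    keep = np.argsort(importance)[K:]   # drop the K smallest factors
    keep.sort()                         # preserve the original channel order
    return W_l[keep], keep
```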
Given a data matrix $\overline{D} \in \mathbb{R}^{\left ( N\times H\times W \right ) \times \left ( C_{\ell}-K \right )}$ of the pruned feature maps $\overline{f^\ell}$, we try to reconstruct the original data matrix $D$ of $f^{\ell}$ by minimizing the reconstruction error, $$\begin{aligned} \label{eqn:reconstruction} \min Err & = & \min_{V}\left \| D-\overline{D}V \right \|\end{aligned}$$ where $V \in \mathbb{R}^{\left ( C_\ell-K \right )\times C_\ell}$ are the reconstruction coefficients. We can obtain a closed-form solution for Equation , $$\begin{aligned} \label{eqn:reconstruction_weights} V & = & \left ( \overline{D}^{\top }\overline{D} \right )^{-1}\overline{D}^{\top }D\end{aligned}$$ Let $\widehat{V} \in \mathbb{R}^{C_\ell \times \left ( C_\ell-K \right ) \times 1 \times 1}$ denote the $1\times1$ convolutional kernel derived from $V$, where $\widehat{V}_{i,j,1,1}\triangleq V_{j,i}$. The reconstructed feature maps $\widehat{f}^\ell$ are obtained via $$\begin{aligned} \widehat{f}^\ell = \widehat{V} \ast \overline{f^{\ell}} \nonumber\end{aligned}$$ The feature maps $\overline{f}^{\ell+1}$ in the pruned network can thus be written as $$\begin{aligned} \overline{f}^{\ell+1} & = & ReLU \left ( W^{\ell+1} \ast \widehat{f}^\ell \right ) \nonumber\\ & = & ReLU \left ( W^{\ell+1} \ast \left ( \widehat{V} \ast \overline{f^{\ell}} \right ) \right ) \nonumber\\ & = & ReLU \left ( \left ( W^{\ell+1} \ast \widehat{V} \right ) \ast \overline{f^{\ell}} \right ) \nonumber\end{aligned}$$ And the new convolution kernel $\overline{W}^{\ell+1} \in \mathbb{R}^{C_{\ell+1}\times \left ( C_\ell-K \right ) \times k\times k}$ is $$\begin{aligned} \overline{W}^{\ell+1} &=& W^{\ell+1} \ast \widehat{V} \nonumber\\ &=& W^{\ell+1} V^{\top}\end{aligned}$$ Now we have a pruned network with $C_\ell-K$ channels in layer $\ell$, and pruned convolution kernels $\overline{W}^{\ell}$, $\overline{W}^{\ell+1}$. The newly pruned model may perform better after further training for more iterations.
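The reconstruction step can be sketched with a standard least-squares solve (equivalent to the closed-form normal equations) followed by folding $V$ into the upper-layer kernel as $\overline{W}^{\ell+1}=W^{\ell+1}V^{\top}$. In this sketch, which uses our own naming, $W^{\ell+1}$ is taken as a $1\times1$ kernel flattened to a matrix:

```python
import numpy as np

def prune_and_reconstruct(D, keep, W_up):
    """Least-squares channel reconstruction folded into the layer above.

    D     : (N*H*W, C)   data matrix of the original feature maps f^l
    keep  : indices of the C - K channels that survive pruning
    W_up  : (C_up, C)    1x1 kernel of layer l+1, flattened over space

    Returns the reconstruction coefficients V of shape (C-K, C) and the
    merged kernel W_bar = W_up @ V.T of shape (C_up, C-K).
    """
    D_bar = D[:, keep]                        # pruned data matrix
    # V = argmin ||D - D_bar V||_F^2  (normal equations via lstsq)
    V, *_ = np.linalg.lstsq(D_bar, D, rcond=None)
    W_bar = W_up @ V.T                        # fold V into layer l+1
    return V, W_bar
```

When a dropped channel is an exact linear combination of the kept ones, the reconstruction is exact and the pruned network's pre-activation response of layer $\ell+1$ is unchanged; in general it is the best approximation in the least-squares sense.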
Experiment ========== We evaluated the performance of the “Sparse Shrink” algorithm on the benchmark dataset CIFAR-100 [@krizhevsky2009learning]. CIFAR-100 is a widely used benchmark dataset for image classification with $60,000$ color images of 100 categories in total. The size of the images is $32\times32$. The images are split into a training set of $50,000$ images and a test set of $10,000$ images. Following NIN [@lin2013network], we use global contrast normalization and ZCA whitening as pre-processing. We use the NIN [@lin2013network] model as a baseline model, which has been proven to be a successful CNN structure on CIFAR-100. There are three convolutional layers in the NIN model, *i.e.* $Conv1$, $Conv2$, $Conv3$, with $192$ channels in each of them. In this paper we focus on pruning these three convolutional layers to obtain slimmer networks. We employ the Caffe [@jia2014caffe] implementation as our experiment platform. Throughout the experiments, we fix the initial learning rate to $0.01$ and the weight decay coefficient to $0.001$. The code and models are released at: [https://github.com/lixincn2015](url). ![Importance factors of each channel in the three convolutional layers.[]{data-label="fig:compareC"}](Figures/compareC.png){width="0.9\columnwidth"}

\[tab:cifar\]

  -------- ------- ------- ------- ------- -------
  Pruned
  Conv1    68.08   67.80   67.86   67.86   67.36
  Conv2    68.08   67.51   67.36   65.95   64.67
  Conv3    68.08   67.68   66.07   65.09   61.17
  -------- ------- ------- ------- ------- -------

  : Table 1. Comparison of accuracies (%) of the pruned models on the CIFAR-100 test set. The first numeric column is the unpruned baseline; the remaining columns correspond to pruning an increasing number of channels in the given layer.

We conduct three sets of experiments to evaluate our algorithm. In the first experiment, we apply the “Sparse Shrink” algorithm to each of the three convolutional layers separately. The sorted importance factors of each layer are shown in Figure \[fig:compareC\]. As shown in Figure \[fig:compareC\], in all three convolutional layers there are some channels with obviously larger importance, while others have relatively smaller ones.
Pruning the channels with smaller importance factors is expected to cause a smaller decrease in performance. By pruning different numbers of channels according to the importance factors, we obtain the corresponding pruned models and evaluate them on the CIFAR-100 test set. Detailed results are shown in Table \[tab:cifar\], where $Conv1$, $Conv2$, $Conv3$ are the three convolutional layers from the bottom up. The baseline NIN model, *i.e.* without pruning any channels on any layer, has an accuracy of $68.08\%$. As shown in Table \[tab:cifar\], with a decrease of $\sim1\%$ in accuracy, we can prune as many as $176$, $128$, and $96$ channels from the three convolutional layers respectively (highlighted in blue). It is worth mentioning that pruning $176$ channels from the $Conv1$ layer brings only a minor decrease of $0.7\%$ in accuracy. We attribute this to the effectiveness of our “Sparse Shrink” algorithm, which can dramatically reduce redundancy in feature maps while preserving important information. \[tab:compare\_shrunk\]

--------- ---------------- ------------------------------------ ------------------------------------ --------------- -------------------- -------------------- ---------------
 Layer     Input size       Kernel (baseline)                    Kernel (pruned)                      Reduction (%)   Mult. (baseline)     Mult. (pruned)       Reduction (%)
 Conv1     $32 \times 32$   $192 \times 3 \times 5 \times 5$     $16 \times 3 \times 5 \times 5$      91.67           $1.47 \times 10^7$   $1.23 \times 10^6$   91.67
 Cccp1     $32 \times 32$   $160 \times 192 \times 1 \times 1$   $160 \times 16 \times 1 \times 1$    91.67           $3.15 \times 10^7$   $2.62 \times 10^6$   91.67
 Cccp2     $32 \times 32$   $96 \times 160 \times 1 \times 1$    $96 \times 160 \times 1 \times 1$    0               $1.57 \times 10^7$   $1.57 \times 10^7$   0
 Conv2     $16 \times 16$   $192 \times 96 \times 5 \times 5$    $64 \times 96 \times 5 \times 5$     66.67           $1.18 \times 10^8$   $3.93 \times 10^7$   66.67
 Cccp3     $16 \times 16$   $192 \times 192 \times 1 \times 1$   $192 \times 64 \times 1 \times 1$    66.67           $9.44 \times 10^6$   $3.15 \times 10^6$   66.67
 Cccp4     $16 \times 16$   $192 \times 192 \times 1 \times 1$   $192 \times 192 \times 1 \times 1$   0               $9.44 \times 10^6$   $9.44 \times 10^6$   0
 Conv3     $8 \times 8$     $192 \times 192 \times 3 \times 3$   $96 \times 192 \times 3 \times 3$    50.00           $2.12 \times 10^7$   $1.06 \times 10^7$   50.00
 Cccp5     $8 \times 8$     $192 \times 192 \times 1 \times 1$   $192 \times 96 \times 1 \times 1$    50.00           $2.36 \times 10^6$   $1.18 \times 10^6$   50.00
 Cccp6     $8 \times 8$     $100 \times 192 \times 1 \times 1$   $100 \times 192 \times 1 \times 1$   0               $1.23 \times 10^6$   $1.23 \times 10^6$   0
 Overall   -                $9.83 \times 10^5$ (params)          $4.25 \times 10^5$ (params)          56.77           $3.23 \times 10^8$   $8.45 \times 10^7$   73.84
--------- ---------------- ------------------------------------ ------------------------------------ --------------- -------------------- -------------------- ---------------

: Table 2. Comparison between the baseline model and the final pruned model in terms of kernel shapes (parameters) and number of multiplications.

Pruning any one of the three convolutional layers results in decreased performance, but the decrease behaves differently across layers. Pruning lower layers causes a smaller drop in accuracy. More specifically, at the same level of accuracy decrease (highlighted in blue), we can prune many more channels from $Conv1$ than from $Conv3$ (176 vs. 96). This indicates that there is more redundancy in the lower layers of the NIN model than in the upper layers, and that $Conv1$ needs far fewer feature maps than $Conv3$. This finding is consistent with previous studies [@zeiler2014visualizing; @simonyan2014very]. It is well observed that features in deep networks are hierarchical in nature. Feature maps in lower layers mostly respond to low-level visual features, *e.g.* edges or corners, which can be shared between high-level patterns. Upper layers then assemble the low-level features into exponentially more complex visual patterns. Hence many more channels are needed in upper layers than in lower layers.
![Comparison of pruning top-ranking and bottom-ranking channels in Conv3.[]{data-label="fig:prune"}](Figures/prune_different_channels.png){width="0.9\columnwidth"} In the second experiment, we compare the accuracies obtained by pruning different channels in the $Conv3$ layer. More specifically, we prune top-ranking and bottom-ranking channels according to the importance factors, and evaluate the pruned models on the test set. As shown in Figure \[fig:prune\], pruning either top-ranking or bottom-ranking channels results in a decrease in accuracy. However, pruning bottom-ranking channels brings a smaller decrease, and the gap widens as the number of pruned channels increases: pruning $128$ bottom-ranking channels has an advantage of $2\%$ over pruning top-ranking channels (61.17% vs. 59.12%). This validates that our “Sparse Shrink” algorithm is able to successfully evaluate the importance of each channel, and hence keeps the most important feature maps during pruning. Finally, in the third experiment, we further prune all three convolutional layers in the network from the bottom up, removing $176$, $128$, and $96$ channels from $Conv1$, $Conv2$, $Conv3$ respectively. The final pruned model has an accuracy of $65.53\%$ on the test set. Table \[tab:compare\_shrunk\] provides a detailed comparison between the baseline model and the pruned model in terms of number of parameters and number of multiplications. For a convolutional kernel $W^\ell \in \mathbb{R}^{C_\ell \times C_{\ell-1} \times k \times k}$ in layer $\ell$, the corresponding number of parameters is $C_\ell \times C_{\ell-1} \times k \times k$, and the number of multiplications in layer $\ell$ is $C_\ell \times C_{\ell-1} \times k \times k \times H \times W$, where $H$ and $W$ are the input height and width of layer $\ell$. Compared to the baseline model, the pruned model reduces the number of parameters by $56.77\%$ and the number of multiplications by $73.84\%$, at a minor decrease of $2.55\%$ in accuracy.
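The counting rule above can be checked mechanically. The following sketch (the helper function and names are ours) reproduces the Conv2 entries of Table \[tab:compare\_shrunk\]:

```python
def conv_cost(c_out, c_in, k, h, w):
    """Parameters and multiplications of a convolution with a
    c_out x c_in x k x k kernel applied to an h x w input."""
    params = c_out * c_in * k * k
    mults = params * h * w
    return params, mults

# Baseline Conv2 of NIN: 192 x 96 x 5 x 5 kernel on a 16 x 16 input.
base = conv_cost(192, 96, 5, 16, 16)    # 117,964,800 mults, i.e. ~1.18e8
# Pruned Conv2: 64 output channels after removing 128 of 192.
pruned = conv_cost(64, 96, 5, 16, 16)   # 39,321,600 mults, i.e. ~3.93e7
reduction = 1 - pruned[1] / base[1]     # 2/3, i.e. the 66.67% in the table
```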
This validates that our “Sparse Shrink” algorithm is able to save the computational resources of a well-trained model without serious performance degradation. Conclusion ========== In this paper, we propose a “Sparse Shrink” algorithm for convolutional neural network pruning. The Sparse Shrink algorithm evaluates the importance of each channel by sparse reconstruction. Channels with smaller importance factors are considered to be more redundant, and are pruned to obtain a slimmer network. New convolutional kernels can be derived by reconstructing the original feature maps. Experiments on the CIFAR-100 dataset show that the “Sparse Shrink” algorithm is able to significantly save computational resources with only a minor decrease in performance. Acknowledgments =============== This work was supported by the National Natural Science Foundation of China under Grant No. 61471214 and the National Basic Research Program of China (973 program) under Grant No. 2013CB329403.
--- abstract: 'We investigate the magnetic properties of the isostructural spinel-spinel interface of NiMn$_{2}$O$_{4}$(NMO)-Fe$_{3}$O$_{4}$. Although the magnetic transition temperature of the NMO film is preserved, both bulk and interface sensitive measurements demonstrate that the interface exhibits strong interfacial magnetic coupling up to room temperature. While NMO thin films have a ferrimagnetic transition temperature of 60K, both NiFe$_{2}$O$_{4}$ and MnFe$_{2}$O$_{4}$ are ferrimagnetic at room temperature. Our experimental results suggest that these magnetic properties arise from a thin interdiffused region of (Fe,Mn,Ni)$_{3}$O$_{4}$ at the interface, leading to Mn and Ni magnetic properties similar to those of MnFe$_{2}$O$_{4}$ and NiFe$_{2}$O$_{4}$.' author: - 'B.B. Nelson-Cheeseman' - 'R.V. Chopdekar' - 'J.S. Bettinger' - 'E. Arenholz' - 'Y. Suzuki' title: 'Magnetism of NiMn$_{2}$O$_{4}$-Fe$_{3}$O$_{4}$ Spinel Interfaces' --- The oxide spinel Fe$_{3}$O$_{4}$ is an ideal candidate as a highly spin-polarized electrode material for spintronic applications. It has been theoretically predicted to be half metallic, and is highly attractive for applications due to its high Curie temperature (T$_C$) of $\sim$850K[@Zhang91]. Experimental studies of Fe$_{3}$O$_{4}$ in spintronic heterostructures, however, have exhibited much lower junction magnetoresistance (JMR) values than expected from a half-metallic electrode material. Among the highest JMR values for Fe$_{3}$O$_{4}$-based heterostructures are those observed in layered systems with epitaxially grown isostructural spinel barrier layers. Oxide spinels such as CoCr$_{2}$O$_{4}$, MgTi$_{2}$O$_{4}$, FeGa$_{2}$O$_{4}$, and MnCr$_{2}$O$_{4}$ have been used as barrier layers in magnetic tunnel junctions with spinel Fe$_{3}$O$_{4}$ and half-metallic perovskite electrodes[@Hu02; @Alldredge06], while CoFe$_{2}$O$_{4}$ has been used with Fe$_{3}$O$_{4}$ in spin-filter junctions[@Chapline06; @Ramos07].
Recently, NiMn$_{2}$O$_{4}$ (NMO) has also been identified as an effective spin filter barrier material in Fe$_{3}$O$_{4}$-based magnetic junctions with perovskite counter electrodes[@N-C07]. Whereas perovskite and spinel layers have been shown to be magnetically uncoupled in these structures[@N-C07], the magnetism near the isostructural spinel interfaces is a subject of interest. A more detailed investigation of the interfacial magnetic interactions between spinel structure materials is necessary in order to understand transport and magnetic interaction results attributed to these multilayers, as well as to optimize the use of these heterostructures for spintronic applications. In this paper, we observe magnetic properties in NiMn$_{2}$O$_{4}$ thin film bilayers with Fe$_{3}$O$_{4}$ that are not observed in either film alone. Although the NMO magnetic transition at 60K is preserved, interfacial element-specific magnetism measurements of NMO/Fe$_{3}$O$_{4}$ bilayers show strong interfacial coupling of the Fe, Mn and Ni moments. We suggest these magnetic results can be explained by a thin interdiffused layer at the interface. Thin film heterostructures of NMO and Fe$_{3}$O$_{4}$ were grown by pulsed laser deposition on (110)-oriented single crystal SrTiO$_{3}$ (STO) substrates. The NMO was grown at 600$^\circ$C in 10mTorr of 99%N$_{2}$/1%O$_{2}$, while the Fe$_{3}$O$_{4}$ was grown at 400$^\circ$C in vacuum. The NMO film was grown first to minimize oxidation of the Fe$_{3}$O$_{4}$ film during deposition. The films grow epitaxially on the STO substrates, as confirmed by X-ray diffraction and Rutherford backscattering measurements. The single NMO thin films have a T$_C$ of 60K[@unpubNMO]. The bulk magnetism of the samples was probed by a superconducting quantum interference device (SQUID) magnetometer.
The element-specific magnetic properties of the interfacial Ni, Mn and Fe were investigated by X-ray magnetic circular dichroism (XMCD) (BL4.0.2 and BL6.3.1) in total electron yield at the Advanced Light Source. Due to the surface sensitive nature of this technique, and in order to be interface specific, all samples had a 5nm Fe$_{3}$O$_{4}$ top layer. Additionally, because the NMO films have a low saturation magnetization (0.8$\mu_{B}$) compared to Fe$_{3}$O$_{4}$ films (4.1$\mu_{B}$), two different thicknesses of NMO film in the bilayer (40nm and 5nm) were utilized to elucidate any effect of the bulk NMO film on the interface. Lastly, because these measurements are relevant to spin polarized heterostructures, where the bottom spinel layer is usually grown on a perovskite counter electrode, such a heterostructure was also investigated. Therefore, the NMO/Fe$_{3}$O$_{4}$ interface was investigated in the following three samples: a ’thick bilayer’ of STO//NMO(40nm)/Fe$_{3}$O$_{4}$(5nm), a ’thin bilayer’ of STO//NMO(5nm)/Fe$_{3}$O$_{4}$(5nm), and a ’trilayer’ of STO//La$_{0.7}$Sr$_{0.3}$MnO$_3$(LSMO)(40nm)/NMO(5nm)/Fe$_{3}$O$_{4}$(5nm). All magnetic measurements were performed along the \[100\] in-plane direction. Moment versus temperature measurements of the NMO/Fe$_3$O$_4$ bilayers, taken at 10 Oe, are shown in Fig 1(a). The thick bilayer sample shows a Brillouin shape for the NMO T$_{C}$ of 60K; however, after reaching a minimum at 60K, the moment begins to rise with increasing temperature (Fig 1(a)), uncharacteristic of the magnetic behavior observed in either individual film. This behavior is largely absent in the thin bilayer sample, although a slight discontinuity may be seen at $\sim$50K (Fig 1(b)). Such results prompted more detailed investigation of the magnetic interactions at the interface. XMCD spectra and hysteresis loops were taken of the NMO/Fe$_{3}$O$_{4}$ interface in all heterostructures at various temperatures.
The thin NMO bilayer XAS and XMCD results are displayed in Fig 2. The NMO/Fe$_{3}$O$_{4}$ interface exhibits virtually identical Fe, Mn and Ni XAS and XMCD spectra at all temperatures between 30K and 300K, as seen in Fig 2. The thick NMO bilayer and trilayer samples also demonstrate this behavior. In addition, for a given temperature, the Fe, Mn and Ni XMCD hysteresis loops are identical to one another. Nevertheless, the *shape* of the hysteresis loops changes distinctly as a function of temperature, showing a dramatic increase in coercive field for all three elements below the NMO T$_C$. Similar results are seen in the temperature dependent hysteresis loops of the trilayer sample. As shown in Fig 3, the coercive fields of the Fe are the same at 80K and 55K, but increase at 30K and 15K. Furthermore, at 55K, the hysteresis loop shows a slight decrease in remanent dichroism, which is consistent with the minimum moment in the SQUID data. As the normalized Fe, Mn and Ni hysteresis loops are identical at each given temperature, this hysteresis loop behavior is also seen in the Mn and Ni as a function of temperature, but has been omitted for clarity in Fig 3. One can now discuss the apparent source of the bulk moment measurements by utilizing the element and interface specific XMCD information. First, it appears that there is significant magnetic coupling at the interface, as evidenced by identical Fe, Mn and Ni hysteresis loops. However, although the loops are identical at a given temperature, they become increasingly magnetically hard as the temperature is decreased through 60K. This evolution suggests that the species at the interface are coupled to the magnetically soft Fe$_3$O$_4$ at temperatures above the NMO T$_C$, but, as the NMO layer becomes ferrimagnetic, the species couple to the magnetically hard NMO, as well.
The decrease in remanent asymmetry observed in the trilayer around 60K could be due to a magnetic frustration of the interfacial species as the NMO layer becomes ferrimagnetic. This can all be related to the *increase* in bulk moment observed above 60K in Fig 1(a) in the following way: just above the T$_{C}$ of the NMO film, the bulk hysteresis loop exhibits greater squareness than at lower temperatures, which results in an effective *increase* in moment at small fields as the temperature is increased. The magnetic transition of the NMO film in the presence of this strong magnetic coupling at the interface is also of interest. It is apparent from the change in hysteresis loop shape and increase in coercive field that even the thin NMO layer undergoes a magnetic transition around 60K. Any depressed onset of the coupling to the NMO layer could be due to the relatively low magnetization of the NMO compared to the Fe$_3$O$_4$. Now that we have discussed how the interfacial species respond to the bulk of the NMO and Fe$_3$O$_4$ thin films, let us focus on how the magnetic species at the interface can give rise to room temperature Mn and Ni magnetic circular dichroism. Two possible explanations are: (1) a thin interfacial layer of the NMO thin film is magnetized far above the NMO T$_C$ by the close proximity to the Fe$_3$O$_4$ layer, or (2) the presence of a mixed (Fe,Mn,Ni)$_3$O$_4$ spinel at the interface that is ferrimagnetic at room temperature. Such a solid solution at the interface is reasonable, as the cations of the spinel structure occupy only a small fraction of the available octahedral and tetrahedral sites of the oxygen face-centered-cubic sub-lattice, leaving ample opportunity for cation diffusion throughout the structure. The XAS and XMCD data support the presence of Mn and Ni in MnFe$_{2}$O$_{4}$ and NiFe$_{2}$O$_{4}$ environments at the interface, consistent with a mixed (Fe,Mn,Ni)$_{3}$O$_{4}$ spinel.
The Ni XAS and XMCD spectra are characteristic of NiFe$_{2}$O$_{4}$, while the Mn XAS and XMCD spectra are characteristic of MnFe$_2$O$_4$[@Pattrick02]. Additionally, the alignment of the Ni and Mn moments with respect to the Fe moments in the XMCD is consistent with MnFe$_{2}$O$_{4}$ and NiFe$_{2}$O$_{4}$. As seen in Fig 2, the maximum Ni dichroism is parallel to the third peak of the Fe dichroism, as in bulk NiFe$_{2}$O$_{4}$, and the maximum Mn dichroism is antiparallel to the third peak of the Fe dichroism, as in bulk MnFe$_{2}$O$_{4}$[@Antonov03]. Furthermore, the lack of a change in the Mn and Ni XMCD spectra below the NMO T$_C$ may result from probing the magnetism in an interdiffused region, which would form the bulk of the XMCD probing depth and, due to the comparatively high saturation magnetization values of NiFe$_{2}$O$_{4}$ and MnFe$_{2}$O$_{4}$ with respect to NMO, overwhelm the NMO dichroism. In conclusion, isostructural spinel interfaces of Fe$_{3}$O$_{4}$ and NiMn$_{2}$O$_{4}$ exhibit strong interfacial magnetic coupling, although the NMO T$_{C}$ of 60K is preserved. Element and interface specific magnetic analysis suggests that this behavior is due to limited interdiffusion of the Fe, Mn and Ni cations at the interface, thus creating a spinel solid solution of (Fe,Mn,Ni)$_{3}$O$_{4}$ that exhibits the magnetic properties of NiFe$_{2}$O$_{4}$ and MnFe$_{2}$O$_{4}$. Above the NMO T$_{C}$, this interdiffused region couples to the magnetic Fe$_{3}$O$_{4}$ layers; however, with the onset of ferrimagnetism in the NMO film, the interfacial region first becomes frustrated, and then couples to the magnetically hard NMO. This work is relevant for understanding magnetic interfacial interactions in spinel-spinel heterostructures. This work was supported in full by the Director, Office of Science, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. [99]{} Z.
Zhang and S. Satpathy, Phys. Rev. B **44**, 13319 (1991). G. Hu and Y. Suzuki, Phys. Rev. Lett. **89**, 276601 (2002). L.M.B. Alldredge, R.V. Chopdekar, B.B. Nelson-Cheeseman, and Y. Suzuki, Appl. Phys. Lett. **89**, 182504 (2006). M.G. Chapline and S.X. Wang, Phys. Rev. B **74**, 014418 (2006). A.V. Ramos, J.-B. Moussy, M.-J. Guittet, M. Gautier-Soyer, C. Gatel, P. Bayle-Guillemaud, B. Warot-Fonrose, and E. Snoeck, Phys. Rev. B **75**, 224421 (2007). B.B. Nelson-Cheeseman, R.V. Chopdekar, L.M.B. Alldredge, J.S. Bettinger, E. Arenholz, and Y. Suzuki, arXiv:cond-mat/0706.2726. B.B. Nelson-Cheeseman, R.V. Chopdekar, M.F. Toney, J.S. Bettinger, E. Arenholz, and Y. Suzuki (unpublished). R.A.D. Pattrick, G. van der Laan, C.M.B. Henderson, P. Kuiper, E. Dudzik, and D.J. Vaughan, Eur. J. Mineral. **14**, 1095 (2002). V.N. Antonov, B.N. Harmon, and A.N. Yaresko, Phys. Rev. B **67**, 024417 (2003).
--- abstract: 'We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multi-dimensional systems of conservation laws in finite-volume form. The mesh generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that the TESS framework can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.' author: - 'Paul C. Duffell and Andrew I. MacFadyen' title: 'TESS: A Relativistic Hydrodynamics Code on a Moving Voronoi Mesh' --- =1 Introduction {#sec:intro} ============ Many astrophysical gas dynamical systems involve the motion of compressible fluid with relativistic velocity or energy density. Techniques for the numerical solution of the equations governing the multi-dimensional dynamics of relativistic fluids have advanced greatly in recent years.
Significant recent progress has been made in grid-based methods where the fluid is described using Eulerian meshes for hydrodynamics on both uniform meshes (see review by Marti & Muller, 2003, and references therein) and with adaptive mesh refinement (AMR) [@duncan; @ram; @amrvac], as well as using smoothed particle hydrodynamics (Rosswog, 2010). Extensions of these methods have been implemented including for general relativistic magnetohydrodynamics [@font; @harm; @dh2003; @echo; @cosmos; @pluto; @wham; @nagataki] and for dynamic spacetimes [@anderson; @cd08; @etienne]. In this paper we present a new method for relativistic gas dynamics based on a dynamic Voronoi tessellation of space. The Voronoi tessellation generates a numerical mesh which moves and distorts with an arbitrary velocity field depending on how the mesh-generating points in the tessellation are moved. While holding the mesh points fixed results in an Eulerian method, allowing them to move with the local fluid velocity results in an effectively Lagrangian method. In this paper we present the TESS code which we have written on a dynamic mesh generated by successive Voronoi tessellations of the spatial domain. TESS is a numerical code and general framework for both relativistic and non-relativistic implementations of hydrodynamics and magnetohydrodynamics (MHD). The strength of the method is to retain the advantages of conservation-law formulation essential for accurate computation of relativistic gas dynamics, while gaining the advantages of a Lagrangian scheme. Of particular importance, the mesh motion allows for contact discontinuities to propagate without the excessive numerical diffusion often found in fixed mesh computations. The preservation of contact discontinuities is of great importance for problems involving the development of fluid instabilities and for reactive hydrodynamics where artificial diffusion of reactant species can lead to unphysical solutions.
Using mesh motion, TESS accurately solves the numerically challenging case of relativistic shear flows (see Section 3), a problem which underresolved Eulerian simulations calculate incorrectly (Zhang & MacFadyen, 2006). Lagrangian codes have had great success when employed in one dimension, usually to treat problems with spherical symmetry. Multidimensional problems, however, are more challenging for Lagrangian codes, due to distortion of the computational mesh when complexities in the flow develop. [@n64] formulated a simple strategy for dealing with mesh distortion in multiple dimensions, which was to continuously remap the computational domain as the system evolved, to prevent the mesh from becoming overly distorted. Codes employing this strategy are referred to as “Arbitrary Lagrangian-Eulerian” (ALE) codes. ALE codes solve the problem of mesh distortion, but at the cost of diffusive remaps. [@bp87] addressed the problem of mesh distortion by using a tessellated computational domain. The improvement was to employ the Voronoi tessellation: a unique decomposition of space into polyhedra based on a set of mesh-generating points (originally called “fluid markers”). The advantage of such an approach is that the mesh-generating points can move freely through the computational domain and can be added or removed to enable adaptive refinement. The Voronoi tessellation adjusts its shape and topology so that the computational mesh does not become overly distorted. [@w95] also made use of domain tessellation for mesh generation. His “signal method” was a finite-volume method, partially inspired by finite-element methods employed for solving problems with irregular boundaries. The method was first-order and conservative, and employed a Delaunay triangulation of the computational domain.
The scheme employed high-resolution shock-capturing techniques developed over recent decades for grid-based Eulerian codes, while also having the advantages that come from solving the fluid equations on a moving mesh. The “grid cells” in this case were Delaunay triangles, and the main source of diffusion came about during changes in the mesh topology. During such changes, the conserved quantities were redistributed evenly among the affected triangles. This process introduced diffusion similar to that encountered in ALE codes during remapping of the mesh, but because the diffusion was localized and only occurred during topology changes, the accumulated error was reduced. More recently, [@arepo] developed a second-order finite-volume approach to solving Euler’s equations using a Voronoi tessellation of the computational domain. This method was implemented in the AREPO code, which has recently been applied to simulate star formation in a cosmological context [@arepo1]. The advantage of using the Voronoi tessellation instead of the Delaunay triangulation is that the Voronoi cells do not suffer abrupt transformations during changes in mesh topology, so that re-mapping and fluid re-distribution are unnecessary. This was the first time a second-order finite-volume Lagrangian method of this nature was proposed. Another important property of this method is Galilean invariance: the code’s performance is independent of the velocity of the reference frame in which the calculation is performed. In this paper we generalize these developments to the case of relativistic hydrodynamics. We should not expect to retain all of the advantages found in the Newtonian case; in particular, a Lorentz-invariant formulation is not to be expected, since in standard formulations mesh points are assumed to be simultaneous at each timestep.
The paper is structured as follows: In §2, we describe the formulation of the fluid equations, and the specific implementation in the code. In §3, we provide a series of test problems to determine the convergence rate and to illustrate what sort of problems naturally benefit from a Lagrangian approach. In §4, we demonstrate the code’s usefulness in an astrophysical context, by looking at the Rayleigh-Taylor instability in a decelerating relativistic shock. In §5, we summarize our results. Numerical Method {#sec:form} ================ We first give a brief description of the Voronoi tessellation, before explaining how it is used in solving the fluid equations. A tessellation is a decomposition of space into separate pieces, in our case polygons or polyhedra. For the moment, we restrict our attention to two-dimensional tessellations, mainly because they are easier to describe and to visualize. A Voronoi tessellation can be generated from some collection of points (see Fig. \[fig:tessellation\]). Each mesh-generating point (i.e. “mesh generator”) will correspond to a polygon in the tessellation. The polygon associated with point P is defined to be the set of all points which are closer to P than to any other mesh generating point. Thus, if an edge is adjacent to the two Voronoi polygons of points P and Q, this edge lies in the line of points which is equidistant to P and Q. In general, we will refer to the polygons as “cells” and the edges as “faces”. This terminology makes it easier to generalize to arbitrary numbers of dimensions. Additionally, we will speak of the “volume” of a cell, which in two dimensions will mean area and in one dimension will mean length. We will also refer to the “area” of a face, which in two dimensions will mean length, and in one dimension will just be a constant number which we set equal to unity. An important related tessellation is the Delaunay triangulation.
Given a set of mesh generating points, one can generally form a triangulation with the mesh generators as vertices. In a sense, the Delaunay tessellation is the “nicest possible” triangulation of the space, given by the following definition: for every triangle in the tessellation, the three vertices of the triangle all lie on some circle, C. In order for this to be a proper Delaunay triangle, no other mesh generators may lie within the circle C. This property almost uniquely defines the triangulation. In degenerate cases, where four or more mesh generators lie on a common circle, the triangulation is ambiguous for these mesh generators. Degenerate cases like this will not concern us greatly in this paper. For a given set of mesh generators, the Delaunay tessellation is the topological dual to the Voronoi tessellation. Every Delaunay edge corresponds to a Voronoi face, and every Voronoi vertex corresponds to a Delaunay triangle (in fact, the Voronoi vertex lies at the center of the circle generated by the three vertices of the Delaunay triangle). This fact will help us in constructing the Voronoi tessellation, because the problem can be reduced to finding the equivalent Delaunay triangulation. Since there is a straightforward test to check that a tessellation is Delaunay, this is typically the easiest way to find out which mesh generators are neighbors. Before describing the details of how one might generate a tessellation from a given set of points, it will be useful to write down the numerical formulation so that we know what information needs to be extracted from the tessellation procedure. We shall find that our formulation applies generally to any hyperbolic system of conservation laws, regardless of whether the underlying equations are relativistic. 
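The empty-circumcircle property described above is easy to verify directly. The following sketch (pure NumPy; the point set and helper names are ours) tests which diagonal of a convex quadrilateral of mesh generators yields the Delaunay triangulation:

```python
import numpy as np

def circumcenter(a, b, c):
    """Center of the circle through a, b, c: the point equidistant from
    all three vertices (it is also a vertex of the Voronoi tessellation)."""
    A = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(A, rhs)

def is_delaunay(tri, others):
    """Empty-circumcircle test: a triangle is Delaunay iff no other
    mesh generator lies strictly inside its circumcircle."""
    cc = circumcenter(*tri)
    r = np.linalg.norm(tri[0] - cc)
    return all(np.linalg.norm(p - cc) >= r - 1e-12 for p in others)

# Four mesh generators forming a convex quadrilateral.
a, b, c, d = (np.array(p) for p in
              ([0.0, 0.0], [3.0, 0.0], [3.0, 2.0], [0.5, 1.5]))

# Splitting along diagonal a-c fails: d lies inside the circumcircle of
# triangle abc.  Splitting along diagonal b-d passes for both triangles.
print(is_delaunay([a, b, c], [d]))                               # False
print(is_delaunay([a, b, d], [c]), is_delaunay([b, c, d], [a]))  # True True
```

This is exactly the "edge flip" that standard incremental Delaunay algorithms perform when the in-circle test fails.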
Finite Volume Formulation ------------------------- The equations being solved will always have the form: $$\partial_\mu f^\mu = S \label{eqn:cons}$$ $$f^\mu = \left( \begin{array}{c} U \\ \vec F \end{array} \right).$$ In particular, for the case of Euler’s equations: $$U = \left( \begin{array}{c} \rho \\ S^i \\ E \end{array} \right) F^j = \left( \begin{array}{c} \rho v^j \\ S^i v^j + P \delta^{ij} \\ E v^j + P v^j \end{array} \right) \label{eqn:cons_nr}$$ where $\rho$ is the density of the fluid, $\vec S$ is the momentum density, and $E$ is the energy density. $P$ is the pressure, and $\vec v$ is the flow velocity. For simplicity we consider the case where there are no source terms, ${S = 0}$. For the relativistic version: $$U = \left( \begin{array}{c} \rho u^0 \\ \rho h u^i u^0 \\ \rho h u^0 u^0 - P - \rho u^0 \end{array} \right) F^j = \left( \begin{array}{c} \rho u^j \\ \rho h u^i u^j + P \delta^{ij} \\ \rho h u^0 u^j - \rho u^j \end{array} \right). \label{eqn:cons_rel}$$ Where now ${u^\mu}$ is the four-velocity. $$\rho h = \rho + \epsilon + P$$ and ${\epsilon}$ is the internal energy density, which can be found from the equation of state: $$\epsilon = \epsilon ( \rho , P ).$$ For the case of an adiabatic equation of state, we have $$\epsilon = P/(\Gamma - 1)$$ where $\Gamma$ is the adiabatic index of the fluid. We consider general physical equations of state in the Appendix. To derive the finite volume form of these equations, we shall set the source term to zero for brevity, but the generalization to a nonzero source term is straightforward. For concreteness, we shall work in 2+1 dimensions, but everything said here easily generalizes to arbitrary numbers of dimensions. Equation (\[eqn:cons\]) does not depend on the spacetime metric a priori, and so we can write down a formulation independent of the metric. For the following derivation we shall assume a Euclidean metric, rather than a Minkowski metric. 
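For concreteness, the map from primitive variables to the conserved variables of equation (\[eqn:cons\_rel\]) can be written out in one spatial dimension (an illustrative sketch of our own, with an adiabatic equation of state and units where $c=1$; the function name is not from the TESS code):

```python
import numpy as np

def prim_to_cons(rho, vx, P, gamma=4.0 / 3.0):
    """Conserved variables (D, S^x, tau) of special-relativistic
    hydrodynamics from the primitives (rho, v^x, P), adiabatic EOS, c = 1."""
    W = 1.0 / np.sqrt(1.0 - vx * vx)   # Lorentz factor u^0
    eps = P / (gamma - 1.0)            # internal energy density
    rho_h = rho + eps + P              # enthalpy density rho*h
    D = rho * W                        # rho u^0
    Sx = rho_h * W * W * vx            # rho h u^x u^0
    tau = rho_h * W * W - P - D        # rho h u^0 u^0 - P - rho u^0
    return D, Sx, tau
```

In the non-relativistic limit ($v \ll 1$, $P \ll \rho$) these expressions reduce to the Newtonian conserved variables of equation (\[eqn:cons\_nr\]): $S^x \rightarrow \rho v^x$ and $\tau \rightarrow \rho v^2/2 + \epsilon$.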
We now look at the evolution of a Voronoi cell over one timestep. It is assumed that the cell changes its shape and size by the linear motion of its faces, and that over a timestep it traces out a solid polyhedron in 2+1 dimensions, whose top and bottom are surfaces of constant time (see Fig. \[fig:poly\]). The shape is actually not quite a polyhedron, but this is an approximation we are making, which is valid in the limit of short timesteps. In practice, we resolve this issue and others by taking a high-order Runge-Kutta timestep. We integrate (\[eqn:cons\]) over this polyhedral 2+1-dimensional volume P: $$\int_{P} \partial_\mu f^\mu d^3x = 0.$$ We can easily convert this to an integral over the two-dimensional boundary of this solid: $$\int_{\partial P} f^\mu n_\mu d^2x = 0 \label{eqn:int}$$ where ${n_\mu}$ is the (Euclidean) unit normal to the boundary. For the top and bottom of the polyhedron: $$n_\mu = \left( \begin{array}{c} \pm 1 \\ \vec 0 \end{array} \right).$$ The 2+1-dimensional unit normal to the other faces will be related to the 2-dimensional unit normal on a given timeslice, ${\hat n}$, but it will also have a component in the time dimension because the face is moving with some velocity, ${\vec w}$. If we assume that the face is not changing its size or orientation as it moves (another assumption which is resolved by taking a high-order Runge-Kutta timestep), it is straightforward to check that the 2+1-dimensional unit normal will be $$n_\mu = { 1 \over \sqrt{1 + (\vec w \cdot \hat n)^2}} \left( \begin{array}{c} - \vec w \cdot \hat n \\ \hat n \end{array} \right).$$ Now we evaluate the integrals in (\[eqn:int\]) by averaging the integrated quantities over spacetime. In doing so, we need to know the 2+1 dimensional spacetime volume being integrated over. For the top and bottom, this is straightforward: $$\int_{Bottom} d^2x = dV^n, \int_{Top} d^2x = dV^{n+1}$$ where dV is the cell volume at the beginning or end of the timestep.
For the other faces, it is easy to check that: $$\int_{Face} d^2x = \sqrt{ 1 + (\vec w \cdot \hat n)^2 }dA \Delta t.$$ Recall that “dA” is the face “area”, so it refers to the length of a Voronoi edge. Note that our factors of ${\sqrt{ 1 + (\vec w \cdot \hat n)^2 }}$ will end up cancelling, which is to be expected if our formulation is independent of the spacetime metric. If we interpret ${U^n}$ as the cell-averaged value of U at timestep n, and let ${F_{ij}}$ denote the time-averaged flux through the face adjacent to cells i and j, and likewise use ${U_{ij}}$ to denote the time-and-area-averaged value of U on this same face, we get the following result: $$\begin{aligned} \begin{array}{c} 0 = \int_{\partial P} f^\mu n_\mu d^2x = \hspace{80pt} \\ \\ U^{n+1} dV^{n+1} - U^n dV^n + \Delta t \sum\limits_{cell j} ( F_{ij} - \vec w \cdot \hat n U_{ij} ) dA_j . \\ \end{array}\end{aligned}$$ This gives us a simple prescription for how to evolve the conserved variables from one timestep to the next, assuming we know the time-averaged fluxes and conserved quantities on the faces, and the velocity of each face: $$U^{n+1} dV^{n+1} = U^n dV^n - \Delta t \sum\limits_{cell j} ( F_{ij} - \vec w_{ij} \cdot \hat n U_{ij} ) dA_j . \label{eqn:evolve}$$ Of course, this result is not surprising at all. The prescription merely tells us to add an advective term to the flux (${\vec F \rightarrow \vec F - \vec w U}$), and evolve things in a way analogous to the fixed-grid approach. It is worth noting that our formulation does not depend on the physical content of the equations expressed by (\[eqn:cons\]). In particular, it does not matter whether the velocity ${\vec w}$ exceeds the speed of light. In order for this to be possible, we must be careful about our physical interpretation for the mesh itself. For example, it is not necessarily meaningful to speak of the “rest frame” of a cell or face. 
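The update prescription of equation (\[eqn:evolve\]) is then a few lines per cell. A schematic sketch (hypothetical data layout, not the TESS implementation; each entry of `faces` bundles the time-averaged flux, the face-averaged conserved state, the face velocity projected on the outward normal, and the face area):

```python
import numpy as np

def update_cell(U_n, dV_n, dV_np1, faces, dt):
    """One step of eqn (evolve) for a single cell. Each entry of the
    hypothetical `faces` list is (F_ij, U_ij, w_dot_n, dA)."""
    total = np.zeros_like(np.asarray(U_n, dtype=float))
    for F_ij, U_ij, w_dot_n, dA in faces:
        # advective correction: F -> F - (w . n) U
        total += (np.asarray(F_ij, float) - w_dot_n * np.asarray(U_ij, float)) * dA
    return (np.asarray(U_n, float) * dV_n - dt * total) / dV_np1
```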
Riemann Solver {#sec:riemann} -------------- Equation (\[eqn:evolve\]) approaches an exact result, in the limit of short timesteps. This means that all of the numerical error due to the spatial discretization must stem from our estimates of the time-averaged values of the fluxes and conserved quantities on the faces of the Voronoi cells. In TESS, we estimate these fluxes using an approximate Riemann solver. This means we assume that, at the beginning of the timestep, there is a uniform state on either side of the face. Generally speaking, a Riemann solver estimates the time-averaged flux through the face by evolving this constant-state initial condition, either exactly or approximately. It turns out that getting the most out of the Lagrangian nature of our scheme is highly dependent on how we solve the Riemann problem on each cell face. The Riemann solver in AREPO [@arepo] is effective because it is Galilean-invariant; the Riemann problem is solved in the rest frame of the face. In our case we cannot assume it makes sense to even speak of a “rest frame” for a face, so we cannot have a method which is Galilean-invariant (this is to be expected anyway for relativistic problems). Nonetheless, we can still have a code which preserves contact discontinuities to very high accuracy. What we require is a Riemann solver which is exact for a pure advection problem. The HLLC Riemann solver, appropriately employed, meets this requirement. In the TESS code, we employ the HLLC solver for relativistic problems as described by [@hllc]. A pure advection problem has constant pressure and velocity on both states, and two different densities. The HLLC solver divides spacetime into four pieces: the original left and right state, and two interior states known as \*L and \*R, separated by a contact discontinuity. For advection, we have the following: $$\begin{array}{c} \rho_{*L} = \rho_L \\ \rho_{*R} = \rho_R \\ P_{*L} = P_{*R} = P_{L} = P_{R} \\ v_{*L} = v_{*R} = v_{L} = v_{R} . 
\end{array}$$ This is the exact solution to the advection problem, and hence when using the HLLC solver, we can solve the advective Riemann problem to machine precision, as long as the correct starred state is chosen, i.e. the solver should choose the spacetime region which houses the path traced out by the face as it moves with velocity ${\vec w}$ (see Fig. \[fig:rfan\]). The values ${F^*}$ and ${U^*}$ found in this spacetime region are the values used in the update step given by equation (\[eqn:evolve\]). The reason that the HLLC solver is so crucial is that it accurately calculates advective fluxes. The numerical error caused by the spatial discretization is entirely housed in estimating the time-averaged fluxes ${\vec F \cdot \hat n - \vec w \cdot \hat n U}$; if the velocity of each face is very close to the fluid velocity, then with HLLC the advective fluxes nearly cancel, so that the numerical error for advective fluxes will be small. In the particular case of pure advection, advective fluxes completely cancel, meaning that the advection problem is solved exactly to machine precision. This property is extremely important for preserving contact discontinuities. Primitive Variable Solver {#sec:roots} ------------------------- The Riemann solver takes in two states corresponding to two adjacent Voronoi cells. To use the information in a cell to solve the Riemann problem for a given face, we need the primitive variables (density, pressure, and four-velocity) on either side of the face. In order to find these primitive variables, we need to invert formulas (\[eqn:cons\_nr\]) or (\[eqn:cons\_rel\]) for the conserved variables on either side. This can be done using a Newton-Raphson rootfinding scheme. For relativistic hydrodynamics with an adiabatic equation of state, this solver is not difficult to write. For an arbitrary equation of state, we incorporate the temperature as an additional variable which must be solved for.
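For the adiabatic case, the inversion of (\[eqn:cons\_rel\]) can be sketched as a one-dimensional root-find on the pressure (schematic Python, in 1D; we use bisection for robustness where the text uses Newton-Raphson, and all names are ours):

```python
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index (assumed)

def prim_to_cons(rho, v, P):
    """1D relativistic conserved variables (D, S, tau) from eqn (cons_rel)."""
    W = 1.0 / np.sqrt(1.0 - v * v)
    rho_h = rho + P / (GAMMA - 1.0) + P        # rho*h = rho + eps + P
    return rho * W, rho_h * W**2 * v, rho_h * W**2 - P - rho * W

def cons_to_prim(D, S, tau):
    """Invert (D, S, tau) -> (rho, v, P) by root-finding on the pressure."""
    def resid(p):
        Q = tau + D + p                        # = rho*h*W^2
        v = S / Q
        W2 = 1.0 / (1.0 - v * v)
        eps = Q / W2 - D / np.sqrt(W2) - p     # internal energy density
        return (GAMMA - 1.0) * eps - p         # EOS consistency condition
    lo, hi = 1e-15, 10.0 * (tau + D)
    for _ in range(200):                       # bisect to machine precision
        mid = 0.5 * (lo + hi)
        if resid(lo) * resid(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    p = 0.5 * (lo + hi)
    v = S / (tau + D + p)
    return D * np.sqrt(1.0 - v * v), v, p
```

A production solver would use the Newton-Raphson iteration mentioned in the text, which converges in a handful of steps.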
In addition to an arbitrary equation of state, we want TESS to become a very general code capable of solving wide classes of problems, for example the equations of general relativistic magnetohydrodynamics (GRMHD). This makes the Newton-Raphson step somewhat more complicated, but this has already been employed in TESS even though we have not yet employed hydrodynamic variables which would necessitate such a solver. The GRMHD primitive variable solver is based on the method described by [@cons2prim] and uses a three-dimensional Newton-Raphson step, solving for the variables ${W = \gamma^2 \rho h}$, ${K = \gamma^2 - 1}$, and the temperature, $T$ (see the Appendix for more details). Reconstruction of Face-Centered Primitive Variables {#sec:plm} --------------------------------------------------- The root finding method described above determines the primitive variables (density, pressure, and four-velocity) at the cell centers. To solve the Riemann problem on each face, we must extrapolate the primitive variables from the cell centers to the face centers. If we assume all our variables to be piecewise constant, then we can assume they have the same value on the face as they do at the center of mass of a cell. However, if we want accuracy higher than first order in space, we need to extrapolate the variable values based on the values in neighboring cells. We use piecewise linear reconstruction, using the method derived by [@arepo] for calculating the variable gradients in each cell. We repeat the results here. Assume we have some variable, $\phi$, for which we would like to calculate the gradient at cell i based on the values at adjacent cells. 
The formula we use is $$\left< \vec \nabla \phi \right>_i = {1 \over V_i} \Sigma_j dA_j ( [\phi_j - \phi_i] {\vec c_{ij} \over r_{ij}} - {\phi_i + \phi_j \over 2}{\vec r_{ij} \over r_{ij}} ).$$ We use this gradient to extrapolate primitive variable values via: $$\phi(\vec f_{ij}) = \phi_i + \left< \vec \nabla \phi \right>_i \cdot (\vec f_{ij} - \vec s_i).$$ Here, ${\vec r}$ represents the location of a mesh generator, ${\vec s}$ represents the location of the center of mass of a cell, and ${\vec f_{ij}}$ is the center of mass of the face adjacent to cells i and j. Note that primitive variables are defined at ${\vec s}$, not at the mesh generators. This prescription would lead to a code which is second order in space, but it is well known that piecewise linear reconstruction can cause numerical oscillations in the neighborhood of shocks. To deal with this, we need to constrain the estimated gradients in the neighborhood of sharp discontinuities. In other words, we need to construct a generalization of the slope limiters used in grid-based Eulerian codes. AREPO uses a slope limiter which could be considered a generalization of minmod [@kur00], but one which does not have the total variation diminishing (TVD) property. As a result, this slope limiter caused mild oscillations in some calculations. TVD is an especially important property for relativistic hydrodynamics, since oscillatory behavior in the conserved variables can potentially cause wild variation of the primitive variables, particularly in situations with large Lorentz factors. To address this problem, we optionally employ an alternative slope limiter which is much more conservative.
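Before turning to the limiter, the gradient estimate and face extrapolation above can be sketched as follows (schematic, not the TESS implementation; each hypothetical neighbor record carries the neighbor value, its generator position, the face center of mass, and the face area, with ${\vec r_{ij} = \vec r_i - \vec r_j}$ and ${\vec c_{ij}}$ the offset of the face center of mass from the midpoint of the two generators, as in AREPO):

```python
import numpy as np

def gradient(phi_i, r_i, neighbors, V_i):
    """Gradient estimate for cell i from the formula above.
    Each hypothetical neighbor record is (phi_j, r_j, f_ij, dA_j)."""
    grad = np.zeros_like(np.asarray(r_i, dtype=float))
    for phi_j, r_j, f_ij, dA_j in neighbors:
        r_ij = r_i - r_j
        r = np.linalg.norm(r_ij)
        c_ij = f_ij - 0.5 * (r_i + r_j)
        grad += dA_j * ((phi_j - phi_i) * c_ij / r
                        - 0.5 * (phi_i + phi_j) * r_ij / r)
    return grad / V_i

def extrapolate(phi_i, grad_i, s_i, f_ij):
    """Piecewise linear reconstruction from the cell center of mass
    s_i to the face center of mass f_ij."""
    return phi_i + np.dot(grad_i, f_ij - s_i)
```

On a uniform Cartesian mesh the formula reduces to a centered difference and is exact for linear data.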
AREPO uses the slope-limited gradient, $$\left< \vec \nabla \phi \right>'_i = \alpha_i \left< \vec \nabla \phi \right>_i$$ $$\alpha_i = \min_j(1,\psi_{ij})$$ $$\psi_{ij} = \left\{ \begin{array} {l@{\quad:\quad}l} (\psi_i^{max} - \psi_i)/\Delta \psi_{ij} & \Delta \psi_{ij} > 0 \\ (\psi_i^{min} - \psi_i)/\Delta \psi_{ij} & \Delta \psi_{ij} < 0 \\ 1 & \Delta \psi_{ij} = 0 \end{array} \right. \label{eqn:minmod}$$ Our method is similar, but replaces (\[eqn:minmod\]) with $$\psi_{ij} = \left\{ \begin{array} {l@{\quad:\quad}l} \max( \theta (\psi_j - \psi_i)/\Delta \psi_{ij} , 0) & \Delta \psi_{ij} > 0 \\ \max( \theta (\psi_j - \psi_i)/\Delta \psi_{ij} , 0) & \Delta \psi_{ij} < 0 \\ 1 & \Delta \psi_{ij} = 0 \end{array} \right. \label{eqn:minmod2}$$ where ${\theta}$ is generally set to unity, but reduced if a still more diffusive scheme is desired. The slope limiter in (\[eqn:minmod2\]) enforces monotonicity, though it is more diffusive than (\[eqn:minmod\]). In practice, we find it to be much more robust, so it has typically been employed in problems involving strong shocks. Time Integration {#sec:time} ---------------- Our time integration is based on the method of lines, performing a Runge-Kutta timestep for the time evolution of the conserved variables, and for the motion of the mesh points. For most problems, we use a third order Runge-Kutta timestep which is TVD and which updates both the values of the conserved variables, and the positions of the mesh generating points. The timestep is Courant-limited: $$\Delta t = C_{cfl} \cdot \min_i \left( { R_i \over | \lambda^{max}_i | } \right)$$ ${C_{cfl}}$ is the Courant factor, typically chosen between 0.2 and 0.5. ${R_i}$ is the effective radius of a cell, ${R = \sqrt{dV/\pi}}$ in 2D. ${\lambda^{max}_i}$ is the eigenvalue in cell i with the largest magnitude. Currently the code is set up to operate with a single global timestep.
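The global timestep can be sketched directly from this formula (schematic; function and argument names are ours):

```python
import numpy as np

def global_timestep(dV, lam_max, cfl=0.5):
    """Courant-limited global timestep in 2D:
    dt = C_cfl * min_i R_i / |lambda_i|, with R_i = sqrt(dV_i / pi)."""
    R = np.sqrt(np.asarray(dV, dtype=float) / np.pi)
    return cfl * np.min(R / np.abs(np.asarray(lam_max, dtype=float)))
```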
To take a timestep from ${U^n}$ to ${U^{n+1}}$ which is third-order in time, for example, we use the following prescription: $$\begin{array}{rcl} U^{(1)} & = & U^n + \Delta t L( U^n , \vec r^n ) \\ \vec r^{(1)} & = & \vec r^n + \Delta t \vec w^n\\ \\ U^{(2)} & = & {3 \over 4}U^n + {1 \over 4} U^{(1)} + {1 \over 4} \Delta t L( U^{(1)} , \vec r^{(1)} ) \\ \vec r^{(2)} & = & {3 \over 4} \vec r^n + {1 \over 4} \vec r^{(1)} + {1 \over 4} \Delta t \vec w^{(1)} \\ \\ U^{n+1} & = & {1 \over 3}U^n + {2 \over 3} U^{(2)} + {2 \over 3} \Delta t L( U^{(2)} , \vec r^{(2)} ) \\ \vec r^{n+1} & = & {1 \over 3} \vec r^n + {2 \over 3} \vec r^{(2)} + {2 \over 3} \Delta t \vec w^{(2)} \\ \end{array}$$ Here, L is an operator representing the numerically integrated time derivative of U, and the variables ${U^{(1)}, U^{(2)}, \vec r^{(1)}}$, and ${\vec r^{(2)}}$ represent intermediate states in the time integration. The Voronoi Mesh {#sec:mesh} ---------------- Equation (\[eqn:evolve\]) tells us exactly what geometric information we need in order to evolve the conserved quantities. We need to know the following about the Voronoi cells: - which cells are neighbors - the volume of each cell - the area of each face - the velocity of each face - the center of mass of each cell - the center of mass of each face The last two elements of this list are necessary for the piecewise linear reconstruction of primitive variables. We must determine how to extract all of this information given the positions and velocities of all the mesh generating points. The velocities of mesh generators are freely specifiable, and we shall typically choose to set them equal to the local fluid velocity. Before we determine this completely, we can show that all of the above can be calculated easily if we know which cells are neighbors, and if we also know the center of mass and area of each face. In other words, when performing the Voronoi tessellation, this will be the only information we need to store. 
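In other words, a face record as small as the following suffices (a hypothetical container, not the TESS data layout):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Face:
    """Minimal per-face data stored by the tessellation.
    In 2D the 'area' is the length of the Voronoi edge."""
    cell_i: int          # the two neighboring cells
    cell_j: int
    area: float          # face area dA
    center: np.ndarray   # face center of mass f_ij
```

Cell volumes, cell centers of mass, and face velocities are all reconstructed from these records plus the generator positions and velocities.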
Given a single mesh generator and its neighbors, it is straightforward to calculate its cell’s volume, if the area of each face is known. This is done by dividing the cell into pyramids (in 2D, triangles), each of whose tip is the mesh generating point, and whose base is a Voronoi face. Then the volume of the cell is the sum of the volumes of all the pyramids: $$dV_i = \Sigma_j V_{\Delta j} \label{eqn:begingeo}$$ The volume of a pyramid can be expressed generally in D-dimensions: $$V_\Delta = (\mbox{area of base})(\mbox{height})/D$$ Because the face is in the plane halfway between the two mesh generating points, the height should be half the distance between these points. $$V_{\Delta j} = dA_{ij} (|r_{ij}|/2) /D$$ $$dV_i = \Sigma_j {dA_j (|r_{ij}|/2) \over D}$$ Similarly, the center of mass of a cell can be directly calculated from the area and center of mass of the faces. We can use the weighted average over pyramids: $$\begin{aligned} \vec s_i = {1 \over dV_i} \Sigma_j V_{\Delta j} \vec s_{\Delta j} \\ = {1 \over dV_i} \Sigma_j {dA_j (|r_{ij}|/2) \over D} \vec s_{\Delta j} .\end{aligned}$$ The center of mass of a pyramid also depends on the number of dimensions: $$\vec s_{\Delta j} = {D \over D + 1} \vec f_{ij} + {1 \over D + 1} \vec r_i .$$ Here, ${\vec f_{ij}}$ denotes the center of mass of the face adjacent to cells i and j. Next, we need to determine the velocity of the faces. It is assumed here that the mesh-generating points themselves have been given some velocity ${\vec w_i}$, typically the local fluid velocity (though corrections can be added to steer the cells in ways that make the mesh better-behaved). [@arepo] showed that the velocity of a face can be calculated from the position and velocity of the mesh generating points and the center of mass of the face. 
The result is the average of the velocity of the two adjacent mesh generators, plus a correction due to the fact that the center of mass of the face is not generally at the midpoint between the two mesh generating points, and so acquires a velocity due to rotation about this midpoint: $$\vec w_F = (\vec w_L + \vec w_R)/2 + \vec w'$$ $$\vec w' = (\vec w_L - \vec w_R) \cdot ( \vec f - (\vec r_L + \vec r_R)/2 ) { \vec r_R - \vec r_L \over (\vec r_R - \vec r_L)^2 } . \label{eqn:endgeo}$$ Again, ${\vec f}$ is the center of mass of a face. We can use equations (\[eqn:begingeo\] - \[eqn:endgeo\]) to pare down the list of information that we need to extract directly from the tessellation itself. The tessellation procedure now only consists in determining the following: - which cells are neighbors - the area of each face - the center of mass of each face All relevant geometrical information can be easily extracted from this data. This is advantageous because it means the tessellation can take up a relatively small amount of memory. The tessellation procedure consists of generating a new set of faces each timestep, based on the locations of the mesh generators. In one dimension, the tessellation procedure is trivial; neighboring cells do not change, the face area is always set to unity, and the face center of mass is simply the midpoint between the two mesh generators. The two-dimensional version turns out to be surprisingly simple, because we use the tessellation from the previous timestep to generate the new faces. Since we know the neighbors of each cell on the previous timestep, we can use the neighbors of neighbors (“friends of friends”) of a cell as a list of candidates for the neighbors on the next timestep. Because the length of each step in time is Courant-limited, the tessellation will not change significantly in one timestep, and hence this list of candidates is big enough for our purposes. 
Optionally, we can choose to use “neighbors of neighbors of neighbors” but we have not found this to make a difference in any scenario we’ve encountered. Already having this list of candidates simplifies the algorithm immensely, as in principle it can reduce to a brute force guess-and-check procedure using this small list of candidates. In practice, the 2D algorithm is not totally brute force, but very simple nonetheless. We consider a single mesh generating point X and all of its “friends of friends”. First we find the nearest neighbor to X (which is guaranteed to share a face with X). Call this neighbor Y. Next we take the rest of the potential neighbors and order the list by clockwise angle with respect to the segment XY. What follows is an elimination process; at the end of this process, the elements in the list will be exactly those which share a face with X. At each step in the process, we consider three consecutive elements of the list; call them A,B,C (see Fig. \[fig:tess1\]). We denote the element before A “P”, and the element after C “Q”. It is determined whether or not to keep point B in the list, by checking whether C lies within the circle generated by X, A, and B. If it does, then we remove B from the list, and take one step backward (checking whether C lies within XPA). If C does not lie within the original circle, we keep B and move forward to check triangle BCQ. More concisely, if ${\bigcirc_{XAB}}$ contains C, remove B from the list and take one step back: ${A \rightarrow P , B \rightarrow A}$. Otherwise take one step forward: ${C \rightarrow Q, B \rightarrow C, A \rightarrow B}$. Fig. \[fig:tess2\] demonstrates an example of this full procedure. Note that this algorithm is not sensitive to the presence of degenerate sets of points (that is, sets of four points which all lie on the same circle).
For practical purposes, it will not matter whether our code chooses to accept or reject a point in this configuration, because a degeneracy in the Delaunay tessellation corresponds to a face of zero area in the Voronoi tessellation. If the face has zero area, then there will be zero flux through it, and hence it will have no influence on the resulting physics. Once this operation has been performed for all members of the list (i.e., point B is now point Y, the nearest neighbor) all remaining list members are neighbors of point X. All that remains is to calculate the areas and centers of mass of the faces, which is straightforward given the vertices of the polygon generated by this list. These vertices are the centers of the circles generated by consecutive triples in the list. The details of the tessellation algorithm do not completely extend to three dimensions, but the idea of using friends-of-friends as a list of candidates still works, so in the worst case scenario we could use a brute force guess-and-check algorithm when D=3. One might ask whether there is a major efficiency advantage to this tessellation procedure over more conventional ones such as direct insertion. While it is not clear which method should be the fastest (given an optimized implementation), it is assumed they are comparably efficient. Moreover, while the tessellation procedure takes up a non-negligible percentage of the code’s overall runtime, it does not take up a $majority$ of the runtime, and as such there is no major incentive to optimize its efficiency. The main advantage to the method described here is that the algorithm is very simple and does not require making a lot of exceptions. Additionally, this algorithm is expected to be very easy to parallelize, because the tessellation is performed locally. In the most simple prescription, we could make the code parallel via a simple domain decomposition, where different processes only share boundary data. 
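The elimination walk described above can be sketched as follows (a simplified, non-circular version with our own names; the incircle test here locates the circumcenter explicitly so that no orientation convention is needed):

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """True if d lies inside the circle through a, b, c (any orientation):
    solve two perpendicular-bisector conditions for the circumcenter."""
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    M = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b.dot(b) - a.dot(a), c.dot(c) - a.dot(a)])
    center = np.linalg.solve(M, rhs)
    return np.linalg.norm(d - center) < np.linalg.norm(a - center)

def neighbors_of(X, candidates):
    """Walk the candidate list (sorted by angle about X, starting from
    the nearest neighbor): for each consecutive triple (A, B, C),
    discard B if C lies inside the circle through X, A, B, then step
    back one position; otherwise step forward."""
    pts = list(candidates)
    i = 0
    while i + 2 < len(pts):
        A, B, C = pts[i], pts[i + 1], pts[i + 2]
        if in_circumcircle(X, A, B, C):
            del pts[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    return pts
```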
The main disadvantage to the tessellation procedure is that we must have an approximate tessellation to begin with, so that we can use “friends of friends” for our initial pool of neighbors. In principle, this simply amounts to saving the tessellation from the previous timestep, but in practice this makes the implementation of boundary conditions a bit more complicated. Not only do we need to create an appropriate set of “ghost cells” outside the boundary, but we need to generate “ghost faces” as well (the tessellation procedure won’t automatically do this for us, because it relies on having an approximate tessellation from a previous timestep). This also adds some complication to the parallelization of the code, for the same reasons. One might wonder why we worry so much about the positions of ghost cells. For periodic boundary conditions, we don’t really need ghost cells at all, since we could in principle associate left-most neighbors with right-most neighbors and so on, so that no boundary need be created. For reflecting boundaries, one might hope we could have a fixed set of ghost cells lining our reflecting wall, so that we don’t need to generate a new set each timestep. Unfortunately, we only have control over the positions of mesh generators; we don’t directly control the shape of the cells. Thus, if we want a flat wall, we need to have the mesh generators reflected across the wall. The only reason we include ghost cells for periodic boundary conditions is for the sake of overall simplicity in the code (fewer parts of the code depend on the choice of boundary conditions). In this case, ghost cells are translated to the opposite end of the domain with respect to their “real” counterparts. The boundary conditions are set in each dimension sequentially. The first step involves flagging cells according to whether they are inside or outside the boundary.
If we are using periodic boundary conditions and a cell moves off of the computational domain, it is set to be a ghost cell and its corresponding ghost cell is set to be a real cell. All cells are flagged to be in one of three categories: Inside the domain, outside the domain, or a “border cell”, meaning it is inside the domain and neighbors a cell outside the domain. The next step involves generating a new list of ghost cells, first by making copies of all border cells. Additional ghost cells get added to this list, by using all neighbors of ghost cells which are inside the computational domain. This procedure is repeated until the desired number of ghost cells is achieved (see Fig. \[fig:bcs\]). For our second-order methods, we need two layers of ghost cells. Next these copies are moved according to the boundary conditions, e.g. if the boundaries are reflecting, their positions are reflected across the reflecting wall. Finally, neighbors are assigned to the copies by using the neighbor data from the original tessellation. These associations are of two kinds: associations between two copied cells, and associations between a copied cell and a border cell. Both kinds of associations can be extracted from the original tessellation. After this step, we discard all the old ghost cells and replace them with the new ghost cells. When this is done, the tessellation algorithm is performed as previously described. This implementation of boundary conditions is essentially the same as the method used in AREPO, though we require a bit more care because we need to retain the tessellation information in the boundary cells. As a final note on this formulation, very little of the code depends on the number of dimensions; only the tessellation algorithm itself is significantly affected by D. 
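For the periodic case, the ghost-generation step can be sketched as follows (schematic; using a fixed `layer` width to stand in for the two-cell-deep ghost region is our own simplification):

```python
import numpy as np

def periodic_ghosts(points, box=1.0, layer=0.1):
    """Every mesh generator within `layer` of a domain edge is copied and
    translated by the box length, so that cells near x = 0 see neighbors
    from x = box. Dimensions are handled sequentially, as in the text."""
    pts = np.asarray(points, dtype=float)
    for dim in range(pts.shape[1]):
        lo = pts[pts[:, dim] < layer].copy()
        hi = pts[pts[:, dim] > box - layer].copy()
        lo[:, dim] += box        # ghosts of the low edge appear past x = box
        hi[:, dim] -= box        # ghosts of the high edge appear below x = 0
        pts = np.vstack([pts, lo, hi])
    return pts
```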
Mesh Regularity --------------- For most problems, evolving the mesh according to the above prescription can generically lead to cells which are long and skinny, and whose mesh generating points are very close to their edges. These cells will evolve in an unstable manner, because their faces can move very quickly even while their mesh generators are moving slowly, and they can also have a very short sound-crossing time. It is therefore desirable to steer cells in such a way that they tend to become more regular. [@arepo] found an effective prescription for this, which is to give the mesh generators an additional component to their velocity, pointed in the direction of the center of mass. We repeat this prescription here: $$\vec w'_i = \vec w_i + \chi \left\{ \begin{array} {l@{\quad:\quad}l} 0 & d_i / (\eta R_i) < 0.9 \\ c_i {\vec s_i - \vec r_i \over d_i} {d_i - 0.9 \eta R_i \over 0.2 \eta R_i} & 0.9 < d_i / (\eta R_i) < 1.1 \\ c_i {\vec s_i - \vec r_i \over d_i} & 1.1 < d_i / (\eta R_i) \end{array} \right.$$ ${R_i = \sqrt{dV_i/\pi}}$ is the effective radius of cell i, ${d_i}$ is the distance between the cell’s mesh generating point ${\vec r_i}$ and its center of mass ${\vec s_i}$. ${c_i}$ is the local sound speed. $\eta$ and $\chi$ are arbitrary parameters for this prescription, which are typically set to ${\eta = 0.25}$ and ${\chi = 1.0}$ (the same values typically used by AREPO). Note that we do not implement a relativistic velocity addition formula, as ${\vec w}$ need not be interpreted as a physical velocity. Test Problems {#sec:test} ============= As this is a new method for solving relativistic hydrodynamics, we present a large number of test problems in one and two dimensions. In addition to the relativistic tests, we have included some nonrelativistic ones, to compare our code with AREPO.
One-Dimensional Tests --------------------- All 1D tests involving piecewise constant states are summarized in table \[tab:1d\]:

Table \[tab:1d\]: initial states for the 1D piecewise-constant tests.

Nonrelativistic Shock Tube (N = 100, ${\Gamma = 1.4}$, t = 3.0):

             x < .5   x > .5
    $\rho$   1        .25
    v        0        0
    P        1        .1795

Nonrelativistic Interacting Shocks (N = 400, ${\Gamma = 1.4}$, t = .038):

             x < .1   .1 < x < .9   x > .9
    $\rho$   1        1             1
    v        0        0             0
    P        1000     .01           100

Easy Relativistic Shock Tube (N = 400, ${\Gamma = 5/3}$, t = 0.4):

             x < .5   x > .5
    $\rho$   1        1
    vx       0        0
    vy       0        .99
    P        1000     .01

Hard Relativistic Shock Tube (N = 100, ${\Gamma = 5/3}$, t = 0.6):

             x < .5   x > .5
    $\rho$   1        1
    vx       0        0
    vy       .9       .9
    P        1000     .01

Our first two tests are identical to tests performed by [@arepo]. The first is a simple shock tube, a test which has also been performed by [@shock1]. To demonstrate the importance of the HLLC Riemann solver in our scheme, we perform four tests, varying whether the mesh is static or moving, and varying whether we use HLL or HLLC. Results are plotted in Fig. \[fig:shock1a\]. When the cells were moved and HLLC was employed, the contact discontinuity was very well approximated. In the other three cases, there was far more diffusion of the contact discontinuity. This demonstrates that the accuracy in this scheme comes from the combination of using a moving mesh and employing a multi-state Riemann solver. The second nonrelativistic test we perform involves the interaction of multiple shocks [@dblast]. Fig. \[fig:dblast\] shows the multiple-shock problem at ${t=.038}$. We compare results with a fixed and moving mesh, and using the HLL and HLLC Riemann solvers. Again, we see that the combination of using the HLLC solver and allowing the cells to move leads to a very high accuracy in the solution.
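The advective cancellation responsible for this behavior can be seen in a few lines (nonrelativistic, 1D, schematic; constants are ours):

```python
import numpy as np

GAMMA = 1.4

def cons_1d(rho, v, P):
    """Nonrelativistic conserved variables (density, momentum, energy)."""
    return np.array([rho, rho * v, 0.5 * rho * v * v + P / (GAMMA - 1.0)])

def flux_1d(rho, v, P):
    """Corresponding 1D Euler flux."""
    E = 0.5 * rho * v * v + P / (GAMMA - 1.0)
    return np.array([rho * v, rho * v * v + P, (E + P) * v])

# Pure advection: equal velocity and pressure, different densities.
# The exact (starred) HLLC states are just (rho_L, v, P) and (rho_R, v, P),
# so if the face moves with the contact, w = v, the Lagrangian flux
# F - w*U reduces to (0, P, P*v) on BOTH sides: no density diffuses.
v, P = 0.7, 1.0
lagrangian = [flux_1d(rho, v, P) - v * cons_1d(rho, v, P)
              for rho in (1.0, 0.25)]
```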
Table \[tab:easy\]: L1 error and convergence rate, easy relativistic shock tube.

    Code   N      L1 Error   Rate
    TESS   100    4.23e-1    --
           200    2.57e-1    0.82
           400    1.36e-1    0.82
           800    7.45e-2    0.98
           1600   3.54e-2    0.91
           3200   2.48e-2    0.95
    RAM    100    8.48e-1    --
           200    4.25e-1    1.0
           400    2.41e-1    0.82
           800    1.27e-1    0.92
           1600   6.43e-2    0.99
           3200   3.34e-2    0.95

For relativistic one-dimensional tests, we compare our code to the relativistic adaptive mesh refinement code RAM [@ram] which was tested for convergence against a variety of test problems. The first two are Riemann problems with transverse velocity, the so-called “Easy” and “Hard” shock tube tests. The easy shock tube test is plotted in Fig. \[fig:shock2\] at time ${t=0.4}$. For the easy shock tube, we find nearly first-order convergence, and smaller errors than RAM (see table \[tab:easy\] for convergence rates). For the hard shock tube, we find that like RAM, we need very high resolution to capture the solution accurately. However, because we can initially place the cells anywhere we want, we can resolve the initial discontinuity very well and capture the solution with as few as 100 computational zones (see Fig. \[fig:shock3\]). 50 of the zones were concentrated uniformly within a region ${\Delta x = .005}$ of the discontinuity and the remaining 50 cells were distributed uniformly through the rest of the domain. Using a uniform initial mesh, TESS showed first order convergence for this problem (table \[tab:hard\]).

Table \[tab:hard\]: L1 error and convergence rate, hard relativistic shock tube.

    Code   N      L1 Error   Rate
    TESS   400    7.12e-1    --
           800    4.54e-1    0.64
           1600   2.26e-1    1.0
           3200   1.10e-1    1.0
    RAM    400    5.21e-1    --
           800    3.63e-1    0.52
           1600   2.33e-1    0.64
           3200   1.26e-1    0.89

The last test we perform in one dimension is to demonstrate the convergence rate on a smooth problem.
We set up an isentropic pulse identical to the one used by [@ram]: $$\rho = \rho_{ref} ( 1 + \alpha f(x) )$$ $$f(x) = \left\{ \begin{array} {l@{\quad:\quad}l} ((x/L)^2 - 1)^4 & |x| < L \\ 0 & \mathrm{otherwise} \end{array} \right.$$ $$P = K \rho^\Gamma$$ ${\rho_{ref}}$, ${K}$, ${L}$, and ${\alpha}$ are constants. In our case, ${\rho_{ref} = 1.0}$, ${K = 100}$, ${L = 0.3}$, ${\alpha = 1}$. ${\Gamma}$ is the adiabatic index, chosen to be ${5/3}$. To determine the velocity, we use $$J_- = \frac{1}{2} \ln \left(\frac{1+v}{1-v}\right) - \frac{1}{\sqrt{\Gamma-1}} \ln \left(\frac{\sqrt{\Gamma-1}+c_s}{\sqrt{\Gamma-1}-c_s}\right) = \mathrm{constant} \label{eqn:rmninv}$$ where ${c_s}$ is the sound speed.

Table \[tab:isentropic\]: L1 error and convergence rate, 1D isentropic pulse.

    Code              N      L1 Error   Rate
    TESS              80     4.88e-3    --
                      160    1.78e-3    1.8
                      320    4.84e-4    1.9
                      640    1.20e-4    2.0
                      1280   2.93e-5    2.1
                      2560   6.97e-6    2.1
                      5120   1.47e-6    2.1
    RAM (U-PLM-RK4)   80     1.12e-2    --
                      160    3.56e-3    1.7
                      320    1.02e-3    1.8
                      640    2.60e-4    2.0
                      1280   6.49e-5    2.0
                      2560   1.62e-5    2.0
                      5120   4.04e-6    2.0
    RAM (U-PPM-RK4)   80     1.10e-2    --
                      160    2.56e-3    2.1
                      320    5.74e-4    2.2
                      640    1.34e-4    2.1
                      1280   3.10e-5    2.1
                      2560   7.33e-6    2.1
                      5120   1.82e-6    2.1

Fig. \[fig:isentropic\] shows the pulse at ${t=0.8}$ on a mesh with 400 cells. The L1 error and convergence rates are shown in table \[tab:isentropic\]. In order to make a reasonable comparison between the codes, we chose two different methods employed by RAM which are second order. Of course, if we had chosen to compare with the WENO solvers employed by RAM, it would be no contest, as RAM can get up to fifth order convergence for smooth flows. For this problem, we not only find smaller errors, but also slightly faster convergence than RAM (RAM employed piecewise linear reconstruction of the primitive variables, and a fourth-order Runge-Kutta time integration, leading to an overall second-order scheme).
In fact, convergence for TESS was slightly better than second order. This could be due to the fact that the method becomes “more Lagrangian” as the resolution increases; face velocities approach the velocities of contact waves in this limit.

Convergence in Multiple Dimensions
----------------------------------

On the surface, it is certainly not obvious that this second-order convergence extends to multiple dimensions; it would be quite difficult to prove such a thing mathematically. A convergence test in 2D is therefore essential if we want to establish confidence in our numerical scheme. A straightforward such test is to propagate an isentropic wave diagonally across the mesh. The mesh will initially be a square grid, but because the flow is nonuniform, the mesh will move in a nontrivial way, and we will be able to check whether second-order convergence continues to hold during such distortions. The result we present will employ a nonrelativistic wave in a periodic box ${(x,y) \in [0,1] \times [0,1]}$, with density and pressure given by the same formulas as in the 1D case, but now with $$f(x,y) = \sin^2( \pi (x+y) ).$$ For this test, we use ${K=.01}$, but all other constants are the same as in the previous example. The nonrelativistic limit of equation (\[eqn:rmninv\]) is simply $$v = {2 \over \Gamma-1} ( c_s - c_s^{ref} ). \label{eqn:isenv}$$ The direction of this velocity is of course diagonal to the mesh, $$\vec v = v( \hat x + \hat y )/\sqrt{2}.$$ Condition (\[eqn:isenv\]) continues to hold as the wave evolves, so we can use it as a way of measuring the L1 error for this problem. In Fig. \[fig:isen2d\] we see the pulse at ${t=1.0}$ on a ${50 \times 50}$ mesh, and the cells have clearly changed their shape and size in a nontrivial way. The L1 error for this problem is plotted in Fig. \[fig:l12d\]. The convergence rate for this problem was calculated to be 2.3, again slightly better than second order.
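One plausible concrete form of this error measure is a volume-weighted L1 deviation from condition (\[eqn:isenv\]); the exact weighting used above is not specified, so this is only a sketch with helper names of our own:

```python
import math

GAMMA = 5.0 / 3.0
K_CONST, RHO_REF = 0.01, 1.0

def sound_speed(rho):
    # nonrelativistic sound speed for P = K rho^Gamma
    return math.sqrt(GAMMA * K_CONST * rho**(GAMMA - 1.0))

def l1_error(cells):
    """cells: iterable of (rho, vx, vy, volume) tuples.

    Mean deviation of the diagonal velocity component from the
    isentropic condition v = 2/(Gamma-1) (c_s - c_s_ref).
    """
    cs_ref = sound_speed(RHO_REF)
    total, err = 0.0, 0.0
    for rho, vx, vy, vol in cells:
        v = (vx + vy) / math.sqrt(2.0)   # component along the diagonal
        v_exact = 2.0 / (GAMMA - 1.0) * (sound_speed(rho) - cs_ref)
        err += abs(v - v_exact) * vol
        total += vol
    return err / total
```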
Tests in Spherical and Cylindrical Coordinates
----------------------------------------------

TESS is written in a modular way so that it is straightforward to change the set of conserved quantities and to add source terms. As an example, we have implemented the equations in alternative coordinate systems for the two special cases of spherical symmetry and axisymmetry. Our extension to these coordinate systems takes the most naïve approach: we simply express the equations in an alternate coordinate system and solve them like any other hyperbolic system in conservation-law form. We do not wish to endure the complication of curvilinear Voronoi cells, which is why we specialize to the cases of spherical symmetry (1D) and axisymmetry (2D). For example, in cylindrical coordinates in axisymmetry the nonrelativistic conservation laws take the following form: $$U = \left( \begin{array}{c} r \rho \\ r \rho v_r \\ r \rho v_z \\ r^2 \rho v_\phi \\ r( \rho v^2/2 + \rho e ) \end{array} \right)$$ $$\vec F \cdot \hat n = \left( \begin{array}{c} r \rho \vec v \cdot \hat n \\ r ( \rho v_r \vec v \cdot \hat n + P \hat n_r ) \\ r ( \rho v_z \vec v \cdot \hat n + P \hat n_z ) \\ r^2 \rho v_\phi \vec v \cdot \hat n \\ r ( \rho v^2/2 + \rho e + P ) \vec v \cdot \hat n \end{array} \right)$$ $$S = \left( \begin{array}{c} 0 \\ \rho v_\phi^2 + P \\ 0 \\ 0 \\ 0 \end{array} \right)$$ where $r$ is the distance from the z-axis (note that ${\hat n}$ lives in the $r$-$z$ plane). The two new developments here are the presence of the radial coordinate $r$ in the equations and the non-zero source term. We evaluate both of these at the center of mass of a cell or face, depending on the context. The description in spherical coordinates and the extensions to relativistic hydrodynamics are completely analogous. We use this opportunity to test the method’s extension to these coordinate systems, as well as the ability of TESS to preserve symmetries using an unstructured mesh.
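These axisymmetric expressions transcribe directly into code. A sketch (ours, not TESS source), assuming a gamma-law internal energy density ${\rho e = P/(\Gamma - 1)}$:

```python
import numpy as np

GAMMA = 5.0 / 3.0   # adiabatic index (assumed gamma-law gas)

def conserved(prim, r):
    """U for axisymmetric hydro; prim = (rho, vr, vz, vphi, P)."""
    rho, vr, vz, vphi, P = prim
    v2 = vr * vr + vz * vz + vphi * vphi
    rho_e = P / (GAMMA - 1.0)            # internal energy density
    return np.array([r * rho,
                     r * rho * vr,
                     r * rho * vz,
                     r * r * rho * vphi,  # angular momentum row carries r^2
                     r * (0.5 * rho * v2 + rho_e)])

def flux(prim, nhat, r):
    """F.n through a face with normal nhat = (nr, nz) in the r-z plane."""
    rho, vr, vz, vphi, P = prim
    nr, nz = nhat
    vn = vr * nr + vz * nz
    v2 = vr * vr + vz * vz + vphi * vphi
    rho_e = P / (GAMMA - 1.0)
    return np.array([r * rho * vn,
                     r * (rho * vr * vn + P * nr),
                     r * (rho * vz * vn + P * nz),
                     r * r * rho * vphi * vn,
                     r * (0.5 * rho * v2 + rho_e + P) * vn])

def source(prim):
    """Geometric source: centrifugal + pressure terms in the radial row."""
    rho, vr, vz, vphi, P = prim
    return np.array([0.0, rho * vphi * vphi + P, 0.0, 0.0, 0.0])
```

As in the text, $r$ would be evaluated at the center of mass of the cell or face in question.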
We set up very simple shock-tube-like initial conditions: $$\rho = \left\{ \begin{array} {l@{\quad:\quad}l} 1.0 & r < .25 \\ 0.1 & r > .25 \end{array} \right.$$ $$P = \rho$$ $$v = 0$$ We perform a relativistic cylindrical and spherical explosion. For the cylindrical explosion, we perform the calculation at high resolution in 1D using cylindrical coordinates, and at low resolution in 2D using Cartesian coordinates. The resolution is as low as 50 $\times$ 50. In Fig. \[fig:cyl\] we show the results for a moving and a fixed mesh. We see that moving the cells continues to improve resolution of the contact discontinuity, and in this case we also see some improvement in symmetry – the values for the density profile are not as scattered in the moving-mesh case; they tend to lie along a single curve. For the spherical explosion (Fig. \[fig:sph\]), we see a similar story; the contact discontinuity is well preserved, and the spherical symmetry is captured somewhat better with the moving code.

Test Problems in Magnetohydrodynamics
------------------------------------

To further illustrate that TESS in principle extends to an arbitrary set of conserved quantities, we demonstrate a few test problems in magnetohydrodynamics. From one point of view, MHD (whether relativistic or nonrelativistic) is just another hyperbolic set of equations in conservation-law form, where electromagnetic energy, momentum, stress, and pressure are added to the fluid equations, and the magnetic induction also satisfies a conservation law (Faraday’s Law), which in the case of a perfect conductor (${\vec E = -\vec v \times \vec B}$) has the form $$\partial_t \vec B + \partial_j ( v^j \vec B - B^j \vec v ) = 0.$$ The main ingredient which distinguishes MHD from other conservation laws is that the equations also satisfy an elliptic constraint, ${\nabla \cdot \vec B = 0}$, which must be satisfied every timestep.
Analytically, the divergence of ${\vec B}$ is guaranteed to remain zero, but depending on the numerical scheme, constraint violations can generically grow exponentially when the equations are solved in discrete form. Over the years, many techniques have been suggested to address this issue, the most popular methods being variants of constrained transport [@toth], where steps are taken to ensure that all fluxes added to the B fields have zero divergence, so that the time derivative of ${\nabla \cdot \vec B}$ is guaranteed to remain zero through the entire evolution (in other words, only a solenoidal B field is ever added during the time evolution). This method would be quite difficult to extend to an unstructured mesh, so it seems that we must choose some other way of preventing these constraint-violating solutions from growing. We employ a hyperbolic divergence-cleaning scheme [@telegraph] which adds an extra conserved quantity, $\psi$, and modifies the equations so that the evolution equations for the magnetic field become $$\begin{aligned} \partial_t \vec B + \partial_j ( v^j \vec B - B^j \vec v ) + \vec \nabla \psi = 0 \\ \partial_t \psi + c^2_h \nabla \cdot \vec B = - {c^2_h \over c^2_p} \psi\end{aligned}$$ where ${c^2_h}$ and ${c^2_p}$ are freely specifiable constants which alter the behavior of the divergence-cleaning terms. If ${\nabla \cdot \vec B = 0}$ and ${\psi=0}$, the equations revert to the usual MHD equations, but the introduction of $\psi$ has the effect of altering the time evolution of the constraint so that it satisfies a wave equation, specifically the telegraph equation. This technique is generally not as desirable as constrained transport, partly because it doesn’t guarantee zero divergence, but also because its success depends strongly on the two freely specifiable constants, for which there is no obvious choice of values.
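In a finite-volume discretization these modifications amount to an extra term in each face flux plus a local damping of $\psi$. A schematic sketch (the operator-split exponential damping is our implementation choice, not necessarily what TESS does):

```python
import numpy as np

def glm_face_flux(B, psi, v, nhat, c_h):
    """Face flux for the augmented induction system: the B flux gains a
    psi*nhat term, and psi gets its own flux c_h^2 (B.n), so divergence
    errors propagate away at speed c_h."""
    Bn = float(np.dot(B, nhat))
    vn = float(np.dot(v, nhat))
    F_B = vn * B - Bn * v + psi * nhat
    F_psi = c_h**2 * Bn
    return F_B, F_psi

def damp_psi(psi, c_h, c_p, dt):
    # exact operator-split integration of d(psi)/dt = -(c_h/c_p)^2 psi
    return psi * np.exp(-(c_h / c_p)**2 * dt)
```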
Because we don’t have a constrained transport scheme (though we do speculate that one is possible), and also because this method is easy to implement, we currently use this divergence-cleaning form of the equations in our MHD code. We now look at several MHD tests. The first is a one-dimensional shock tube test in relativistic MHD, the relativistic version of the Brio-Wu shock tube [@bw88; @vp93]. The left and right states are as follows: $$\rho_L = 1, \qquad \rho_R = .125$$ $$P_L = 1, \qquad P_R = .1$$ The initial velocity is zero everywhere, and the magnetic field is given by the following: $$B_x = 0.5, \qquad B_z = 0$$ $$B_{yL} = 1, \qquad B_{yR} = -1$$ The adiabatic index ${\Gamma = 2}$. The calculation was done with 400 zones. The final state is shown at ${t = 0.4}$ in Fig. \[fig:briowu\]. It is clear that an advantage is gained in moving the cells, both in the sharpness of the contact discontinuity and in the shock front. Additionally, the value of the density in the shock wave appears to be more accurate when using a moving mesh. So, it would seem that there is great potential in using this Lagrangian approach for problems in magnetohydrodynamics. However, we wish to avoid reading too much into this result, as this is a one-dimensional problem, and most of the challenges in magnetohydrodynamics only appear when solving multidimensional problems. We consider now a test problem in nonrelativistic magnetohydrodynamics, a cylindrical explosion in a uniform magnetic field. The calculation is done in a box of dimensions ${[0,1] \times [0,1]}$. The initial conditions are as follows: $$\rho = 1.0$$ $$B_x = 1.0$$ $$P = \left\{ \begin{array} {l@{\quad:\quad}l} 10.0 & r < 0.1 \\ 0.1 & r > 0.1 \end{array} \right.$$ The magnetic pressure at ${t=0.1}$ is plotted in Fig. \[fig:mag\]. Also plotted is the numerically calculated divergence of the magnetic field.
This divergence is normalized in the following manner: $$(div B)_{norm} = {\nabla \cdot B \over |B|/R }$$ where $R$ is the characteristic size of the Voronoi cell. We see the code does a reasonable job of resolving the explosion at low resolution; however, the magnetic divergence is unacceptably large (the normalized magnetic divergence grows to be as large as 0.14). This large error in div B appears to be particularly problematic for this method. Any time the Voronoi mesh changes topology, there can be extremely rapid growth in magnetic divergence due to the appearance of new faces in the tessellation. This magnetic growth is so rapid that it shows up even in simple test problems like the one displayed. In fact, the growth rate is such that conventional divergence-cleaning techniques for suppressing the growth of div B seem to be insufficient to resolve the problem. We believe that this problem is not insurmountable, but the eventual solution might require a novel approach for treating div B. As such, we have much work to do in improving the magnetohydrodynamics part of our code. Another possible idea is to use the same basic numerical scheme, but impose the geometry of the cells by hand rather than determining them based on the motion of mesh points. If such a scheme were developed, constrained transport could potentially be possible, since the geometry of the cells could be known at runtime. This idea could have many other possible advantages, so it is a thought worth pursuing, but for TESS as it is currently written, we would like a better means for damping the growth of magnetic monopoles. On the other hand, for some problems, div B may not be the most important concern. If the magnetic divergence does not have time to grow, we may get to the end of a calculation before it starts to have an effect on the solution. As an example, we show the relativistic MHD rotor test [@delzanna], a challenging test of the robustness of our numerical scheme.
The calculation is done in a box of size ${[-.5,.5] \times [-.5,.5]}$. The initial conditions are given by the following: $$\rho = \left\{ \begin{array} {l@{\quad:\quad}l} 10.0 & r < 0.1 \\ 1.0 & r > 0.1 \end{array} \right.$$ $$P = 1.0, \qquad B_x = 1.0.$$ The disc ${r<0.1}$ is initially rigidly rotating with angular frequency ${\omega = 9.95}$, so that the edge of the disc has Lorentz factor ${\gamma \approx 10}$. The velocity field is zero outside the disc. The calculation was done using a ${400 \times 400}$ mesh, and the results are shown in Fig. \[fig:rotor\] at time ${t=0.4}$. This is ordinarily a very challenging RMHD test problem, but we required no special fail-safe procedures to solve it. Actually, for this particular problem, we are at a slight disadvantage, since the mesh does not necessarily give us adaptive resolution exactly where we want it; the rarefaction is under-resolved (we start with an initially uniform mesh, but the central cells expand by roughly a factor of five in length). However, it seems that, at least for our method and for this problem, div B does not interfere with our results.

Kelvin-Helmholtz Instability
----------------------------

We find that TESS is very good at solving problems involving fluid instabilities, particularly in cases where a contact discontinuity is being perturbed. The Kelvin-Helmholtz instability is one such example, which occurs at the interface between shearing fluids. We look at the nonrelativistic case first, to compare with AREPO. We use a periodic box, ${[0,1] \times [0,1]}$, with a stripe in the middle half of the box ${1/4 < y < 3/4}$.
The pressure is uniform throughout the domain, and the density and x-velocity are uniform in each region: $$P = 2.5$$ $$\rho = \left\{ \begin{array} {l@{\quad:\quad}l} 2 & .25 < y < .75 \\ 1 & otherwise \end{array} \right.$$ $$v_x = \left\{ \begin{array} {l@{\quad:\quad}l} .5 & .25 < y < .75 \\ -.5 & otherwise \end{array} \right.$$ This sets up the initial conditions for the instability, and to excite a particular mode we add a small perturbation to the y-component of the velocity: $$v_y = w_0 \sin(4 \pi x) f(y)$$ $$f(y) = \exp(-{ (y-.25)^2 \over 2 \sigma^2}) + \exp(-{(y-.75)^2\over 2 \sigma^2}) .$$ We choose ${w_0 = .1}$ and ${\sigma = .05/\sqrt{2}}$. We use an adiabatic index of ${\Gamma = 5/3}$. These initial conditions are identical to those used by [@arepo]. Our visualization is different, so it is difficult to directly compare results, but it appears that the growth rate of the instability is the same for a ${50 \times 50}$ lattice (see Fig. \[fig:kh1\]). Tests of Galilean invariance, on the other hand, indicate that our numerics do depend on the moving reference frame in which the calculation is performed. When adding an overall velocity to the initial conditions, we find that the results are significantly changed if the boost velocity is large compared to the flow velocity. This can be traced back to the fact that the HLLC solver is not invariant under Galilean boosts. It would be interesting to see how differently the code would perform using an exact Riemann solver, though this is not a very high priority, because TESS is ultimately designed to solve relativistic problems. A more optimistic way to interpret this result is to say that Galilean invariance is not a necessary condition for accurately capturing contact discontinuities. As a relativistic test, we use TESS to calculate the growth rate of the Kelvin-Helmholtz instability in the relativistic case.
For this purpose, we change the initial conditions slightly: the initial density is uniform (${\rho = 1}$) everywhere, the pressure is now ${P = 1}$, and the adiabatic index ${\Gamma = 1.4}$. The initial perturbation is given to the y component of the four-velocity, ${u^y}$, and the x component ${u^x}$ is varied. To measure the development of the instability, we add an additional passive scalar quantity, $X$, which is advected with the fluid velocity, for which we choose ${X=0}$ outside the stripe and ${X=1}$ inside the stripe. We measure the growth of the instability by tracking the position of the contact discontinuity in $X$. Note that this is much more straightforward to do with TESS than it would be for an Eulerian code. In fact, we can track the development of the instability even while the perturbation to the contact is smaller than the size of a Voronoi cell. We compare the linear growth rate against the analytic solution. The growth rate has been calculated analytically by [@kh]. It is the imaginary part of ${\phi k c_s}$, where $k$ is the wavenumber of the perturbation (${k = 4\pi}$ in this case), ${c_s}$ is the sound speed, and ${\phi}$ is given by the following formula: $${\phi^2 \over \mathcal{M}^2} = {1-v^2 \over 1-c_s^2} { {\mathcal{M}^2 + 1 - v^2 - \sqrt{ 4 \mathcal{M}^2(1-v^2) + (1+v^2)^2 } } \over {\mathcal{M}^2 + 2 v^2 } }$$ Here, $v$ is the flow speed, and ${\mathcal{M}}$ is the relativistic Mach number, defined as $$\mathcal{M} = {\gamma v \over \gamma_s c_s}$$ where ${c_s}$ is the relativistic sound speed, and ${\gamma_s}$ is the corresponding Lorentz factor. In Fig. \[fig:khgrowth\] we plot the numerically calculated growth rates against this analytical formula. At ${64 \times 64}$ resolution, the growth rate was not very accurately calculated, but for all other tests we seem to get very accurate results, considering the low resolution.
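The analytic curve in Fig. \[fig:khgrowth\] is ${\mathrm{Im}(\phi)\, k c_s}$ with $\phi$ from the formula above. A direct transcription (a sketch; the numerical value of the sound speed for the stripe state is our evaluation of a gamma-law EOS, not stated in the text):

```python
import cmath
import math

def growth_rate(v, cs, k):
    # Relativistic Mach number M = gamma v / (gamma_s c_s)
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    gamma_s = 1.0 / math.sqrt(1.0 - cs * cs)
    M2 = (gamma * v / (gamma_s * cs))**2
    phi2 = (M2 * (1.0 - v * v) / (1.0 - cs * cs)
            * (M2 + 1.0 - v * v
               - math.sqrt(4.0 * M2 * (1.0 - v * v) + (1.0 + v * v)**2))
            / (M2 + 2.0 * v * v))
    # An unstable mode has phi^2 < 0; its growth rate is Im(phi) k c_s.
    return cmath.sqrt(phi2).imag * k * cs

# Sound speed of the stripe state (rho = P = 1, Gamma = 1.4), assuming a
# gamma-law EOS: h = 1 + Gamma/(Gamma-1) P/rho = 4.5, c_s = sqrt(1.4/4.5).
CS = math.sqrt(1.4 / 4.5)
```

Slow shear (small $v$) gives ${\phi^2 < 0}$ and hence a positive growth rate, while sufficiently fast shear stabilizes the interface.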
Notice that in contrast with the nonrelativistic example, it does not make sense to even ask about the Lorentz invariance of the code for this problem, since periodic boundary conditions are only periodic in one reference frame, due to the relativity of simultaneity. In general, even if we do not employ periodic boundaries, testing Lorentz invariance can be an extremely complicated task, because one would have to take into account that different mesh points can be sampling the fluid motion at different points in time. In any case, we know that our code is not manifestly Lorentz invariant, so we would not expect a striking improvement in Lorentz invariance over an Eulerian code.

Rayleigh-Taylor Instability
---------------------------

A related fluid instability is the Rayleigh-Taylor instability, where a fluid of high density is balanced on top of a fluid of low density, and the contact discontinuity between these two different densities is perturbed by the force of gravity. In the relativistic case, we can get a gravitational field by transforming to an accelerated reference frame. This is not difficult to do, but we can also simply solve the relativistic equations in the weak-field limit. For the purposes of calculating the linear growth rate, it would not matter which we do, since for a small enough perturbation, the potential jump between the highest and lowest points of the perturbation will be small, and hence the weak-field limit applies. We set up the following initial conditions in the computational domain ${(x,y) \in [0,1] \times [-1,1]}$: $$\rho = \left\{ \begin{array} {l@{\quad:\quad}l} 1.0 & y < 0 \\ \rho_2 & y > 0 \end{array} \right.$$ $$P = P_0 e^{-gy/(\Gamma-1)} +( \Gamma-1)\rho(e^{-gy/(\Gamma-1)} - 1)$$ For the initial velocity perturbation: $$v_y = w_0 \cos(2 \pi x) \exp(-{y^2 \over 2 \sigma^2} ) .$$ We choose constants ${P_0 = 10}$, ${w_0 = 0.03}$, ${\sigma = .05/\sqrt{2}}$, and the gravitational field is ${g = 0.1}$.
${\rho_2}$ is varied to get different growth rates. In Fig. \[fig:rt1\], we show the growth of the instability using a moving or a fixed mesh, at fairly low resolution (${128 \times 128}$). The initial mesh was not perfectly square; the cells were randomly offset slightly to improve the regularity of the mesh throughout the calculation. This small (0.1%) offset is enough to cause asymmetry, even in the fixed-mesh case, but we decided to employ the offset in both cases, so that the only difference between the two cases is whether the mesh points are allowed to move. The linear growth rate of the instability is easy to derive even in the relativistic case, because the only relativistic effect for small perturbations is that gravity couples to energy instead of mass. Therefore, the growth rate for the relativistic case is still $$R = \sqrt{ g k \mathcal{A} },$$ where $g$ is the gravitational field strength, $k$ is the wavenumber of the perturbation (${k=2\pi}$ in this case), and ${\mathcal{A}}$ is the relativistic Atwood number, defined as $$\mathcal{A} = { \rho_2( 1+e_2 ) - \rho_1( 1+e_1 ) \over \rho_2 ( 1 + e_2 ) + \rho_1 ( 1 + e_1 ) } \label{eqn:rtgrowth}$$ In Fig. \[fig:rtgrowth\] we plot the growth rates numerically calculated using TESS, and compare with the analytical result (\[eqn:rtgrowth\]). Even at extremely low resolutions, we capture the growth rates quite accurately.

Rayleigh-Taylor Instability in a Decelerating Shock
===================================================

The Rayleigh-Taylor instability provides us with a useful astrophysical application in the context of a decelerating relativistic shock. The reason this is relevant is that an external gravitational field is equivalent to an accelerating reference frame. Hence a low-density gas accelerating into a high-density gas or a high-density gas decelerating into a low-density gas will exhibit this same instability.
The latter could potentially occur in a shockwave, as matter is pushing its way through some ambient medium [@levinson]. If the explosion is spherical, and the density profile has a power law dependence, ${ \rho \sim r^{-k} }$, then the shock will decelerate for ${k < 3}$. If we assume a simple point explosion in this density profile, we recover the Blandford-McKee solution [@bmk], or at late enough times, the nonrelativistic Sedov-Taylor solution. In both cases, the shock front and contact discontinuity coincide. However, if there is excess mass in the initial explosion, so that the total mass is greater than the integral of the power-law density profile, then there will be a coasting period until the shock overtakes an amount of mass comparable to this excess mass. This is followed by deceleration to a two-shock solution [@ns06], because the information that the shock is now decelerating needs to be transported back upstream. In this case, there will be a contact discontinuity between the two shocks, and the density discontinuity across this contact can be quite significant. For this situation, the Rayleigh-Taylor instability can play an important role in the solution, because the contact is quite unstable. A calculation in spherical symmetry would be incorrect, even though the initial conditions are spherically symmetric. We capture the Rayleigh-Taylor instability from a spherically symmetric explosion in two dimensions using axisymmetric coordinates, for density profiles corresponding to ${k=0,1,2}$. We use the domain ${(s,z) \in [0,1] \times [-1,1]}$ where ${r^2 = s^2 + z^2}$ and set up the initial conditions as follows: $$\rho = \left\{ \begin{array} {l@{\quad:\quad}l} \rho_0 (r_0/r_{min})^{k_0} & r<r_{min} \\ \rho_0 (r_0/r)^{k_0} & r < r_0 \\ \rho_0 (r_0/r)^{k} & r > r_0 \\ \end{array} \right.$$ We set ${k_0 = 4}$ and ${r_0 = .1}$ to start with an accelerating shock which typically coasts until around ${r \sim .2}$, then decelerates. 
We use ${r_{min} = .001}$ to ensure that the system has a finite amount of mass. The initial pressure profile is given by $$P = \rho \left\{ \begin{array} {l@{\quad:\quad}l} e_0/3 & r < r_{exp} \\ 10^{-6} & r > r_{exp} \\ \end{array} \right.$$ We use a gamma-law equation of state with ${\Gamma = 4/3}$. We choose ${r_{exp} = .003}$, and choose ${e_0}$ to get a shock which reaches Lorentz factors of ${\gamma \sim 10}$ before it decelerates to mildly relativistic flow by ${t = .95}$, when the calculation ends. The values of ${e_0}$ are given in table \[tab:e0\], along with a rough estimate of the bulk Lorentz factor for the fluid at its maximum speed, and also the Lorentz factor at the end of the calculation. Note that these initial conditions are fundamentally different from the conditions chosen by [@levinson]; his results came from a perturbative analysis of the self-similar solution of [@ns06], whereas ours begins with an explosion. In spherical symmetry, our explosion approaches this self-similar solution, but due to the instability, in 2D our calculation should never find this solution (at late enough times, however, it should eventually find the Sedov solution).

[cccc]{}
$k$ & $e_0$ & $\gamma_{max}$ & $\gamma_{end}$\
0 & 6 & 11 & 4\
1 & 4 & 9 & 4\
2 & 6 & 13 & 11

In Figures \[fig:rtx1\] - \[fig:rtx3\] we show the growth of the instability for three power-law density profiles. It appears that the instability grows rapidly enough to catch up to the forward shock. In fact, relativity appears to help us in this case, because for large Lorentz factors the shock front must remain a short distance from the contact discontinuity, keeping it within reach until the Rayleigh-Taylor fingers can catch up to the shock. The fact that the instability overtakes the forward shock appears to be generic and does not require special initial conditions, although it would of course not occur for a simple point explosion, because in this case the shock and contact discontinuity would already coincide.
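For reference, these initial data transcribe to a few lines. This sketch fills in ${e_0 = 6}$ from the ${k=0}$ row of table \[tab:e0\]; the helper names are ours:

```python
RHO0, R0, RMIN, K0 = 1.0, 0.1, 0.001, 4.0
REXP, E0 = 0.003, 6.0   # e_0 from the k = 0 row of table [tab:e0]

def density(r, k):
    if r < RMIN:
        return RHO0 * (R0 / RMIN)**K0   # flat core keeps the total mass finite
    if r < R0:
        return RHO0 * (R0 / r)**K0      # steep inner profile: accelerating shock
    return RHO0 * (R0 / r)**k           # ambient power law: deceleration for k < 3

def pressure(r, k):
    return density(r, k) * (E0 / 3.0 if r < REXP else 1.0e-6)
```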
Using steep power-laws with little deceleration does not seem to change this picture; the instability can still catch up to the shock even when ${k=2}$ and the Lorentz factor is 11, as in our third test case. This result is in qualitative agreement with the results of [@levinson], whose work was restricted to the linear growth rate. To our knowledge, ours is the first direct numerical calculation of the instability for a relativistic shock. It is worth noting another advantage of TESS which is shown most plainly in Fig. \[fig:rtx3\]. If we know in advance where to initially place the cells, we can get high resolution exactly where we need it. In this case, we initially concentrated most of the cells near the origin, and the region of high resolution followed the motion of the shock. Nearly all of the zones have been compressed into the shock front, giving us very high resolution there. We generated this initial mesh in the following way: we started with an initial ${512 \times 1024}$ uniform mesh in the domain ${(x,y) \in [0,1] \times [-1,1]}$ (though we staggered mesh points so that our initial mesh was not exactly square). We then changed the location of the mesh points via the following prescription: $$\begin{aligned} r = \max( |x| , |y| ) \\ x \rightarrow x e^{k(r-1)} \\ y \rightarrow y e^{k(r-1)}\end{aligned}$$ where we chose ${k = 3.5}$. Thus our initial resolution was ${e^k \sim 33}$ times higher near the origin than it would be for a uniform mesh. It should be noted that this effective resolution changes throughout the time evolution, partially due to the shock overtaking more cells, and partially due to the increasing amount of volume which is being well resolved.

Summary {#sec:summary}
=======

TESS is a new relativistic hydrodynamics code based on the Voronoi tessellation. It performs particularly well on problems with sharp discontinuities, especially contact discontinuities. TESS has been extensively tested.
For smooth problems, convergence is slightly better than second order. For problems involving a moving contact discontinuity, TESS demonstrated a clear advantage over the Eulerian codes in that smearing of contact discontinuities is greatly reduced. It was also demonstrated that employing the HLLC solver was necessary to take full advantage of the Lagrangian nature of this method. For many nonrelativistic problems, TESS has success comparable to that of AREPO, but unfortunately lacks the Galilean invariance property. We have applied TESS to an astrophysical example, testing the stability of a decelerating spherical shock wave. We found that the contact discontinuity behind the shock was unstable to the Rayleigh-Taylor instability, so much so that the instability was able to reach the forward shock. A detailed study will be presented in a future publication. The structure of TESS is relatively simple: there is no need to perform any explicit rotations or boosts, and we therefore have the luxury of using an approximate Riemann solver, so long as the solver doesn’t inherently diffuse contact discontinuities. This is a definite advantage, because for some hyperbolic systems, calculating the exact solution to the Riemann problem is computationally intensive if not impossible. Our method can be used to solve a wide range of hyperbolic problems, like general relativistic hydrodynamics or magnetohydrodynamics. The tessellation algorithm is also quite simple and very robust; it takes little effort to implement, and is not bug-prone. In speed tests, we find the tessellation algorithm consumes roughly half of the code’s run-time. We urge care in the interpretation of this, however, as very little work has been spent on optimization, and moreover this figure is highly problem-dependent, as is the code’s overall runtime. On a 2.67GHz Intel Xeon X5550 processor, TESS spends roughly 40 microseconds per zone per timestep, which is comparable to RAM’s efficiency.
We hope to improve the efficiency in future developments. The code is structured in such a way that an extension to three dimensions merely requires writing a three-dimensional tessellation algorithm, which would be an extension of the two-dimensional algorithm already outlined. Making the code run in parallel does not pose any major academic hurdles (apart from load-balancing). Both of these ideas are extensions to the code which we are currently pursuing. We are also interested in implementing more complicated boundary conditions, mesh refinement, and a local time step, as these have all been implemented in AREPO and are thus a solved problem. This work was supported in part by NASA under grant NNX10AF62G issued through the Astrophysics Theory Program (ATP) and by the NSF through grant AST-1009863. The authors would like to thank the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics for hospitality while finishing this work. We are grateful to John Hawley, Debora Sijacki, Volker Springel, Jim Stone, Mark Vogelsberger, Weiqun Zhang, and Jonathan Zrake for helpful comments or discussions. We would also like to thank the anonymous referee for his/her thorough review.

Primitive Variable Solver for Relativistic Magnetohydrodynamics {#app}
===============================================================

The conversion from conserved variables back to their primitive counterparts is nontrivial for relativistic magnetohydrodynamics. It is particularly challenging if we wish to use an arbitrary equation of state. Our code employs a three-dimensional Newton-Raphson solver to recover the primitive variables from the conserved variables. The equations being solved are nearly the same as those given by [@cons2prim]; we derive them again here for completeness.
We begin with the eight conserved variables, which we know: $$D = \gamma \rho, \qquad \vec B, \qquad Q^\mu = T^{0 \mu}_{fluid} + T^{0 \mu}_{EM}$$ We also make the following convenient definitions: $$K \equiv \gamma^2 v^2 = \gamma^2-1$$ $$W \equiv \gamma^2 \rho h = \sqrt{K+1} D + (K+1)( \epsilon + P ) \label{eqn:W}$$ The electromagnetic energy and momentum are straightforward to calculate assuming ${\vec E = - \vec v \times \vec B}$: $$\begin{aligned} T^{0 0}_{EM} = {1 \over 2} ( E^2 + B^2 ) = {1 \over 2} ( (v \times B )^2 + B^2 ) = B^2 - {1 \over 2 } ( B^2 + (\vec u \cdot \vec B)^2 ) / \gamma^2 \\ T^{0 i}_{EM} = ( E \times B )^i = - ( ( v \times B ) \times B )^i = B^2 v^i - B^i (\vec u \cdot \vec B)/ \gamma\end{aligned}$$ So, the four-momentum $Q$ can be written out explicitly: $$\begin{aligned} Q^0 = W - P + B^2 - {1 \over 2} { B^2 + (\vec u \cdot \vec B)^2 \over \gamma^2} \\ \vec Q = ( W + B^2 ) \vec v - ( \vec u \cdot \vec B) \vec B /\gamma\end{aligned}$$ Following Noble, we can also take the dot product ${\vec Q \cdot \vec B}$ to evaluate ${\vec u \cdot \vec B}$: $$\vec u \cdot \vec B = \gamma \vec B \cdot \vec Q / W$$ If we substitute this back into the equations for the four-momentum, we arrive at the following two relations: $$Q^0 = W - P + B^2 - {1 \over 2}( B^2 / (K+1) + ( \vec B \cdot \vec Q / W )^2 ) \label{eqn:c2p1}$$ $$\vec Q^2 = ( W + B^2 )^2 K/(K+1) - (2 W + B^2) ( \vec B \cdot \vec Q)^2 / W^2 \label{eqn:c2p2}$$ Equations (\[eqn:c2p1\]) and (\[eqn:c2p2\]), along with the definition of W (\[eqn:W\]), constitute three equations in the three unknown variables $W$, $K$, and the temperature $T$. We can now easily turn this into a Newton-Raphson rootfinding scheme, where we search the three-dimensional parameter space of $W$, $K$, and $T$ for a solution to the three equations.
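To make the structure concrete, here is a toy Newton-Raphson transcription for the unmagnetized limit (${\vec B = 0}$) of equations (\[eqn:c2p1\]), (\[eqn:c2p2\]), and (\[eqn:W\]), using the adiabatic closure described below (${P = T}$, ${\epsilon = T/(\Gamma-1)}$). The finite-difference Jacobian is our shortcut for brevity; a production solver would use analytic derivatives:

```python
import numpy as np

GAMMA = 5.0 / 3.0

def residuals(x, D, Q0, Q2):
    W, K, T = x
    P, eps = T, T / (GAMMA - 1.0)   # adiabatic closure: P = T
    return np.array([
        (W - P) - Q0,                                      # eqn (c2p1), B = 0
        W * W * K / (K + 1.0) - Q2,                        # eqn (c2p2), B = 0
        np.sqrt(K + 1.0) * D + (K + 1.0) * (eps + P) - W,  # definition of W
    ])

def solve(D, Q0, Q2, guess, tol=1e-11, itmax=50):
    x = np.array(guess, dtype=float)
    for _ in range(itmax):
        F = residuals(x, D, Q0, Q2)
        J = np.empty((3, 3))
        for j in range(3):              # forward-difference Jacobian
            dx = np.zeros(3)
            dx[j] = 1e-8 * max(1.0, abs(x[j]))
            J[:, j] = (residuals(x + dx, D, Q0, Q2) - F) / dx[j]
        step = np.linalg.solve(J, F)
        x -= step
        x[1] = max(x[1], 0.0)           # reset K if it goes negative
        if np.max(np.abs(step) / np.maximum(np.abs(x), 1e-10)) < tol:
            break
    return x
```

For example, the primitive state ${\rho = 1}$, ${v = 0.5}$, ${P = 0.1}$ has ${K = 1/3}$ and ${W = 5/3}$, giving ${D = 2/\sqrt{3}}$, ${Q^0 = W - P}$, and ${\vec Q^2 = W^2 K/(K+1)}$; the solver recovers the state from a nearby guess.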
All of this assumes an arbitrary equation of state, $$\begin{aligned} P = P(\rho,T) = P( {D \over \sqrt{K+1}} , T ) \\ \epsilon = \epsilon (\rho,T) = \epsilon( {D \over \sqrt{K+1}} , T )\end{aligned}$$ Note that ${W, K , T \in [0,\infty)}$, so that we don’t have to worry about what to do if one of our variables is too large (this would be the case if, for example, we used velocity as one of our unknowns). If K becomes negative on a given iteration, it is reset to zero to prevent taking the square root of a negative number, but W and T are generally not constrained. For an adiabatic equation of state, we use the same framework but make the definition $$P(\rho,T) \equiv T, \qquad \epsilon(\rho,T) \equiv T/(\Gamma - 1)$$ so that effectively pressure takes the place of temperature as the third variable being solved for. For our initial guess values, we use the values of W, K, and T from the previous timestep. However, if the solver fails to find a root, we try again using new guess values. One idea is to try an estimate which is exact in the limit where rest-mass or magnetic energy dominates. In doing so, we obtain the following estimates for K: $$K_D = { \vec Q^2 \over D^2 } \qquad K_B = { (\vec Q \cdot \vec B)^2 \over D^2 B^2 }$$ To take into account both of these possibilities, we use a guess value which is a weighted average of these two estimates: $$K_{est} = { D K_D + B^2 K_B \over D + B^2 }$$ If this estimate fails, we can use a method found by [@cd08] and try a safer set of guess values, based on the maximum possible values (also assuming the Lorentz factor is never larger than ${10^4}$): $$\epsilon^{*} = ( Q^0 - D - B^2/2 )/D, \qquad \rho^{*} = D , \qquad P^{*} = P( \rho^{*} , \epsilon^{*} )$$ $$W_{guess} = Q^0 + P^{*} - B^2/2 , \qquad K_{guess} = 10^8 , \qquad T_{guess} = T( \rho^{*} , \epsilon^{*} )$$ To determine when we have converged on the inverted state, we need some error function which will measure how close we are to the true solution.
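As a concrete illustration, the three-variable scheme can be sketched in a few lines of Python. This is a minimal sketch rather than the code's actual implementation: it assumes the adiabatic ($\Gamma$-law) closure $P = T$, $\epsilon = T/(\Gamma-1)$ described above, uses a finite-difference Jacobian, and all function names are ours.

```python
import numpy as np

GAMMA = 4.0 / 3.0  # adiabatic index for the Gamma-law closure P = T, eps = T/(Gamma-1)

def prim_to_cons(rho, v, B, P):
    """Forward map (primitives -> conserved); used here only to build test data."""
    v, B = np.asarray(v, float), np.asarray(B, float)
    lor = 1.0 / np.sqrt(1.0 - v @ v)                 # Lorentz factor gamma
    D = lor * rho
    W = lor**2 * (rho + GAMMA * P / (GAMMA - 1.0))   # W = gamma^2 rho h
    uB = lor * (v @ B)                               # u . B
    Q0 = W - P + B @ B - 0.5 * (B @ B + uB**2) / lor**2
    Qv = (W + B @ B) * v - uB * B / lor
    return D, Q0, Qv

def residuals(x, D, B2, Q2, Q0, QB):
    """Residuals of Eqs. (W), (c2p1), (c2p2) in the unknowns (W, K, T)."""
    W, K, T = x
    K = max(K, 0.0)                                  # reset negative K, as in the text
    Kp1 = K + 1.0
    f1 = W - np.sqrt(Kp1) * D - Kp1 * (T / (GAMMA - 1.0) + T)
    f2 = Q0 - (W - T + B2 - 0.5 * (B2 / Kp1 + (QB / W) ** 2))
    f3 = Q2 - ((W + B2) ** 2 * K / Kp1 - (2.0 * W + B2) * QB**2 / W**2)
    return np.array([f1, f2, f3])

def cons_to_prim(D, B, Q0, Qv, guess, tol=1e-11, itmax=50):
    """3D Newton-Raphson recovery of (rho, v, T) from conserved variables."""
    B, Qv = np.asarray(B, float), np.asarray(Qv, float)
    B2, Q2, QB = B @ B, Qv @ Qv, Qv @ B
    x = np.asarray(guess, float)
    for _ in range(itmax):
        f = residuals(x, D, B2, Q2, Q0, QB)
        J = np.empty((3, 3))                         # finite-difference Jacobian
        for j in range(3):
            h = 1e-7 * max(abs(x[j]), 1.0)
            xp = x.copy(); xp[j] += h
            J[:, j] = (residuals(xp, D, B2, Q2, Q0, QB) - f) / h
        dx = np.linalg.solve(J, -f)
        x += dx
        x[1] = max(x[1], 0.0)
        # combined relative-change stopping criterion
        delta = (abs(dx[0] / x[0]) + (dx[1] / max(x[1], 1e-300)) ** 2
                 + (dx[2] / x[2]) ** 2)
        if delta < tol:
            break
    W, K, T = x
    rho = D / np.sqrt(K + 1.0)
    v = (Qv + (QB / W) * B) / (W + B2)               # invert Qv = (W+B^2)v - (u.B)B/gamma
    return rho, v, T
```

A production version would add the fallback guesses and the safeguards discussed in the text; the sketch only shows the core iteration.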
For example, one natural choice is $$\delta = | {\Delta W \over W }|$$ where ${\Delta W}$ measures how much W has changed since the last iteration. The strategy then would be to iterate until ${\delta < TOL}$ for some specified tolerance, typically ${TOL \sim 10^{-11}}$. Unfortunately, it’s possible for W to be changing very slowly while K and T are still far from the solution. To prevent this from being a problem, we modify the error function as follows: $$\delta = | {\Delta W \over W }| + ({\Delta K \over K })^2 + ({\Delta T \over T })^2$$ This agrees with the previous definition when we are close to the true solution, but away from the solution, it helps to prevent “false positives”, where ${\delta}$ becomes small before the true solution has been found. Aloy, M. A., Ib[á]{}[ñ]{}ez, J. M., Mart[í]{}, J. M., Muller, E. 1999, , 122, 151 Anderson, M., Hirschmann, E. W., Liebling, S. L., & Neilsen, D. 2006, Classical and Quantum Gravity, 23, 6503 Anninos, P., Fragile, P. C., & Salmonson, J. D. 2005, , 635, 723 Ant[ó]{}n, L., Zanotti, O., Miralles, J. A., Mart[í]{}, J. M., Ib[á]{}[ñ]{}ez, J. M., Font, J. A., & Pons, J. A. 2006, , 637, 296 Beckwith, K., & Stone, J. M. 2011, , 193, 6 Blandford, R. D. & McKee, C. F. 1976, PhFl, 19, 1130 Bodo, G., Mignone, A., & Rosner, R. 2004, , 70, 036304 Börgers, C., & Peskin, C. S. 1987, J. Comput. Phys., 70, 397 Brio, M., & Wu, C. C. 1988, J. Comput. Phys., 75, 400 Cerdá-Duran, P., Font, J. A., Antón, L., Müller, E. 2008, , 492, 937 Dedner, A., Kemm, F., Kröner, D., Munz, C.-D., Schnitzer, T., Wesenberg, M. 2002, J. Comput. Phys., 175, 645 Del Zanna, L., Bucciantini, N., Londrillo, P. 2003, , 400, 397 Del Zanna, L., Zanotti, O., Bucciantini, N., & Londrillo, P. 2007, , 473, 11 De Villiers, J.-P., & Hawley, J. F. 2003, , 589, 458 Duncan, G. C., & Hughes, P. A. 1994, , 436, L119 Etienne, Z. B., Liu, Y. T., & Shapiro, S. L. 2010, , 82, 084031 Falle, S. A. E. G., & Komissarov, S. S. 1996, , 278, 586 Font, J. A. 
2003, Living Reviews in Relativity, 6, 4 Gammie, C. F., McKinney, J. C., & T[ó]{}th, G. 2003, , 589, 444 Giacomazzo, B., & Rezzolla, L. 2006, Journal of Fluid Mechanics, 562, 223 Giacomazzo, B., & Rezzolla, L. 2007, Classical and Quantum Gravity, 24, 235 Greif, T. et al. 2011, , in press, arXiv:1101.5491v2 Hernquist, L., & Katz, N. 1989, , 70, 419 Kurganov, A., & Tadmor, E. 2000, J. Comput. Phys., 160, 241 Levinson, A. 2010, GAFD, 104, 85 Mart[í]{}, J. M., & Muller, E. 2003, Living Reviews in Relativity, 2, 3, http://www.livingreviews.org/lrr-2003-7 Meliani, Z., Keppens, R., Casse, F., & Giannios, D. 2007, , 376, 1189 Mignone, A., & Bodo, G. 2005, , 364, 126 Mignone, A., Bodo, G., Massaglia, S., Matsakos, T., Tesileanu, O., Zanni, C., & Ferrari, A. 2007, , 170, 228 Nagataki, S. 2009, , 704, 937 Nakamura, K., & Shigeyama, T. 2006, , 645, 431 Noble, S. C., Gammie, C. F., McKinney, J. C., DelZanna, L. 2006, , 641, 626 Noh, W. F. 1964, Methods Comput. Phys., 3, 117 Pons, J. A., Ma Mart[í]{}, J., Muller, E. 2000, Journal of Fluid Mechanics, 422, 125 Rosswog, S. 2010, Journal of Computational Physics, 229, 8591 Springel, V. 2010, , 401, 791 Tchekhovskoy, A., McKinney, J. C., & Narayan, R. 2007, , 379, 469 Tóth, G. 2000, J. Comput. Phys, 161, 605 van der Holst, B., Keppens, R., & Meliani, Z. 2008, Computer Physics Communications, 179, 617 van Putten, M. H. P. M., J. Comput. Phys., 105, 339 Whitehurst, R. 1995, , 227, 655 Woodward, P. & Colella, P. 1984, J. Comput. Phys., 54, 115 Zhang, W., & MacFadyen, A. I. 2006, , 164, 255
--- abstract: 'An on-demand single photon source is a key element in a series of prospective quantum technologies and applications. We demonstrate the operation of a tuneable on-demand microwave photon source based on a fully controllable superconducting artificial atom strongly coupled to an open-end transmission line (a 1D half-space). The atom emits a photon upon excitation by a short microwave $\pi$-pulse applied through a control line weakly coupled to the atom. The emission and control lines are well decoupled from each other, preventing the direct leakage of radiation from the $\pi$-pulses used for excitation. The estimated efficiency of the source is higher than 75% and remains about 50% or higher over a wide frequency range from 6.7 to 9.1 GHz, continuously tuned by an external magnetic field.' author: - 'Z.H. Peng' - 'J.S. Tsai' - 'O.V. Astafiev' title: 'Tuneable on-demand single-photon source' --- Control and manipulation of light at the single-photon level [@Kimble2008; @Duan2010; @Sangouard2011; @Northup2014] is interesting from fundamental and practical viewpoints. In particular, on-demand single-photon sources are of high interest due to their promising applications in quantum communication, quantum informatics, sensing, and other fields. In spite of several realisations in optics [@Kim1999; @Lounis2000; @Kurtsiefer2000; @Keller2004; @He2013], practical implementation of photon sources imposes a number of requirements, such as high photon generation and collection efficiencies and frequency tunability. Recently developed superconducting quantum systems provide a novel basis for the realisation of microwave (MW) photon sources with photons confined in resonator modes [@Houck2006; @Hofheinz2008; @Bozyigit2011; @Lang2011; @Lang2013; @Pechal2014]. However, all these circuits consist of two elements (the resonator and the quantum emitter system), and the generated photon frequency is fixed to the resonator frequency.
We propose and realise a different approach: a single-photon source based on a tuneable artificial atom coupled asymmetrically to two open-end transmission lines (1D half-spaces). The atom is excited from a weakly coupled control line ($c$) and emits a photon to a strongly coupled emission line ($e$). The photon freely propagates in the emission line and can be further processed using, for example, nonlinear circuit elements. Among the advantages of our circuit is its simplicity: it consists of a single element. ![[****]{} Single-photon source. [(**a)**]{} Optical analogue of the source. [(**b)**]{} Mechanism of single-photon generation. The reemitted single photon (red) is decoupled from the incident $\pi$-pulse (blue), which prepares state $|1\rangle$ in the atom. [(**c)**]{} The optical micrograph of the device. The artificial atom is in the middle and the thin long metallic line from the atomic loop forms capacitances between the atom and the control/emission transmission lines. The left and right panels show the transition from the central parts of the lines to the usual coplanar ones. [(**d)**]{} Equivalent electrical circuit of the photon source.[]{data-label="devicelayout"}](Fig1.pdf) **Operation principles and device description.** An optical analogue of the proposed single-photon source consists of a two-level atom situated near a tiny hole (much smaller than the wavelength) in a non-transparent screen (Fig. \[devicelayout\]a). The atom is slightly shifted towards the right-hand-side space, defining an asymmetric coupling to the half-spaces. By applying powerful light from the left side, the atom can be excited by evanescent waves, which cannot propagate in the right-hand-side space due to their rapid decay. On the other hand, the excited atom emits photons into the right-hand-side space (Fig. \[devicelayout\]b).
In practice, the presented layout is difficult to build using natural atoms and, even if one succeeds, another problem must be solved: the low collection efficiency of emitted photons in the 3D space. These problems can be easily avoided by using on-chip superconducting quantum circuits coupled to 1D transmission lines [@Astafiev2010; @Abdumalikov2011; @Hoi2012]. Fig. \[devicelayout\]c shows a circuit with an artificial atom coupled asymmetrically to a pair of open-end coplanar transmission lines (1D half-spaces), each with $Z$ = 50 $\Omega$ impedance. The coupling capacitances $C_c$ and $C_e$ are between the artificial atom and the control and emission lines, respectively (shown on the equivalent circuit in Fig. \[devicelayout\]d). The capacitances can be approximated as point-like objects because their sizes are much smaller than the wavelength of the radiation ($\sim$ 1 cm). Note also that the transmission lines in the centre of our device (about 240 $\mu$m for each line) slightly differ from 50 $\Omega$ due to the shifted-down ground plane; however, this can be ignored because these segments are also much shorter than the wavelength. A MW pulse is applied from the control line, exciting the atom, and then the atom emits a photon mainly to the emission line due to the asymmetric coupling: $C_e / C_c \approx$ 5. The following are intrinsic features of the device: (i) the two lines are well isolated from each other, so that the excitation pulse does not leak from the control line to the emission line; (ii) due to the strong asymmetry, the excited atom emits a photon with up to $1 - (C_c/C_e)^2$ probability; (iii) the photon is confined in the 1D transmission line and can be easily delivered to other circuit elements through the line. The artificial atom schematically shown in Fig. \[devicelayout\]d is a controllable two-level system based on a tuneable gap flux qubit [@Mooij1999; @Wal2000; @Paauw2009; @Zhu2010], that is coupled to two Nb coplanar lines.
The atom is fabricated by Al/AlOx/Al shadow evaporation techniques. It contains two identical junctions in series implemented in the loop together with a dc-SQUID (also called an $\alpha$-loop), shown in the bottom part of the device in Fig. 1d. Here $\alpha\approx0.7$ specifies the nominal ratio between the critical currents of the dc-SQUID junctions and those of the other two Josephson junctions in the loop. Magnetic flux is quantised in the loop: an integer number, $N$, of magnetic flux quanta, $\Phi_0$, can be trapped. At the magnetic fields where the induced magnetic flux in the loop is equal to $\Phi = \Phi_0 (N+1/2)$, two adjacent flux states $|0\rangle$ and $|1\rangle$ with $N$ and $N+1$ flux quanta, corresponding to oppositely circulating persistent currents, are degenerate. The degeneracy is lifted by the finite flux tunnelling energy $\Delta_N$, which is determined by the effective dc-SQUID Josephson energy and varies between degeneracy points (it depends on $N$). The energy splitting of the atom $\hbar\omega_{10}=\sqrt{(2I_{p}\delta\Phi)^{2}+\Delta_N^{2}}$ is controlled by fine adjustment of the magnetic field $\delta\Phi$ in the vicinity of the degeneracy points, where $\delta\Phi=\Phi-(N+1/2)\Phi_0$ and $I_{p}$ is the persistent current in the main loop. (We neglect the weak dependence of $\Delta_N$ on $\delta\Phi$.) The capacitances of the circuit are estimated to be $C_c\approx 1$ fF and $C_e\approx 5$ fF. The effective impedance between the two lines due to the capacitive coupling is $Z_{C} = 1/i\omega (C_{c}+C_e)$, which is about 3 k$\Omega$ for $\omega/2\pi$ = 10 GHz, and the transmitted power fraction, as low as $|2 Z/Z_{C}|^2 \approx 10^{-3}$, ensures nearly perfect decoupling of the lines.\ \ **Device characterisation.** Our experiment is carried out in a dilution refrigerator at a base temperature of around $30\,$mK.
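The numbers quoted above can be cross-checked in a few lines of Python. This is an illustrative sketch: the constants are taken from the text, the function names are ours, and treating $\Delta_N$ as flux-independent is the same simplification made in the text.

```python
import numpy as np

# Constants and device parameters quoted in the text
h = 6.62607e-34           # Planck constant (J s)
Phi0 = 2.06783e-15        # magnetic flux quantum (Wb)
Delta = h * 6.728e9       # tunnelling energy at delta_Phi = 0
Ip = 24e-9                # persistent current (A)

def f10(dphi_mPhi0):
    """Transition frequency omega_10 / 2 pi in Hz for a flux bias given in
    units of mPhi_0, neglecting the weak flux dependence of Delta_N."""
    dphi = dphi_mPhi0 * 1e-3 * Phi0
    return np.sqrt((2.0 * Ip * dphi) ** 2 + Delta ** 2) / h

# Line decoupling estimate: |2 Z / Z_C|^2 for Z = 50 Ohm, C_c + C_e ~ 6 fF
Z = 50.0
omega = 2.0 * np.pi * 10e9
ZC = 1.0 / (omega * 6e-15)        # |Z_C| ~ 2.7 kOhm ("about 3 kOhm")
leak = (2.0 * Z / ZC) ** 2        # ~1.4e-3: nearly perfect line decoupling
```

At $\delta\Phi = 0$ the sketch reproduces the 6.728 GHz minimum, and the decoupling figure comes out at the $10^{-3}$ level stated above.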
We first characterise our device by measuring the transmission coefficient $t_{ce}$ from the control line to the emission line using a vector network analyser (VNA) and the reflection coefficient $r_e$ from the emission line. Fig. \[transmissionspectrum\]a shows a 2D plot of the normalised transmission amplitude $|t_{ce}/t_{0}|$ in the frequency range 6.5 – 9.1 GHz with the magnetic flux bias $\delta\Phi$ from -50 to 50 m$\Phi_0$ around the energy minimum, where $t_0$ is the maximal transmission amplitude. The transmission is suppressed everywhere except along the narrow line that corresponds to the expected atomic resonance at $\omega_{10}$ and is a result of the photon emission from the continuously driven atom. The spectroscopic curve is slightly asymmetric with respect to $\delta\Phi = 0$ due to the weak dependence of $\Delta_N$ on $\delta\Phi$. From the spectroscopy line we deduce the parameters of the two-level system: the tunnelling energy is $\Delta = \min(\hbar\omega_{10}) = h\times6.728\,$GHz at $\delta\Phi=0$ and the persistent current in the loop is $I_p \approx $ 24 nA. To evaluate the coupling of our atom to the emission line, we also measure the reflection at $\delta\Phi=0$ with different probing powers from -149 to -125 dBm. Next, Fig. \[transmissionspectrum\]b shows the reflection coefficient $r_e$ mapped on a Smith chart, measured in the case of atom excitation from the emission line (opposite to the case of source operation). The curve changes its form from circular to oval, which reflects the transition from the linear weak-driving regime to the nonlinear strong-driving regime of the two-level system [@Astafiev2010]. ![image](Fig2.pdf) We next derive the dynamics of the point-like atom (the loop size $\sim10\,\rm{\mu m}$ is much smaller than the wavelength $\lambda\sim1\,$cm) located at $x = 0$ and coupled to the 1D open space via an electrical dipole. We also take into account the fact that $\omega Z C_{c,e} \ll 1$.
The atom is driven by the oscillating voltage at the frequency $\omega$ of the incident wave $V_0(x,t) = V_{0}e^{-i\omega t+ikx}$ in the control line and the resulting driving amplitude of $2 V_0 \cos\omega t = {\rm{Re}}[V_0 e^{-i\omega t+ikx} + V_0 e^{-i\omega t-ikx}] |_{x=0}$ is the sum of the incident and reflected waves. The Hamiltonian of the atom in the rotating-wave approximation is $H=-(\hbar\delta\omega\sigma_{z}+\hbar\Omega\sigma_{x})/2$, where $\delta\omega=\omega-\omega_{10}$ and $\hbar\Omega = - 2V_0 C_c \nu_a$ with the electric dipole moment of the atom $\nu_a$ (between $C_c$ and $C_e$). The atomic voltage creation/annihilation operator is $\hat{\nu}^\pm = \nu_a\sigma^\pm$, where $\sigma^{\pm} = (\sigma_x \mp i\sigma_y)/2$. The driven atom generates voltage amplitudes of $V_{c,e}(t)/2 = i\omega Z C_{c,e} \nu_a \langle \sigma^- \rangle e^{-i\omega t}$ in the control ($x < 0$) and emission ($x > 0$) lines. Substituting the relaxation rates $\Gamma_1^{c,e} = S_V(\omega) (C_{c,e} \nu_a)^2/\hbar^2$ due to voltage quantum noise ($S_V(\omega) = 2\hbar\omega Z$) from the line impedance $Z$ in each line, we obtain \[coherentwaves\] $$\begin{aligned} &x < 0: &V_{c}(x,t)=i\frac{\hbar\Gamma_1^{c}}{C_{c} \nu_{a}}\langle\sigma^{-}\rangle e^{-i\omega t - ikx}, \\ &x > 0: &V_{e}(x,t)=i\frac{\hbar\Gamma_1^{e}}{C_{e} \nu_{a}}\langle\sigma^{-}\rangle e^{-i\omega t + ikx}.\end{aligned}$$ In the ideal case of suppressed pure dephasing ($\gamma = 0$) and in the absence of nonradiative decay $\Gamma_1^{nr} = 0$, the power ratio between the control and emission lines generated by the atom under resonance is $|V_c(0,t)/V_e(0,t)|^2 = C_c^2/C_e^2 \approx 0.04$, which means that up to 96% of the power generated by the atom can be emitted into the emission line. This allows us to measure the spectroscopy curve shown in Fig. \[transmissionspectrum\]a. 
To find $\langle \sigma^- \rangle$ under continuous driving, we solve the master equation by considering the total relaxation rate $\Gamma_1 = \Gamma_1^c + \Gamma_1^e + \Gamma_1^{nr}$, where $\Gamma_1^{nr}$ is the nonradiative relaxation rate (for a photon absorbed by the environment). Here, the dephasing rate is $\Gamma_2 = \Gamma_1/2+\gamma$, where $\gamma$ is the pure dephasing rate. The solution is $\langle \sigma^- \rangle = -i\frac{\Omega}{2\Gamma_2}\frac{1+i\delta\omega/\Gamma_2}{1+(\delta\omega/\Gamma_2)^2+\Omega^2/\Gamma_1\Gamma_2}$. The reflection in the control line and the transmission coefficient from the control line to the emission line are $r_{c}=1+V_c(0,t)/V_0(0,t)$ and $t_{ce}=V_e(0,t)/V_0(0,t)$, respectively. In the weak-driving limit ($\Omega \ll (\Gamma_1, \Gamma_2)$), \[rt\] $$\begin{aligned} &r_c& = 1-2\frac{\Gamma_1^c}{2\Gamma_2}\frac{1}{1-i\delta\omega/\Gamma_2}\label{r}, \\ &t_{ce}& = -2\frac{\Gamma_1^e}{2\Gamma_2}\frac{C_c}{C_e}\frac{1}{1-i\delta\omega/\Gamma_2}\label{t}\end{aligned}$$ are circular plots on the Smith chart. Similarly to Eq. (\[r\]), we can write down the following expression for the reflection in the emission line $$\begin{aligned} &r_e& = 1-2\frac{\Gamma_1^e}{2\Gamma_2}\frac{1}{1-i\delta\omega/\Gamma_2},\label{re}\end{aligned}$$ with $\Gamma_1^c$ replaced by $\Gamma_1^e$. We will further use this expression to characterise the coupling strength and efficiency of our device. On the other hand, the excited atom emits an instantaneous power proportional to the atomic population $P_1(t)$ [@Abdumalikov2011] that can be straightforwardly expressed as $$\label{power emission} W_{c,e}(t)=\hbar\omega\Gamma_1^{c,e}P_1(t),$$ where $P_1 = (1-\langle\sigma_z\rangle)/2$. If the atom is prepared in the excited state $|1\rangle$ at $t = 0$, the probability decays as $P_1(t) = \exp(-\Gamma_1 t)$.
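In the weak-driving limit, $r_e$ traces a circle on the complex plane as $\delta\omega$ is swept, and its radius equals $\Gamma_1^e/2\Gamma_2$; this is what lets the coupling strength be read off a Smith chart. A minimal Python sketch (the rate values below are illustrative, not the measured ones, and are chosen so that the radius is 0.75):

```python
import numpy as np

def r_e(dw, G1e, G2):
    """Weak-driving reflection in the emission line, Eq. (re)."""
    return 1.0 - (G1e / G2) / (1.0 - 1j * dw / G2)

# Illustrative rates chosen so that G1e / (2 G2) = 0.75
G1e = 2.0 * np.pi * 12.5e6
G2 = G1e / 1.5

dw = 2.0 * np.pi * np.linspace(-40e6, 40e6, 4001)
curve = r_e(dw, G1e, G2)

# On resonance r_e = 1 - G1e/G2 (the leftmost point); far off resonance
# r_e -> 1, so the circle radius -- the efficiency bound G1e/(2 G2) -- is:
radius = (1.0 - curve.real.min()) / 2.0
```

The same radius extraction applied to the measured $r_e$ circles gives the efficiency bound quoted in the text.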
Also, the efficiency of the photon emission into the emission line is $\Gamma_1^e/\Gamma_1$, which again can be as high as $C_e^2/(C_e^2 + C_c^2) \approx 0.96$ in the ideal case. The plot in Fig. \[transmissionspectrum\]b gives us a measure of the coupling strength of the atom to the emission line. Using Eq. (\[re\]), we estimate the efficiency of photon generation to be $\Gamma_1^e/\Gamma_1 \geq \Gamma_1^e/2\Gamma_2$ = 0.75.\ **Device operation.** Our photon source, based on the conversion of atomic excitation into a MW photon, requires efficient control of the quantum states. Fig. \[emission\]a shows the measured quantum oscillations. We monitor the coherent emission from the atom into the emission line with the VNA when a train of identical excitation MW pulses, each of length $\Delta t$ with period $T$ = 100 ns, is applied from the control line. The amplitude of the emission oscillates with $\Delta t$. The maxima and minima of the oscillations correspond to the extremal values of $\langle\sigma^\pm\rangle$, reached when the atom is in maximally superposed states with 50% population. We also measure incoherent emission spectra using a spectrum analyser (SA) as a function of $\Delta t$. In Fig. \[emission\]b, one can see the central narrow peak (yellow colour) corresponding to the coherent emission ($\propto\langle\sigma^\pm\rangle$) and the wider (incoherent) emission peak ($\propto\langle\sigma_z\rangle$). The two peaks oscillate in amplitude with a $\pi/2$ shift with respect to each other, as expected from the dynamics of $\langle\sigma^\pm\rangle$ and $\langle\sigma_z\rangle$ [@Abdumalikov2011]. For single-photon-source operation, we tune the pulse length to maximise the incoherent emission (this defines the $\pi$-pulse, of length $\Delta t_\pi$ = 6.5 ns), so that a single photon is emitted from the atom's excited state in each pulse period $T$.
The average emission peak excited by the $\pi$-pulses with repetition time $T = 100$ ns, which is much greater than the atom relaxation time, is shown in Fig. \[emission\]c. Using a Lorentzian fit, we obtain FWHM $\Delta\omega/2\pi \approx$ 12.5 MHz, which is equal to the relaxation rate $\Gamma_1$ when $\gamma = 0$. ![image](Fig3.pdf) Next, we demonstrate the operation of the photon source in the time domain. We apply a train of excitation $\pi$-pulses and digitise the amplified outcoming signal from the source using a fast analog-digital converter (ADC) with a 4 ns step. To cancel the background noise from the amplifier, we also take idle traces (without applying pulses). The on-off pulse signals are squared and subtracted. Fig. \[emission\]d demonstrates the result of $5.12\times10^8$ averages. The measured peaks coincide well with the simulated ones with decay $\Gamma_1$ and a 30 MHz bandwidth set by the digital filter during data acquisition. Finally, we evaluate the efficiency of our source over a wide frequency range by tuning the emission frequency $\omega_{10}$ controlled by $\delta\Phi$. First, we would like to point out that in our flux-qubit-based atom, pure dephasing ($\gamma \propto I_p^2$) is expected to be strongly suppressed due to the low persistent current $I_p$, which is an order of magnitude lower than in the conventional design [@Wal2000; @Paauw2009]. Therefore, the pure dephasing should not affect the efficiency significantly, even when we detune the energy from its minimum. We characterise the coupling strength using Eq. (\[re\]) by measuring the circle radius of $r_e$ on Smith charts similar to that in Fig. \[transmissionspectrum\]b for different $\delta\Phi$. Fig. \[efficiency\] shows the derived efficiency as a function of the generated single-photon frequency. We obtain more than 50% efficiency almost everywhere over the range of frequencies from 6.7 to 9.1 GHz, except at one point around $7.25\,$GHz.
The efficiency can be affected by non-negligible pure dephasing $\gamma$ and/or non-radiative relaxation $\Gamma_1^{nr}$.\ ![[****]{} Extracted efficiency of the photon source versus different transition frequencies of the two-level atom. []{data-label="efficiency"}](Fig4.pdf) **Discussion**\ In conclusion, we demonstrated an on-chip tuneable on-demand single-microwave-photon source operating with high efficiency over a wide range. The source is expected to be useful for applications including quantum communication, quantum information processing, and sensing. [99]{} Kimble, H. J. The quantum internet. [*Nature*]{} [**453**]{}, 1023 (2008). Duan, L. M. $\&$ Monroe, C. Quantum networks with trapped ions. [*Rev. Mod. Phys.*]{} [**82**]{}, 1209 (2010). Sangouard, N. [*et al.*]{} Quantum repeaters based on atomic ensembles and linear optics. [*Rev. Mod. Phys.*]{} [**83**]{}, 33 (2011). Northup, T. E. $\&$ Blatt, R. Quantum information transfer using photons. [*Nat. Photonics.*]{} [**8**]{}, 356 (2014). Kim, J. [*et al.*]{} A single-photon turnstile device. [*Nature*]{} [**397**]{}, 500 (1999). Lounis, B. $\&$ Moerner, W. E. Single photons on demand from a single molecule at room temperature. [*Nature*]{} [**407**]{}, 491 (2000). Kurtsiefer, C. [*et al.*]{} Stable solid-state source of single photons. [*Phys. Rev. Lett.*]{} [**85**]{}, 290 (2000). Keller, M. [*et al.*]{} Continuous generation of single photons with controlled waveform in an ion-trap cavity system. [*Nature*]{} [**431**]{}, 1075 (2004). He, Y. M. [*et al.*]{} On-demand semiconductor single-photon source with near-unity indistinguishability. [*Nat. Nanotechnol.*]{} [**8**]{}, 213 (2013). Houck, A. A. [*et al.*]{} Generating single microwave photons in a circuit. [*Nature*]{} [**449**]{}, 328 (2006). Hofheinz, M. [*et al.*]{} Generation of Fock states in a superconducting quantum circuit. [*Nature*]{} [**454**]{}, 310 (2008). Bozyigit, D. 
[*et al.*]{} Antibunching of microwave-frequency photons observed in correlation measurements using linear detectors. [*Nat. Phys.*]{} [**7**]{}, 154 (2011). Lang, C. [*et al.*]{} Observation of resonant photon blockade at microwave frequencies using correlation function measurements. [*Phys. Rev. Lett.*]{} [**106**]{}, 243601 (2011). Lang, C. [*et al.*]{} Correlations, indistinguishability and entanglement in Hong-Ou-Mandel experiments at microwave frequencies. [*Nat. Phys.*]{} [**9**]{}, 345 (2013). Pechal, M. [*et al.*]{} Microwave-controlled generation of shaped single photons in circuit quantum electrodynamics. [*Phys. Rev. X*]{} [**4**]{}, 041010 (2014). Astafiev, O. [*et al.*]{} Resonance fluorescence of a single artificial atom. [*Science*]{} [**327**]{}, 840 (2010). Abdumalikov, A. A. [*et al.*]{} Dynamics of coherent and incoherent emission from an artificial atom in a 1D space. [*Phys. Rev. Lett.*]{} [**107**]{}, 043604 (2011). Hoi, I. C. [*et al.*]{} Generation of nonclassical microwave states using an artificial atom in 1D open space. [*Phys. Rev. Lett.*]{} [**108**]{}, 263601 (2012). Mooij, J. E. [*et al.*]{} Josephson persistent-current qubit. [*Science*]{} [**285**]{}, 1036 (1999). van der Wal, C. H. [*et al.*]{} Quantum superposition of macroscopic persistent-current states. [*Science*]{} [**290**]{}, 773 (2000). Paauw, F. G. [*et al.*]{} Tuning the gap of a superconducting flux qubit. [*Phys. Rev. Lett.*]{} [**102**]{}, 090501 (2009). Zhu, X. B. [*et al.*]{} Coherent operation of a gap-tunable flux qubit. [*Appl. Phys. Lett.*]{} [**97**]{}, 102503 (2010). **Acknowledgements**\ Peng would like to thank Z. R. Lin for help and Y. Kitagawa for preparing Nb wafers. This work was carried out within the project EXL03 MICROPHOTON of the European Metrology Research Programme (EMRP). EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. 
This work was supported by the ImPACT Program of the Council for Science, Technology and Innovation (Cabinet Office, Government of Japan). **Author contributions**\ Z.H.P. and O.V.A. designed and carried out the experiment and analysed the data. Z.H.P. and O.V.A. wrote the manuscript, and all the authors discussed the data and commented on the manuscript. **Additional information**\ **Competing financial interests:** The authors declare no competing financial interests.
--- abstract: 'The spin transfer torque in an all-metal dual spin valve, in which two pinned ferromagnetic layers with antiparallel magnetizations lie on the two sides of a free ferromagnetic layer with two thin nonmagnetic spacers in between, is studied in the ballistic regime. It is argued that, similar to the results in the diffusion regime, the spin transfer torque is dramatically enhanced in comparison to that in a conventional spin valve although no spin accumulation exists at the magnetic-nonmagnetic interfaces. Within Slonczewski’s approach, an analytical expression of the torque on the free magnetic layer is obtained, which may serve as a theoretical model for the micromagnetic simulation of the spin dynamics in dual spin valves. Depending on the orientation of the free layer and the degree of electron polarization, the spin transfer torque enhancement can be as large as a factor of tens. The general cases, in which the transmission and reflection probabilities of the free layer differ from zero or one, are also numerically calculated.' author: - 'P. Yan' - 'Z. Z. Sun' - 'X. R. Wang' title: Spin transfer torque enhancement in dual spin valve in the ballistic regime --- INTRODUCTION ============ Current-induced magnetization reversal of magnetic multilayers has attracted much research interest due to its rich physics and potential applications in spintronic devices. [@Katine; @Grollier; @Xia; @Urazhdin1; @Tsoi; @Chen; @Zhou; @Su] The spin valve, consisting of two ferromagnetic layers and one nonmagnetic spacer in between, is one such multilayer structure. In a spin valve, one of the magnetic layers, acting as a spin polarizer, is thick so that conducting electrons are polarized after passing through it. The polarized conducting electrons transfer their spin angular momenta to the local magnetization of the thinner free magnetic layer, resulting in the spin transfer torque (STT) effect first proposed by Slonczewski [@Slon] and Berger.
[@Berger1] Although the STT has many advantages over a magnetic field in manipulating the magnetization state, [@xrw0; @xrw1] a large current density is needed to achieve a technologically useful magnetization switching speed, [@Yamaguchi; @Parkin] and the associated Joule heating could affect device performance. Therefore, reducing the current density becomes a challenging issue from the spintronics application viewpoint. [@xrw2] Many efforts have been devoted to the issue, including using optimized time-dependent current pulses, [@xrw3] pure spin currents, [@Lu] and thermal activation. [@Hatami; @Xia1] One direct approach is to increase the magnitude of the STT under a given current through unique geometry design. In 2003, Berger [@Berger2] proposed a novel magnetic multilayer architecture called the dual spin valve (DSV), where the free magnetic layer is sandwiched between two thicker pinned magnetic layers with opposite magnetizations. It is predicted that, for a given current in the diffusion regime, the STT applied on the free magnetic layer should be much larger than that in a traditional spin valve. [@Berger2] The argument is that spins accumulate at both non-magnetic/magnetic interfaces of the free layer, and the STT is proportional to the spin accumulation. [@Berger2] However, it is not clear whether this STT enhancement can occur in a DSV in the ballistic regime without spin accumulations. Moreover, the analytical formalism for the STT in a DSV remains an open problem. [@You; @Balaz] These are the focuses of the present paper. A full-quantum description of the STT, valid when both the mean-free path and the spin-flip relaxation length are larger than the thickness of the spacers, is presented. The STT, averaged over electron phases, is obtained analytically within Slonczewski’s semiclassical approach [@Slon] when all magnetic layers are perfect polarizers.
It is found that the STT in a DSV, depending on the orientation of the free layer and the degree of electron polarization, can be enhanced by a factor of tens in comparison with that in a spin valve. The general cases of arbitrary transmission and reflection coefficients of the free layer are also numerically calculated. This paper is organized as follows: In Sec. II, a physical picture of the STT origin in both spin valve and DSV is presented. Sec. III is the theoretical model and formulation of the electron transport through a DSV. The results, including the analytical expression of the STT on the free layer within the Slonczewski approach, are given in Sec. IV. Sec. V is the summary. PHYSICAL PICTURE ================ ![(Color online) (a) Illustration of STT generated in a spin valve. The circles with arrows illustrate the electron flow with spin polarization direction. (b) Schematic explanation of STT enhancement in a DSV. $V_{b}=E_{F0}-E_{F6}$ is the applied bias. FM$_{A,B,C}$ are ferromagnetic layers and NM$_{A,B}$ are nonmagnetic layers. (c) Wave functions at the interfaces of regions ($0-6$) in the DSV and the coordinate orientation.](comparison.eps){width="8.cm"} It is useful to first present a physical picture of the STT origin in a spin valve and, in particular, its enhancement in a DSV. Consider a spin valve, which is schematically shown in Fig. 1(a), consisting of a pinned ferromagnetic layer FM$_{A}$ and a free ferromagnetic layer FM$_{B}$ separated by a nonmagnetic spacer NM. The magnetizations of FM$_A$ and FM$_B$ are represented by the unit vectors $\mathbf{m}_{A}$ and $\mathbf{m}_{B}$ multiplied by their saturation magnetizations. For simplicity, we assume that both ferromagnetic layers are perfect spin filters, such that spins aligned parallel to the layer magnetization transmit completely through the layer, while antiparallel spins are totally reflected. In this ideal case, a closed analytical solution of the STT can be obtained, which will be shown later.
Consider electrons flowing from the left to the right in Fig. 1(a). The right-going electrons are initially polarized along the $\mathbf{m}_A$ direction after passing through FM$_A$. They retain their spin polarization when they impinge on FM$_B$, as long as the spacer thickness is much shorter than the spin-diffusion length, which is usually the case in nanoscale spin valves. Because the polarization direction of FM$_B$ is noncollinear with that of FM$_A$, an electron polarized along $\mathbf{m}_A$ is a superposition of two states along $\mathbf{m}_B$ and $-\mathbf{m}_B$, so that the component along $\mathbf{m}_B$ can transmit through FM$_B$ while that along $-\mathbf{m}_B$ is totally reflected since FM$_B$ is a perfect polarizer. Thus, there is a net angular momentum transfer, perpendicular to $\mathbf{m}_B$, from the impinging electrons to FM$_B$, resulting in a torque on FM$_B$ that tends to align its magnetization toward $\mathbf{m}_A$, as shown by the blue arrow in Fig. 1(a). This is the origin of the STT in a spin valve. It should be pointed out that the subsequent multiple reflections of electrons within the NM spacer by the two FM/NM interfaces will reduce the STT, since the electrons (dashed symbols in Fig. 1(a)) reflected from FM$_B$ to FM$_A$ and back to FM$_B$ exert a torque along $-\mathbf{m}_A$. However, the net torque will not be zero, since the reflected electrons have a smaller flux than the originally injected electrons. Let us now examine the spin transfer in the DSV shown in Fig. 1(b). On top of a usual spin valve (Fig. 1(a)), an additional pinned ferromagnetic layer FM$_{C}$, with its magnetization aligned antiparallel to that of FM$_{A}$, i.e., $\mathbf{m}_{C}=-\mathbf{m}_{A}$, is added so that the free layer FM$_{B}$ is now sandwiched between FM$_{A}$ and FM$_{C}$, separated by two nonmagnetic spacers NM$_{A}$ and NM$_{B}$, respectively.
Similar to the case of a spin valve, right-going electrons transmitting through FM$_{A}$ will exert an STT on FM$_{B}$ that tends to align $\mathbf{m}_{B}$ with $\mathbf{m}_{A}$, as shown by the blue arrow in Fig. 1(b). After the electrons transmit through FM$_{B}$, most of them will be reflected by FM$_{C}$ and then impinge again on FM$_{B}$, as shown by the red-dashed symbols in Fig. 1(b). Thus, they will exert another STT on FM$_{B}$ along the $-\mathbf{m}_{C}=\mathbf{m}_{A}$ direction, resulting in the STT enhancement. Multiple reflections in region 2 (Fig. 1(c)) tend to reduce the torque but will not cancel it totally. In the following sections we verify this physical picture through a full quantum-mechanical calculation. MODEL AND FORMULATION ===================== The theory of single charge and spin transport in magnetic multilayers is well developed, with approaches ranging from the classical Valet-Fert theory in the diffusive regime [@Valet] and the matrix Boltzmann equation formalism [@Stiles2] to full quantum-mechanical treatments. [@Slon; @Stiles1; @Waldron; @Brataas1; @Krstajic; @Waintal; @Xiao] In the present paper, we adopt a full quantum-mechanical description called the scattering matrix method [@Krstajic; @Waintal; @Xiao] that is valid for ballistic transmission. We assume that the interfaces are flat and clean and that all spin-flip processes are negligible, so that the momentum $\mathbf{k}$ is a good quantum number in each layer. The wave function at the interfaces of the regions labeled by $n=1-6$, as shown in Fig. 1(c), can be written as a two-component spinor multiplied by a spatial plane wave, $$\psi _{n,\alpha }\left( x\right) =\binom{\chi _{n,\alpha ,\uparrow }}{\chi _{n,\alpha ,\downarrow }}e^{ix\left( k_{x}\right) _{n,\alpha }},$$where $\alpha$ denotes the propagation direction, $\alpha =L$ for leftward and $\alpha =R$ for rightward. $k_{x}$ is the $x$-component of the wave vector, and $\chi _{n,\alpha ,\uparrow (\downarrow )}$ is the spin-up (-down) probability amplitude. The $z$-axis is along $\mathbf{m}_{A}$.
$\mathbf{m}_{B}$ is specified by a polar angle $\theta$ and an azimuthal angle $\phi$. Incoming and outgoing spinor states in each region are connected to each other by a scattering matrix. [@Krstajic; @Waintal; @Xiao] In region $n$ $(=1,2,3,4,5),$ $\psi _{n,L}$ and $\psi _{n+1,R}$ are outgoing spinors while $\psi _{n,R}$ and $\psi _{n+1,L}$ are incoming spinors. They are related to each other by a scattering matrix $\hat{S}_{n}$, $$\left( \begin{array}{c} \psi _{n,L} \\ \psi _{n+1,R} \end{array} \right) =\hat{S}_{n}\left( \begin{array}{c} \psi _{n,R} \\ \psi _{n+1,L} \end{array} \right) .$$Note that $\psi _{6,L}=0$ since electrons flow from the left reservoir to the right one. $\hat{S}_{n}$ is a $4\times 4$ matrix and can be expressed in terms of the transmission and reflection coefficients of each scattering region, $$\hat{S}_{n}=\left( \begin{array}{cc} \hat{r}_{n} & \hat{t}_{n} \\ \hat{t}_{n} & \hat{r}_{n} \end{array} \right) ,$$where $\hat{t}_{n}\left( \hat{r}_{n}\right)$ with $n=1,2,3,4,5$ are $2\times 2$ transmission (reflection) matrices for layers FM$_{A}$, NM$_{A}$, FM$_{B}$, NM$_{B}$, and FM$_{C}$, respectively. We will first treat the pinned ferromagnetic layers FM$_A$ and FM$_C$ as perfect polarizers. Hence $\hat{t}_1,$ $\hat{r}_1,$ $\hat{t}_5,$ and $\hat{r}_5$ take the following forms $$\hat{t}_1 = \hat{r}_5= \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) ,\text{ }\hat{r}_1=\hat{t}_5= \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) .$$ Since there is no scattering in the nonmagnetic layers NM$_A$ and NM$_B$, and the propagation of electrons in these layers only accumulates dynamical phases, we have $$\hat{t}_{2}=\exp \left( i\varphi _{A}\right) \hat{I},\text{ }\hat{t}_{4}=\exp \left( i\varphi _{B}\right) \hat{I},\text{ }\hat{r}_{2}=\hat{r}_{4}=0,$$where $\varphi _{A}$ and $\varphi _{B}$ are the corresponding phase shifts and $\hat{I}$ is the $2\times 2$ unit matrix.
The scattering matrix for FM$_{B}$ can be expressed in terms of the angles $\theta$ and $\phi$ as $$\begin{aligned} \hat{t}_{3} &=&\hat{R}\left( \theta ,\phi \right) \mathbf{t}\hat{R}^{-1}\left( \theta ,\phi \right) , \notag \\ \hat{r}_{3} &=&\hat{R}\left( \theta ,\phi \right) \mathbf{r}\hat{R}^{-1}\left( \theta ,\phi \right) ,\end{aligned}$$where $\hat{R}\left( \theta ,\phi \right)$ is the rotation that brings $\hat z$ into $\mathbf{m}_{B}$, $$\hat{R}=e^{-\frac{i\phi }{2}\sigma _{z}}e^{-\frac{i\theta }{2}\sigma _{y}}=\left( \begin{array}{cc} e^{-\frac{i\phi }{2}}\cos \frac{\theta }{2} & -e^{-\frac{i\phi }{2}}\sin \frac{\theta }{2} \\ e^{\frac{i\phi }{2}}\sin \frac{\theta }{2} & e^{\frac{i\phi }{2}}\cos \frac{\theta }{2} \end{array} \right) .$$ $\mathbf{t}$ and $\mathbf{r}$ are the transmission and reflection matrices when $\mathbf{m}_B$ is chosen as the quantization axis, $$\mathbf{t}=\left( \begin{array}{cc} t_{u} & 0 \\ 0 & t_{d} \end{array} \right) ,\text{ }\mathbf{r}=\left( \begin{array}{cc} r_{u} & 0 \\ 0 & r_{d} \end{array} \right) ,$$where $t_{u},$ $t_{d},$ $r_{u},$ and $r_{d}$ are transmission and reflection parameters. Subscripts $u$ and $d$ stand for spin-up (majority) and spin-down (minority), respectively. These parameters are complex numbers in general. $t_u=1,$ $t_d=0$ and $r_u=0,$ $r_d=1$ if FM$_B$ is a perfect polarizer. To find the STT on the free layer FM$_{B}$, we need to obtain the scattering states at the two interfaces of FM$_{B}$.
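As a cross-check of this construction, the five layer scattering matrices can be composed numerically to obtain the total transmission through the DSV. The sketch below is ours, not from the paper: it composes the layers with a Redheffer star product (equivalent to solving the linear relations between the $\psi_{n,\alpha}$), specializes all three magnetic layers to perfect polarizers, and sets the azimuth $\phi=0$, which does not affect $T_e$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)

def star(A, B):
    """Compose two scattering regions (Redheffer star product).
    Each region is a dict of 2x2 blocks: r (left reflection),
    rp (right reflection), t (left-to-right), tp (right-to-left)."""
    inv1 = np.linalg.inv(I2 - A["rp"] @ B["r"])  # sums the multiple reflections
    inv2 = np.linalg.inv(I2 - B["r"] @ A["rp"])
    return {"t":  B["t"] @ inv1 @ A["t"],
            "tp": A["tp"] @ inv2 @ B["tp"],
            "r":  A["r"] + A["tp"] @ B["r"] @ inv1 @ A["t"],
            "rp": B["rp"] + B["t"] @ A["rp"] @ inv2 @ B["tp"]}

def layer(t, r):
    # each individual layer has the symmetric form S_n = [[r, t], [t, r]]
    return {"t": t, "tp": t, "r": r, "rp": r}

def dsv_Te(theta, phi_A, phi_B):
    """Transmission T_e of the DSV for one phase realization,
    with FM_A, FM_B, FM_C all perfect polarizers (t_u=1, t_d=0)."""
    up = np.diag([1, 0]).astype(complex)   # t_1 = r_5
    dn = np.diag([0, 1]).astype(complex)   # r_1 = t_5
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    R = np.array([[c, -s], [s, c]], dtype=complex)   # rotation with phi = 0
    t3, r3 = R @ up @ R.T.conj(), R @ dn @ R.T.conj()
    S = layer(up, dn)                                    # FM_A
    for L in (layer(np.exp(1j * phi_A) * I2, 0 * I2),    # NM_A
              layer(t3, r3),                             # FM_B
              layer(np.exp(1j * phi_B) * I2, 0 * I2),    # NM_B
              layer(dn, up)):                            # FM_C
        S = star(S, L)
    out = S["t"] @ np.array([1, 0], dtype=complex)       # spin-up incident
    return float(np.vdot(out, out).real)
```

The result agrees with the transmission coefficient $|Q^{\downarrow\uparrow}|^{2}/|D|^{2}$ obtained below once $Q^{\downarrow\uparrow}$ and $D$ are specialized to the perfect-polarizer parameters.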
The spin-dependent scattering wave functions can be expressed in terms of the incoming wave $\psi _{1,R}$ as $\psi _{n,\alpha }=\hat{P}_{n,\alpha }\psi _{1,R}$ $\left( n=3,4,6\right) ,$ where the matrices $\hat{P}_{n,\alpha }$ are given by $$\begin{aligned} \hat{P}_{3,R} &=&\Bigl [\left( \hat{I}-\hat{t}_{2}^{2}\hat{r}_{1}\hat{r}_{3}\right) -\hat{t}_{2}^{2}\hat{t}_{4}^{2}\hat{r}_{1}\hat{t}_{3}\hat{r}_{5}\Bigl (\hat{I}-\hat{t}_{4}^{2}\hat{r}_{3}\hat{r}_{5}\Bigr )^{-1}\hat{t}_{3}\Bigr ]^{-1} \notag \\ &&\hat{t}_{2}\hat{t}_{1}, \notag \\ \hat{P}_{3,L} &=&\hat{r}_{3}\hat{P}_{3,R}+\hat{t}_{4}\hat{t}_{3}\hat{r}_{5}\hat{Q}, \notag \\ \hat{P}_{4,R} &=&\hat{t}_{4}^{-1}\hat{Q},\text{ }\hat{P}_{4,L}=\hat{t}_{4}\hat{r}_{5}\hat{Q}, \notag \\ \hat{P}_{6,R} &=&\hat{t}_{5}\hat{Q},\end{aligned}$$with $$\begin{aligned} \hat{Q} &=&\Bigl [\left( \hat{I}-\hat{t}_{4}^{2}\hat{r}_{3}\hat{r}_{5}\right) -\hat{t}_{2}^{2}\hat{t}_{4}^{2}\hat{t}_{3}\Bigl (\hat{I}-\hat{t}_{2}^{2}\hat{r}_{1}\hat{r}_{3}\Bigr )^{-1}\hat{r}_{1}\hat{t}_{3}\hat{r}_{5}\Bigr ]^{-1} \notag \\ &&\hat{t}_{3}\Bigl (\hat{I}-\hat{t}_{2}^{2}\hat{r}_{1}\hat{r}_{3}\Bigr )^{-1}\hat{t}_{2}\hat{t}_{4}\hat{t}_{1}.\end{aligned}$$ After some algebra, we have $$\begin{aligned} &&\hat{Q}=\frac{\sqrt{z_{1}z_{2}}}{D\left( z_{1},z_{2}\right) }\left[ \begin{array}{cc} Q^{\uparrow \uparrow }\left( z_{1},z_{2}\right) & 0 \\ Q^{\downarrow \uparrow }\left( z_{1},z_{2}\right) & 0 \end{array} \right] , \label{Q} \\ &&\hat{P}_{3,R}=\frac{\sqrt{z_{1}}}{D\left( z_{1},z_{2}\right) }\left[ \begin{array}{cc} P_{3,R}^{\uparrow \uparrow }\left( z_{1},z_{2}\right) & 0 \\ P_{3,R}^{\downarrow \uparrow }\left( z_{1},z_{2}\right) & 0 \end{array} \right] , \label{P1} \\ &&\hat{P}_{3,L}=\frac{\sqrt{z_{1}}}{D\left( z_{1},z_{2}\right) }\left[ \begin{array}{cc} P_{3,L}^{\uparrow \uparrow }\left( z_{1},z_{2}\right) & 0 \\ P_{3,L}^{\downarrow \uparrow }\left( z_{1},z_{2}\right) & 0 \end{array} \right] , \label{P2} \\
&&\hat{P}_{4,R}=\frac{\sqrt{z_{1}}}{D\left( z_{1},z_{2}\right) }\left[ \begin{array}{cc} Q^{\uparrow \uparrow }\left( z_{1},z_{2}\right) & 0 \\ Q^{\downarrow \uparrow }\left( z_{1},z_{2}\right) & 0% \end{array}% \right] , \\ &&\hat{P}_{4,L}=\frac{z_{2}\sqrt{z_{1}}}{D\left( z_{1},z_{2}\right) }\left[ \begin{array}{cc} Q^{\uparrow \uparrow }\left( z_{1},z_{2}\right) & 0 \\ 0 & 0% \end{array}% \right] ,\end{aligned}$$ where $z_{1}=\exp \left( i2\varphi _{A}\right) ,\ z_{2}=\exp \left( i2\varphi _{B}\right) ,$ and $$\begin{aligned} &&Q^{\uparrow \uparrow }\left( z_{1},z_{2}\right) =t_{u}\cos ^{2}\frac{% \theta }{2}+t_{d}\sin ^{2}\frac{\theta }{2} \notag \\ &&-z_{1}\left( r_{d}t_{u}\cos ^{2}\frac{\theta }{2}+r_{u}t_{d}\sin ^{2}\frac{% \theta }{2}\right) , \\ &&Q^{\downarrow \uparrow }\left( z_{1},z_{2}\right) =\frac{1}{2}e^{i\phi }\sin \theta \Bigl [t_{u}-t_{d} \notag \\ &&+\left( z_{1}+z_{2}\right) \left( r_{u}t_{d}-r_{d}t_{u}\right) \\ &&+z_{1}z_{2}\Bigl (% t_{u}r_{d}^{2}+t_{d}t_{u}^{2}-t_{u}t_{d}^{2}-t_{d}r_{u}^{2}\Bigr )\Bigr ], \notag \\ &&D\left( z_{1},z_{2}\right) =1-z_{1}\left( r_{u}\sin ^{2}\frac{\theta }{2}% +r_{d}\cos ^{2}\frac{\theta }{2}\right) \notag \\ &&-z_{2}\left( r_{u}\cos ^{2}\frac{\theta }{2}+r_{d}\sin ^{2}\frac{\theta }{2% }\right) + \\ &&z_{1}z_{2}\Bigl [r_{u}r_{d}+\frac{\sin ^{2}\theta }{4}\Bigl (\left( r_{u}-r_{d}\right) ^{2}-\left( t_{u}-t_{d}\right) ^{2}\Bigr )\Bigr ], \notag \\ &&P_{3,R}^{\uparrow \uparrow }\left( z_{1},z_{2}\right) =D\left( z_{1},z_{2}\right) , \\ &&P_{3,R}^{\downarrow \uparrow }\left( z_{1},z_{2}\right) =\frac{1}{2}% z_{1}e^{i\phi }\sin \theta \Bigl [r_{u}-r_{d} \notag \\ &&+z_{2}\left( -r_{u}^{2}+r_{u}r_{d}+t_{u}^{2}-t_{u}t_{d}\right) \cos ^{2}% \frac{\theta }{2} \\ &&+z_{2}\left( r_{d}^{2}-r_{u}r_{d}-t_{d}^{2}+t_{u}t_{d}\right) \sin ^{2}% \frac{\theta }{2}\Bigr ], \notag \\ &&P_{3,L}^{\uparrow \uparrow }\left( z_{1},z_{2}\right) =z_{2}\left( t_{u}\cos ^{2}\frac{\theta }{2}+t_{d}\sin ^{2}\frac{\theta }{2}\right) ^{2} 
\notag \\ &&-z_{2}\left( r_{u}\cos ^{2}\frac{\theta }{2}+r_{d}\sin ^{2}\frac{\theta }{2}\right) ^{2} \notag \\ &&+r_{u}\cos ^{2}\frac{\theta }{2}+r_{d}\sin ^{2}\frac{\theta }{2} \\ &&+z_{1}z_{2}\Bigl [r_{u}\left( r_{d}^{2}-t_{d}^{2}\right) \sin ^{2}\frac{\theta }{2}+r_{d}\left( r_{u}^{2}-t_{u}^{2}\right) \cos ^{2}\frac{\theta }{2}\Bigr ], \notag \\ &&P_{3,L}^{\downarrow \uparrow }\left( z_{1},z_{2}\right) =\frac{P_{3,R}^{\downarrow \uparrow }\left( z_{1},z_{2}\right) }{z_{1}}.\end{aligned}$$ The notation $X^{\downarrow \uparrow }$ $(X=\hat{Q},\hat{P}_{3,R},\hat{P}_{3,L})$ refers to the transition amplitude from spin-up to spin-down states. RESULTS AND DISCUSSIONS ======================= Charge current -------------- An applied bias $V_{b}$, shown in Fig. 1(b), generates a charge current $J_{e}$ and a spatially dependent spin current $\mathbf{J}_{s}$ through the device. At zero temperature, the charge current reads $$J_{e}=\int dE\sum\limits_{\mathbf{q}}j_{e}\left( \mathbf{q}\right) ,$$ with the charge current density $$j_{e}\left( \mathbf{q}\right) =e\frac{\hslash k_{x}}{m}\psi _{6,R}^{\dag }\psi _{6,R},$$where $\mathbf{q}$ is the transverse wave vector with energy $E$, $k_{x}^{2}+\mathbf{q}^{2}=2mE/\hslash ^{2},$ and $e$, $m$, and $\hslash$ are the electron charge, the electron mass, and the reduced Planck constant, respectively. The charge current density can also be written as $$j_{e}=e\frac{\hslash k_{x}}{m}T_{e}\left( z_{1},z_{2}\right) , \label{current}$$with the transmission coefficient $$T_{e}\left( z_{1},z_{2}\right) =\frac{\left\vert Q^{\downarrow \uparrow }\left( z_{1},z_{2}\right) \right\vert ^{2}}{\left\vert D\left( z_{1},z_{2}\right) \right\vert ^{2}}. \label{Tc}$$ In the case where electrons propagate ballistically through NM$_A$ and NM$_B$, the phase shifts in the normal metals are given by $\varphi _A=k_xl_A$ and $\varphi _B=k_xl_B$, where $l_A$ and $l_B$ are the widths of NM$_A$ and NM$_B,$ respectively.
For sufficiently thick NM layers (much thicker than the electron Fermi wavelength but still thinner than the spin-diffusion length), $\varphi_A$ and $\varphi_B$ vary rapidly from state to state. Thus, when one sums up contributions from different electronic states (different $k_x$), it is justifiable to assume $\varphi_A$ and $\varphi_B$ to be random, [@Waintal] so that $z_1=\exp\left( i2\varphi_A\right)$ and $z_2=\exp\left(i2\varphi_B\right)$ are equally distributed on the unit circle of the complex plane. [@Krstajic; @Waintal] However, one should note that $z_1$ and $z_2$ are not independent under the ballistic assumption, since $z_2=\left( z_1\right)^p$ with $p=l_B/l_A.$ The average transmission coefficient is then $$\begin{aligned} \left\langle T_e\right\rangle &=&\frac{1}{\pi}\int_0^\pi T_e \left(\varphi_A,p\varphi_A\right) d\varphi_A \notag \\ &=&\frac{1}{2\pi i}\oint_C\frac{T_e\left( z_1,\left( z_1\right) ^p\right) }{z_1}dz_1, \label{complex}\end{aligned}$$where $C$ is the contour $\left\vert z_1\right\vert =1.$ The contour integral for $p=1$, corresponding to the symmetric DSV configuration, [@Balaz; @Gmitra] is $$\left\langle T_{e}\right\rangle =\sum_{l=1}^{s}\text{Res}\left( \frac{T_{e}\left( z_{1}\right) }{z_{1}};z_{1,l}\right) , \label{Res0}$$where $z_{1,l}$ is the $l$-th pole of the function $T_{e}\left( z_{1}\right) /z_{1}$ and $s$ is the total number of poles inside the unit circle. In the case where FM$_B$ is perfect, i.e., $t_u=1$ and $t_d=0$ ($r_u=0,$ and $r_d=1$), the function $T_e\left( z_1\right) /z_1$ has only a second-order pole $z_1=0$ inside the unit circle.
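The phase average can also be checked by direct numerical quadrature. The short sketch below is ours: it specializes $Q^{\downarrow\uparrow}$ and $D$ to the perfect-polarizer case ($t_u=1$, $t_d=0$, $r_u=0$, $r_d=1$) with $p=1$, averages $T_e$ over $\varphi_A$ with a midpoint rule (which avoids the removable $0/0$ point at $z_1=1$), and reproduces the closed-form result $\langle T_e\rangle=\sin^2\theta/2$ obtained from the residue at $z_1=0$.

```python
import numpy as np

def Te_perfect(theta, phi):
    """T_e of Eq. (Tc) specialized (by us) to t_u=1, t_d=0, r_u=0, r_d=1,
    with z1 = z2 = exp(2i*phi) for the symmetric DSV (p = 1)."""
    z = np.exp(2j * phi)
    # Q^{down,up} reduces to (sin(theta)/2) (1-z1)(1-z2) up to a phase,
    # and D reduces to 1 - z1 cos^2(theta/2) - z2 sin^2(theta/2)
    num = 0.25 * np.sin(theta)**2 * np.abs((1 - z)**2)**2
    den = np.abs(1 - z * np.cos(theta / 2)**2 - z * np.sin(theta / 2)**2)**2
    return num / den

def avg_Te(theta, n=4000):
    """<T_e> by midpoint-rule quadrature over phi_A in [0, pi]."""
    phi = (np.arange(n) + 0.5) * np.pi / n
    return Te_perfect(theta, phi).mean()
```

For any $\theta$, `avg_Te(theta)` agrees with $\sin^2\theta/2$ to quadrature accuracy.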
Thus, we can obtain the average transmission $$\left\langle T_{e}\right\rangle \left( t_{u}=1,\text{ }t_{d}=0\right) =\frac{\sin ^{2}\theta }{2}.$$ ![(Color online) Average transmission coefficient versus angle $\protect\theta$ for (a) $t_u=1$ and $t_d=0$; (b) $\left\vert t_u\right\vert^2=0.84$ and $\left\vert t_d\right\vert ^2=0.17$; (c) $\left\vert t_u\right\vert ^2=0.79$ and $\left\vert t_d\right\vert ^2=0.49$; (d) $\left\vert t_u\right\vert ^2=0.66$ and $\left\vert t_d\right\vert ^2=0.44$; and (e) $\left\vert t_u\right\vert ^2=0.73$ and $\left\vert t_d\right\vert ^2=0.54$.](Transmission.eps){width="8.0cm"} Figure 2 shows the average transmission coefficient versus angle $\theta$ for (a) perfect polarizers; (b) the (001) interface of Au/Fe; (c) the (001) interface of Cu/Co; (d) the (110) interface of Cu/Co; (e) the (111) interface of Cu/Co. The model parameters for (b-e) are obtained from Ref. , where they were extracted from first-principles calculations. All transmission and reflection amplitudes are assumed to be real. We find that the average total transmission approaches zero as $\theta$ goes to $0$ or $\pi$ even if $t_{d}$ is finite for the minority electrons, which is different from the result (Fig. 3 in Ref. ) in a traditional spin valve. The reasons are as follows. For $\theta =0$, all electrons are polarized along $\mathbf{m}_{A}$ after passing through FM$_{B}$. They will be totally reflected by FM$_{C}$ because $\mathbf{m}_{C}=-\mathbf{m}_{A}$, leading to zero electric current. On the other hand, for $\theta =\pi$, all electrons that transmit through FM$_{A}$ will be completely reflected by FM$_{B}$ if it is a perfect polarizer, because the electron spins are antiparallel to $\mathbf{m}_{B}$; for $t_{d}\neq 0$, the electrons passing through FM$_{B}$ will maintain their polarization along $\mathbf{m}_{A}$ and will be totally reflected by FM$_{C}$ because $\mathbf{m}_{C}=-\mathbf{m}_{A}$.
Hence, the average transmission in the DSV vanishes at $\theta =0$ or $\pi .$ The total charge current flowing through the DSV can be obtained by summing Eq.  over the transverse momentum $\mathbf{q}$. To find an analytical expression, we adopt the semiclassical Slonczewski approach. [@Slon] Within the Stoner description of magnetism, with $\Delta$ the exchange energy splitting of the two spin bands of FM$_B$, one can define two Fermi wave vectors $K_{+}$ and $K_-$ for majorities and minorities, $$K_+=\sqrt{2mE/\hslash^2},\text{ }K_-=\sqrt{2m\left( E-\Delta\right) /\hslash ^2}.$$For a ferromagnetic metal, we assume that the Fermi energy lies above the exchange potential, and that electrons in the NMs are ideally matched with the majority electrons in the FM, i.e., $k=K_{+}.$ The possible momentum states that contribute to the current can be divided into three ranges. [@Slon; @Krstajic] *Range a*: $0\leq q<K_{-}.$ Electrons of both spins in these states contribute to the charge current $J_a$, $$\begin{aligned} J_{a} &=&2e\frac{\hslash }{\left( 2\pi \right) ^{2}m}\int_{0}^{K_{-}}\sqrt{K_{+}^{2}-q^{2}}qdq \notag \\ &=&\frac{2}{3}e\frac{\hslash }{\left( 2\pi \right) ^{2}m}\left[ \left( K_{+}^{2}\right) ^{3/2}-\left( K_{+}^{2}-K_{-}^{2}\right) ^{3/2}\right] .\end{aligned}$$ *Range b*: $K_{-}\leq q<K_{+}.$ Only majority spin electrons contribute to the current $J_b$, $$\begin{aligned} J_{b} &=&e\frac{\hslash }{\left( 2\pi \right) ^{2}m}\int_{K_{-}}^{K_{+}}\sqrt{K_{+}^{2}-q^{2}}qdq \notag \\ &=&\frac{1}{3}e\frac{\hslash }{\left( 2\pi \right) ^{2}m}\left( K_{+}^{2}-K_{-}^{2}\right) ^{3/2}.\end{aligned}$$ *Range c*: $K_{+}\leq q.$ All electrons are totally reflected, and there is no charge current flow, i.e., $J_{c}=0.$ Using the conventional definition of the spin polarization $$P=\frac{n_{+}-n_{-}}{n_{+}+n_{-}}=\frac{K_{+}-K_{-}}{K_{+}+K_{-}},$$where $n_{\pm }$ are the majority/minority spin densities at the Fermi level in the FMs, the ratio $J_{a}/J_{b}$ can be written as a function of the polarization
$P$, $$\frac{J_{a}}{J_{b}}=\frac{\left( 1+P\right) ^{3}}{4P^{3/2}}-2.$$ Note that $J_{b}$ is the maximal polarized current for the parallel FM configuration; to obtain the total charge current, it must be weighted by the average transmission coefficient. Thus, the total electron current is given by [@Slon; @Krstajic] $$J_{e}=J_{a}+\left\langle T_{e}\right\rangle J_{b}.$$ Spin current and spin transfer torque ------------------------------------- The spin currents on the two sides of FM$_{B}$ are $$\begin{aligned} \mathbf{J}_{3s} &=&\int dE\sum\limits_{\mathbf{q}}\mathbf{j}_{3s}\left( \mathbf{q}\right) , \label{s3} \\ \mathbf{J}_{4s} &=&\int dE\sum\limits_{\mathbf{q}}\mathbf{j}_{4s}\left( \mathbf{q}\right) , \label{s4}\end{aligned}$$with the spin current densities $$\begin{aligned} \mathbf{j}_{3s}\left( \mathbf{q}\right) &=&\frac{\hslash ^{2}k_{x}}{2m}\left( \psi _{3,R}^{\dag }\mathbf{\hat{\sigma}}\psi _{3,R}-\psi _{3,L}^{\dag }\mathbf{\hat{\sigma}}\psi _{3,L}\right) , \\ \mathbf{j}_{4s}\left( \mathbf{q}\right) &=&\frac{\hslash ^{2}k_{x}}{2m}\left( \psi _{4,R}^{\dag }\mathbf{\hat{\sigma}}\psi _{4,R}-\psi _{4,L}^{\dag }\mathbf{\hat{\sigma}}\psi _{4,L}\right) ,\end{aligned}$$where $\mathbf{\hat{\sigma}}=\left( \sigma _{x},\sigma _{y},\sigma _{z}\right)$ are the Pauli matrices.
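Returning briefly to the charge-current bookkeeping: the polarization form of $J_a/J_b$ above follows from eliminating $K_\pm$ in favor of $P$. A quick numerical check (our sketch; the common prefactor $e\hslash/[(2\pi)^2 m]$ is dropped since it cancels in the ratio):

```python
import numpy as np

def ratio_from_integrals(P, Kp=1.0):
    """J_a/J_b evaluated directly from the range-a and range-b integrals."""
    Km = Kp * (1 - P) / (1 + P)   # invert P = (K+ - K-)/(K+ + K-)
    Ja = (2.0 / 3.0) * (Kp**3 - (Kp**2 - Km**2) ** 1.5)
    Jb = (1.0 / 3.0) * (Kp**2 - Km**2) ** 1.5
    return Ja / Jb

def ratio_closed_form(P):
    """The quoted result J_a/J_b = (1+P)^3 / (4 P^{3/2}) - 2."""
    return (1 + P) ** 3 / (4 * P ** 1.5) - 2
```

The two expressions agree for any $0<P\leq 1$, independently of the overall scale $K_+$.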
It is convenient to recast $\mathbf{\hat{\sigma}}$ in the local orthogonal coordinates along $\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right)$, $\mathbf{m}_{B}\times \mathbf{m}_{A},$ and $\mathbf{m}_{B}$, such that $$\mathbf{\hat{\sigma}}=\sigma _{1}\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right) +\sigma _{2}\mathbf{m}_{B}\times \mathbf{m}_{A}+\sigma _{3}\mathbf{m}_{B}, \label{local coordinates}$$with $$\begin{aligned} \sigma _{1} &=&\frac{1}{\sin \theta }\left( \begin{array}{cc} -\sin \theta & \cos \theta e^{-i\phi } \\ \cos \theta e^{i\phi } & \sin \theta \end{array} \right) , \notag \\ \sigma _{2} &=&\frac{1}{\sin \theta }\left( \begin{array}{cc} 0 & ie^{-i\phi } \\ -ie^{i\phi } & 0 \end{array} \right) , \notag \\ \sigma _{3} &=&\left( \begin{array}{cc} \cos \theta & \sin \theta e^{-i\phi } \\ \sin \theta e^{i\phi } & -\cos \theta \end{array} \right) .\end{aligned}$$ The STT on FM$_{B}$ is equal to the difference of the spin currents on the two sides of the ferromagnet, $$\boldsymbol{\Gamma }=\mathbf{J}_{3s}-\mathbf{J}_{4s},$$and the STT density is $$\boldsymbol{\tau }=\mathbf{j}_{3s}-\mathbf{j}_{4s}.$$Thus, we have $$\boldsymbol{\tau }=a_{1}\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right) +a_{2}\mathbf{m}_{B}\times \mathbf{m}_{A}+a_{3}\mathbf{m}_{B},$$where $$\begin{aligned} a_{i} &=&\frac{\hslash ^{2}k_{x}}{2m}\Bigl (\psi _{3,R}^{\dag }\sigma _{i}\psi _{3,R}+\psi _{4,L}^{\dag }\sigma _{i}\psi _{4,L} \notag \\ &&-\psi _{3,L}^{\dag }\sigma _{i}\psi _{3,L}-\psi _{4,R}^{\dag }\sigma _{i}\psi _{4,R}\Bigr ),\text{ }\left( i=1,2,3\right) . \label{a-term}\end{aligned}$$ First of all, we can show that $a_{3}=0$ because of particle current conservation and the absence of spin-flipping.
This can be understood as follows. We first rotate $\hat{z}$ to $\mathbf{m}_{B}$; the spinor state is then $\psi _{n,\alpha }=\hat{R}\left( \theta ,\phi \right) \tilde{\psi}_{n,\alpha }$, where $\tilde{\psi}_{n,\alpha }$ is the electronic state *seen* along $\mathbf{m}_{B}.$ Each spin density term $\psi _{n,\alpha }^{\dag }\sigma _{3}\psi _{n,\alpha }$ in Eq.  then becomes $\tilde{\psi}_{n,\alpha }^{\dag }\hat{R}^{\dagger }\left( \theta ,\phi \right) \sigma _{3}\hat{R}\left( \theta ,\phi \right) \tilde{\psi}_{n,\alpha }=\tilde{\psi}_{n,\alpha }^{\dag }\sigma _{z}\tilde{\psi}_{n,\alpha },$ so that the spin-up state (parallel to $\mathbf{m}_{B}$) and the spin-down state (antiparallel to $\mathbf{m}_{B}$) are decoupled without mixing. In the absence of spin-flipping, both spin-up and spin-down particle currents are conserved. Thus, the STT projected along the local magnetization $\mathbf{m}_{B}$ is zero. Here we have used the identity $\hat{R}^{\dagger }\left( \theta ,\phi \right) \sigma _{3}\hat{R}\left( \theta ,\phi \right) =\sigma _{z}.$ The physical consequence is that the STT can only rotate the magnetization without changing its magnitude. Therefore, the STT can be divided into an in-plane (Slonczewski) term [@Slon] $\boldsymbol{\tau}_\parallel=a_1\mathbf{m}_B\times \left( \mathbf{m}_B\times \mathbf{m}_A\right)$ and an out-of-plane (field-like) term [@Zhang] $\boldsymbol{\tau}_\perp=a_2\mathbf{m}_B\times \mathbf{m}_A.$ We note that the out-of-plane torque vanishes if $t_{u},$ $t_{d},$ $r_{u},$ and $r_{d}$ are real. This is because, under this condition, the spins of both transmitted and reflected electrons lie in the plane spanned by $\mathbf{m}_A$ and $\mathbf{m}_B$, and they can only vary within this plane. The parameters $a_{1}$ and $a_{2}$ are important for understanding the spin dynamics in a DSV. [@You; @Balaz] In Ref. , these parameters are chosen to vary continuously without geometry dependence, while in Ref.  they are calculated in the diffusive transport limit.
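Both identities used in this argument, the rotation relation $\hat{R}^{\dagger}\sigma_3\hat{R}=\sigma_z$ and the local-frame decomposition of the Pauli vector, are easy to verify numerically. A sketch of such a check (ours; the angles are arbitrary test values):

```python
import numpy as np

theta, phi = 0.9, 1.7                    # arbitrary test angles
ct, st = np.cos(theta), np.sin(theta)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# rotation matrix R(theta, phi) bringing z-hat into m_B
R = np.array([[np.exp(-0.5j*phi)*np.cos(theta/2), -np.exp(-0.5j*phi)*np.sin(theta/2)],
              [np.exp( 0.5j*phi)*np.sin(theta/2),  np.exp( 0.5j*phi)*np.cos(theta/2)]])

# sigma_1, sigma_2, sigma_3 as given in the text
s1 = np.array([[-st, ct*np.exp(-1j*phi)], [ct*np.exp(1j*phi), st]]) / st
s2 = np.array([[0, 1j*np.exp(-1j*phi)], [-1j*np.exp(1j*phi), 0]]) / st
s3 = np.array([[ct, st*np.exp(-1j*phi)], [st*np.exp(1j*phi), -ct]])

# identity behind a_3 = 0: rotating to the m_B frame diagonalizes sigma_3
assert np.allclose(R.conj().T @ s3 @ R, sz)

mA = np.array([0.0, 0.0, 1.0])
mB = np.array([st*np.cos(phi), st*np.sin(phi), ct])
v1 = np.cross(mB, np.cross(mB, mA))      # m_B x (m_B x m_A)
v2 = np.cross(mB, mA)                    # m_B x m_A

# the local-frame decomposition reproduces the Cartesian Pauli matrices
for k, sk in enumerate((sx, sy, sz)):
    assert np.allclose(s1*v1[k] + s2*v2[k] + s3*mB[k], sk)
```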
To find $a_{1}$ and $a_{2}$ in our model, one needs to compute the quantities $$\begin{aligned} T_{\sigma _{i}}\left( z_{1}\right) &=&\frac{1}{2}\text{Tr}\Bigl (\hat{P}_{3,R}^{\dag }\sigma _{i}\hat{P}_{3,R}-\hat{P}_{3,L}^{\dag }\sigma _{i}\hat{P}_{3,L}-\hat{P}_{4,R}^{\dag }\sigma _{i}\hat{P}_{4,R} \notag \\ &&+\hat{P}_{4,L}^{\dag }\sigma _{i}\hat{P}_{4,L}\Bigr ),\text{ }\left( i=1,2\right) ,\end{aligned}$$since $a_{i}=T_{\sigma _{i}}\left( z_{1}\right) (\hslash ^{2}k_{x})/m,$ $\left( i=1,2\right)$. Performing the same averaging procedure as for the charge current earlier, one finds $$\left\langle T_{\sigma _{i}}\right\rangle =\sum_{l_{i}=1}^{s_{i}}\text{Res}\left( \frac{T_{\sigma _{i}}\left( z_{1}\right) }{z_{1}};z_{1,l_{i}}\right) ,\text{ }\left( i=1,2\right) , \label{Res1}$$where $z_{1,l_{i}}$ is the $l_{i}$-th pole of the function $T_{\sigma _{i}}\left( z_{1}\right) /z_{1}$ inside the unit circle in the complex plane, with $s_{i}$ the corresponding total number of poles. Since only range *b* contributes to the spin current, [@Slon; @Krstajic] one has $\left\langle a_{i}\right\rangle =\frac{\hslash }{e}\left\langle T_{\sigma _{i}}\right\rangle J_{b},$ $\left( i=1,2\right) .$ Thus the average STT on the free magnetic layer is $$\mathbf{\Gamma =}g_{1}\left( \theta \right) \frac{\hslash }{e}J_{e}\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right) +g_{2}\left( \theta \right) \frac{\hslash }{e}J_{e}\mathbf{m}_{B}\times \mathbf{m}_{A}, \label{STT}$$with the scalar functions $$g_{i}\left( \theta \right) =\frac{\left\langle T_{\sigma _{i}}\right\rangle }{J_{a}/J_{b}+\left\langle T_{e}\right\rangle },\text{ }\left( i=1,2\right) . \label{g1}$$ The STT $\mathbf{\Gamma }$ consists of two terms.
The first is the Slonczewski torque $\mathbf{\Gamma }_{\parallel }\mathbf{=}g_{1}\left( \theta \right) \frac{\hslash }{e}J_{e}\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right) ,$ and the second is the field-like torque $\mathbf{\Gamma }_{\perp }\mathbf{=}g_{2}\left( \theta \right) \frac{\hslash }{e}J_{e}\mathbf{m}_{B}\times \mathbf{m}_{A}.$ ![(Color online) Slonczewski torque ${\Gamma }_{\parallel }$ versus $\protect\theta$ for polarization $P=0.6$ under three different conditions: (a) $t_{u}=1$ and $t_{d}=0$; (b) $\left\vert t_{u}\right\vert ^{2}=0.99$ and $\left\vert t_{d}\right\vert ^{2}=0.1$; and (c) $\left\vert t_{u}\right\vert ^{2}=0.84$ and $\left\vert t_{d}\right\vert ^{2}=0.17$. The unit of torque is $\frac{\hslash }{e}J_{e}$.](Real.eps){width="8.0cm"} The general analytical forms of $g_{1}\left( \theta \right)$ and $g_{2}\left( \theta \right)$ are difficult to find because of the complicated residue calculations in Eqs. and . However, they can be evaluated numerically for any given material with definite material parameters $t_{u},$ $t_{d},$ $r_{u},$ and $r_{d}.$ In Fig. 3, we present numerical calculations of the magnitude of the Slonczewski torque $\Gamma _{\parallel }\mathbf{=-}g_{1}\left( \theta \right) \frac{\hslash }{e}J_{e}\sin \theta$ per unit current versus angle $\theta$ at polarization coefficient $P=0.6$ under three different conditions, which show that different transmission probabilities have a strong impact on the torque and may even change its sign. A similar sign reversal of the STT has been demonstrated in the usual spin valve. [@Waintal] Here, in order to compare our results for the DSV directly with Slonczewski's result for the conventional spin valve, [@Slon] let us consider the case of an ideal FM$_{B}$, i.e., $t_{u}=1$ and $t_{d}=0$ $\left( r_{u}=0,r_{d}=1\right)$.
After some algebra, we obtain $T_{\sigma _{1}}=\frac{1}{4}\left( z_{1}+z_{1}^{\ast }-2\right)$ and $T_{\sigma _{2}}\left( z_{1}\right) =-\frac{i}{4}\left( z_{1}-z_{1}^{\ast }\right)$. The averaged values are then $$\begin{aligned} \left\langle T_{\sigma _{1}}\right\rangle \left( t_{u}=1,\text{ }t_{d}=0\right) &=&-\frac{1}{2}, \\ \left\langle T_{\sigma _{2}}\right\rangle \left( t_{u}=1,\text{ }t_{d}=0\right) &=&0.\end{aligned}$$Thus we get the scalar $g$-functions $$\begin{aligned} g_{1}\left( \theta \right) &=&\frac{-1}{-3+\left( 1+P\right) ^{3}/\left( 2P^{3/2}\right) -\left( \mathbf{m}_{A}\cdot \mathbf{m}_{B}\right) ^{2}}, \notag \\ && \label{Slonczewski} \\ g_{2}\left( \theta \right) &=&0,\end{aligned}$$which show that only the Slonczewski torque exists in the ideal DSV. The absence of any layer-thickness dependence in Eq.  results from the phase average across sufficiently thick normal-metal layers. The values of $g_{1}\left( \theta =0,\pi \right)$ are crucial for evaluating the threshold current density $J_e^*$ needed for magnetization reversal of the free layer, since $J_e^*\propto 1/g_{1}\left( \theta =0,\pi \right)$. [@xrw3] ![(Color online) Slonczewski torque ${\Gamma }_{\parallel }$ versus angle $\protect\theta$ for various polarizations $P$ in the ideal case where all FMs are perfect polarizers. The unit of torque is $\frac{\hslash }{e}J_{e}$.](Slonczewski.eps){width="8.0cm"} Figure 4 shows the magnitude of the STT per unit current as a function of angle $\theta$ for various polarizations $P$ in the ideal DSV. The STT is symmetric about $\theta =\pi /2$ due to the contributions from both fixed magnetic layers, which is different from the result in the conventional spin valve. [@Slon; @Waintal; @Krstajic] For polarization coefficient $P<1$ it generally vanishes at $\theta =0$ and $\theta =\pi$ (shown in Fig. 4). However, if $P=1$, the torque is singular at $\theta =0$ and $\pi$. The divergence can be understood both mathematically and from a physics viewpoint.
Mathematically, the singularities at $\theta =0$ and $\pi$ are due to the factor $g_{1}=-\left( \sin \theta \right) ^{-2}$ when $P=1.$ Physically, this is the consequence of the perfect spin filter. In our model, every electron transfers its angular momentum to the local magnetization when it impinges on the interface of a magnetic layer whose magnetization is not parallel to its spin. [@Slon] Meanwhile, the STT per unit current is defined as the spin transfer per transmitted electron. [@Slon] However, for $\theta =0$ or $\theta =\pi ,$ the perfect spin filter does not allow any electron to transmit through the DSV, which leads to zero electron transmission and results in the divergence. The same argument applies to the STT divergence at $\theta =\pi$ in the traditional spin valve in the original paper by Slonczewski. [@Slon] Nevertheless, we find that the total STT, $\mathbf{\Gamma =}-\frac{1}{2}\frac{\hslash }{e}J_{b}\mathbf{m}_{B}\times \left( \mathbf{m}_{B}\times \mathbf{m}_{A}\right)$ for $P=1,$ does not have such singularities. Finally, we would like to compare the magnitude of the STT in our DSV with that obtained in the traditional spin valve. [@Krstajic] In Ref. , Krstajić *et al.* calculated the $g$-function in a spin valve, $$g_{1}^{\ast }\left( \theta \right) =\frac{-1}{-4+\left( 1+P\right) ^{3}\left( 3+\mathbf{m}_{A}\cdot \mathbf{m}_{B}\right) /\left( 4P^{3/2}\right) }, \label{spin valve}$$which is the same as Slonczewski's result in Ref. . ![(Color online) Spin transfer torque enhancement ratio $g_1\left(\protect\theta\right)/g_1^\ast\left( \protect\theta\right)$ versus $\protect\theta$ for various polarizations $P$ in the ideal case where all FMs are perfect polarizers.](Enhancement.eps){width="8.0cm"} We plot the ratio $g_{1}\left( \theta \right) /g_{1}^{\ast }\left( \theta \right)$ as a function of angle $\theta$ for various polarizations $P$ in Fig. 5.
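The enhancement ratio can be evaluated directly from the two closed forms. A short sketch (ours); it also checks the consistency $g_1=\langle T_{\sigma_1}\rangle/(J_a/J_b+\langle T_e\rangle)$ with the averaged values $\langle T_{\sigma_1}\rangle=-1/2$ and $\langle T_e\rangle=\sin^2\theta/2$ obtained above:

```python
import numpy as np

def g1_dsv(theta, P):
    """Ideal-DSV g-function (all layers perfect polarizers)."""
    return -1.0 / (-3.0 + (1 + P)**3 / (2 * P**1.5) - np.cos(theta)**2)

def g1_sv(theta, P):
    """Slonczewski/Krstajic g-function for a conventional spin valve."""
    return -1.0 / (-4.0 + (1 + P)**3 * (3 + np.cos(theta)) / (4 * P**1.5))

def enhancement(theta, P):
    """Ratio g1 / g1* plotted in Fig. 5."""
    return g1_dsv(theta, P) / g1_sv(theta, P)
```

For $\theta=\pi/6$ and $P=0.8$, `enhancement` returns approximately 11.9, the value quoted in the discussion of Fig. 5.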
One can see that the STT is greatly enhanced when $\theta$ is acute, but approaches that in the usual spin valve when $\theta >\pi /2$. That is, the enhancement depends sensitively on both the orientation of the free layer and the polarization degree of the electrons. For a small tilt angle $\theta$ and a large polarization $P,$ the enhancement ratio is dramatic, which will substantially lower the threshold current density required for magnetization switching. For instance, $g_{1}/g_{1}^{\ast }=11.9$ for $\theta =\frac{\pi }{6}$ and $P=0.8,$ while the enhancement ratio approaches $1$ (red dashed line in Fig. 5) as $\theta$ approaches $\pi .$ The reason is that there is no difference between a DSV and a usual spin valve when layers FM$_{A}$ and FM$_{B}$ are aligned antiparallel, since FM$_{C}$ does not reflect electrons coming out of FM$_{B}$ in that case. The results are qualitatively consistent with the experiment of Fuchs *et al*., [@Fuchs] which shows that the reduction of the threshold current for switching FM$_{B}$ from parallel to antiparallel, with respect to FM$_{A}$, is substantial, while it is only of modest size from antiparallel to parallel in a DSV compared to a conventional spin valve. From the qualitative physical picture and the quantum-mechanical calculations, we conclude that the STT can be greatly enhanced in a DSV compared to the spin-valve structure in the ballistic regime, without spin accumulation. The physics behind this enhancement and the analytical formula for the STT in this emerging geometry should be of interest both theoretically [@You; @Balaz] and experimentally. [@Fuchs; @Watanabe] Micromagnetic simulation based on our new $g$-function (Eq. ) and the Landau-Lifshitz-Gilbert (LLG) equation [@Gilbert] will be a direction of future research. The behavior of the STT in an asymmetric DSV, where the widths of the two NMs differ, i.e., $l_{A}\neq l_{B},$ is also an interesting issue for further investigation.
Although the advantage of STT enhancement is unambiguously demonstrated in our results, the accuracy of the analytical formula still needs experimental confirmation. We suggest using a recently developed technique called spin-transfer-driven ferromagnetic resonance (ST-FMR) [@Sankey] to measure the angular dependence of the STT in a DSV. SUMMARY ======= In conclusion, we have derived the STT acting on the free magnetic layer in a DSV structure in the ballistic regime. A fully quantum-mechanical description of the STT is presented, which is valid for nanoscale DSVs where both the electron mean-free path and the spin-flip relaxation length are larger than the thickness of the spacers. [@Jedema] Using a quasi-one-dimensional model within Slonczewski’s approach, we obtained the analytical form of the STT when all magnetic layers are perfect polarizers. Similar to the results in the diffusive regime, the STT is dramatically enhanced in comparison to that in a conventional spin valve, although no spin accumulation exists at the magnetic-nonmagnetic interfaces. Depending on the orientation of the free magnetic layer and the polarization degree of the electrons, the STT can be enhanced by a factor of a few tens. Our analytical $g$-function provides a theoretical basis for micromagnetic simulations of the spin dynamics in a DSV. The general cases, where the transmission and reflection probabilities of the free layer differ from zero or one, are also calculated numerically, showing that the sign of the torque may change for different transmission probabilities. These results should be useful for reducing the switching current in magnetization reversal. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== This work is supported by Hong Kong RGC grants (\#603007, 603508, 604109 and HKU10/CRF/08-HKUST17/CRF/08). X.R.W. would like to acknowledge the hospitality of the Kavli Institute for Theoretical Physics China, CAS. Z.Z.S.
thanks the Alexander von Humboldt Foundation (Germany) for a grant. [99]{} J.A. Katine, F.J. Albert, R.A. Buhrman, E.B. Myers, and D.C. Ralph, Phys. Rev. Lett. **84**, 3149 (2000). J. Grollier, V. Cros, A. Hamzic, J.M. George, H. Jaffrès, A. Fert, G. Faini, J.B. Youssef, and H. Legall, Appl. Phys. Lett. **78**, 3663 (2001). K. Xia, P.J. Kelley, G.E.W. Bauer, A. Brataas, and I. Turek, Phys. Rev. B **65**, 220504(R) (2002). S. Urazhdin, N.O. Birge, W.P. Pratt, and J. Bass, Phys. Rev. Lett. **91**, 146803 (2003). M. Tsoi, J.Z. Sun, and S.S.P. Parkin, Phys. Rev. Lett. **93**, 036602 (2004). T.Y. Chen, S.X. Huang, C.L. Chien, and M.D. Stiles, Phys. Rev. Lett. **96**, 207203 (2006). Y. Zhou, C.L. Zha, S. Bonetti, J. Persson, and J. Åkerman, Appl. Phys. Lett. **92**, 262508 (2008). X. Chen, Q.R. Zheng, and G. Su, Phys. Rev. B **78**, 104410 (2008). J.C. Slonczewski, J. Magn. Magn. Mater. **159**, L1 (1996). L. Berger, Phys. Rev. B **54**, 9353 (1996). Z.Z. Sun and X.R. Wang, Phys. Rev. B **71**, 174430 (2005); **73**, 092416 (2006); **74**, 132401 (2006). X.R. Wang, P. Yan, J. Lu, and C. He, Ann. Phys. (N. Y.) **324**, 1815 (2009); X.R. Wang, P. Yan, and J. Lu, Europhys. Lett. **86**, 67001 (2009). A. Yamaguchi, T. Ono, S. Nasu, K. Miyake, K. Mibu, and T. Shinjo, Phys. Rev. Lett. **92**, 077205 (2004). M. Hayashi, L. Thomas, C. Rettner, R. Moriya, Y.B. Bazaliy, and S.S.P. Parkin, Phys. Rev. Lett. **98**, 037204 (2007). P. Yan and X.R. Wang, Phys. Rev. B **80**, 214426 (2009); Appl. Phys. Lett. **96**, 162506 (2010). X.R. Wang and Z.Z. Sun, Phys. Rev. Lett. **98**, 077201 (2007); X.R. Wang, P. Yan, J. Lu, and C. He, Europhys. Lett. **84**, 27008 (2008). H.Z. Lu and S.Q. Shen, Phys. Rev. B **80**, 094401 (2009). M. Hatami, G.E.W. Bauer, Q. Zhang, and P.J. Kelly, Phys. Rev. Lett. **99**, 066603 (2007). Z. Yuan, S. Wang, and K. Xia, Solid State Commun. **150**, 548 (2010). L. Berger, J. Appl. Phys. **93**, 7693 (2003). C.Y. You, J. Appl. Phys. **107**, 073911 (2010). P. 
Baláž, M. Gmitra, and J. Barnaś, Phys. Rev. B **80**, 174404 (2009). T. Valet and A. Fert, Phys. Rev. B **48**, 7099 (1993). J. Xiao, A. Zangwill, and M.D. Stiles, Phys. Rev. B **70**, 172405 (2004). M.D. Stiles and A. Zangwill, Phys. Rev. B **66**, 014407 (2002). D. Waldron, P. Haney, B. Larade, A. MacDonald, and H. Guo, Phys. Rev. Lett. **96**, 166804 (2004). A. Brataas, Y.V. Nazarov, and G.E.W. Bauer, Phys. Rev. Lett. **84**, 2481 (2000); A. Brataas, Y.V. Nazarov, and G.E.W. Bauer, Eur. Phys. J. B **22**, 99 (2001). P.M. Krstajić, M. Keller, and F.M. Peeters, Phys. Rev. B **77**, 174428 (2008). X. Waintal, E.B. Myers, P.W. Brouwer, and D.C. Ralph, Phys. Rev. B **62**, 12317 (2000). J. Xiao and G.E.W. Bauer, Phys. Rev. B **77**, 224419 (2008). M. Gmitra and J. Barnaś, Appl. Phys. Lett. **89**, 223121 (2006). M.D. Stiles, J. Appl. Phys. **79**, 5805 (1996). S. Zhang, P.M. Levy, and A. Fert, Phys. Rev. Lett. **88**, 236601 (2002). G.D. Fuchs, I.N. Krivorotov, P.M. Braganca, N.C. Emley, A.G.F. Garcia, D.C. Ralph, and R.A. Buhrman, Appl. Phys. Lett. **86**, 152509 (2005). M. Watanabe, J. Okabayashi, H. Toyao, T. Yamaguchi, and J. Yoshino, Appl. Phys. Lett. **92**, 082506 (2008). T.L. Gilbert, IEEE Trans. Magn. **40**, 3443 (2004). J.C. Sankey, P.M. Braganca, A.G.F. Garcia, I.N. Krivorotov, R.A. Buhrman, and D.C. Ralph, Phys. Rev. Lett. **96**, 227601 (2006); J.C. Sankey, Y.T. Cui, J.Z. Sun, J.C. Slonczewski, R.A. Buhrman, and D.C. Ralph, Nature Phys. **4**, 67 (2008). F.J. Jedema, M.S. Nijboer, A.T. Filip, and B.J. van Wees, Phys. Rev. B **67**, 085319 (2003).
--- abstract: 'This paper proposes a hybrid-relaying scheme empowered by a self-sustainable intelligent reflecting surface (IRS) in a wireless powered communication network (WPCN), to simultaneously improve the performance of downlink energy transfer (ET) from a hybrid access point (HAP) to multiple users and uplink information transmission (IT) from users to the HAP. We propose time-switching (TS) and power-splitting (PS) schemes for the IRS, where the IRS can harvest energy from the HAP’s signals by switching between energy harvesting and signal reflection in the TS scheme or adjusting its reflection amplitude in the PS scheme. For both the TS and PS schemes, we formulate sum-rate maximization problems by jointly optimizing the IRS’s phase shifts for both ET and IT and the network resource allocation. To address each problem’s non-convexity, we propose a two-step algorithm, where the formulated problem is decoupled into two sub-problems and each sub-problem can be solved separately in an efficient way. To show the structure of resource allocation, we also investigate sub-optimal solutions for the schemes with random phase shifts. Through numerical results, we show that our proposed schemes can achieve over a hundred times sum-rate gain compared to the baseline scheme without IRS.' author: - 'Bin Lyu, Parisa Ramezani, Dinh Thai Hoang, Shimin Gong, Zhen Yang, and Abbas Jamalipour' title: 'Optimized Energy and Information Relaying in Self-Sustainable IRS-Empowered WPCN' --- Wireless powered communication network, intelligent reflecting surface, time scheduling, phase shift optimization. Introduction ============ With nearly 50 billion Internet of Things (IoT) devices by 2020 and even 500 billion by 2030 [@IoT], we have already stepped into the new era of IoT. With the vision of being self-sustainable, IoT faces energy limitation as a major issue for its widespread development.
Recent advances in energy harvesting (EH) technologies, especially radio frequency (RF) EH [@RFSurvey], have opened a new approach for self-sustainable IoT devices to harvest energy from dedicated or ambient RF sources. This leads to the emerging topic of wireless powered communication networks (WPCNs), in which low-cost IoT devices can harvest energy from a dedicated hybrid access point (HAP) and then use the harvested energy to transmit data to the HAP [@JuOne]. As a result, the development of WPCNs has been a promising step toward future self-sustainable IoT networks [@Abbas]. Although possessing significant benefits and attractive features for low-cost IoT networks, e.g., multi-device and long-distance charging, WPCNs are facing some challenges which need to be addressed before they can be widely deployed in practice. In particular, the uplink information transmission (IT) of IoT devices in WPCNs relies on the energy they harvest from the downlink energy transfer (ET) of the HAP. However, the IoT devices typically suffer from double attenuation of the RF signal power over distance [@JuOne], which severely limits the network performance, e.g., the amount of energy harvested by the IoT devices and the achievable rates at the HAP. Hence, solutions that enhance the downlink ET efficiency and improve the uplink transmission rates of WPCNs are urgently needed. Reducing the distances between the HAP and IoT devices is one solution to enhance the EH efficiency and achieve greater transmission rates. However, this is not a viable option because IoT devices are randomly deployed in practice, and thus we may not be able to control their locations. Hence, more efficient and cost-effective solutions are required to guarantee that WPCNs can be seamlessly fitted into the IoT environment with satisfactory performance. Relay cooperation is an efficient way to enhance the performance of WPCNs, and can be classified into two categories: active relaying and passive relaying.
Active relaying refers to scenarios in which the communication between a transmitter and its destined receiver is assisted by a relay which forwards the user’s information to the destination via active RF transmission [@HeChen]-[@ZengTwo]. However, active relaying schemes have several limitations. Considering that the relays are energy-constrained, they need to harvest sufficient energy from the RF sources and use the harvested energy to actively forward information to the receiver. Due to the higher circuit power consumption of active relays, it may take a long time for these relays to harvest enough energy. This reduces the IT time of the network. Besides, most active relays operate in half-duplex mode, which further shortens the effective IT time, resulting in network performance degradation. Full-duplex (FD) relays can alleviate this issue; however, complex self-interference (SI) cancellation techniques are needed at FD relays to ensure that the SI is effectively mitigated [@CJZhong]. Passive relaying exploits the idea of backscatter communication (BackCom) for assisting the source-destination communication [@LyuOne]-[@Gong]. Specifically, BackCom relay nodes do not need any RF components as they passively backscatter the source’s signal to strengthen the received signal at the receiver. Accordingly, the power consumption of BackCom relay nodes is extremely low and no dedicated time is needed for the relays’ EH [@LiuOne]. Nonetheless, as no active signal generation is involved and the passive relays simply reflect the received signal from the source, passive relaying schemes suffer from poor performance. Intelligent reflecting surface (IRS), consisting of a large number of low-cost reflecting elements, has recently emerged as a promising solution to improve the performance of wireless communication networks [@Wu2020Survey; @Gong2019Towards].
IRS elements smartly induce phase shifts and amplitude changes on the incident signals and passively reflect them such that the signals are constructively combined at the receiver. In this way, the IRS can adjust the communication environment and create favorable conditions for energy and information transmission without using energy-hungry RF chains. This technology has lately been utilized for total transmit power minimization in wireless communication networks [@WuIRS], ET enhancement in multiple-input single-output (MISO) systems [@Mishra], weighted sum-power and weighted sum-rate maximization in simultaneous wireless information and power transfer (SWIPT) systems [@WuSWIPT; @PanSWIPT], spectrum efficiency maximization in MISO communication systems [@Yu], secure transmit power allocation [@Chu], outage probability minimization [@Guo], etc., and has demonstrated promising results and significant performance gains as compared to conventional wireless networks without IRS. Having the capability of cooperating in both downlink ET and uplink IT, IRS has several advantages over conventional active and passive relaying techniques [@Wu2020Survey]. First of all, IRS is a cost-effective technology and can be readily integrated into existing wireless communication networks without incurring high implementation costs. Furthermore, IRS is more energy- and spectrum-efficient than conventional relaying methods because it consumes very little power and, at the same time, helps use the limited spectrum resources more efficiently. IRS essentially works in full-duplex mode without causing interference or adding thermal noise, which further improves the spectral efficiency. Moreover, it is easy to increase the number of IRS elements to achieve higher performance gains. Motivated by its numerous benefits, IRS offers a promising green solution to improve the performance of WPCNs.
Recently, a few research works have investigated the application of IRS for improving the performance of WPCNs [@LyuTwo; @SuzhiBi]. In our preliminary work [@LyuTwo], we proposed a solution using the IRS as a hybrid relay to simultaneously improve the energy and data communication efficiency for WPCNs, where the IRS first operates as an energy relay to assist the downlink ET from the HAP to a number of users and then works as an information relay to help the uplink IT from the users to the HAP. Zheng *et al.* proposed a similar idea to use the IRS as a hybrid relay, which is not only used to enhance both ET and IT efficiency but is also involved in user cooperation [@SuzhiBi]. However, [@SuzhiBi] only considered a two-user scenario, and the proposed solution is not applicable to the general scenario with multiple users. More importantly, [@WuIRS]-[@SuzhiBi] assume that the energy consumption of the IRS is negligible because it does not need any active RF chains but only reflects incident signals passively. However, as mentioned in [@Huang2018Conf]-[@GongTwo], the energy consumption of the IRS is in fact proportional to the number of IRS elements. Hence, the energy consumption of the IRS cannot be ignored, since a relatively large number of IRS elements is required to achieve good performance. It is thus challenging to power the IRS and sustain its hybrid-relaying operation in WPCNs. To address the above issue effectively, we propose a self-sustainable IRS-empowered WPCN, where the IRS is equipped with an EH circuit to harvest RF energy from the HAP to power its operations. Similar to conventional wireless-powered active relays [@Nasir], time-switching (TS) and power-splitting (PS) schemes are proposed to enable the IRS to harvest energy from the RF signals transmitted by the HAP. In the TS scheme, the ET phase is split into two sub-slots, where the IRS harvests energy in the first sub-slot and assists in the downlink ET to the users in the second sub-slot.
In the PS scheme, the IRS harvests energy from the HAP’s signal by adjusting its amplitude reflection coefficients. To maximize the system sum-rate for both the TS and PS schemes, we thus need to optimize the IRS phase shift design and network resource allocation jointly with the EH time and the amplitude reflection coefficients of the IRS. This optimization is thus much more challenging to address than those studied in other works on IRS and conventional wireless-powered relays. Therefore, we propose efficient algorithms to find the optimal solutions to the sum-rate maximization problems for both the TS and PS schemes. Numerical simulations confirm the effectiveness of our proposed schemes for improving the performance of WPCN and show remarkable performance gains as compared to the baseline WPCN without IRS in [@JuOne]. The main contributions of this paper are summarized as follows: - [We propose a self-sustainable IRS-empowered WPCN, where a wireless-powered IRS acts as a hybrid relay to improve the performance of the WPCN in both downlink ET from the HAP to the users and uplink IT from the users to the HAP.]{} - [To enable energy collection and hybrid relaying functionalities at the IRS, we consider TS and PS schemes, where the IRS uses a portion of the ET phase for its own EH in the TS scheme or adjusts its amplitude reflection coefficients to harvest energy from a part of the received energy signal in the PS scheme.]{} - [We study the system sum-rate maximization problem for the TS scheme by jointly optimizing the IRS’s phase shift design in both the ET and IT phases, the time allocation for the IRS’s and users’ EH, the time allocation for each user’s IT, and the users’ power allocation. To deal with the non-convexity of the formulated problem, we propose a two-step algorithm, where the problem is decoupled into two sub-problems.
For the first sub-problem, after showing that the optimal phase shift design in the IT phase is independent of the other variables, we find the optimal phase shifts at the IRS in each time-slot of the IT phase by applying the semidefinite relaxation (SDR) [@Luo] and Gaussian randomization methods. We then solve the second sub-problem to find the optimal values of the other optimization variables. In particular, we obtain the IRS’s EH time in a closed form and discuss its implications. We finally obtain the optimal solution to our problem by sequentially solving the two sub-problems.]{} - [We then investigate the sum-rate maximization problem for the PS scheme and jointly optimize the IRS’s phase shift design in both the ET and IT phases, time allocation for the EH and IT phases, power allocation at the users, and the amplitude reflection coefficients in the EH phase, using a similar two-step algorithm as for the TS scheme. We obtain the optimal amplitude reflection coefficient as a function of the EH time, from which some interesting observations are revealed. ]{} - [To shed more light on the structure of resource allocation for our proposed schemes, we also investigate the special cases of optimizing time and power allocation with random phase shifts for both the TS and PS schemes.]{} - [Finally, we evaluate the performance of our proposed schemes via numerical results, which show that our proposed schemes can achieve over *a hundred times* sum-rate gain as compared to the baseline WPCN protocol in [@JuOne]. In addition, we show that the PS scheme usually achieves better performance than the TS scheme. However, the PS scheme is not applicable when the HAP’s transmit power is low and/or the channel between the HAP and IRS is weak. ]{} This paper is organized as follows. Section \[sysmod\] describes the system model of the proposed IRS-assisted WPCN for both the TS and PS methods.
Sections \[TSMax\] and \[PSMax\] investigate the sum-rate maximization problems for the TS and PS methods, respectively. Section \[Simulation\] evaluates the performance of the presented algorithms by conducting numerical simulations and Section \[Conclusion\] concludes the paper. System Model {#sysmod} ============= As illustrated in Fig. \[SystemModel\], we consider an IRS-assisted WPCN, consisting of an HAP with a stable power supply, $N$ energy-constrained users (denoted by $U_i,~ i=1,\ldots,N$), and an energy-constrained IRS. The HAP and the users are each equipped with a single antenna. The IRS is composed of $K$ passive reflecting elements, which can be configured to direct the incident signals to desired directions. The IRS assists in both the downlink ET from the HAP to the users and the uplink IT from the users to the HAP. The downlink channels from the HAP to $U_i$, from the HAP to the IRS, and from the IRS to $U_i$ are denoted by $h_{h,i}$, $\bm{h}_r \in \mathcal{C}^{K \times 1}$, and $\bm{h}_{u,i}^H \in \mathcal{C}^{1 \times K}$, respectively. The counterpart uplink channels are denoted by $g_{h,i}$, $\bm{g}_r^H \in \mathcal{C}^{1 \times K}$, and $\bm{g}_{u,i} \in \mathcal{C}^{K \times 1}$, respectively. All channels are assumed to be quasi-static flat fading, i.e., they remain constant during one block but may change from one block to another [@WuSWIPT]. We assume that the channel state information (CSI) of all links is perfectly known[^1]. The transmission block, normalized to one, is divided into two phases, i.e., the ET phase and the IT phase. In the ET phase, the HAP transfers energy to the users and the IRS in the downlink. The IRS uses the energy harvested from the HAP’s signals to power its own operation and to relay energy to the users. In the IT phase, the users use the harvested energy to transmit data to the HAP with the assistance of the IRS. The details of the ET and IT phases are shown in Fig. \[BlockStructure\] and elaborated in the following subsections.
Energy Transfer Phase --------------------- As mentioned earlier, the IRS is assumed to be energy-constrained and needs to harvest energy from the HAP to power its relaying operations. In this regard, we consider TS and PS schemes, which have been widely used in conventional wireless-powered relay communications [@Nasir]. ### Time-switching scheme For the TS scheme, the ET phase, of duration $t_0$, is divided into two sub-slots with durations $\tau_0$ and $\tau_1$, respectively, which satisfy $\tau_0 + \tau_1 \le t_0$. In this scheme, the users can harvest energy over the entire ET phase. The IRS spends the first sub-slot of the ET phase harvesting energy and the second sub-slot improving the EH efficiency at the users. In particular, in the second sub-slot, the IRS can cooperate with the HAP by adjusting its elements’ phase shifts in order to enhance the total received signal power at the users. The transmission block structure for the TS scheme is illustrated in Fig. \[BlockStructure\] (a). Denote the transmit signal in the ET phase as $$\begin{aligned} x_{h} = \sqrt{P_{h}} s_{h},\end{aligned}$$ where $P_{h}$ is the transmit power and $s_{h}$ is the energy-carrying signal with $s_{h} \sim \mathcal{CN}(0,1)$. The received signals at the IRS and $U_i$ in the first sub-slot are expressed as $$\begin{aligned} &\bm{y}_{r,0} = \bm{h}_r x_{h} + \bm{n}_{r}, \\ &y_{ts,0,i} = h_{h,i} x_{h} + n_{u,i},~ i=1,\ldots,N, \end{aligned}$$ where $\bm{n}_{r}$ and $n_{u,i}$ denote the additive white Gaussian noises (AWGNs) at the IRS and $U_i$, respectively. Note that the noise power is usually very small, ineffective for EH, and can thus be neglected. Hence, the harvested energy at the IRS, denoted by $E_{ts,r}$, is expressed as $$\begin{aligned} E_{ts,r} = \eta P_{h} ||\bm{h}_r ||^2 \tau_0, \end{aligned}$$ where $\eta$ denotes the EH efficiency. In the second sub-slot, the IRS assists in the downlink ET.
The phase shift matrix of the IRS during $\tau_1$ is denoted by $\bm{\Theta}_{e} = \text{diag} \{ \beta_{e,1} e^{j \theta_{e,1}}, \ldots, \beta_{e,K} e^{j \theta_{e,K}} \}$, where $\beta_{e,k} \in [0,1]$ and $\theta_{e,k} \in [0, 2\pi]$ are the amplitude reflection coefficient and the phase shift of the $k$-th element, respectively. For the TS scheme, since the IRS only harvests energy during $\tau_0$, all incident signals at the IRS during $\tau_1$ can be reflected to enhance the EH efficiency, i.e., $\beta_{e,k}=1,~\forall k$ [@Wu2020Survey]. Let $\bar{\bm{\Theta}}_e = \text{diag} \{ e^{j \theta_{e,1}}, \ldots, e^{j \theta_{e,K}} \}$. During $\tau_1$, the received signal at $U_i$ for the TS scheme is given by $$\begin{aligned} y_{ts,i} = (\bm{h}_{u,i}^H \bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i}) x_{h} + n_{u,i}, ~i =1,\ldots,N. \end{aligned}$$ The harvested energy of $U_i$ for the TS scheme, denoted by $E_{ts,i}$, is thus obtained as $$\begin{aligned} \label{HarvUser} E_{ts,i} = \eta P_{h} |h_{h,i}|^2 \tau_0 + \eta P_{h} |\bm{h}_{u,i}^H \bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i} |^2 \tau_1, \end{aligned}$$ where the first term denotes the energy harvested by $U_i$ from the HAP directly during $\tau_0$, and the second term is the harvested energy with the aid of the IRS during $\tau_1$. ### Power-splitting scheme Different from the TS scheme, the dedicated EH time is not required in the PS scheme and the IRS harvests energy from the HAP by adjusting the amplitude reflection coefficients ($\beta_{e,k},\forall k$), as illustrated in Fig. \[BlockStructure\] (b). To be specific, only a part of the HAP’s energy signals is reflected by the IRS and the remaining part is fed into the IRS’s EH unit for harvesting. It is assumed that all the amplitude reflection coefficients of the IRS elements have the same value, i.e. 
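To make the TS-scheme energy expressions concrete, the sketch below evaluates $E_{ts,r}$ and $E_{ts,i}$ of Eq. (\ref{HarvUser}) for randomly drawn channels (numpy; all parameter values are illustrative). The co-phasing rule used for $\bm{\theta}_e$ is the single-user special case, shown only to illustrate the role of the phase shifts; in the multi-user setting $\bar{\bm{\Theta}}_e$ must be optimized jointly across users:

```python
import numpy as np

rng = np.random.default_rng(0)
K, eta, P_h, tau0, tau1 = 8, 0.8, 1.0, 0.3, 0.5   # illustrative parameters

h_r = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)  # HAP->IRS
h_u = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)  # IRS->U_i
h_h = complex(rng.standard_normal(), rng.standard_normal()) / np.sqrt(2)   # HAP->U_i

# IRS harvest in the first sub-slot: E_{ts,r} = eta * P_h * ||h_r||^2 * tau0
E_ts_r = eta * P_h * np.linalg.norm(h_r) ** 2 * tau0

# Single-user co-phasing: align every reflected path with the direct link
theta_e = np.angle(h_h) - np.angle(np.conj(h_u) * h_r)
cascaded = np.conj(h_u) @ (np.exp(1j * theta_e) * h_r)  # h_u^H diag(e^{j theta}) h_r

# Eq. (HarvUser): direct harvest during tau0 plus IRS-aided harvest during tau1
E_ts_i = eta * P_h * abs(h_h) ** 2 * tau0 \
         + eta * P_h * abs(cascaded + h_h) ** 2 * tau1
```

With this alignment the combined amplitude equals $\sum_k |h_{u,i,k}||h_{r,k}| + |h_{h,i}|$, i.e., the reflected paths add coherently to the direct link.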
$\beta_{e,k} = \beta_{e},~ \forall k$, which is a reasonable assumption that simplifies the circuit design and reduces the circuit power consumption of the IRS. The received signal at $U_i$ in the ET phase for the PS scheme is thus given by $$\begin{aligned} y_{ps,i} = (\bm{h}_{u,i}^H \beta_{e}\bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i} ) x_{h} + n_{u,i}, ~i= 1, \ldots,N. \end{aligned}$$ The harvested energies of the IRS and $U_i$ for the PS scheme are denoted by $E_{ps,r}$ and $E_{ps,i}$, which are respectively given by $$\begin{aligned} &E_{ps,r} = \eta P_{h} (1-\beta_{e}^2) || \bm{h}_r ||^2 t_0, \\ & E_{ps,i} = \eta P_{h} | \bm{h}_{u,i}^H \beta_{e} \bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i} |^2 t_0.\end{aligned}$$ Information Transmission Phase ------------------------------ In the IT phase, the users transmit information to the HAP via time division multiple access, using the energy harvested in the ET phase. Denote the duration of IT for $U_i$ as $t_i$. Let $s_{u,i}$ be the information-carrying signal of $U_i$ with unit power. The transmit signal of $U_i$ during $t_i$ is then expressed as $$\begin{aligned} x_{u,i} = \sqrt{P_{u,i}} s_{u,i},\end{aligned}$$ where $P_{u,i}$ is $U_i$’s transmit power and satisfies $$\begin{aligned} P_{u,i} t_i + P_{c,i} t_i \le E_{f,i},~ f = \{ts,ps\},\end{aligned}$$ with $P_{c,i}$ being the circuit power consumption of $U_i$. The circuit power consumption of the IRS is given by $K\mu$, where $\mu$ denotes the circuit power consumption of each reflecting element [@Huang2018Conf]-[@GongTwo]. To power its operations, the IRS needs to harvest sufficient energy in the ET phase. Therefore, the following constraints must hold $$\begin{aligned} &K \mu ( \tau_1 + \sum_{i=1}^N t_i) \le E_{ts,r}, \\ & K \mu (t_0 + \sum_{i=1}^N t_i) \le E_{ps,r},\end{aligned}$$for the TS and PS schemes, respectively.
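The PS-scheme energy split can be sanity-checked numerically: $\beta_e$ trades the IRS’s own harvest $E_{ps,r}$ against the reflected power delivered to the users in $E_{ps,i}$. A minimal sketch (numpy, with illustrative channel values):

```python
import numpy as np

def ps_energies(beta_e, eta, P_h, t0, h_r, cascaded, h_h):
    """PS-scheme harvested energy at the IRS (E_ps_r) and at user i (E_ps_i);
    `cascaded` is h_{u,i}^H * diag(e^{j theta_e}) * h_r for unit amplitude."""
    E_r = eta * P_h * (1.0 - beta_e ** 2) * np.linalg.norm(h_r) ** 2 * t0
    E_i = eta * P_h * abs(beta_e * cascaded + h_h) ** 2 * t0
    return E_r, E_i

# The two limits of the split: beta_e = 0 keeps everything for the IRS,
# beta_e = 1 reflects everything toward the user (illustrative values).
h_r = np.array([1.0, 1.0j]); casc = 1 + 0j; h_h = 1 + 0j
E_r0, E_i0 = ps_energies(0.0, 1.0, 1.0, 1.0, h_r, casc, h_h)
E_r1, E_i1 = ps_energies(1.0, 1.0, 1.0, 1.0, h_r, casc, h_h)
```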
Note that in the first sub-slot of the TS scheme, the IRS does not need to adjust its phase shifts to reflect signals; its circuit power consumption during $\tau_0$, when it only harvests energy, is thus negligible [@Huang2019IRS]. Denote the phase shift matrix during $t_i$ for the IT as $\bm{\Theta}_{d,i}$, where $\bm{\Theta}_{d,i} = \text{diag} \{e^{j \theta_{d,i,1}}, \ldots, e^{j \theta_{d,i,K}} \}$. Note that we have set the amplitude reflection coefficients to 1 to maximize the signal reflection in the IT phase. The received signal at the HAP from $U_i$, denoted by $y_{h,i}$, is thus given by $$\begin{aligned} y_{h,i} = (\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i} ) \sqrt{P_{u,i} } s_{u,i} + n_h,\end{aligned}$$ where $n_h \sim \mathcal{CN} (0, \sigma_h^2) $ is the AWGN at the HAP. The signal-to-noise ratio (SNR) at the HAP during $t_i $, denoted by $\gamma_{i}$, is expressed as $$\begin{aligned} \gamma_i = \frac{P_{u,i} |\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2 } {\sigma_h^2}.\end{aligned}$$ The achievable rate from $U_i$ to the HAP is then formulated as $$\begin{aligned} \label{AchievableRate} R_i = t_i \log_2 \left(1 + \frac{P_{u,i} |\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2 } {\sigma_h^2} \right).\end{aligned}$$ Sum-rate maximization for the TS scheme {#TSMax} ======================================= In this section, we aim to maximize the system sum-rate by jointly optimizing the phase shift design at the IRS in both the ET and IT phases, the time scheduling of the network for ET and IT, and the power allocation at the users.
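Before formulating the problem, the per-user rate in Eq. (\ref{AchievableRate}) and the SNR enhancement discussed later in Remark \[RemarkITEnhancement\] can be evaluated directly (numpy; all numbers below are illustrative, and the factor $(1+\delta)^2$ is applied to the effective channel gain of the direct link):

```python
import numpy as np

def achievable_rate(t_i, P_ui, gain_sq, sigma2):
    """Eq. (AchievableRate): R_i = t_i * log2(1 + P_ui * gain_sq / sigma2),
    where gain_sq = |g_r^H Theta_{d,i} g_{u,i} + g_{h,i}|^2."""
    return t_i * np.log2(1.0 + P_ui * gain_sq / sigma2)

# With the aligned IRS path, the effective amplitude scales by (1 + delta),
# hence the SNR scales by (1 + delta)^2 relative to the direct link alone.
delta, g_direct_sq = 2.0, 3.0        # illustrative numbers
R_no_irs = achievable_rate(0.5, 1.0, g_direct_sq, 1.0)
R_irs = achievable_rate(0.5, 1.0, (1.0 + delta) ** 2 * g_direct_sq, 1.0)
```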
The optimization problem is formulated as $$\tag{$\textbf{P1}$} \begin{aligned} \max_{\bm{\theta}_{e}, \{\bm{\theta}_{d,i}\}_{i=1}^N, \bm{t}, \bm{\tau}, \bm{P}_u } ~ &\sum_{i=1}^N R_{i}, \\ \text{s.t.}~~~~~~~~ &\text{C1:}~ K \mu ( \tau_1 + \sum_{i=1}^N t_i) \le E_{ts,r}, \\ &\text{C2:}~ P_{u,i} t_i + P_{c,i} t_i \le E_{ts,i},~\forall i, \\ &\text{C3:}~ \tau_0 + \tau_1 \le t_0, \\ &\text{C4:}~ \sum_{i=0}^N t_i \le 1, \\ &\text{C5:}~ \tau_0, \tau_1 \ge 0, \\ &\text{C6:}~ t_i \ge 0, ~\forall i,\\ &\text{C7:}~ P_{u,i} \ge 0,~\forall i,\\ &\text{C8:}~ 0 \le \theta_{e,k} \le 2 \pi, ~\forall k,\\ &\text{C9:} ~ 0 \le \theta_{d,i,k} \le 2 \pi, ~\forall i,~\forall k, \\ \end{aligned}$$ where $\bm{t} = [t_0,t_1,\ldots,t_N]$, $\bm{\tau} = [\tau_0,\tau_1]$, $\bm{\theta}_{e} = [\theta_{e,1}, \ldots, \theta_{e,K}]$, $\bm{\theta}_{d,i} = [\theta_{d,i,1}, \ldots, \theta_{d,i,K}]$, and $\bm{P}_u = [P_{u,1}, \ldots, P_{u,N}]$. Optimal Solution to **P1** -------------------------- **P1** is a non-convex optimization problem due to the coupling of variables in the objective function and the constraints. In the following, we propose a two-step solution to the sum-rate maximization problem in **P1**. Specifically, we split **P1** into two decoupled sub-problems and solve each sub-problem separately. ### Optimizing the phase shift design for IT We first present a lemma for the optimal design of the IRS phase shifts for the IT. \[LemmaOne\] The optimal IRS phase shifts for the IT during $t_i$ ($i=1,\ldots,N$) can be found by solving the following problem $$\tag{$\textbf{P2}$} \begin{aligned} \bm{\theta}_{d,i}^* = &\arg \max_{\bm{\theta}_{d,i}} |\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2, \\ \text{s.t.} \ & 0 \le \theta_{d,i,k} \le 2 \pi, ~\forall k. \end{aligned}$$ Refer to Appendix \[App:LemmaOne\]. According to Lemma \[LemmaOne\], we proceed to solve **P2** to obtain the optimal phase shifts for the IT.
Denote $v_{d,i,k} = e^{j \theta_{d,i,k}}$, where $|v_{d,i,k}| =1$. Let $\bm{\phi}_i = \text{diag}(\bm{g}_r^H) \bm{g}_{u,i}$ and $\bm{v}_{d,i} = [v_{d,i,1},\ldots,v_{d,i,K}]^H$. Then, $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2$ can be rewritten as $|\bm{v}_{d,i}^H \bm{\phi}_i + g_{h,i} |^2 = \bm{v}_{d,i}^H \bm{\phi}_i \bm{\phi}_i^H \bm{v}_{d,i} + \bm{v}_{d,i}^H \bm{\phi}_i g_{h,i}^H + g_{h,i} \bm{\phi}_i^H \bm{v}_{d,i} + |g_{h,i}|^2$. Thus, **P2** is reformulated as $$\tag{$\textbf{P2.1}$} \begin{aligned} \max_{\bm{v}_{d,i}} ~~& \bm{v}_{d,i}^H \bm{\phi}_i \bm{\phi}_i^H \bm{v}_{d,i} + \bm{v}_{d,i}^H \bm{\phi}_i g_{h,i}^H + g_{h,i} \bm{\phi}_i^H \bm{v}_{d,i} + |g_{h,i}|^2 , \\ \text{s.t.} ~~& |v_{d,i,k}| =1, ~\forall k. \\ \end{aligned}$$ Note that **P2.1** is non-convex and difficult to solve directly. Hence, we introduce an auxiliary matrix $\bm{R}_{d,i} $ and an auxiliary vector $\bar{\bm{v}}_{d,i}$ for mathematical manipulation, which are given by $$\bm{R}_{d,i} = \begin{bmatrix} \bm{\phi}_{i} \bm{\phi}_i^H & \bm{\phi}_i g_{h,i}^H \\ \bm{\phi}_i^H g_{h,i} & 0 \end{bmatrix}, \quad \bar{\bm{v} }_{d,i} = \begin{bmatrix} \bm{v}_{d,i} \\ 1 \end{bmatrix}.$$ Based on $\bm{R}_{d,i}$ and $\bar{\bm{v}}_{d,i}$, the objective function of **P2.1** is rewritten as $\bm{\bar{v}}_{d,i}^H \bm{R}_{d,i} \bm{\bar{v}}_{d,i} + |g_{h,i}|^2 = \text{Tr} (\bm{R}_{d,i} \bar{\bm{v}}_{d,i} \bar{\bm{v}}_{d,i}^H) + |g_{h,i}|^2$. **P2.1** can then be expressed as $$\tag{$\textbf{P2.2}$} \begin{aligned} \max_{\bar{\bm{v}}_{d,i}} ~~& \text{Tr} (\bm{R}_{d,i} \bar{\bm{v}}_{d,i} \bar{\bm{v}}_{d,i}^H) + |g_{h,i}|^2 , \\ \text{s.t.} ~~ & |v_{d,i,k}| =1, ~\forall k. \\ \end{aligned}$$ Let $\bm{V}_{d,i} = \bar{\bm{v}}_{d,i} \bar{\bm{v}}_{d,i}^H$, where $\bm{V}_{d,i} \succeq 0 $ and $\text{rank} (\bm{V}_{d,i}) =1$.
**P2.2** is then equivalent to $$\tag{$\textbf{P2.3}$} \begin{aligned} \max_{\bm{V}_{d,i}}~~&\text{Tr} (\bm{R}_{d,i} \bm{V}_{d,i}) + |g_{h,i}|^2 , \\ \text{s.t.} ~~ &\text{C10:}~\bm{V}_{d,i,k,k} =1, ~\forall k, \\ &\text{C11:}~\bm{V}_{d,i} \succeq 0, \\ &\text{C12:}~\text{rank} (\bm{V}_{d,i}) = 1, \end{aligned}$$ where $\bm{V}_{d,i,k,k}$ denotes the $k$-th diagonal element of $\bm{V}_{d,i}$. **P2.3** is still non-convex due to the rank-one constraint in C12. However, using the semidefinite relaxation (SDR) technique [@Luo], we can relax the rank-one constraint to obtain a convex semidefinite programming (SDP) problem [@BoydOne], which can be optimally solved using convex optimization toolboxes, e.g., CVX [@BoydTwo]. However, the solution obtained for the relaxed version of **P2.3** by CVX may not satisfy the rank-one constraint. To achieve a rank-one solution, i.e., an approximate solution with satisfactory accuracy for **P2.3** (**P2.2**), the Gaussian randomization method is employed, which constructs a rank-one solution from the solution obtained by CVX. Denote the solution to the relaxed problem as $\bar{\bm{V}}_{d,i}$. The singular value decomposition (SVD) of $\bar{\bm{V}}_{d,i}$ is expressed as $\bar{\bm{V}}_{d,i} = \bm{U}_{d,i} \bm{\varSigma}_{d,i} \bm{U}_{d,i}^H$, where $\bm{U}_{d,i} \in \mathcal{C}^{(K+1) \times (K+1)}$ and $\bm{\varSigma}_{d,i} \in \mathcal{C}^{(K+1) \times (K+1)}$ are the unitary matrix and diagonal matrix, respectively. Then, the approximate solution for **P2.2**, denoted by $\hat{\bm{v}}_{d,i}$, can be constructed as follows $$\begin{aligned} \label{RandomVectorOne} \hat{\bm{v}}_{d,i} = \bm{U}_{d,i} \sqrt{\bm{\varSigma}_{d,i}} \bm{r}_{d,i},\end{aligned}$$ where $\bm{r}_{d,i}$ is a random vector with $\bm{r}_{d,i} \sim \mathcal{CN} (\bm{0}, \bm{I}_{K+1} )$. We generate $D_1$ random vectors and compute the corresponding objective values for **P2.2**.
The near-optimal solution to **P2.2**, $\hat{\bm{v}}_{d,i}^*$, is the one achieving the maximum objective function value of **P2.2**. The near-optimal solution to **P2.1**, denoted by $\bm{v}_{d,i}^*$, is finally recovered by $$\begin{aligned} \label{OptiVectorOne} \bm{v}_{d,i}^* = e^{j \arg \Big( \Big[\frac{\hat{\bm{v}}_{d,i}^*}{\hat{{v}}^*_{d,i, K+1}}\Big]_{(1:K) }\Big) },\end{aligned}$$ where $[\bm{\omega}]_{(1:M)}$ denotes the first $M$ elements of $\bm{\omega}$, and $\hat{{v}}_{d,i, K+1}^*$ denotes the $(K+1)$-th element of $\hat{\bm{v}}_{d,i}^*$ [@Chu]. According to [@SPR], the SDR technique followed by a sufficiently large number of Gaussian randomizations guarantees at least a $\frac{\pi}{4}$-approximation of the maximum objective function value of **P2.1**. Algorithm \[Alg:One\] outlines the procedure for optimizing the IRS phase shift design in the IT phase: solve the relaxed version of **P2.3** by CVX using the SDR technique to obtain $\bar{\bm{V}}_{d,i}$; compute the SVD of $\bar{\bm{V}}_{d,i}$ to obtain $\bm{U}_{d,i}$ and $\bm{\varSigma}_{d,i}$; then initialize the candidate set $\mathcal{Q} = \emptyset$ and perform the Gaussian randomizations. According to Lemma \[LemmaOne\], we can further obtain the following corollary. \[CorollaryOne\] The optimal phase shifts for the IT will align the signal of the HAP-IRS-$U_i$ link with the signal of the HAP-$U_i$ link, i.e., $$\begin{aligned} \label{EqCorollaryOne} \bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i}= \delta g_{h,i},\end{aligned}$$ where $\delta$ is a positive scalar.[^2] Refer to Appendix \[App:CorollaryOne\]. \[RemarkITEnhancement\] From Corollary \[CorollaryOne\], we can observe that for a given $P_{u,i}$, the received SNR at the HAP during $t_i$ with the assistance of the IRS can be enhanced by a factor of up to $(1+\delta)^2$ compared with that without the IRS. It should be noted that $\delta$ is usually proportional to the number of reflecting elements. 
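The Gaussian randomization and unit-modulus recovery steps can be sketched as follows. This is a minimal numerical illustration: the relaxed SDR output $\bar{\bm{V}}_{d,i}$ is replaced by a synthetic, nearly rank-one PSD matrix (a real run would obtain it from the SDP solver), and the channel values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
# synthetic instance of P2.1: cascaded terms phi and direct channel g_h
phi = rng.standard_normal(K) + 1j * rng.standard_normal(K)
g_h = complex(rng.standard_normal() + 1j * rng.standard_normal())

def objective(v):
    # |v^H phi + g_h|^2 for a unit-modulus phase vector v
    return abs(np.vdot(v, phi) + g_h) ** 2

# stand-in for the relaxed SDR solution: PSD and close to, but not
# exactly, rank one
vbar = np.append(np.exp(1j * (np.angle(phi) - np.angle(g_h))), 1.0)
V_bar = np.outer(vbar, vbar.conj()) + 0.3 * np.eye(K + 1)

U, s, _ = np.linalg.svd(V_bar)          # V_bar = U diag(s) U^H (Hermitian PSD)
best_v, best_val = None, -np.inf
for _ in range(200):                    # D_1 Gaussian randomizations
    r = (rng.standard_normal(K + 1) + 1j * rng.standard_normal(K + 1)) / np.sqrt(2)
    v_hat = U @ (np.sqrt(s) * r)
    v = np.exp(1j * np.angle(v_hat[:K] / v_hat[K]))   # unit-modulus recovery
    if objective(v) > best_val:
        best_v, best_val = v, objective(v)

opt = (np.abs(phi).sum() + abs(g_h)) ** 2   # aligned-phase optimum of P2.1
assert best_val <= opt + 1e-9               # never exceeds the true optimum
```

With enough randomizations the best candidate typically lands very close to the aligned-phase optimum, consistent with the $\frac{\pi}{4}$-approximation guarantee quoted above.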
Hence, the system sum-rate can be significantly improved with the assistance of the IRS consisting of a large number of reflecting elements. ### Optimizing phase shift design for ET, time scheduling, and power allocation {#IIIA2} According to Lemma \[LemmaOne\], **P1** can be simplified as $$\tag{$\textbf{P3}$} \begin{aligned} \max_{\bm{t}, \bm{\tau}, \bm{\theta}_{e}, \bm{P}_u} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{P_{u,i} \bar{\gamma}_i} {\sigma_h^2}), \\ \text{s.t.} ~~& \text{C1}-\text{C8}, \end{aligned}$$ where $\bar{\gamma}_i = |\bm{g}_r^H \bm{\Theta}_{d,i}^* \bm{g}_{u,i} + g_{h,i}|^2$, and $\bm{\Theta}_{d,i}^*$ is obtained via Algorithm \[Alg:One\]. **P3** is still non-convex because the variables are coupled in the objective function and the constraints. We introduce auxiliary variables $v_{e,k} = e^{j \theta_{e,k}},~\forall k$ and $e_{u,i} = P_{u,i} t_i,~ \forall i$. We also set $\bm{\psi}_i = \text{diag}(\bm{h}_{u,i}^H ) \bm{h}_r$. Then, we have $\bm{v}_{e} = [v_{e,1}, \ldots, v_{e,K}]^H$, and $\bm{e}_u=[e_{u,1},\ldots,e_{u,N}]$. Let $\bar{\bm v}_{e} = [\bm{v}_{e}^H,1]^H$ and $\bm{V}_{e} = \bar{\bm{v}}_{e} \bar{\bm{v}}_{e}^H$, where $\bm{V}_{e} \succeq 0 $ and $\text{rank} (\bm{V}_{e}) = 1$. 
Based on these new variables, the constraint C2 is recast as follows: $$\begin{aligned} & \text{C13:}~e_{u,i} + P_{c,i} t_i \le \eta P_h |h_{h,i}|^2 \tau_0 \nonumber \\ &~~~~~ + \eta P_h [\text{Tr}(\bm{R}_{e,i} \bm{V}_{e}) + |h_{h,i}|^2 ] \tau_1,~\forall i,\end{aligned}$$ where $${\bm{R}}_{e,i} = \begin{bmatrix} \bm{\psi}_{i} \bm{\psi}_i^H & \bm{\psi}_i h_{h,i}^H \\ \bm{\psi}_i^H h_{h,i} & 0 \end{bmatrix}.$$ Then, **P3** can be rewritten as $$\tag{$\textbf{P3.1}$} \begin{aligned} \max_{\bm{t}, \bm{\tau}, \bm{V}_{e}, \bm{e}_u} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{e_{u,i} \bar{\gamma}_i} {t_i \sigma_h^2}), \\ \text{s.t.} ~~& \text{C1},~ \text{C3}-\text{C6},~\text{C13}, \\ &\text{C14:}~ e_{u,i} \ge 0,~ \forall i, \\ & \text{C15:}~\bm{V}_{e} \succeq 0, \\ &\text{C16:}~\text{rank} (\bm{V}_{e}) = 1. \end{aligned}$$ Due to the rank-one constraint in C16 and the coupling of ${\bm{V}}_{e}$ and $\tau_1$ in C13, **P3.1** is still non-convex and difficult to solve directly. However, it is straightforward to obtain the optimal duration of the first sub-slot in the ET phase, i.e., $\tau_0$, as stated in the following proposition. \[ProOptiTime\] The optimal duration of the first sub-slot in the ET phase can be obtained as $$\begin{aligned} \label{Optitau0} \tau_0^* = \frac{K \mu}{K \mu + \eta P_h ||\bm{h}_r||^2}.\end{aligned}$$ Refer to Appendix \[App:ProOptiTime\]. From Proposition \[ProOptiTime\], we can observe that the duration of the first sub-slot in the ET phase is mainly determined by the IRS's setting, e.g., the number of passive reflecting elements, each element's circuit power consumption, and the channel power gain between the HAP and IRS. If each element has a higher circuit power consumption, the IRS needs more time to harvest sufficient energy, which leaves less time for other network operations, i.e., users' EH with the assistance of the IRS and users' IT. 
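Under the common assumption that $||\bm{h}_r||^2$ grows linearly with $K$ (each element contributing an average power gain $g$), $\tau_0^*$ is invariant to $K$, since $\tau_0^* = K\mu/(K\mu + \eta P_h K g) = \mu/(\mu + \eta P_h g)$. A short numerical check with hypothetical parameter values:

```python
from math import isclose

# hypothetical values: EH efficiency, HAP transmit power (W),
# per-element circuit power (W), per-element channel power gain
eta, P_h, mu, g = 0.8, 1.0, 0.01, 1e-3

def tau0_star(K):
    # optimal first sub-slot duration, with ||h_r||^2 modeled as K * g
    h_r_sq = K * g
    return K * mu / (K * mu + eta * P_h * h_r_sq)

# tau0* does not change with K when ||h_r||^2 scales linearly in K
assert isclose(tau0_star(10), tau0_star(100))
assert isclose(tau0_star(10), mu / (mu + eta * P_h * g))
```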
However, increasing the number of elements does not necessarily increase the IRS's EH time, because each element can harvest energy from the HAP individually. We now proceed to solve **P3.1** with $\tau_0^*$ obtained in Proposition \[ProOptiTime\]. To solve **P3.1**, we first fix $\tau_1$ and optimize the time and energy allocation in the IT phase as well as the IRS phase shift design for the ET phase. We can then find the optimal $\tau_1$ by a one-dimensional search over $[0,1-\tau_0^*]=\big[0, 1- K \mu / (K \mu + \eta P_h ||\bm{h}_r||^2)\big]$. With fixed $\tau_1$, **P3.1** is reformulated as $$\tag{$\textbf{P3.2}$} \begin{aligned} \max_{\bar{\bm{t}}, \bm{V}_{e}, \bm{e}_u} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{e_{u,i} \bar{\gamma}_i} {t_i \sigma_h^2}), \\ \text{s.t.} ~~& \text{C1}, ~\text{C6},~\text{C13}-\text{C16}, \\ & \sum_{i=1}^N t_i \le 1- \tau_0^* - \tau_1, \end{aligned}$$ where $\bar{\bm {t}}=[t_1,...,t_N]$. Relaxing the rank-one constraint in C16, **P3.2** becomes a convex optimization problem [@BoydOne]. We thus use the CVX tool [@BoydTwo] to solve the relaxed **P3.2** and obtain its optimal solution $\{ t_1^*,\ldots,t_N^*, \bar{e}_{u,1},\ldots,\bar{e}_{u,N}, \bar{\bm{V}}_e \}$. Similar to **P2.3**, the obtained $\bar{\bm{V}}_e$ generally does not satisfy the rank-one constraint. Hence, the Gaussian randomization method is adopted to construct an approximate rank-one solution, which is given by $$\begin{aligned} \label{RandomVectorTwo} \hat{\bm{v}}_e = \bm{U}_e \sqrt{\bm{\varSigma}_e} \bm{r}_e,\end{aligned}$$ where $\bm{U}_{e} \in \mathcal{C}^{(K+1) \times (K+1)}$ and $\bm{\varSigma}_{e} \in \mathcal{C}^{(K+1) \times (K+1)}$ are the unitary matrix and diagonal matrix resulting from the SVD of $\bar{\bm{V}}_e$, and $\bm{r}_{e} \sim \mathcal{CN} (\bm{0}, \bm{I}_{K+1} )$. Note that as the objective function is an increasing function of $e_{u,i}$, C13 must be active at the optimal solution. 
Therefore, based on the generated random vectors, the energy allocation of the users is computed as $$\begin{aligned} \label{RecomputedEnergy} \hat{e}_{u,i} &= \Big(\eta P_h |h_{h,i}|^2 \tau_0^* + \eta P_h \text{Tr}(\bm{R}_{e,i} \hat{\bm{v}}_e \hat{\bm{v}}_e^H) \tau_1 \nonumber \\ & + \eta P_h |h_{h,i}|^2 \tau_1 - P_{c,i} t_i^*\Big)^+, ~\forall i,\end{aligned}$$ where $(x)^+$ denotes $\max (x,0) $. Finally, $\hat{\bm{v}}_{e}^*$ is the vector achieving the maximum objective function value and $e_{u,i}^*,~\forall i$ is the corresponding energy allocation solution obtained from . The procedure for solving the sum-rate maximization problem for the TS scheme is summarized in Algorithm \[Alg:Two\]. By running Algorithm \[Alg:Two\], we can obtain the near-optimal solution for **P1**. Note that the accuracy of the obtained solution is related to the number of randomizations for the Gaussian randomization method used in each sub-problem and the step-size for updating $\tau_1$ in the second sub-problem. Hence, we can achieve a solution for **P1** with satisfactory accuracy by increasing the number of randomizations and using a smaller step-size. In each iteration of Algorithm \[Alg:Two\], the relaxed version of **P3.2** is solved with fixed $\tau_1$ to obtain its optimal solution $\bar{\bm{V}}_{e}$; the SVD of $\bar{\bm{V}}_{e}$ is computed to obtain $\bm{U}_{e}$ and $\bm{\varSigma}_{e}$; the objective function value of **P3.2** is calculated and denoted by $R_{\text{sum}}(\bar{D})$; and $\tau_1$ is then updated as $\tau_1 = \tau_1 +\Delta$. Random phase shifts with optimized resource allocation for the TS scheme {#RndPhase} ------------------------------------------------------------------------ To reduce the computational complexity and show more insights about resource allocation, we consider a special case with random design of phase shifts and focus on the time and power allocation optimization in the IRS-assisted WPCN. 
As will be shown in Section \[Simulation\], using IRS is beneficial for improving the performance of WPCN even with randomly designed phase shifts [@WuIRS; @Arun]. Letting $e_{u,i} = P_{u,i} t_i $ and $\gamma_{u,i} = |\bm{h}_{u,i}^H \bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i} |^2$, we have $$\begin{aligned} \text{C17:}~ e_{u,i} + P_{c,i} t_i \le \eta P_h |h_{h,i}|^2 \tau_0 + \eta P_h \gamma_{u,i} \tau_1,~\forall i,\end{aligned}$$and the sum-rate maximization problem with random phase shifts is formulated as $$\tag{$\textbf{P4}$} \begin{aligned} \max_{\bm{t}, {\tau}, \bm{e}_u } ~& \sum_{i=1}^N t_i \log_2(1+ \frac{\gamma_{d,i}}{\sigma_h^2} \frac{e_{u,i} }{t_i }), \\ \text{s.t.} \ \ & \text{C1},~\text{C3}-\text{C6}, ~\text{C17},~e_{u,i}\ge 0, \forall i, \end{aligned}$$where $\gamma_{d,i} =|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2$. The constraint C17 is an equality at the optimal solution as we discussed earlier. Hence, we have $$\begin{aligned} \label{EqualEnergy} e_{u,i}^* = \eta P_h |h_{h,i}|^2 \tau_0^* + \eta P_h \gamma_{u,i} \tau_1^* - P_{c,i} t_i^* ,~\forall i\end{aligned}$$ Substituting into $R_i$, we have $$\begin{aligned} R_i = t_i \log_2(1+ \frac{a_i + b_i \tau_1}{t_i} -c_i),\end{aligned}$$ where $a_i = \eta P_h |h_{h,i}|^2 \gamma_{d,i} \tau_0 / \sigma_h^2$, $b_i = \eta P_h \gamma_{u,i} \gamma_{d,i} / \sigma_h^2$ and $c_i = P_{c,i} \gamma_{d,i} / \sigma_h^2$. Proposition \[ProOptiTime\] holds here as well. Hence, **P4** is modified as $$\tag{$\textbf{P4.1}$} \begin{aligned} \max_{\bar{\bm {t}}, {\tau}_1} ~& \sum_{i=1}^N t_i \log_2(1+ \frac{a_i + b_i \tau_1}{t_i } -c_i), \\ \text{s.t.} \ \ & \text{C4},~\text{C6}, ~\tau_1 \ge 0, \end{aligned}$$ It can be verified that **P4.1** is a convex optimization problem [@BoydOne], which can be solved by standard convex optimization techniques, e.g., Lagrange duality method. 
The Lagrangian of **P4.1** is given by $$\begin{aligned} \mathcal{L} (\bar{\bm {t}}, \rho, \tau_1) &= \sum_{i=1}^N t_i \log_2(1+ \frac{a_i + b_i \tau_1}{t_i} -c_i) \nonumber \\ &- \rho \Big[\tau_0 + \tau_1 + \sum_{i=1}^N t_i -1 \Big],\end{aligned}$$ where $\rho \ge 0$ is the Lagrange multiplier associated with the constraint C4. \[ProOptiTimeAllocation\] With random design of phase shifts, the optimal time scheduling for the TS scheme is given by $$\begin{aligned} \label{Opttau1} &\tau_1^* = \frac{1- \frac{K \mu }{K \mu + \eta P_h ||\bm{h}_r||^2 } - \sum_{i=1}^N \frac{a_i}{z_i^* + c_i} }{1+ \sum_{i=1}^N \frac{b_i}{z_i^* + c_i } }, \\ \label{Optti} & t_i^* = \frac{a_i + b_i\tau_1^*} { z_i^* + c_i }, ~\forall i,\end{aligned}$$ where $z_i^* >0$ is the unique solution of $$\begin{aligned} \log_2(1+z_i) - \frac{z_i+c_i}{\ln(2) (1+z_i) } = \rho^*,\end{aligned}$$ and $\rho^*$ is the optimal dual variable. Refer to Appendix \[App:ProOptiTimeAllocation\]. Using and Propositions \[ProOptiTime\] and \[ProOptiTimeAllocation\], the optimal energy allocation at each user can be easily obtained. Sum-rate maximization for the PS scheme {#PSMax} ======================================= In this section, we investigate the optimal solution to the sum-rate maximization problem for the PS scheme. The problem is formulated as $$\tag{$\textbf{P5}$} \begin{aligned} \max_{ \bm{t}, \bm{\theta}_{e}, \{\bm{\theta}_{d,i}\}_{i=1}^{N}, \bm{P}_u, {\beta}_e } ~ &\sum_{i=1}^N R_{i}, \\ \text{s.t.} \ \ &\text{C4},~\text{C6}-\text{C9},\\ & \text{C18:}~ K \mu (t_0 + \sum_{i=1}^N t_i ) \le E_{ps,r},~ \forall i, \\ & \text{C19:}~ P_{u,i} t_i + P_{c,i} t_i \le E_{ps,i}, ~\forall i, \\ & \text{C20:}~ 0 \le \beta_{e} \le 1. \end{aligned}$$ Optimal solution to **P5** -------------------------- Similar to **P1**, **P5** is a non-convex optimization problem due to the coupled variables in the objective function and the constraints. It is straightforward to observe that Lemma \[LemmaOne\] also holds for **P5**. 
Hence, the optimal phase shifts for the IT phase can be found from Algorithm \[Alg:One\]. Accordingly, **P5** can be reformulated as $$\tag{$\textbf{P5.1}$} \begin{aligned} \max_{ \bm{t}, \bm{\theta}_{e}, \bm{P}_u, \bm{\beta}_e} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{P_{u,i} \bar{\gamma}_i} {\sigma_h^2}), \\ \text{s.t.} ~~&\text{C4}, ~\text{C6}-\text{C8}, ~\text{C18}-\text{C20}. \end{aligned}$$ \[LemmaFeasible\] To guarantee that the sum-rate of the PS scheme is strictly positive, the maximum power harvested at the IRS from the HAP must be larger than the IRS's circuit power consumption, i.e., $$\begin{aligned} \label{FeasibleCondition} K \mu < \eta P_h ||\bm{h}_r ||^2.\end{aligned}$$ Refer to Appendix \[App:LemmaFeasible\]. From Lemma \[LemmaFeasible\], we can observe that unlike the TS scheme, the PS scheme cannot always be used, i.e., it is inapplicable if is not satisfied. However, we can increase the transmit power at the HAP and/or reduce the distance between the HAP and IRS to enable the PS scheme. In the following, we investigate **P5.1** under the condition that is satisfied, because otherwise using the IRS would be infeasible. Following the same steps as in Section \[IIIA2\], the sum-rate maximization problem is formulated as $$\tag{$\textbf{P5.2}$} \begin{aligned} \max_{\bm{t}, \bm{e}_u, \beta_e, \bm{V}_e} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{\bar{\gamma}_i} {\sigma_h^2}\frac{e_{u,i}}{t_{i}}), \\ \text{s.t.} ~~& \text{C4},~\text{C6},~\text{C14},~\text{C18},~\text{C20},\\ &\text{C21:}~e_{u,i}+ P_{c,i} t_i \le \eta P_h [\text{Tr}(\bm{\bar{R}}_{e,i} \bm{V}_{e}) + |h_{h,i}|^2 ] t_0, ~\forall i,\\ &~~~~~~\bm{V}_{e} \succeq 0, \\ &~~~~~~\text{rank} (\bm{V}_{e}) = 1. 
\end{aligned}$$ where $${\bm{\bar{R}}}_{e,i} = \begin{bmatrix} \beta_e^2\bm{\psi}_{i} \bm{\psi}_i^H & \beta_e\bm{\psi}_i h_{h,i}^H \\ \beta_e\bm{\psi}_i^H h_{h,i} & 0 \end{bmatrix}.$$ \[Relationship\] The optimal value of the amplitude reflection coefficient $\beta_e$ is obtained as $$\begin{aligned} \label{Opt_Beta} \beta_e^* = \sqrt{1-\dfrac{K\mu}{\eta P_h||\bm{h}_{r}||^2 t_0^*}},\end{aligned}$$ where $\frac{K \mu}{\eta P_h ||\bm{h}_r ||^2 } < t_0^* \le 1$. The proof of Proposition \[Relationship\] is similar to that of Proposition \[ProOptiTime\] and is thus omitted for brevity. According to Proposition \[Relationship\], $t_0$ is increasing with respect to $\beta_e$. This is because a larger $\beta_e$ reduces the instantaneous power harvested by the IRS, so that the IRS needs more time to harvest sufficient energy. For the scenario where the transmit power at the HAP is small, a longer duration for the ET phase is required. To solve **P5.2**, we first fix $t_0$ and optimize the other variables. The optimal value of $t_{0}$ can then be obtained by a one-dimensional search over $\big( K \mu / (\eta P_h ||\bm{h}_r ||^2), 1 \big]$. Given $t_0$, the optimal value of $\beta_e$ can be found from Proposition \[Relationship\], and we have the following optimization problem: $$\tag{$\textbf{P5.3}$} \begin{aligned} \max_{\bar{\bm{t}}, \bm{e}_u, \bm{V}_e} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{\bar{\gamma}_i} {\sigma_h^2}\frac{e_{u,i}}{t_{i}}), \\ \text{s.t.} ~~& \text{C4},~\text{C14},~\text{C21},\\ &~\bm{V}_{e} \succeq 0, \\ &~\text{rank} (\bm{V}_{e}) = 1. \end{aligned}$$ After the relaxation of the rank-one constraint, problem **P5.3** is similar to **P3.2** in Section \[IIIA2\] and can be solved following the same procedure. For brevity and to avoid repetition, we do not repeat the details of solving problem **P5.3** here. Algorithm \[Alg:Three\] describes the process of solving the sum-rate maximization problem for the PS scheme (**P5**). 
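A small numerical check of Proposition \[Relationship\] and the feasibility condition of Lemma \[LemmaFeasible\], using hypothetical channel and power values ($\mu = 1$ dBm $\approx 1.26$ mW):

```python
from math import sqrt

# hypothetical values: efficiency, HAP power (W), number of elements,
# per-element circuit power (W, ~1 dBm), and channel power gain ||h_r||^2
eta, P_h, K, mu, h_r_sq = 0.8, 1.0, 20, 1.26e-3, 0.05

t0_min = K * mu / (eta * P_h * h_r_sq)      # feasibility threshold for t0
assert 0 < t0_min < 1                       # i.e., K mu < eta P_h ||h_r||^2

def beta_e_star(t0):
    # optimal amplitude reflection coefficient for t0 in (t0_min, 1]
    return sqrt(1 - K * mu / (eta * P_h * h_r_sq * t0))

# a larger beta_e requires a larger t0: the IRS reflects more of the
# incident power and needs more time to cover its circuit consumption
assert beta_e_star(1.0) > beta_e_star(0.8) > 0
```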
In each iteration of Algorithm \[Alg:Three\], $\beta_e^* (t_0)$ is obtained from , and the one-dimensional search terminates once $t_0>1$. Random phase shifts with optimized resource allocation for the PS scheme ------------------------------------------------------------------------ Similar to Section \[RndPhase\], we consider the random design of phase shifts for the PS scheme and optimize the resource allocation in the network. With randomly generated phase shifts and after setting $e_{u,i}=P_{u,i}t_i,~\forall i$, we have the following resource allocation problem: $$\tag{$\textbf{P6}$} \begin{aligned} \max_{\bm{t}, \bm{e}_u, \beta_e} ~ &\sum_{i=1}^N t_i \log_2(1 + \frac{{\gamma}_{d,i} } {\sigma_h^2}\frac{e_{u,i}}{t_{i}}), \\ \text{s.t.} ~~& \text{C4},~\text{C6},~\text{C14},~\text{C18},~\text{C20}, \\ &\text{C22:}~~e_{u,i}+ P_{c,i} t_i \le \eta P_h \bar{\gamma}_{u,i} t_0, ~\forall i, \end{aligned}$$ where $\bar{\gamma}_{u,i} = |\bm{h}_{u,i}^H \beta_e \bar{\bm{\Theta}}_{e} \bm{h}_r + h_{h,i} |^2$. It can be observed that Proposition \[Relationship\] also holds for **P6**. Due to the non-convexity of the constraint C22, it is still challenging to solve **P6**. Hence, we also first fix $t_0$ and optimize the time and energy allocation in the IT phase. We then find the optimal value of $t_0$ by searching over $\big(K \mu / (\eta P_h ||\bm{h}_r ||^2) , 1 \big]$. We know from the previous discussions that C22 must be met with equality at the optimal solution, i.e., $$\begin{aligned} \label{EqualEnergy2} e_{u,i}^*+ P_{c,i} t_i^* = \eta P_h \bar{\gamma}_{u,i} t_0^*.\end{aligned}$$ Consequently, given $t_0$ and $\beta_e$, **P6** is rewritten as $$\tag{$\textbf{P6.1}$} \begin{aligned} \max_{\bar{\bm{t}}} ~ &\sum_{i=1}^N t_i \log_2(1 + d_i \frac{t_0}{t_i} -c_i ), \\ \text{s.t.} ~~& \text{C4},~\text{C6}, \end{aligned}$$ where $d_i = \eta P_h \gamma_{d,i} \bar{\gamma}_{u,i} / \sigma_h^2$. 
The Lagrangian of the above convex problem is given by $$\begin{aligned} &\mathcal{L}(\bm{\bar{t}}, \zeta)= \sum_{i=1}^N t_i \log_2(1 + d_i\frac{t_0}{t_i} -c_i )-\zeta \big(t_0+\sum_{i=1}^{N}t_i -1\big), \end{aligned}$$ where $\zeta \ge 0$ is the Lagrange multiplier associated with the constraint C4. \[ProFive\] With fixed $t_0$ and $\beta_e$, the optimal time allocation in the IT phase for the PS scheme is given by $$\begin{aligned} t_i^*&=\frac{d_i}{w_i^*+c_i},~\forall i,\end{aligned}$$ where $w_i^* >0$ is the unique solution of $$\begin{aligned} \log_2(1+ w_i)-\frac{w_i+c_i}{\ln(2) (1+w_i)}=\zeta^*,\end{aligned}$$ and $\zeta^*$ is the optimal dual variable. The proof of Proposition \[ProFive\] is similar to that of Proposition \[ProOptiTimeAllocation\] and is thus omitted for brevity. The optimal energy allocation can then be easily found via and Proposition \[ProFive\]. ![Simulation setup for the IRS-assisted WPCN. []{data-label="LocationIllustration"}](LocationIllustration.pdf){width="3.5"} Performance Evaluation {#Simulation} ====================== In this section, we present numerical results to evaluate the performance of the proposed solutions for the IRS-assisted WPCN. The simulated network topology is a 2-D coordinate system as shown in Fig. \[LocationIllustration\], where the coordinates of the HAP and the IRS are given as $(0,0)$ and $(x_r,x_h)$, respectively, and the users are randomly deployed within a circular area centered at $(x_u, 0)$ with radius 2 m. The large-scale path-loss is modeled as $A (d/{d_0})^{-\alpha}$, where $A$ is the path-loss at the reference distance $d_0=1$ m and is set to $A=-10$ dB, $d$ denotes the distance between two nodes, and $\alpha$ is the path-loss exponent. 
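For reference, the large-scale fading model can be reproduced as follows; the distances below are illustrative, and the exponents 3.6 and 2.2 are the values used in the evaluation for the HAP-user and HAP-IRS/IRS-user links, respectively:

```python
def path_loss(d, alpha, A_dB=-10.0, d0=1.0):
    # large-scale path-loss A * (d / d0)^(-alpha) in linear scale,
    # with A = -10 dB at the reference distance d0 = 1 m
    A = 10 ** (A_dB / 10)
    return A * (d / d0) ** (-alpha)

# illustrative distances: HAP-IRS link at (3, 0.5) m, a user around 10 m away
pl_hap_irs = path_loss((3.0 ** 2 + 0.5 ** 2) ** 0.5, 2.2)
pl_hap_user = path_loss(10.0, 3.6)

# the short, carefully deployed HAP-IRS link is orders of magnitude stronger
assert pl_hap_irs > 100 * pl_hap_user
```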
The path-loss exponents of the links between the HAP and the users are assumed to be $3.6$ since the users are randomly deployed, while the path-loss exponents of the links between the HAP and IRS and between the IRS and the users are set as $2.2$ because the IRS can be carefully deployed to avoid severe signal blockage [@WuSWIPT]. The small-scale channel coefficients are generated as circularly symmetric Gaussian random variables with zero mean and unit variance. The results are obtained by averaging over different channel realizations. Unless otherwise stated, the other parameters are given as follows: $\eta = 0.8$, $\sigma_h^2 = -110$ dBm, $\mu=1$ dBm, $P_{c,i} = 5$ dBm, $N=3$, $K=20$, $P=30$ dBm, $x_r = 3$ m, $x_h = 0.5$ m, and $ x_u = 10$ m. The scheme with random design of phase shifts and the scheme without IRS are used as benchmarks for performance comparisons. Fig. \[HAPPower\] shows the influence of the HAP’s transmit power on the average system sum-rate. As expected, the sum-rate is improved with the increase of the HAP’s transmit power because the users can harvest more energy when the HAP’s transmit power is higher. Further, according to Proposition \[ProOptiTime\], the time needed for the IRS’s EH in the TS scheme is reduced when the transmit power of the HAP is increased. This gives more time for the IRS to assist in downlink ET from the HAP to the users, which boosts the harvested energy at the users and consequently improves the sum-rate. As for the PS scheme, increasing the HAP’s transmit power results in a higher amplitude reflection coefficient according to Proposition \[Relationship\], which enhances the users’ harvested energy. It can be seen that our proposed schemes with optimized phase shift design outperform the benchmark ones for both the TS and PS schemes. We can also observe that the performance gap between the proposed optimized design and the benchmark schemes increases as the HAP’s transmit power gets larger. 
The figure also shows that when $P \le 20$ dBm, there is no gain in using the IRS for improving the performance of WPCN under the PS scheme, which is consistent with what has been noted in Lemma \[LemmaFeasible\]. It is also worth mentioning that even the schemes with random phase shifts can bring performance gains to the conventional WPCN. That is because the RF energy can still be transferred from the HAP to the users through the reflecting links [@Arun]. This confirms the effectiveness of using the IRS for performance enhancement even if the channel knowledge of the IRS’s links for optimized phase shift design is unavailable. ![Sum-rate versus HAP’s transmit power. []{data-label="HAPPower"}](HAPPower.pdf){width="4"} ![Sum-rate versus number of IRS reflecting elements. []{data-label="IRSEelements"}](NumIRSEl.pdf){width="4"} ![Sum-rate versus number of users. []{data-label="NumUsers"}](NumUser.pdf){width="4"} ![Sum-rate versus distance between HAP and users. []{data-label="Distance"}](Distance.pdf){width="4"} In Fig. \[IRSEelements\], we study the impact of the number of IRS reflecting elements on the average sum-rate. It can be clearly observed that our proposed solutions can achieve a significant gain in terms of the average sum-rate compared with the other schemes. For the scheme without IRS, the average sum-rate is very small. This is because the received power at each user from the HAP through the direct link only is much smaller than its circuit power consumption; thus most of the transmission block time is used to harvest energy to power its circuit, and the remaining energy and time for the IT are very limited. However, the received power from the HAP through the direct and reflecting links can be comparable to the circuit power consumption under our proposed scheme. Hence, each user has relatively sufficient energy and time to transmit data. 
As the number of IRS elements increases, the received power at each user becomes larger, which leads to better system performance. Next, we study the effect of the number of network users on the average sum-rate in Fig. \[NumUsers\]. Again, the proposed IRS-assisted WPCN with optimal phase shift design notably outperforms the other two schemes. It can be observed that the average sum-rate increases with the number of WPCN users because more energy can be harvested as the number of users grows. Nevertheless, the average sum-rate stops increasing when the number of users reaches a high value, e.g., over 10 users. The reason for this observation is that adding new users implies that more time is needed for the IT phase, which consequently decreases the ET phase duration. A shorter ET duration in the TS scheme means that less time will be left for the IRS to assist in the downlink ET. In the PS scheme, the IRS needs to decrease its amplitude reflection coefficient $\beta_e$ to compensate for the loss of energy incurred by shortening the ET duration. Therefore, the gain brought by increasing the number of users is neutralized by the shortened ET time, and the average sum-rate converges to an upper bound. Finally, we investigate the effect of the users’ location on the sum-rate performance. As shown in Fig. \[Distance\], increasing $x_u$ results in sum-rate reduction because as $x_u$ increases, the users move further from both the HAP and IRS. Therefore, the signals received by the users in the ET phase from both the HAP and the IRS become weaker. Similarly, the signals received by the HAP in the uplink IT also become weaker. Once again, the proposed schemes can significantly outperform the other two schemes. 
Conclusions {#Conclusion} =========== This paper has proposed a hybrid-relaying scheme empowered by a self-sustainable IRS to enhance the performance of WPCN, where the IRS is deployed to improve the efficiency of downlink ET from the HAP to a number of users and uplink IT from the users to the HAP. In addition, we have proposed the TS and PS schemes for the IRS to harvest sufficient energy from the HAP to power its operations and investigated the system sum-rate maximization problems for both schemes. To address the non-convexity of each formulated problem, we have developed a two-step method to efficiently obtain a near-optimal solution with satisfactory accuracy. The special problems with random phase shifts have also been investigated to reveal the structure of the time and energy allocation. Then, we have performed simulations to demonstrate the superiority of our proposed schemes, which have shown that our proposed schemes can achieve a remarkable sum-rate gain compared to the baseline WPCN without IRS. From the simulation results, we have also observed that the PS scheme can achieve better performance than the TS scheme if the transmit power at the HAP is large enough or the channel between the HAP and IRS is strong. However, compared to the PS scheme, the TS scheme can be more widely applied because it is free from the constraint defined in Lemma \[LemmaFeasible\] for the PS scheme. Proof of Lemma \[LemmaOne\] {#App:LemmaOne} =========================== It is straightforward to see that $R_i$ is an increasing function with respect to $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2$ for $i=1,\ldots,N$. Therefore, at the optimal solution of **P1**, $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2$ must be maximized $\forall i$. In addition, $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2$ only depends on $\bm{\theta}_{d,i}$ ($\bm{\Theta}_{d,i}$), where $0 \le \theta_{d,i,k} \le 2 \pi,~\forall k$. 
As a result, for any given and feasible $\bm{t}$ and $\bm{P}_u$, maximizing the objective function of **P1** is equivalent to solving **P2** independently for $i=1,\ldots,N$. This thus proves Lemma \[LemmaOne\]. Proof of Corollary \[CorollaryOne\] {#App:CorollaryOne} =================================== $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2 $ can be rewritten as $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i}|^2 + |g_{h,i}|^2 + 2 |\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i}| |g_{h,i}| \cos{\alpha}$, where $$\begin{aligned} \alpha = \arctan \frac{ \text{Im}(\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i})} {\text{Re} (\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i})} - \arctan \frac{\text{Im} (g_{h,i})} {\text{Re}(g_{h,i})}.\end{aligned}$$ It is obvious that the maximum of $|\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i} + g_{h,i}|^2 $ is achieved if $\alpha = 0$, i.e., when $\bm{g}_r^H \bm{\Theta}_{d,i} \bm{g}_{u,i}$ and $g_{h,i}$ are aligned. This thus proves Corollary \[CorollaryOne\]. Proof of Proposition \[ProOptiTime\] {#App:ProOptiTime} ==================================== It can be verified that the objective function of **P3.1** is an increasing function with respect to $t_i$ for $i=1,\ldots,N$. Hence, at the optimal solution, the constraint C1 must be satisfied with equality. We can also observe that the right-hand side of C1 is increasing with respect to $\tau_0$. Thus, the constraint C3 must be met with equality at the optimal solution, because otherwise we could always increase $\tau_0$, as a result of which $t_i$ could be increased. Similarly, the constraint C4 must also be an equality at the optimal solution, as otherwise we could increase $t_0$, which leads to the increase of $\tau_0$. Based on the three equalities from the constraints C1, C3 and C4, we can straightforwardly obtain the optimal value of $\tau_0$ as given by . 
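The alignment argument in the proof of Corollary \[CorollaryOne\] (the combined gain is maximized at $\alpha = 0$) can be checked numerically: when each reflected term is rotated onto the phase of $g_{h,i}$, no other unit-modulus phase choice does better. The instance below is synthetic:

```python
import cmath
import random

random.seed(2)
K = 5
# synthetic cascaded channel terms phi_k and direct channel g_h
phi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(K)]
g_h = complex(random.gauss(0, 1), random.gauss(0, 1))

def gain(theta):
    # |v^H phi + g_h|^2 with v_k = e^{j theta_k}
    s = sum(cmath.exp(-1j * t) * p for t, p in zip(theta, phi)) + g_h
    return abs(s) ** 2

# aligned phases: each term e^{-j theta_k} phi_k points along g_h
aligned = [cmath.phase(p) - cmath.phase(g_h) for p in phi]
best = gain(aligned)                      # equals (sum_k |phi_k| + |g_h|)^2
assert abs(best - (sum(abs(p) for p in phi) + abs(g_h)) ** 2) < 1e-9

for _ in range(1000):                     # random phases never beat alignment
    theta = [random.uniform(0, 2 * cmath.pi) for _ in range(K)]
    assert gain(theta) <= best + 1e-9
```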
Proof of Proposition \[ProOptiTimeAllocation\] {#App:ProOptiTimeAllocation} ============================================== The dual function of **P4.1** is given by $$\begin{aligned} \mathcal{G}(\rho) = \max_{\bar{\bm t} \ge 0, \tau_1 \ge 0} \mathcal{L} (\bar{\bm t}, \tau_1, \rho).\end{aligned}$$ The Karush-Kuhn-Tucker (KKT) conditions are both necessary and sufficient for the optimality of **P4.1** [@BoydOne], and are given by $$\begin{aligned} \label{Partialti} &\frac{\partial{\mathcal{L}}} {\partial{t_i}} = \log_2 \Big(1+ \frac{a_i +b_i \tau_1^*} {t_i^* } - c_i \Big) - \frac{\frac{a_i +b_i \tau_1^* }{t_i^* } }{ \ln(2) \Big(1+ \frac{a_i + b_i \tau_1^*}{t_i^* } -c_i \Big) } \nonumber \\ & ~~~- \rho^* = 0, \\ \label{Partitau1} &\frac{\partial{\mathcal{L}}} {\partial{\tau_1}}=\sum_{i=1}^N \frac{b_i}{\ln(2) \Big(1+ \frac{a_i +b_i \tau_1^*} {t_i^* } - c_i \Big) } - \rho^* = 0, \\ \label{Slack} &\rho^* \Big[\tau_0 + \tau_1^* + \sum_{i=1}^N t_i^* -1 \Big] = 0.\end{aligned}$$ Setting $z_i = \frac{a_i + b_i \tau_1}{t_i } - c_i$ and substituting it into and , we have $$\begin{aligned} \label{PartialtiNew} &\log_2(1+z_i) - \frac{z_i + c_i}{ \ln(2) (1+z_i) } = \rho^*,\\ \label{Partialtau1New} & \sum_{i=1}^N \frac{b_i}{\ln(2) (1+z_i) } = \rho^*.\end{aligned}$$ It is straightforward to verify that the left-hand side of is a strictly increasing function with respect to $z_i >0$. Hence, there exists a unique solution, denoted by $z_i^*$, satisfying . From , we can observe that $\rho^*$ is upper-bounded by $\frac{1}{\ln(2)}\sum_{i=1}^N b_i$ and can thus be found by the bisection method. Also, indicates that $\rho^*>0$. Having $\tau_0 + \tau_1^* + \sum_{i=1}^N t_i^* = 1$ from and $z_i^* = \frac{a_i +b_i\tau_1^*}{t_i^* } - c_i$, and are obtained with some simple mathematical calculations. This thus proves Proposition \[ProOptiTimeAllocation\]. 
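The inner fixed-point condition in this proof — whose left-hand side is strictly increasing in $z_i$, since its derivative is $(z_i + c_i)/\big(\ln(2)(1+z_i)^2\big) > 0$ — can be solved by simple bisection. A sketch with hypothetical values of $c_i$ and $\rho^*$:

```python
from math import log, log2

def f(z, c):
    # left-hand side of the condition log2(1+z) - (z+c)/(ln(2)(1+z)) = rho*
    return log2(1 + z) - (z + c) / (log(2) * (1 + z))

def solve_z(rho, c, lo=1e-9, hi=1e9, iters=200):
    # f(., c) is strictly increasing on z > 0, so bisection finds the
    # unique root of f(z, c) = rho on the bracket [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, c) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c, rho = 0.2, 1.5          # hypothetical c_i and optimal dual variable rho*
z = solve_z(rho, c)
assert abs(f(z, c) - rho) < 1e-6
assert f(z + 1.0, c) > f(z, c)   # monotonicity, hence uniqueness of z*
```

The same routine, nested inside an outer bisection on $\rho$ over $\big(0, \frac{1}{\ln(2)}\sum_i b_i\big)$, recovers the time allocation of the proposition.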
Proof of Lemma \[LemmaFeasible\] {#App:LemmaFeasible} ================================ From C18, we have $$\begin{aligned} \label{BetaRange} \beta_e \leq \sqrt{1-\dfrac{K\mu(\sum_{i=0}^N t_i)}{\eta P_h||\bm{h}_{r}||^2 t_0}}.\end{aligned}$$ According to , for the value of $\beta_e$ to be strictly positive, we must have $$\begin{aligned} K\mu < \frac{\eta P_h||\bm{h}_{r}||^2 t_0}{\sum_{i=0}^N t_i} \le \eta P_h||\bm{h}_{r}||^2.\end{aligned}$$ Lemma \[LemmaFeasible\] is thus proved. [1]{} Cisco edge-to-enterprise IoT analytics for electric utilities. Available Online: <https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/big-data/solution-overview-c22-740248.html>, Feb. 2018. X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy harvesting: A contemporary survey,” *IEEE Commun. Surv. Tut.*, vol. 17, no. 2, pp. 757-789, Secondquarter 2015. H. Ju and R. Zhang, “Throughput maximization in wireless powered communication networks,” *IEEE Trans. Wireless Commun.*, vol. 13, no. 1, pp. 418-428, Jan. 2014. P. Ramezani and A. Jamalipour, “Toward the evolution of wireless powered communication networks for the future Internet of Things,” *IEEE Network*, vol. 31, no. 6, pp. 62-69, Nov./Dec. 2017. H. Chen, Y. Li, J. L. Rebelatto, B. F. Uchoa-Filho, and B. Vucetic, “Harvest-then-cooperate: Wireless-powered cooperative communications,” *IEEE Trans. Signal Process.*, vol. 63, no. 7, pp. 1700-1711, Apr. 2015. H. Ju and R. Zhang, “User cooperation in wireless powered communication networks,” in *Proc. IEEE GLOBECOM*, Austin, TX, USA, Dec. 2014, pp. 1430-1435. Y. Zeng, H. Chen, and R. Zhang, “Bidirectional wireless information and power transfer with a helping relay,” *IEEE Commun. Letters*, vol. 20, no. 5, pp. 862-865, May 2016. C. Zhong, H. A. Suraweera, G. Zheng, I. Krikidis, and Z. Zhang, “Wireless information and power transfer with full duplex relaying,” *IEEE Trans. Commun.*, vol. 62, no. 10, pp. 3447-3461, Oct. 2014. B. 
Lyu, D. T. Hoang, and Z. Yang, “User Cooperation in wireless-powered backscatter communication networks,” *IEEE Wireless Commun. Lett.*, vol. 8, no. 2, pp. 632-635, Apr. 2019. S. H. Kim and D. I. Kim, “Hybrid backscatter communication for wireless-powered heterogeneous networks,” *IEEE Trans. Wireless Commmun.*, vol. 16, no. 10, pp. 6557-6570, Oct. 2017. S. Gong, X. Huang, J. Xu, W. Liu, P. Wang, and D. Niyato, “Backscatter relay communications powered by wireless energy beamforming,” *IEEE Trans. Commun.*, vol. 66, no. 7, pp. 3187-3200, July 2018. V. Liu, A. Parks, V. Talla, S. Gollakota, D. Wetherall, and J. R. Smith, “Ambient backscatter: Wireless communication out of thin air,” in *Proc. SIGCOMM*, pp. 39-50, Hong Kong, Aug. 2013. Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network,” *IEEE Commun. Mag.*, vol. 58, no. 1. pp 106-112, Jan. 2020. S. Gong, X. Lu, D. T. Hoang, D. Niyato, L. Shu, D. I. Kim, and Y. C. Liang, “Towards smart radio environment for wireless communications via intelligent reflecting surfaces: A comprehensive survey”. Available Online: <https://arxiv.org/pdf/1912.07794.pdf>, Dec. 2019. Q. Wu and R. Zhang, “Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming,” *IEEE Trans. Wireless Commun.*, vol. 18, no. 11, pp. 5394-5409, Nov. 2019. D. Mishra and H. Johansson, “Channel estimation and low-complexity beamforming design for passive intelligent surface assisted MISO wireless energy transfer,” in *Proc. IEEE ICASSP*, Brighton, UK, May 2019, pp. 4659–4663. Q. Wu and R. Zhang, “Weighted sum power maximization for intelligent reflecting surface aided SWIPT,” *IEEE Wireless Commun. Lett.*, Dec. 2019, doi: 10.1109/LWC.2019.2961656. C. Pan, H. Ren, K. Wang, M. Elkashlan, A. Nallanathan, J. Wang, and L. Hanzo “Intelligent reflecting surface aided MIMO broadcasting for simultaneous wireless information and power transfer,” *IEEE J. Sel. Area. 
Commun.*, 2020. X. Yu, D. Xu, and R. Schober, “MISO wireless communication systems via intelligent reflecting surfaces,” in *Proc. IEEE/CIC ICCC*, Changchun, China, Aug. 2019, pp. 735–740. Z. Chu, W. Hao, P. Xiao, and J. Shi, “Intelligent reflect surface aided multi-antenna secure transmission,” *IEEE Wireless Commun. Lett.*, vol. 9, no. 1, Jan. 2020. C. Guo, Y. Cui, F. Yang, and L. Ding, “Outage probability analysis and minimization in intelligent reflecting surface-assisted MISO systems,” *IEEE Commun. Lett.*, doi: 10.1109/LCOMM.2020.2975182, Feb. 2020. B. Lyu, D. T. Hoang, S. Gong, and Z. Yang, “Intelligent reflecting surface assisted wireless powered communication networks,” in *Proc. WCNC Workshops*, Seoul, South Korea, Apr. 2020, pp. 1-6. Y. Zheng, S. Bi, Y. J. Zhang, Z. Quan, and H. Wang, “Intelligent reflecting surface enhanced user cooperation in wireless powered communication networks,” *IEEE Wireless Commun. Lett.*, doi: 10.1109/LWC.2020.2974721, Feb. 2020. C. Huang, G. C. Alexandropoulos, A. Zappone, M. Debbah, and C. Yuen, “Energy efficient multi-user MISO communication using low resolution large intelligent surfaces,” in *Proc. IEEE GLOBECOM Workshops*, Abu Dhabi, United Arab Emirates, 2018, pp. 1-6. C. Huang, A. Zappone, G. C. Alexandropoulos, M. Debbah, and C. Yuen, “Reconfigurable intelligent surfaces for energy efficiency in wireless communication,” *IEEE Trans. Wireless Commun.*, vol. 18, no. 8, pp. 4157-4170, Jun. 2019. Y. Zou, Y. Liu, S. Gong, W. Cheng, D. T. Hoang, and D. Niyato, “Joint energy beamforming and optimization for intelligent reflecting surface enhanced communications,” in *Proc. WCNC Workshops*, Seoul, South Korea, Apr. 2020, pp. 1-6. A. A. Nasir, X. Zhou, S. Durrani, and R. A. Kennedy, “Relaying protocols for wireless energy harvesting and information processing," *IEEE Trans. Wireless Commun.*, vol. 12, no. 7, pp. 3622-3636, Jul. 2013. Z. Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. 
Zhang, “Semidefinite relaxation of quadratic optimization problems,” *IEEE Signal Process.*, vol. 27, no. 3, pp. 20-34, May 2010. S. Boyd and L. Vandenberghe, *Convex Optimization*. Cambridge University Press, 2004. Michael Grant *et al.*, “CVX: Matlab software for disciplined convex programming,” version 2.0 beta. http://cvxr.com/cvx, September 2013. A. M.-C. So, J. Zhang, and Y. Ye, “On approximating complex quadratic optimization problems via semidefinite programming relaxations,” *Mathematical Programming,* vol. 110, no. 1, pp. 93–110, Jun. 2007. V. Arun and H. Balakrishnan, “RFocus: Practical beamforming for small devices”. Available Online: <https://arxiv.org/pdf/1912.07794.pdf>, May 2019. [^1]: The CSI of all links can be precisely estimated by the advanced channel estimation techniques. Even if there exist channel estimation errors in realistic scenarios, the sum-rate derived under the perfect CSI condition can serve as an upper-bound for the system performance. [^2]: From Corollary \[CorollaryOne\], we find that solving **P2** is equivalent to finding the optimal value of $\delta$ defined in .